/*
 * GlusterFS backend for QEMU
 *
 * Copyright (C) 2012 Bharata B Rao <bharata@linux.vnet.ibm.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2 or later.
 * See the COPYING file in the top-level directory.
 *
 */

#include "qemu/osdep.h"
#include "qemu/units.h"
#include <glusterfs/api/glfs.h>
#include "block/block-io.h"
#include "block/block_int.h"
#include "block/qdict.h"
#include "qapi/error.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qerror.h"
#include "qemu/uri.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "qemu/option.h"
#include "qemu/cutils.h"

#ifdef CONFIG_GLUSTERFS_FTRUNCATE_HAS_STAT
# define glfs_ftruncate(fd, offset) glfs_ftruncate(fd, offset, NULL, NULL)
#endif

#define GLUSTER_OPT_FILENAME        "filename"
#define GLUSTER_OPT_VOLUME          "volume"
#define GLUSTER_OPT_PATH            "path"
#define GLUSTER_OPT_TYPE            "type"
#define GLUSTER_OPT_SERVER_PATTERN  "server."
#define GLUSTER_OPT_HOST            "host"
#define GLUSTER_OPT_PORT            "port"
#define GLUSTER_OPT_TO              "to"
#define GLUSTER_OPT_IPV4            "ipv4"
#define GLUSTER_OPT_IPV6            "ipv6"
#define GLUSTER_OPT_SOCKET          "socket"
#define GLUSTER_OPT_DEBUG           "debug"
#define GLUSTER_DEFAULT_PORT        24007
#define GLUSTER_DEBUG_DEFAULT       4
#define GLUSTER_DEBUG_MAX           9
#define GLUSTER_OPT_LOGFILE         "logfile"
#define GLUSTER_LOGFILE_DEFAULT     "-" /* handled in libgfapi as /dev/stderr */
/*
 * Several versions of GlusterFS (3.12? -> 6.0.1) fail when the transfer size
 * is greater than or equal to 1024 MiB, so we limit the transfer size to
 * 512 MiB to avoid this rare issue.
 */
#define GLUSTER_MAX_TRANSFER        (512 * MiB)
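
/*
 * This limit is presumably enforced by capping the block layer's per-request
 * transfer size (for example, bs->bl.max_transfer = GLUSTER_MAX_TRANSFER; in
 * the driver's refresh-limits callback). That callback is outside this
 * excerpt, so the exact wiring is an assumption.
 */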

#define GERR_INDEX_HINT "hint: check in 'server' array index '%d'\n"
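
/*
 * GERR_INDEX_HINT is a printf-style format string. A typical consumer would
 * be QEMU's error_append_hint(), roughly:
 *     error_append_hint(&local_err, GERR_INDEX_HINT, i);
 * so that a per-server failure points at the offending 'server' array entry.
 * The actual call sites are outside this excerpt; the line above is only an
 * illustration of the intended use.
 */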
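
/*
 * Per-request bookkeeping for asynchronous libgfapi calls. Judging from the
 * fields, the request coroutine records its expected transfer size here and
 * yields; the completion path (defined later in the file, not shown in this
 * excerpt) is expected to fill in 'ret' and re-enter 'coroutine' in
 * 'aio_context'.
 */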
typedef struct GlusterAIOCB {
    int64_t size;
    int ret;
    Coroutine *coroutine;
    AioContext *aio_context;
} GlusterAIOCB;

typedef struct BDRVGlusterState {
    struct glfs *glfs;
    struct glfs_fd *fd;
    char *logfile;
    bool supports_seek_data;
    int debug;
} BDRVGlusterState;

typedef struct BDRVGlusterReopenState {
    struct glfs *glfs;
    struct glfs_fd *fd;
} BDRVGlusterReopenState;

typedef struct GlfsPreopened {
    char *volume;
    glfs_t *fs;
    int ref;
} GlfsPreopened;

typedef struct ListElement {
    QLIST_ENTRY(ListElement) list;
    GlfsPreopened saved;
} ListElement;

static QLIST_HEAD(, ListElement) glfs_list;

static QemuOptsList qemu_gluster_create_opts = {
    .name = "qemu-gluster-create-opts",
    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
    .desc = {
        {
            .name = BLOCK_OPT_SIZE,
            .type = QEMU_OPT_SIZE,
            .help = "Virtual disk size"
        },
        {
            .name = BLOCK_OPT_PREALLOC,
            .type = QEMU_OPT_STRING,
            .help = "Preallocation mode (allowed values: off"
#ifdef CONFIG_GLUSTERFS_FALLOCATE
                    ", falloc"
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
                    ", full"
#endif
                    ")"
        },
        {
            .name = GLUSTER_OPT_DEBUG,
            .type = QEMU_OPT_NUMBER,
            .help = "Gluster log level, valid range is 0-9",
        },
        {
            .name = GLUSTER_OPT_LOGFILE,
            .type = QEMU_OPT_STRING,
            .help = "Logfile path of libgfapi",
        },
        { /* end of list */ }
    }
};

static QemuOptsList runtime_opts = {
    .name = "gluster",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
    .desc = {
        {
            .name = GLUSTER_OPT_FILENAME,
            .type = QEMU_OPT_STRING,
            .help = "URL to the gluster image",
        },
        {
            .name = GLUSTER_OPT_DEBUG,
            .type = QEMU_OPT_NUMBER,
            .help = "Gluster log level, valid range is 0-9",
        },
        {
            .name = GLUSTER_OPT_LOGFILE,
            .type = QEMU_OPT_STRING,
            .help = "Logfile path of libgfapi",
        },
        { /* end of list */ }
    },
};

static QemuOptsList runtime_json_opts = {
    .name = "gluster_json",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
    .desc = {
        {
            .name = GLUSTER_OPT_VOLUME,
            .type = QEMU_OPT_STRING,
            .help = "name of gluster volume where VM image resides",
        },
        {
            .name = GLUSTER_OPT_PATH,
            .type = QEMU_OPT_STRING,
            .help = "absolute path to image file in gluster volume",
        },
        {
            .name = GLUSTER_OPT_DEBUG,
            .type = QEMU_OPT_NUMBER,
            .help = "Gluster log level, valid range is 0-9",
        },
        { /* end of list */ }
    },
};

static QemuOptsList runtime_type_opts = {
    .name = "gluster_type",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
    .desc = {
        {
            .name = GLUSTER_OPT_TYPE,
            .type = QEMU_OPT_STRING,
            .help = "inet|unix",
        },
        { /* end of list */ }
    },
};

static QemuOptsList runtime_unix_opts = {
    .name = "gluster_unix",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
    .desc = {
        {
            .name = GLUSTER_OPT_SOCKET,
            .type = QEMU_OPT_STRING,
            .help = "socket file path (legacy)",
        },
        {
            .name = GLUSTER_OPT_PATH,
            .type = QEMU_OPT_STRING,
            .help = "socket file path (QAPI)",
        },
        { /* end of list */ }
    },
};
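
/*
 * Note on the "to", "ipv4" and "ipv6" entries below: they appear to mirror
 * fields of QAPI's InetSocketAddress, so the parser accepts them, but their
 * help text marks them as unsupported by gluster. The actual rejection is
 * presumably done by the JSON option parsing code later in the file, which
 * is not part of this excerpt.
 */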
static QemuOptsList runtime_inet_opts = {
    .name = "gluster_inet",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_inet_opts.head),
    .desc = {
        {
            .name = GLUSTER_OPT_TYPE,
            .type = QEMU_OPT_STRING,
            .help = "inet|unix",
        },
        {
            .name = GLUSTER_OPT_HOST,
            .type = QEMU_OPT_STRING,
            .help = "host address (hostname/ipv4/ipv6 addresses)",
        },
        {
            .name = GLUSTER_OPT_PORT,
            .type = QEMU_OPT_STRING,
            .help = "port number on which glusterd is listening (default 24007)",
        },
        {
            .name = "to",
            .type = QEMU_OPT_NUMBER,
            .help = "max port number, not supported by gluster",
        },
        {
            .name = "ipv4",
            .type = QEMU_OPT_BOOL,
            .help = "ipv4 bool value, not supported by gluster",
        },
        {
            .name = "ipv6",
            .type = QEMU_OPT_BOOL,
            .help = "ipv6 bool value, not supported by gluster",
        },
        { /* end of list */ }
    },
};
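
/*
 * Cache of already-initialised glfs objects, keyed by volume name. Opening
 * several images on the same volume reuses one connection: a hit in
 * glfs_find_preopened() bumps the reference count, glfs_set_preopened()
 * registers a new connection with a count of one, and glfs_clear_preopened()
 * drops a reference, calling glfs_fini() only when the last user goes away.
 */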

static void glfs_set_preopened(const char *volume, glfs_t *fs)
{
    ListElement *entry = NULL;

    entry = g_new(ListElement, 1);

    entry->saved.volume = g_strdup(volume);

    entry->saved.fs = fs;
    entry->saved.ref = 1;

    QLIST_INSERT_HEAD(&glfs_list, entry, list);
}

static glfs_t *glfs_find_preopened(const char *volume)
{
    ListElement *entry = NULL;

    QLIST_FOREACH(entry, &glfs_list, list) {
        if (strcmp(entry->saved.volume, volume) == 0) {
            entry->saved.ref++;
            return entry->saved.fs;
        }
    }

    return NULL;
}

static void glfs_clear_preopened(glfs_t *fs)
{
    ListElement *entry = NULL;
    ListElement *next;

    if (fs == NULL) {
        return;
    }

    QLIST_FOREACH_SAFE(entry, &glfs_list, list, next) {
        if (entry->saved.fs == fs) {
            if (--entry->saved.ref) {
                return;
            }

            QLIST_REMOVE(entry, list);

            glfs_fini(entry->saved.fs);
            g_free(entry->saved.volume);
            g_free(entry);
        }
    }
}
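
/*
 * Split a URI path of the form "/volume/path/to/image" into the gluster
 * volume name and the image path inside that volume. For example, parsing
 * "/testvol/dir/a.img" yields gconf->volume = "testvol" and
 * gconf->path = "dir/a.img"; leading slashes are skipped, and a missing
 * volume or image component is rejected with -EINVAL.
 */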
static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
{
    char *p, *q;

    if (!path) {
        return -EINVAL;
    }

    /* volume */
    p = q = path + strspn(path, "/");
    p += strcspn(p, "/");
    if (*p == '\0') {
        return -EINVAL;
    }
    gconf->volume = g_strndup(q, p - q);

    /* path */
    p += strspn(p, "/");
    if (*p == '\0') {
        return -EINVAL;
    }
    gconf->path = g_strdup(p);
    return 0;
}

/*
 * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
 *
 * 'gluster' is the protocol.
 *
 * 'transport' specifies the transport type used to connect to the gluster
 * management daemon (glusterd). Valid transport types are
 * tcp or unix. If a transport type isn't specified, then tcp type is assumed.
 *
 * 'host' specifies the host where the volume file specification for
 * the given volume resides. This can be either a hostname or an ipv4 address.
 * If the transport type is 'unix', then the 'host' field should not be
 * specified. The 'socket' field needs to be populated with the path to the
 * unix domain socket.
 *
 * 'port' is the port number on which glusterd is listening. This is optional
 * and if not specified, QEMU will send 0, which makes gluster use the
 * default port. If the transport type is unix, then 'port' should not be
 * specified.
 *
 * 'volume' is the name of the gluster volume which contains the VM image.
 *
 * 'path' is the path to the actual VM image that resides on the gluster
 * volume.
 *
 * Examples:
 *
 * file=gluster://1.2.3.4/testvol/a.img
 * file=gluster+tcp://1.2.3.4/testvol/a.img
 * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
 * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
 * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
 */
static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
                                  const char *filename)
{
    SocketAddress *gsconf;
    URI *uri;
    QueryParams *qp = NULL;
    bool is_unix = false;
    int ret = 0;

    uri = uri_parse(filename);
    if (!uri) {
        return -EINVAL;
    }

    gsconf = g_new0(SocketAddress, 1);
    QAPI_LIST_PREPEND(gconf->server, gsconf);

    /* transport */
    if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
        gsconf->type = SOCKET_ADDRESS_TYPE_INET;
    } else if (!strcmp(uri->scheme, "gluster+tcp")) {
        gsconf->type = SOCKET_ADDRESS_TYPE_INET;
    } else if (!strcmp(uri->scheme, "gluster+unix")) {
        gsconf->type = SOCKET_ADDRESS_TYPE_UNIX;
        is_unix = true;
    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
        gsconf->type = SOCKET_ADDRESS_TYPE_INET;
        warn_report("rdma feature is not supported, falling back to tcp");
    } else {
        ret = -EINVAL;
        goto out;
    }

    ret = parse_volume_options(gconf, uri->path);
    if (ret < 0) {
        goto out;
    }

    qp = query_params_parse(uri->query);
    if (qp->n > 1 || (is_unix && !qp->n) || (!is_unix && qp->n)) {
        ret = -EINVAL;
        goto out;
    }

    if (is_unix) {
        if (uri->server || uri->port) {
            ret = -EINVAL;
            goto out;
        }
        if (strcmp(qp->p[0].name, "socket")) {
            ret = -EINVAL;
            goto out;
        }
        gsconf->u.q_unix.path = g_strdup(qp->p[0].value);
    } else {
        gsconf->u.inet.host = g_strdup(uri->server ? uri->server : "localhost");
        if (uri->port) {
            gsconf->u.inet.port = g_strdup_printf("%d", uri->port);
        } else {
            gsconf->u.inet.port = g_strdup_printf("%d", GLUSTER_DEFAULT_PORT);
        }
    }

out:
    if (qp) {
        query_params_free(qp);
    }
    uri_free(uri);
    return ret;
}
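
/*
 * Establish (or reuse) a libgfapi connection for gconf->volume. A hit in the
 * preopened cache is returned directly; otherwise a new glfs object is
 * created, registered in the cache, and each entry of gconf->server is added
 * as a volfile server via glfs_set_volfile_server(). The rest of the setup
 * (this excerpt is truncated) presumably finishes with the usual glfs_init()
 * handshake and error handling.
 */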
static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
                                           Error **errp)
{
    struct glfs *glfs;
    int ret;
    int old_errno;
    SocketAddressList *server;
    uint64_t port;

    glfs = glfs_find_preopened(gconf->volume);
    if (glfs) {
        return glfs;
    }

    glfs = glfs_new(gconf->volume);
    if (!glfs) {
        goto out;
    }

    glfs_set_preopened(gconf->volume, glfs);
|
|
|
|
|
block/gluster: add support for multiple gluster servers
This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|rdma transport types and their port numbers.
Problem:
Currently VM Image on gluster volume is specified like this:
file=gluster[+tcp]://host[:port]/testvol/a.img
Say we have three hosts in a trusted pool with replica 3 volume in action.
When the host mentioned in the command above goes down for some reason,
the other two hosts are still available. But there's currently no way
to tell QEMU about them.
Solution:
New way of specifying VM Image on gluster volume with volfile servers:
(We still support old syntax to maintain backward compatibility)
Basic command line syntax looks like:
Pattern I:
-drive driver=gluster,
volume=testvol,path=/path/a.raw,[debug=N,]
server.0.type=tcp,
server.0.host=1.2.3.4,
server.0.port=24007,
server.1.type=unix,
server.1.socket=/path/socketfile
Pattern II:
'json:{"driver":"qcow2","file":{"driver":"gluster",
"volume":"testvol","path":"/path/a.qcow2",["debug":N,]
"server":[{hostinfo_1}, ...{hostinfo_N}]}}'
driver => 'gluster' (protocol name)
volume => name of gluster volume where our VM image resides
path => absolute path of image in gluster volume
[debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
{hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
{type:"unix",socket:"/path/sockfile"}}
type => transport type used to connect to gluster management daemon,
it can be tcp|unix
host => host address (hostname/ipv4/ipv6 addresses/socket path)
port => port number on which glusterd is listening.
socket => path to socket file
Examples:
1.
-drive driver=qcow2,file.driver=gluster,
file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
file.server.0.type=tcp,
file.server.0.host=1.2.3.4,
file.server.0.port=24007,
file.server.1.type=unix,
file.server.1.socket=/var/run/glusterd.socket
2.
'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
"path":"/path/a.qcow2","debug":9,"server":
[{"type":"tcp","host":"1.2.3.4","port":"24007"},
{"type":"unix","socket":"/var/run/glusterd.socket"}
]}}'
This patch gives a mechanism to provide all the server addresses, which are in
replica set, so in case host1 is down VM can still boot from any of the
active hosts.
This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (FUSE way of mounting gluster volume)
credits: sincere thanks to all the supporters
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1468947453-5433-6-git-send-email-prasanna.kalever@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-07-19 18:57:33 +02:00
|
|
|
for (server = gconf->server; server; server = server->next) {
|
2017-03-30 19:43:13 +02:00
|
|
|
switch (server->value->type) {
|
2017-04-26 09:36:40 +02:00
|
|
|
case SOCKET_ADDRESS_TYPE_UNIX:
|
2017-03-06 20:00:44 +01:00
|
|
|
ret = glfs_set_volfile_server(glfs, "unix",
|
2016-07-19 18:57:33 +02:00
|
|
|
server->value->u.q_unix.path, 0);
|
2017-03-30 19:43:13 +02:00
|
|
|
break;
|
2017-04-26 09:36:40 +02:00
|
|
|
case SOCKET_ADDRESS_TYPE_INET:
|
2023-05-22 21:04:29 +02:00
|
|
|
if (parse_uint_full(server->value->u.inet.port, 10, &port) < 0 ||
|
block/gluster: improve defense over string to int conversion
Using atoi() to convert a string to an int is error prone when the string
supplied in the argument is not a purely numerical value.
This is not a bug, because in the existing code,
static QemuOptsList runtime_tcp_opts = {
    .name = "gluster_tcp",
    .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
    .desc = {
        ...
        {
            .name = GLUSTER_OPT_PORT,
            .type = QEMU_OPT_NUMBER,
            .help = "port number ...",
        },
        ...
};
the port type is QEMU_OPT_NUMBER, so before we ever reach atoi() the port has
already been validated by parse_option_number().
However, it is good practice to use a function like parse_uint_full() instead
of atoi() so that the port conversion defends itself.
Note: now that the port string-to-int conversion has its own defence in place,
and since the port argument is actually a string, a follow-up patch will move
the port type from QEMU_OPT_NUMBER to QEMU_OPT_STRING.
[Jeff Cody: removed spurious parenthesis]
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
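For illustration, a self-contained sketch of the kind of self-defending
conversion argued for here, written with standard strtoul() rather than
QEMU's parse_uint_full() helper (so its exact signature is not assumed):
#include <errno.h>
#include <stdlib.h>
/* Illustrative only: reject empty strings, trailing junk, overflow and
 * out-of-range values instead of silently accepting them like atoi(). */
static int parse_port(const char *s, unsigned long *port)
{
    char *end = NULL;

    if (s == NULL || *s == '\0') {
        return -1;
    }
    errno = 0;
    *port = strtoul(s, &end, 10);
    if (errno != 0 || *end != '\0' || *port > 65535) {
        return -1;
    }
    return 0;
}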
2016-08-09 10:50:09 +02:00
|
|
|
port > 65535) {
|
|
|
|
error_setg(errp, "'%s' is not a valid port number",
|
2017-03-06 20:00:48 +01:00
|
|
|
server->value->u.inet.port);
|
2016-08-09 10:50:09 +02:00
|
|
|
errno = EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
2017-03-06 20:00:44 +01:00
|
|
|
ret = glfs_set_volfile_server(glfs, "tcp",
|
2017-03-06 20:00:48 +01:00
|
|
|
server->value->u.inet.host,
|
2016-08-09 10:50:09 +02:00
|
|
|
(int)port);
|
2017-03-30 19:43:13 +02:00
|
|
|
break;
|
2017-04-26 09:36:40 +02:00
|
|
|
case SOCKET_ADDRESS_TYPE_VSOCK:
|
|
|
|
case SOCKET_ADDRESS_TYPE_FD:
|
2017-03-30 19:43:13 +02:00
|
|
|
default:
|
|
|
|
abort();
|
2016-07-19 18:57:33 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
if (ret < 0) {
|
|
|
|
goto out;
|
|
|
|
}
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2016-11-02 17:50:36 +01:00
|
|
|
ret = glfs_set_logging(glfs, gconf->logfile, gconf->debug);
|
2012-09-27 16:00:32 +02:00
|
|
|
if (ret < 0) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = glfs_init(glfs);
|
|
|
|
if (ret) {
|
2016-07-19 18:57:33 +02:00
|
|
|
error_setg(errp, "Gluster connection for volume %s, path %s failed"
|
|
|
|
" to connect", gconf->volume, gconf->path);
|
|
|
|
for (server = gconf->server; server; server = server->next) {
|
2017-04-26 09:36:40 +02:00
|
|
|
if (server->value->type == SOCKET_ADDRESS_TYPE_UNIX) {
|
2016-07-19 18:57:33 +02:00
|
|
|
error_append_hint(errp, "hint: failed on socket %s ",
|
|
|
|
server->value->u.q_unix.path);
|
|
|
|
} else {
|
|
|
|
error_append_hint(errp, "hint: failed on host %s and port %s ",
|
2017-03-06 20:00:48 +01:00
|
|
|
server->value->u.inet.host,
|
|
|
|
server->value->u.inet.port);
|
2016-07-19 18:57:33 +02:00
|
|
|
}
|
2016-07-19 18:57:32 +02:00
|
|
|
}
|
2014-05-09 12:08:10 +02:00
|
|
|
|
2016-07-19 18:57:33 +02:00
|
|
|
error_append_hint(errp, "Please refer to gluster logs for more info\n");
|
|
|
|
|
2014-05-09 12:08:10 +02:00
|
|
|
/* glfs_init sometimes doesn't set errno although docs suggest that */
|
2016-07-19 18:57:32 +02:00
|
|
|
if (errno == 0) {
|
2014-05-09 12:08:10 +02:00
|
|
|
errno = EINVAL;
|
2016-07-19 18:57:32 +02:00
|
|
|
}
|
2014-05-09 12:08:10 +02:00
|
|
|
|
2012-09-27 16:00:32 +02:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
return glfs;
|
|
|
|
|
|
|
|
out:
|
|
|
|
if (glfs) {
|
|
|
|
old_errno = errno;
|
2016-10-27 17:24:50 +02:00
|
|
|
glfs_clear_preopened(glfs);
|
2012-09-27 16:00:32 +02:00
|
|
|
errno = old_errno;
|
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2016-07-19 18:57:33 +02:00
|
|
|
/*
|
|
|
|
* Convert the json formatted command line into qapi.
|
|
|
|
*/
|
|
|
|
static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
|
|
|
|
QDict *options, Error **errp)
|
|
|
|
{
|
|
|
|
QemuOpts *opts;
|
2017-04-26 09:36:40 +02:00
|
|
|
SocketAddress *gsconf = NULL;
|
2021-01-13 23:10:13 +01:00
|
|
|
SocketAddressList **tail;
|
2016-07-19 18:57:33 +02:00
|
|
|
QDict *backing_options = NULL;
|
|
|
|
Error *local_err = NULL;
|
|
|
|
char *str = NULL;
|
|
|
|
const char *ptr;
|
2017-06-05 19:01:38 +02:00
|
|
|
int i, type, num_servers;
|
2016-07-19 18:57:33 +02:00
|
|
|
|
|
|
|
/* create opts info from runtime_json_opts list */
|
|
|
|
opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
|
2020-07-07 18:06:05 +02:00
|
|
|
if (!qemu_opts_absorb_qdict(opts, options, errp)) {
|
2016-07-19 18:57:33 +02:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
|
|
|
|
if (num_servers < 1) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
|
|
|
|
if (!ptr) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_VOLUME);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
gconf->volume = g_strdup(ptr);
|
|
|
|
|
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
|
|
|
|
if (!ptr) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_PATH);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
gconf->path = g_strdup(ptr);
|
|
|
|
qemu_opts_del(opts);
|
2021-01-13 23:10:13 +01:00
|
|
|
tail = &gconf->server;
|
2016-07-19 18:57:33 +02:00
|
|
|
|
|
|
|
for (i = 0; i < num_servers; i++) {
|
|
|
|
str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
|
|
|
|
qdict_extract_subqdict(options, &backing_options, str);
|
|
|
|
|
|
|
|
/* create opts info from runtime_type_opts list */
|
|
|
|
opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
|
2020-07-07 18:06:05 +02:00
|
|
|
if (!qemu_opts_absorb_qdict(opts, backing_options, errp)) {
|
2016-07-19 18:57:33 +02:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
|
|
|
|
if (!ptr) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_TYPE);
|
|
|
|
error_append_hint(&local_err, GERR_INDEX_HINT, i);
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
}
|
2017-04-26 09:36:40 +02:00
|
|
|
gsconf = g_new0(SocketAddress, 1);
|
2017-03-06 20:00:48 +01:00
|
|
|
if (!strcmp(ptr, "tcp")) {
|
|
|
|
ptr = "inet"; /* accept legacy "tcp" */
|
|
|
|
}
|
2017-08-24 10:46:10 +02:00
|
|
|
type = qapi_enum_parse(&SocketAddressType_lookup, ptr, -1, NULL);
|
2017-04-26 09:36:40 +02:00
|
|
|
if (type != SOCKET_ADDRESS_TYPE_INET
|
|
|
|
&& type != SOCKET_ADDRESS_TYPE_UNIX) {
|
2017-03-30 19:43:13 +02:00
|
|
|
error_setg(&local_err,
|
|
|
|
"Parameter '%s' may be 'inet' or 'unix'",
|
|
|
|
GLUSTER_OPT_TYPE);
|
2016-07-19 18:57:33 +02:00
|
|
|
error_append_hint(&local_err, GERR_INDEX_HINT, i);
|
|
|
|
goto out;
|
|
|
|
}
|
2017-03-30 19:43:13 +02:00
|
|
|
gsconf->type = type;
|
2016-07-19 18:57:33 +02:00
|
|
|
qemu_opts_del(opts);
|
|
|
|
|
2017-04-26 09:36:40 +02:00
|
|
|
if (gsconf->type == SOCKET_ADDRESS_TYPE_INET) {
|
2017-03-06 20:00:48 +01:00
|
|
|
/* create opts info from runtime_inet_opts list */
|
|
|
|
opts = qemu_opts_create(&runtime_inet_opts, NULL, 0, &error_abort);
|
2020-07-07 18:06:05 +02:00
|
|
|
if (!qemu_opts_absorb_qdict(opts, backing_options, errp)) {
|
2016-07-19 18:57:33 +02:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
|
|
|
|
if (!ptr) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER,
|
|
|
|
GLUSTER_OPT_HOST);
|
|
|
|
error_append_hint(&local_err, GERR_INDEX_HINT, i);
|
|
|
|
goto out;
|
|
|
|
}
|
2017-03-06 20:00:48 +01:00
|
|
|
gsconf->u.inet.host = g_strdup(ptr);
|
2016-07-19 18:57:33 +02:00
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
|
|
|
|
if (!ptr) {
|
|
|
|
error_setg(&local_err, QERR_MISSING_PARAMETER,
|
|
|
|
GLUSTER_OPT_PORT);
|
|
|
|
error_append_hint(&local_err, GERR_INDEX_HINT, i);
|
|
|
|
goto out;
|
|
|
|
}
|
2017-03-06 20:00:48 +01:00
|
|
|
gsconf->u.inet.port = g_strdup(ptr);
|
2016-07-19 18:57:33 +02:00
|
|
|
|
|
|
|
/* defend for unsupported fields in InetSocketAddress,
|
|
|
|
* i.e. @ipv4, @ipv6 and @to
|
|
|
|
*/
|
|
|
|
ptr = qemu_opt_get(opts, GLUSTER_OPT_TO);
|
|
|
|
if (ptr) {
|
2017-03-06 20:00:48 +01:00
|
|
|
gsconf->u.inet.has_to = true;
|
2016-07-19 18:57:33 +02:00
            }
            ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV4);
            if (ptr) {
                gsconf->u.inet.has_ipv4 = true;
            }
            ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV6);
            if (ptr) {
                gsconf->u.inet.has_ipv6 = true;
            }
            if (gsconf->u.inet.has_to) {
                error_setg(&local_err, "Parameter 'to' not supported");
                goto out;
            }
            if (gsconf->u.inet.has_ipv4 || gsconf->u.inet.has_ipv6) {
                error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
                goto out;
            }
            qemu_opts_del(opts);
        } else {
            /* create opts info from runtime_unix_opts list */
            opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
            if (!qemu_opts_absorb_qdict(opts, backing_options, errp)) {
                goto out;
            }

            ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
            if (!ptr) {
                ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
            } else if (qemu_opt_get(opts, GLUSTER_OPT_SOCKET)) {
                error_setg(&local_err,
                           "Conflicting parameters 'path' and 'socket'");
                error_append_hint(&local_err, GERR_INDEX_HINT, i);
                goto out;
            }
            if (!ptr) {
                error_setg(&local_err, QERR_MISSING_PARAMETER,
                           GLUSTER_OPT_PATH);
                error_append_hint(&local_err, GERR_INDEX_HINT, i);
                goto out;
            }
            gsconf->u.q_unix.path = g_strdup(ptr);
            qemu_opts_del(opts);
        }
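
        /* The server list now owns gsconf; clearing the local pointer keeps
         * the error path below from freeing the same object a second time. */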
        QAPI_LIST_APPEND(tail, gsconf);
        gsconf = NULL;

        qobject_unref(backing_options);
        backing_options = NULL;
        g_free(str);
        str = NULL;
    }

    return 0;

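/* Failure path: release whatever was allocated so far and report EINVAL. */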
out:
    error_propagate(errp, local_err);
    qapi_free_SocketAddress(gsconf);
    qemu_opts_del(opts);
    g_free(str);
    qobject_unref(backing_options);
    errno = EINVAL;
    return -errno;
}

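/*
 * Illustrative examples only, reusing the placeholder values (1.2.3.4,
 * testvol, a.qcow2) from the usage hints in qemu_gluster_parse() below:
 * the same image can be given either as a URI, handled by
 * qemu_gluster_parse_uri(), or as structured options, handled by
 * qemu_gluster_parse_json():
 *
 *   file=gluster://1.2.3.4/testvol/a.qcow2
 *
 *   -drive driver=qcow2,file.driver=gluster,file.volume=testvol,
 *          file.path=/a.qcow2,file.server.0.type=inet,
 *          file.server.0.host=1.2.3.4,file.server.0.port=24007
 */
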
/* Converts options given in @filename and the @options QDict into the QAPI
 * object @gconf. */
static int qemu_gluster_parse(BlockdevOptionsGluster *gconf,
                              const char *filename,
                              QDict *options, Error **errp)
{
    int ret;

    if (filename) {
        ret = qemu_gluster_parse_uri(gconf, filename);
        if (ret < 0) {
error_setg(errp, "invalid URI %s", filename);
|
            error_append_hint(errp, "Usage: file=gluster[+transport]://"
"[host[:port]]volume/path[?socket=...]"
|
|
|
|
"[,file.debug=N]"
|
|
|
|
"[,file.logfile=/path/filename.log]\n");
|
2018-01-31 16:27:38 +01:00
|
|
|
return ret;
|
        }
    } else {
        ret = qemu_gluster_parse_json(gconf, options, errp);
        if (ret < 0) {
            error_append_hint(errp, "Usage: "
                                    "-drive driver=qcow2,file.driver=gluster,"
                                    "file.volume=testvol,file.path=/path/a.qcow2"
"[,file.debug=9]"
|
|
|
|
"[,file.logfile=/path/filename.log],"
|
2017-03-06 20:00:48 +01:00
|
|
|
"file.server.0.type=inet,"
|
                                    "file.server.0.host=1.2.3.4,"
                                    "file.server.0.port=24007,"
                                    "file.server.1.transport=unix,"
                                    "file.server.1.path=/var/run/glusterd.socket ..."
                                    "\n");
            return ret;
        }
    }
|
|
|
|
2018-01-31 16:27:38 +01:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
|
|
|
|
const char *filename,
|
|
|
|
QDict *options, Error **errp)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = qemu_gluster_parse(gconf, filename, options, errp);
|
|
|
|
if (ret < 0) {
|
|
|
|
errno = -ret;
|
|
|
|
return NULL;
|
2016-07-19 18:57:33 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
return qemu_gluster_glfs_init(gconf, errp);
|
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:25 +01:00
|
|
|
/*
|
|
|
|
* AIO callback routine called from GlusterFS thread.
|
|
|
|
*/
|
2019-03-05 16:46:34 +01:00
|
|
|
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret,
|
|
|
|
#ifdef CONFIG_GLUSTERFS_IOCB_HAS_STAT
|
|
|
|
struct glfs_stat *pre, struct glfs_stat *post,
|
|
|
|
#endif
|
|
|
|
void *arg)
|
2013-12-21 10:21:25 +01:00
|
|
|
{
|
|
|
|
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
|
|
|
|
|
|
|
|
if (!ret || ret == acb->size) {
|
|
|
|
acb->ret = 0; /* Success */
|
|
|
|
} else if (ret < 0) {
|
2016-04-06 05:11:34 +02:00
|
|
|
acb->ret = -errno; /* Read/Write failed */
|
2013-12-21 10:21:25 +01:00
|
|
|
} else {
|
|
|
|
acb->ret = -EIO; /* Partial read/write - fail it */
|
|
|
|
}
|
|
|
|
|
2017-02-13 14:52:31 +01:00
|
|
|
aio_co_schedule(acb->aio_context, acb->coroutine);
|
2013-12-21 10:21:25 +01:00
|
|
|
}
|
|
|
|
|
2014-02-17 17:11:11 +01:00
|
|
|
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
|
|
|
|
{
|
|
|
|
assert(open_flags != NULL);
|
|
|
|
|
|
|
|
*open_flags |= O_BINARY;
|
|
|
|
|
|
|
|
if (bdrv_flags & BDRV_O_RDWR) {
|
|
|
|
*open_flags |= O_RDWR;
|
|
|
|
} else {
|
|
|
|
*open_flags |= O_RDONLY;
|
|
|
|
}
|
|
|
|
|
|
|
|
if ((bdrv_flags & BDRV_O_NOCACHE)) {
|
|
|
|
*open_flags |= O_DIRECT;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
/*
|
|
|
|
* Do SEEK_DATA/HOLE to detect if it is functional. Older broken versions of
|
|
|
|
* gfapi incorrectly return the current offset when SEEK_DATA/HOLE is used.
|
|
|
|
* - Corrected versions return -1 and set errno to EINVAL.
|
|
|
|
* - Versions that support SEEK_DATA/HOLE correctly, will return -1 and set
|
|
|
|
* errno to ENXIO when SEEK_DATA is called with a position of EOF.
|
|
|
|
*/
|
|
|
|
static bool qemu_gluster_test_seek(struct glfs_fd *fd)
|
|
|
|
{
|
2016-10-07 23:48:12 +02:00
|
|
|
off_t ret = 0;
|
|
|
|
|
|
|
|
#if defined SEEK_HOLE && defined SEEK_DATA
|
|
|
|
off_t eof;
|
2016-03-10 19:38:00 +01:00
|
|
|
|
|
|
|
eof = glfs_lseek(fd, 0, SEEK_END);
|
|
|
|
if (eof < 0) {
|
|
|
|
/* this should never occur */
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* this should always fail with ENXIO if SEEK_DATA is supported */
|
|
|
|
ret = glfs_lseek(fd, eof, SEEK_DATA);
|
2016-10-07 23:48:12 +02:00
|
|
|
#endif
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
return (ret < 0) && (errno == ENXIO);
|
|
|
|
}
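The same probe can be expressed against a plain POSIX file descriptor, which may help when checking a FUSE-mounted volume by hand; this is a sketch under the assumption of a readable test file, not code from this driver:
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stdbool.h>

/* Returns true when SEEK_DATA at EOF fails with ENXIO, i.e. the filesystem
 * implements SEEK_DATA/SEEK_HOLE correctly instead of returning the offset. */
static bool posix_test_seek(const char *path)
{
    bool ok = false;
    int fd = open(path, O_RDONLY);

    if (fd < 0) {
        return false;
    }
#if defined SEEK_HOLE && defined SEEK_DATA
    {
        off_t eof = lseek(fd, 0, SEEK_END);
        if (eof >= 0) {
            ok = (lseek(fd, eof, SEEK_DATA) < 0) && (errno == ENXIO);
        }
    }
#endif
    close(fd);
    return ok;
}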
|
|
|
|
|
2013-04-12 20:02:37 +02:00
|
|
|
static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
|
2013-09-05 14:22:29 +02:00
|
|
|
int bdrv_flags, Error **errp)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
2014-02-17 17:11:11 +01:00
|
|
|
int open_flags = 0;
|
2012-09-27 16:00:32 +02:00
|
|
|
int ret = 0;
|
2016-07-19 18:57:32 +02:00
|
|
|
BlockdevOptionsGluster *gconf = NULL;
|
2013-04-12 17:50:16 +02:00
|
|
|
QemuOpts *opts;
|
block/gluster: add support to choose libgfapi logfile
Currently all libgfapi logs default to '/dev/stderr', as that path was hardcoded
in the call to the glfs logging API. When the debug level is set to DEBUG/TRACE,
the gfapi logs become huge and overflow the console view.
This patch adds a command-line option to specify a log file path, which directs
the logs to that file and also helps in persisting the gfapi logs.
Usage:
-----
*URI Style:
---------
-drive file=gluster://hostname/volname/image.qcow2,file.debug=9,\
file.logfile=/var/log/qemu/qemu-gfapi.log
*JSON Style:
----------
'json:{
"driver":"qcow2",
"file":{
"driver":"gluster",
"volume":"volname",
"path":"image.qcow2",
"debug":"9",
"logfile":"/var/log/qemu/qemu-gfapi.log",
"server":[
{
"type":"tcp",
"host":"1.2.3.4",
"port":24007
},
{
"type":"unix",
"socket":"/var/run/glusterd.socket"
}
]
}
}'
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-07-22 16:56:48 +02:00
|
|
|
const char *filename, *logfile;
|
2013-04-12 17:50:16 +02:00
|
|
|
|
2014-01-02 03:49:17 +01:00
|
|
|
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
|
2020-07-07 18:06:03 +02:00
|
|
|
if (!qemu_opts_absorb_qdict(opts, options, errp)) {
|
2013-04-12 17:50:16 +02:00
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-04-07 23:24:19 +02:00
|
|
|
filename = qemu_opt_get(opts, GLUSTER_OPT_FILENAME);
|
|
|
|
|
2016-11-02 17:50:36 +01:00
|
|
|
s->debug = qemu_opt_get_number(opts, GLUSTER_OPT_DEBUG,
|
|
|
|
GLUSTER_DEBUG_DEFAULT);
|
|
|
|
if (s->debug < 0) {
|
|
|
|
s->debug = 0;
|
|
|
|
} else if (s->debug > GLUSTER_DEBUG_MAX) {
|
|
|
|
s->debug = GLUSTER_DEBUG_MAX;
|
2016-04-07 23:24:19 +02:00
|
|
|
}
|
2013-04-12 17:50:16 +02:00
|
|
|
|
2016-07-19 18:57:32 +02:00
|
|
|
gconf = g_new0(BlockdevOptionsGluster, 1);
|
2016-11-02 17:50:36 +01:00
|
|
|
gconf->debug = s->debug;
|
|
|
|
gconf->has_debug = true;
|
2016-07-22 16:56:48 +02:00
|
|
|
|
|
|
|
logfile = qemu_opt_get(opts, GLUSTER_OPT_LOGFILE);
|
|
|
|
s->logfile = g_strdup(logfile ? logfile : GLUSTER_LOGFILE_DEFAULT);
|
|
|
|
|
|
|
|
gconf->logfile = g_strdup(s->logfile);
|
|
|
|
|
2016-07-19 18:57:33 +02:00
|
|
|
s->glfs = qemu_gluster_init(gconf, filename, options, errp);
|
2012-09-27 16:00:32 +02:00
|
|
|
if (!s->glfs) {
|
|
|
|
ret = -errno;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2016-04-05 16:40:09 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_XLATOR_OPT
|
|
|
|
/* Without this, if fsync fails for a recoverable reason (for instance,
|
|
|
|
* ENOSPC), gluster will dump its cache, preventing retries. This means
|
|
|
|
* almost certain data loss. Not all gluster versions support the
|
|
|
|
* 'resync-failed-syncs-after-fsync' key value, but there is no way to
|
|
|
|
* discover during runtime if it is supported (this api returns success for
|
|
|
|
* unknown key/value pairs) */
|
|
|
|
ret = glfs_set_xlator_option(s->glfs, "*-write-behind",
|
|
|
|
"resync-failed-syncs-after-fsync",
|
|
|
|
"on");
|
|
|
|
if (ret < 0) {
|
|
|
|
error_setg_errno(errp, errno, "Unable to set xlator key/value pair");
|
|
|
|
ret = -errno;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2014-02-17 17:11:11 +01:00
|
|
|
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
|
2012-09-27 16:00:32 +02:00
|
|
|
|
2016-07-19 18:57:29 +02:00
|
|
|
s->fd = glfs_open(s->glfs, gconf->path, open_flags);
|
2018-10-08 17:27:18 +02:00
|
|
|
ret = s->fd ? 0 : -errno;
|
|
|
|
|
|
|
|
if (ret == -EACCES || ret == -EROFS) {
|
|
|
|
/* Try to degrade to read-only, but if it doesn't work, still use the
|
|
|
|
* normal error message. */
|
2023-09-29 16:51:53 +02:00
|
|
|
bdrv_graph_rdlock_main_loop();
|
2018-10-08 17:27:18 +02:00
|
|
|
if (bdrv_apply_auto_read_only(bs, NULL, NULL) == 0) {
|
|
|
|
open_flags = (open_flags & ~O_RDWR) | O_RDONLY;
|
|
|
|
s->fd = glfs_open(s->glfs, gconf->path, open_flags);
|
|
|
|
ret = s->fd ? 0 : -errno;
|
|
|
|
}
|
2023-09-29 16:51:53 +02:00
|
|
|
bdrv_graph_rdunlock_main_loop();
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
s->supports_seek_data = qemu_gluster_test_seek(s->fd);
|
|
|
|
|
2012-09-27 16:00:32 +02:00
|
|
|
out:
|
2013-04-12 17:50:16 +02:00
|
|
|
qemu_opts_del(opts);
|
2016-07-19 18:57:32 +02:00
|
|
|
qapi_free_BlockdevOptionsGluster(gconf);
|
2012-09-27 16:00:32 +02:00
|
|
|
if (!ret) {
|
|
|
|
return ret;
|
|
|
|
}
|
2016-07-22 16:56:48 +02:00
|
|
|
g_free(s->logfile);
|
2012-09-27 16:00:32 +02:00
|
|
|
if (s->fd) {
|
|
|
|
glfs_close(s->fd);
|
|
|
|
}
|
2016-10-27 17:24:50 +02:00
|
|
|
|
|
|
|
glfs_clear_preopened(s->glfs);
|
|
|
|
|
2012-09-27 16:00:32 +02:00
|
|
|
return ret;
|
|
|
|
}
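For reference, the logfile and debug values collected in this function are ultimately what gfapi's glfs_set_logging() consumes; a minimal sketch with an illustrative path and level (both assumptions, not values from this file):
/* Sketch: direct libgfapi messages to a file at a given verbosity (0-9 per
 * the commit message above); the driver's default keeps them on stderr. */
static int redirect_gfapi_logs(struct glfs *fs)
{
    return glfs_set_logging(fs, "/var/log/qemu/qemu-gfapi.log", 9);
}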
|
|
|
|
|
2019-03-28 11:52:27 +01:00
|
|
|
static void qemu_gluster_refresh_limits(BlockDriverState *bs, Error **errp)
|
|
|
|
{
|
|
|
|
bs->bl.max_transfer = GLUSTER_MAX_TRANSFER;
|
2022-05-20 09:59:22 +02:00
|
|
|
/* glfs_discard_async() takes a size_t length, so cap discards at what
 * size_t can hold, and never above INT64_MAX. */
bs->bl.max_pdiscard = MIN(SIZE_MAX, INT64_MAX);
|
2019-03-28 11:52:27 +01:00
|
|
|
}
|
|
|
|
|
2014-02-17 17:11:12 +01:00
|
|
|
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
|
|
|
|
BlockReopenQueue *queue, Error **errp)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
2016-04-07 23:24:19 +02:00
|
|
|
BDRVGlusterState *s;
|
2014-02-17 17:11:12 +01:00
|
|
|
BDRVGlusterReopenState *reop_s;
|
2016-07-19 18:57:32 +02:00
|
|
|
BlockdevOptionsGluster *gconf;
|
2014-02-17 17:11:12 +01:00
|
|
|
int open_flags = 0;
|
|
|
|
|
|
|
|
assert(state != NULL);
|
|
|
|
assert(state->bs != NULL);
|
|
|
|
|
2016-04-07 23:24:19 +02:00
|
|
|
s = state->bs->opaque;
|
|
|
|
|
block: Use g_new() & friends where that makes obvious sense
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
Patch created with Coccinelle, with two manual changes on top:
* Add const to bdrv_iterate_format() to keep the types straight
* Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
inexplicably misses
Coccinelle semantic patch:
@@
type T;
@@
-g_malloc(sizeof(T))
+g_new(T, 1)
@@
type T;
@@
-g_try_malloc(sizeof(T))
+g_try_new(T, 1)
@@
type T;
@@
-g_malloc0(sizeof(T))
+g_new0(T, 1)
@@
type T;
@@
-g_try_malloc0(sizeof(T))
+g_try_new0(T, 1)
@@
type T;
expression n;
@@
-g_malloc(sizeof(T) * (n))
+g_new(T, n)
@@
type T;
expression n;
@@
-g_try_malloc(sizeof(T) * (n))
+g_try_new(T, n)
@@
type T;
expression n;
@@
-g_malloc0(sizeof(T) * (n))
+g_new0(T, n)
@@
type T;
expression n;
@@
-g_try_malloc0(sizeof(T) * (n))
+g_try_new0(T, n)
@@
type T;
expression p, n;
@@
-g_realloc(p, sizeof(T) * (n))
+g_renew(T, p, n)
@@
type T;
expression p, n;
@@
-g_try_realloc(p, sizeof(T) * (n))
+g_try_renew(T, p, n)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-08-19 10:31:08 +02:00
|
|
|
state->opaque = g_new0(BDRVGlusterReopenState, 1);
|
2014-02-17 17:11:12 +01:00
|
|
|
reop_s = state->opaque;
|
|
|
|
|
|
|
|
qemu_gluster_parse_flags(state->flags, &open_flags);
|
|
|
|
|
2016-07-19 18:57:32 +02:00
|
|
|
gconf = g_new0(BlockdevOptionsGluster, 1);
|
2016-11-02 17:50:36 +01:00
|
|
|
gconf->debug = s->debug;
|
|
|
|
gconf->has_debug = true;
|
2016-07-22 16:56:48 +02:00
|
|
|
gconf->logfile = g_strdup(s->logfile);
|
2019-07-15 15:28:44 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If 'state->bs->exact_filename' is empty, 'state->options' should contain
|
|
|
|
* the JSON parameters already parsed.
|
|
|
|
*/
|
|
|
|
if (state->bs->exact_filename[0] != '\0') {
|
|
|
|
reop_s->glfs = qemu_gluster_init(gconf, state->bs->exact_filename, NULL,
|
|
|
|
errp);
|
|
|
|
} else {
|
|
|
|
reop_s->glfs = qemu_gluster_init(gconf, NULL, state->options, errp);
|
|
|
|
}
|
2014-02-17 17:11:12 +01:00
|
|
|
if (reop_s->glfs == NULL) {
|
|
|
|
ret = -errno;
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
|
2016-04-05 16:40:09 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_XLATOR_OPT
|
|
|
|
ret = glfs_set_xlator_option(reop_s->glfs, "*-write-behind",
|
|
|
|
"resync-failed-syncs-after-fsync", "on");
|
|
|
|
if (ret < 0) {
|
|
|
|
error_setg_errno(errp, errno, "Unable to set xlator key/value pair");
|
|
|
|
ret = -errno;
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2016-07-19 18:57:29 +02:00
|
|
|
reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
|
2014-02-17 17:11:12 +01:00
|
|
|
if (reop_s->fd == NULL) {
|
|
|
|
/* reops->glfs will be cleaned up in _abort */
|
|
|
|
ret = -errno;
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
|
|
|
|
exit:
|
|
|
|
/* state->opaque will be freed in either the _abort or _commit */
|
2016-07-19 18:57:32 +02:00
|
|
|
qapi_free_BlockdevOptionsGluster(gconf);
|
2014-02-17 17:11:12 +01:00
|
|
|
return ret;
|
|
|
|
}
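The g_new0(BDRVGlusterReopenState, 1) allocation above comes from the Coccinelle conversion quoted earlier; a small standalone illustration of the two safety points that commit message makes, using an invented Item type:
#include <glib.h>

typedef struct Item {
    int id;
    char name[32];
} Item;

static Item *alloc_items(gsize n)
{
    /* g_malloc(sizeof(Item) * n) returns void * and can overflow silently;
     * g_new0() aborts on multiplication overflow and returns Item *, so a
     * mismatched assignment is caught at compile time. */
    return g_new0(Item, n);
}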
|
|
|
|
|
|
|
|
static void qemu_gluster_reopen_commit(BDRVReopenState *state)
|
|
|
|
{
|
|
|
|
BDRVGlusterReopenState *reop_s = state->opaque;
|
|
|
|
BDRVGlusterState *s = state->bs->opaque;
|
|
|
|
|
|
|
|
|
|
|
|
/* close the old */
|
|
|
|
if (s->fd) {
|
|
|
|
glfs_close(s->fd);
|
|
|
|
}
|
2016-10-27 17:24:50 +02:00
|
|
|
|
|
|
|
glfs_clear_preopened(s->glfs);
|
2014-02-17 17:11:12 +01:00
|
|
|
|
|
|
|
/* use the newly opened image / connection */
|
|
|
|
s->fd = reop_s->fd;
|
|
|
|
s->glfs = reop_s->glfs;
|
|
|
|
|
|
|
|
g_free(state->opaque);
|
|
|
|
state->opaque = NULL;
|
|
|
|
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static void qemu_gluster_reopen_abort(BDRVReopenState *state)
|
|
|
|
{
|
|
|
|
BDRVGlusterReopenState *reop_s = state->opaque;
|
|
|
|
|
|
|
|
if (reop_s == NULL) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (reop_s->fd) {
|
|
|
|
glfs_close(reop_s->fd);
|
|
|
|
}
|
|
|
|
|
2016-10-27 17:24:50 +02:00
|
|
|
glfs_clear_preopened(reop_s->glfs);
|
2014-02-17 17:11:12 +01:00
|
|
|
|
|
|
|
g_free(state->opaque);
|
|
|
|
state->opaque = NULL;
|
|
|
|
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:25 +01:00
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
2016-06-01 23:10:08 +02:00
|
|
|
static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
|
2016-07-19 18:57:30 +02:00
|
|
|
int64_t offset,
|
block: use int64_t instead of int in driver write_zeroes handlers
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.
Main motivation is realization of 64-bit write_zeroes operation for
fast zeroing large disk chunks, up to the whole disk.
We chose signed type, to be consistent with off_t (which is signed) and
with possibility for signed return type (where negative value means
error).
So, convert driver write_zeroes handlers bytes parameter to int64_t.
The only caller of all the updated functions is bdrv_co_do_pwrite_zeroes().
bdrv_co_do_pwrite_zeroes() itself is of course OK with widening of
callee parameter type. Also, bdrv_co_do_pwrite_zeroes()'s
max_write_zeroes is limited to INT_MAX. So, updated functions all are
safe, they will not get "bytes" larger than before.
Still, let's look through all updated functions, and add assertions to
the ones which are actually unprepared to values larger than INT_MAX.
For these drivers also set explicit max_pwrite_zeroes limit.
Let's go:
blkdebug: calculations can't overflow, thanks to
bdrv_check_qiov_request() in generic layer. rule_check() and
bdrv_co_pwrite_zeroes() both have 64bit argument.
blklogwrites: pass to blk_log_writes_co_log() with 64bit argument.
blkreplay, copy-on-read, filter-compress: pass to
bdrv_co_pwrite_zeroes() which is OK
copy-before-write: Calls cbw_do_copy_before_write() and
bdrv_co_pwrite_zeroes, both have 64bit argument.
file-posix: both handlers call raw_do_pwrite_zeroes, which is updated.
In raw_do_pwrite_zeroes() calculations are OK due to
bdrv_check_qiov_request(), bytes go to RawPosixAIOData::aio_nbytes
which is uint64_t.
Check also where that uint64_t gets handed:
handle_aiocb_write_zeroes_block() passes a uint64_t[2] to
ioctl(BLKZEROOUT), handle_aiocb_write_zeroes() calls do_fallocate()
which takes off_t (and we compile to always have 64-bit off_t), as
does handle_aiocb_write_zeroes_unmap. All look safe.
gluster: bytes go to GlusterAIOCB::size which is int64_t and to
glfs_zerofill_async works with off_t.
iscsi: Aha, here we deal with iscsi_writesame16_task() that has
uint32_t num_blocks argument and iscsi_writesame16_task() has
uint16_t argument. Make comments, add assertions and clarify
max_pwrite_zeroes calculation.
iscsi_allocmap_() functions already have an int64_t argument
is_byte_request_lun_aligned is simple to update, do it.
mirror_top: pass to bdrv_mirror_top_do_write which has uint64_t
argument
nbd: Aha, here we have protocol limitation, and NBDRequest::len is
uint32_t. max_pwrite_zeroes is cleanly set to 32bit value, so we are
OK for now.
nvme: Again, protocol limitation. And no inherent limit for
write-zeroes at all. But from code that calculates cdw12 it's obvious
that we do have limit and alignment. Let's clarify it. Also,
obviously the code is not prepared to handle bytes=0. Let's handle
this case too.
trace events already 64bit
preallocate: pass to handle_write() and bdrv_co_pwrite_zeroes(), both
64bit.
rbd: pass to qemu_rbd_start_co() which is 64bit.
qcow2: offset + bytes and alignment still work well (thanks to
bdrv_check_qiov_request()), so tail calculation is OK
qcow2_subcluster_zeroize() has 64bit argument, should be OK
trace events updated
qed: qed_co_request wants int nb_sectors. Also in code we have size_t
used for request length which may be 32bit. So, let's just keep
INT_MAX as a limit (aligning it down to pwrite_zeroes_alignment) and
don't care.
raw-format: Is OK. raw_adjust_offset and bdrv_co_pwrite_zeroes are both
64bit.
throttle: Both throttle_group_co_io_limits_intercept() and
bdrv_co_pwrite_zeroes() are 64bit.
vmdk: pass to vmdk_pwritev which is 64bit
quorum: pass to quorum_co_pwritev() which is 64bit
Hooray!
At this point all block drivers are prepared to support 64bit
write-zero requests, or have explicitly set max_pwrite_zeroes.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-8-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: use <= rather than < in assertions relying on max_pwrite_zeroes]
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-09-03 12:28:03 +02:00
|
|
|
int64_t bytes,
|
2016-07-19 18:57:30 +02:00
|
|
|
BdrvRequestFlags flags)
|
2013-12-21 10:21:25 +01:00
|
|
|
{
|
|
|
|
int ret;
|
2015-10-01 13:04:38 +02:00
|
|
|
GlusterAIOCB acb;
|
2013-12-21 10:21:25 +01:00
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
|
2021-09-03 12:28:03 +02:00
|
|
|
acb.size = bytes;
|
2015-10-01 13:04:38 +02:00
|
|
|
acb.ret = 0;
|
|
|
|
acb.coroutine = qemu_coroutine_self();
|
|
|
|
acb.aio_context = bdrv_get_aio_context(bs);
|
2013-12-21 10:21:25 +01:00
|
|
|
|
2021-09-03 12:28:03 +02:00
|
|
|
ret = glfs_zerofill_async(s->fd, offset, bytes, gluster_finish_aiocb, &acb);
|
2013-12-21 10:21:25 +01:00
|
|
|
if (ret < 0) {
|
2015-10-01 13:04:38 +02:00
|
|
|
return -errno;
|
2013-12-21 10:21:25 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
qemu_coroutine_yield();
|
2015-10-01 13:04:38 +02:00
|
|
|
return acb.ret;
|
2013-12-21 10:21:25 +01:00
|
|
|
}
|
|
|
|
#endif
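The int64_t conversion described in the commit message above expects a driver either to cope with 64-bit lengths, as the zerofill path here does, or to advertise an explicit limit. A rough sketch of the limiting pattern follows, assuming the declarations from QEMU's block_int.h and an invented driver name:
static void example_refresh_limits(BlockDriverState *bs, Error **errp)
{
    /* Advertise a 32-bit ceiling so the generic layer never hands this
     * driver a write-zeroes request wider than it can represent. */
    bs->bl.max_pwrite_zeroes = INT_MAX;
}

static coroutine_fn int example_co_pwrite_zeroes(BlockDriverState *bs,
                                                 int64_t offset, int64_t bytes,
                                                 BdrvRequestFlags flags)
{
    assert(bytes <= INT_MAX); /* guaranteed by max_pwrite_zeroes above */
    /* ... issue the request with a 32-bit length ... */
    return 0;
}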
|
|
|
|
|
2018-02-13 14:03:51 +01:00
|
|
|
static int qemu_gluster_do_truncate(struct glfs_fd *fd, int64_t offset,
|
|
|
|
PreallocMode prealloc, Error **errp)
|
|
|
|
{
|
2018-02-13 14:03:52 +01:00
|
|
|
int64_t current_length;
|
|
|
|
|
|
|
|
current_length = glfs_lseek(fd, 0, SEEK_END);
|
|
|
|
if (current_length < 0) {
|
|
|
|
error_setg_errno(errp, errno, "Failed to determine current size");
|
|
|
|
return -errno;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (current_length > offset && prealloc != PREALLOC_MODE_OFF) {
|
|
|
|
error_setg(errp, "Cannot use preallocation for shrinking files");
|
|
|
|
return -ENOTSUP;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (current_length == offset) {
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-02-13 14:03:51 +01:00
|
|
|
switch (prealloc) {
|
|
|
|
#ifdef CONFIG_GLUSTERFS_FALLOCATE
|
|
|
|
case PREALLOC_MODE_FALLOC:
|
2018-02-13 14:03:52 +01:00
|
|
|
if (glfs_fallocate(fd, 0, current_length, offset - current_length)) {
|
2018-02-13 14:03:51 +01:00
|
|
|
error_setg_errno(errp, errno, "Could not preallocate data");
|
|
|
|
return -errno;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
#endif /* CONFIG_GLUSTERFS_FALLOCATE */
|
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
|
|
|
case PREALLOC_MODE_FULL:
|
|
|
|
if (glfs_ftruncate(fd, offset)) {
|
|
|
|
error_setg_errno(errp, errno, "Could not resize file");
|
|
|
|
return -errno;
|
|
|
|
}
|
2018-02-13 14:03:52 +01:00
|
|
|
if (glfs_zerofill(fd, current_length, offset - current_length)) {
|
2018-02-13 14:03:51 +01:00
|
|
|
error_setg_errno(errp, errno, "Could not zerofill the new area");
|
|
|
|
return -errno;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
#endif /* CONFIG_GLUSTERFS_ZEROFILL */
|
|
|
|
case PREALLOC_MODE_OFF:
|
|
|
|
if (glfs_ftruncate(fd, offset)) {
|
|
|
|
error_setg_errno(errp, errno, "Could not resize file");
|
|
|
|
return -errno;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
error_setg(errp, "Unsupported preallocation mode: %s",
|
|
|
|
PreallocMode_str(prealloc));
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2018-01-31 16:27:38 +01:00
|
|
|
static int qemu_gluster_co_create(BlockdevCreateOptions *options,
|
|
|
|
Error **errp)
|
|
|
|
{
|
|
|
|
BlockdevCreateOptionsGluster *opts = &options->u.gluster;
|
|
|
|
struct glfs *glfs;
|
|
|
|
struct glfs_fd *fd = NULL;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
assert(options->driver == BLOCKDEV_DRIVER_GLUSTER);
|
|
|
|
|
|
|
|
glfs = qemu_gluster_glfs_init(opts->location, errp);
|
|
|
|
if (!glfs) {
|
|
|
|
ret = -errno;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
fd = glfs_creat(glfs, opts->location->path,
|
|
|
|
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
|
|
|
|
if (!fd) {
|
|
|
|
ret = -errno;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = qemu_gluster_do_truncate(fd, opts->size, opts->preallocation, errp);
|
|
|
|
|
|
|
|
out:
|
|
|
|
if (fd) {
|
|
|
|
if (glfs_close(fd) != 0 && ret == 0) {
|
|
|
|
ret = -errno;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
glfs_clear_preopened(glfs);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2020-03-26 02:12:17 +01:00
|
|
|
static int coroutine_fn qemu_gluster_co_create_opts(BlockDriver *drv,
|
|
|
|
const char *filename,
|
2018-01-18 13:43:45 +01:00
|
|
|
QemuOpts *opts,
|
|
|
|
Error **errp)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
2018-01-31 16:27:38 +01:00
|
|
|
BlockdevCreateOptions *options;
|
|
|
|
BlockdevCreateOptionsGluster *gopts;
|
2016-07-19 18:57:32 +02:00
|
|
|
BlockdevOptionsGluster *gconf;
|
2014-06-05 11:20:54 +02:00
|
|
|
char *tmp = NULL;
|
2017-05-28 08:31:14 +02:00
|
|
|
Error *local_err = NULL;
|
2018-01-31 16:27:38 +01:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
options = g_new0(BlockdevCreateOptions, 1);
|
|
|
|
options->driver = BLOCKDEV_DRIVER_GLUSTER;
|
|
|
|
gopts = &options->u.gluster;
|
2012-09-27 16:00:32 +02:00
|
|
|
|
2016-07-19 18:57:32 +02:00
|
|
|
gconf = g_new0(BlockdevOptionsGluster, 1);
|
2018-01-31 16:27:38 +01:00
|
|
|
gopts->location = gconf;
|
|
|
|
|
|
|
|
gopts->size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
|
|
|
|
BDRV_SECTOR_SIZE);
|
|
|
|
|
|
|
|
tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
|
|
|
|
gopts->preallocation = qapi_enum_parse(&PreallocMode_lookup, tmp,
|
|
|
|
PREALLOC_MODE_OFF, &local_err);
|
|
|
|
g_free(tmp);
|
|
|
|
if (local_err) {
|
|
|
|
error_propagate(errp, local_err);
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
2016-11-02 17:50:36 +01:00
|
|
|
gconf->debug = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
|
|
|
|
GLUSTER_DEBUG_DEFAULT);
|
|
|
|
if (gconf->debug < 0) {
|
|
|
|
gconf->debug = 0;
|
|
|
|
} else if (gconf->debug > GLUSTER_DEBUG_MAX) {
|
|
|
|
gconf->debug = GLUSTER_DEBUG_MAX;
|
|
|
|
}
|
|
|
|
gconf->has_debug = true;
|
2016-04-07 23:24:19 +02:00
|
|
|
|
2016-07-22 16:56:48 +02:00
|
|
|
gconf->logfile = qemu_opt_get_del(opts, GLUSTER_OPT_LOGFILE);
|
|
|
|
if (!gconf->logfile) {
|
|
|
|
gconf->logfile = g_strdup(GLUSTER_LOGFILE_DEFAULT);
|
|
|
|
}
|
|
|
|
|
2018-01-31 16:27:38 +01:00
|
|
|
ret = qemu_gluster_parse(gconf, filename, NULL, errp);
|
|
|
|
if (ret < 0) {
|
|
|
|
goto fail;
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2018-01-31 16:27:38 +01:00
|
|
|
ret = qemu_gluster_co_create(options, errp);
|
|
|
|
if (ret < 0) {
|
|
|
|
goto fail;
|
2017-05-28 08:31:14 +02:00
|
|
|
}
|
|
|
|
|
2018-01-31 16:27:38 +01:00
|
|
|
ret = 0;
|
|
|
|
fail:
|
|
|
|
qapi_free_BlockdevCreateOptions(options);
|
2012-09-27 16:00:32 +02:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:24 +01:00
|
|
|
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
|
2016-07-19 18:57:30 +02:00
|
|
|
int64_t sector_num, int nb_sectors,
|
|
|
|
QEMUIOVector *qiov, int write)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
|
|
|
int ret;
|
2015-10-01 13:04:38 +02:00
|
|
|
GlusterAIOCB acb;
|
2012-09-27 16:00:32 +02:00
|
|
|
BDRVGlusterState *s = bs->opaque;
|
2013-12-21 10:21:24 +01:00
|
|
|
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
|
|
|
|
off_t offset = sector_num * BDRV_SECTOR_SIZE;
|
2012-09-27 16:00:32 +02:00
|
|
|
|
2015-10-01 13:04:38 +02:00
|
|
|
acb.size = size;
|
|
|
|
acb.ret = 0;
|
|
|
|
acb.coroutine = qemu_coroutine_self();
|
|
|
|
acb.aio_context = bdrv_get_aio_context(bs);
|
2012-09-27 16:00:32 +02:00
|
|
|
|
|
|
|
if (write) {
|
|
|
|
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
|
2016-07-19 18:57:30 +02:00
|
|
|
gluster_finish_aiocb, &acb);
|
2012-09-27 16:00:32 +02:00
|
|
|
} else {
|
|
|
|
ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
|
2016-07-19 18:57:30 +02:00
|
|
|
gluster_finish_aiocb, &acb);
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
if (ret < 0) {
|
2015-10-01 13:04:38 +02:00
|
|
|
return -errno;
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
2013-12-21 10:21:24 +01:00
|
|
|
|
|
|
|
qemu_coroutine_yield();
|
2015-10-01 13:04:38 +02:00
|
|
|
return acb.ret;
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
block: Convert .bdrv_truncate callback to coroutine_fn
bdrv_truncate() is an operation that can block (even for a quite long
time, depending on the PreallocMode) in I/O paths that shouldn't block.
Convert it to a coroutine_fn so that we have the infrastructure for
drivers to make their .bdrv_co_truncate implementation asynchronous.
This change could potentially introduce new race conditions because
bdrv_truncate() isn't necessarily executed atomically any more. Whether
this is a problem needs to be evaluated for each block driver that
supports truncate:
* file-posix/win32, gluster, iscsi, nfs, rbd, ssh, sheepdog: The
protocol drivers are trivially safe because they don't actually yield
yet, so there is no change in behaviour.
* copy-on-read, crypto, raw-format: Essentially just filter drivers that
pass the request to a child node, no problem.
* qcow2: The implementation modifies metadata, so it needs to hold
s->lock to be safe with concurrent I/O requests. In order to avoid
double locking, this requires pulling the locking out into
preallocate_co() and using qcow2_write_caches() instead of
bdrv_flush().
* qed: Does a single header update, this is fine without locking.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2018-06-21 17:54:35 +02:00
|
|
|
static coroutine_fn int qemu_gluster_co_truncate(BlockDriverState *bs,
|
|
|
|
int64_t offset,
|
2019-09-18 11:51:40 +02:00
|
|
|
bool exact,
|
2018-06-21 17:54:35 +02:00
|
|
|
PreallocMode prealloc,
|
2020-04-24 14:54:39 +02:00
|
|
|
BdrvRequestFlags flags,
|
2018-06-21 17:54:35 +02:00
|
|
|
Error **errp)
|
2013-07-19 16:21:33 +02:00
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
2018-02-13 14:03:53 +01:00
|
|
|
return qemu_gluster_do_truncate(s->fd, offset, prealloc, errp);
|
2013-07-19 16:21:33 +02:00
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:24 +01:00
|
|
|
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
|
2016-07-19 18:57:30 +02:00
|
|
|
int64_t sector_num,
|
|
|
|
int nb_sectors,
|
|
|
|
QEMUIOVector *qiov)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
2013-12-21 10:21:24 +01:00
|
|
|
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:24 +01:00
|
|
|
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
|
2016-07-19 18:57:30 +02:00
|
|
|
int64_t sector_num,
|
|
|
|
int nb_sectors,
|
2018-04-25 00:01:57 +02:00
|
|
|
QEMUIOVector *qiov,
|
|
|
|
int flags)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
2013-12-21 10:21:24 +01:00
|
|
|
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2016-04-15 22:29:06 +02:00
|
|
|
static void qemu_gluster_close(BlockDriverState *bs)
|
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
|
2016-07-22 16:56:48 +02:00
|
|
|
g_free(s->logfile);
|
2016-04-15 22:29:06 +02:00
|
|
|
if (s->fd) {
|
|
|
|
glfs_close(s->fd);
|
|
|
|
s->fd = NULL;
|
|
|
|
}
|
2016-10-27 17:24:50 +02:00
|
|
|
glfs_clear_preopened(s->glfs);
|
2016-04-15 22:29:06 +02:00
|
|
|
}
|
|
|
|
|
2013-12-21 10:21:24 +01:00
|
|
|
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
|
|
|
int ret;
|
2015-10-01 13:04:38 +02:00
|
|
|
GlusterAIOCB acb;
|
2012-09-27 16:00:32 +02:00
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
|
2015-10-01 13:04:38 +02:00
|
|
|
acb.size = 0;
|
|
|
|
acb.ret = 0;
|
|
|
|
acb.coroutine = qemu_coroutine_self();
|
|
|
|
acb.aio_context = bdrv_get_aio_context(bs);
|
2012-09-27 16:00:32 +02:00
|
|
|
|
2015-10-01 13:04:38 +02:00
|
|
|
ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, &acb);
|
2012-09-27 16:00:32 +02:00
|
|
|
if (ret < 0) {
|
2016-04-05 16:40:09 +02:00
|
|
|
ret = -errno;
|
|
|
|
goto error;
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
2013-12-21 10:21:24 +01:00
|
|
|
|
|
|
|
qemu_coroutine_yield();
|
2016-04-05 16:40:09 +02:00
|
|
|
if (acb.ret < 0) {
|
|
|
|
ret = acb.ret;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
2015-10-01 13:04:38 +02:00
|
|
|
return acb.ret;
|
2016-04-05 16:40:09 +02:00
|
|
|
|
|
|
|
error:
|
|
|
|
/* Some versions of Gluster (3.5.6 -> 3.5.8?) will not retain its cache
|
|
|
|
* after a fsync failure, so we have no way of allowing the guest to safely
|
|
|
|
* continue. Gluster versions prior to 3.5.6 don't retain the cache
|
|
|
|
* either, but will invalidate the fd on error, so this is again our only
|
|
|
|
* option.
|
|
|
|
*
|
|
|
|
* The 'resync-failed-syncs-after-fsync' xlator option for the
|
|
|
|
* write-behind cache will cause later gluster versions to retain its
|
|
|
|
* cache after error, so long as the fd remains open. However, we
|
|
|
|
* currently have no way of knowing if this option is supported.
|
|
|
|
*
|
|
|
|
* TODO: Once gluster provides a way for us to determine if the option
|
|
|
|
* is supported, bypass the closure and setting drv to NULL. */
|
|
|
|
qemu_gluster_close(bs);
|
|
|
|
bs->drv = NULL;
|
|
|
|
return ret;
|
2012-09-27 16:00:32 +02:00
|
|
|
}
|
|
|
|
|
2013-07-16 18:17:42 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_DISCARD
|
2016-07-16 01:23:00 +02:00
|
|
|
static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs,
|
block: use int64_t instead of int in driver discard handlers
We are generally moving to int64_t for both offset and bytes parameters
on all io paths.
Main motivation is realization of 64-bit write_zeroes operation for
fast zeroing large disk chunks, up to the whole disk.
We chose signed type, to be consistent with off_t (which is signed) and
with possibility for signed return type (where negative value means
error).
So, convert driver discard handlers bytes parameter to int64_t.
The only caller of all the updated functions is bdrv_co_pdiscard in
block/io.c. It is already prepared to work with 64bit requests, but
passes at most max(bs->bl.max_pdiscard, INT_MAX) to the driver.
Let's look at all updated functions:
blkdebug: all calculations are still OK, thanks to
bdrv_check_qiov_request().
both rule_check and bdrv_co_pdiscard are 64bit
blklogwrites: pass to blk_log_writes_co_log which is 64bit
blkreplay, copy-on-read, filter-compress: pass to bdrv_co_pdiscard, OK
copy-before-write: pass to bdrv_co_pdiscard which is 64bit and to
cbw_do_copy_before_write which is 64bit
file-posix: one handler calls raw_account_discard(), which is 64bit, and both
handlers call raw_do_pdiscard(). Update raw_do_pdiscard, which passes the bytes
to RawPosixAIOData::aio_nbytes, which is 64bit (and calls
raw_account_discard())
gluster: somehow, third argument of glfs_discard_async is size_t.
Let's set max_pdiscard accordingly.
iscsi: iscsi_allocmap_set_invalid is 64bit,
!is_byte_request_lun_aligned is 64bit.
list.num is uint32_t. Let's clarify max_pdiscard and
pdiscard_alignment.
mirror_top: pass to bdrv_mirror_top_do_write() which is
64bit
nbd: protocol limitation. max_pdiscard is already set strictly enough,
keep it as is for now.
nvme: buf.nlb is uint32_t and we do shift. So, add corresponding limits
to nvme_refresh_limits().
preallocate: pass to bdrv_co_pdiscard() which is 64bit.
rbd: pass to qemu_rbd_start_co() which is 64bit.
qcow2: calculations are still OK, thanks to bdrv_check_qiov_request(),
qcow2_cluster_discard() is 64bit.
raw-format: raw_adjust_offset() is 64bit, bdrv_co_pdiscard too.
throttle: pass to bdrv_co_pdiscard() which is 64bit and to
throttle_group_co_io_limits_intercept() which is 64bit as well.
test-block-iothread: bytes argument is unused
Great! Now all drivers are prepared to handle 64bit discard requests,
or else have explicit max_pdiscard limits.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-11-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-09-03 12:28:06 +02:00
|
|
|
int64_t offset, int64_t bytes)
|
2013-07-16 18:17:42 +02:00
|
|
|
{
|
|
|
|
int ret;
|
2015-10-01 13:04:38 +02:00
|
|
|
GlusterAIOCB acb;
|
2013-07-16 18:17:42 +02:00
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
|
Signed-off-by: Eric Blake <eblake@redhat.com>
2021-09-03 12:28:06 +02:00
|
|
|
assert(bytes <= SIZE_MAX); /* rely on max_pdiscard */
|
|
|
|
|
2015-10-01 13:04:38 +02:00
|
|
|
acb.size = 0;
|
|
|
|
acb.ret = 0;
|
|
|
|
acb.coroutine = qemu_coroutine_self();
|
|
|
|
acb.aio_context = bdrv_get_aio_context(bs);
|
2013-07-16 18:17:42 +02:00
|
|
|
|
2021-09-03 12:28:06 +02:00
|
|
|
ret = glfs_discard_async(s->fd, offset, bytes, gluster_finish_aiocb, &acb);
|
2013-07-16 18:17:42 +02:00
|
|
|
if (ret < 0) {
|
2015-10-01 13:04:38 +02:00
|
|
|
return -errno;
|
2013-07-16 18:17:42 +02:00
|
|
|
}
|
2013-12-21 10:21:24 +01:00
|
|
|
|
|
|
|
qemu_coroutine_yield();
|
2015-10-01 13:04:38 +02:00
|
|
|
return acb.ret;
|
2013-07-16 18:17:42 +02:00
|
|
|
}
|
|
|
|
#endif
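Editorial note: qemu_gluster_co_pdiscard() above shows the driver's usual
async-to-coroutine pattern: fill in a GlusterAIOCB, fire the glfs_*_async()
call with gluster_finish_aiocb as its completion callback, yield, and return
acb.ret once the callback has woken the coroutine. gluster_finish_aiocb() is
defined earlier in this file; the sketch below only illustrates the pattern
(its exact signature depends on the glusterfs API version and is assumed
here):

/* Sketch only: record the result and reschedule the waiting coroutine. */
static void gluster_finish_aiocb_sketch(struct glfs_fd *fd, ssize_t ret,
                                        void *arg)
{
    GlusterAIOCB *acb = arg;

    if (!ret || ret == acb->size) {
        acb->ret = 0;          /* full success */
    } else if (ret < 0) {
        acb->ret = -errno;     /* the glfs call failed */
    } else {
        acb->ret = -EIO;       /* short transfer: treat as an I/O error */
    }

    /* Wake the coroutine parked in qemu_coroutine_yield(), in its own
     * AioContext. */
    aio_co_schedule(acb->aio_context, acb->coroutine);
}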
|
|
|
|
|
2023-01-13 21:42:04 +01:00
|
|
|
static int64_t coroutine_fn qemu_gluster_co_getlength(BlockDriverState *bs)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
int64_t ret;
|
|
|
|
|
|
|
|
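/* glfs_lseek() to SEEK_END returns the current end-of-file offset,
 * i.e. the image length in bytes. */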
ret = glfs_lseek(s->fd, 0, SEEK_END);
|
|
|
|
if (ret < 0) {
|
|
|
|
return -errno;
|
|
|
|
} else {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-01-13 21:42:07 +01:00
|
|
|
static int64_t coroutine_fn
|
|
|
|
qemu_gluster_co_get_allocated_file_size(BlockDriverState *bs)
|
2012-09-27 16:00:32 +02:00
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
struct stat st;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = glfs_fstat(s->fd, &st);
|
|
|
|
if (ret < 0) {
|
|
|
|
return -errno;
|
|
|
|
} else {
|
|
|
|
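/* st_blocks counts 512-byte units (independent of st_blksize), hence the
 * fixed multiplier. */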
return st.st_blocks * 512;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
/*
|
|
|
|
* Find allocation range in @bs around offset @start.
|
|
|
|
* May change underlying file descriptor's file offset.
|
|
|
|
* If @start is not in a hole, store @start in @data, and the
|
|
|
|
* beginning of the next hole in @hole, and return 0.
|
|
|
|
* If @start is in a non-trailing hole, store @start in @hole and the
|
|
|
|
* beginning of the next non-hole in @data, and return 0.
|
|
|
|
* If @start is in a trailing hole or beyond EOF, return -ENXIO.
|
|
|
|
* If we can't find out, return a negative errno other than -ENXIO.
|
|
|
|
*
|
2018-07-12 21:51:20 +02:00
|
|
|
* (Shamefully copied from file-posix.c, only minuscule adaptations.)
|
2016-03-10 19:38:00 +01:00
|
|
|
*/
|
|
|
|
static int find_allocation(BlockDriverState *bs, off_t start,
|
|
|
|
off_t *data, off_t *hole)
|
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
|
|
|
|
|
|
|
if (!s->supports_seek_data) {
|
2016-10-07 23:48:12 +02:00
|
|
|
goto exit;
|
2016-03-10 19:38:00 +01:00
|
|
|
}
|
|
|
|
|
2016-10-07 23:48:12 +02:00
|
|
|
#if defined SEEK_HOLE && defined SEEK_DATA
|
|
|
|
off_t offs;
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
/*
|
|
|
|
* SEEK_DATA cases:
|
|
|
|
* D1. offs == start: start is in data
|
|
|
|
* D2. offs > start: start is in a hole, next data at offs
|
|
|
|
* D3. offs < 0, errno = ENXIO: either start is in a trailing hole
|
|
|
|
* or start is beyond EOF
|
|
|
|
* If the latter happens, the file has been truncated behind
|
|
|
|
* our back since we opened it. All bets are off then.
|
|
|
|
* Treating like a trailing hole is simplest.
|
|
|
|
* D4. offs < 0, errno != ENXIO: we learned nothing
|
|
|
|
*/
|
|
|
|
offs = glfs_lseek(s->fd, start, SEEK_DATA);
|
|
|
|
if (offs < 0) {
|
|
|
|
return -errno; /* D3 or D4 */
|
|
|
|
}
|
block/gluster: glfs_lseek() workaround
On current released versions of glusterfs, glfs_lseek() will sometimes
return invalid values for SEEK_DATA or SEEK_HOLE. For SEEK_DATA and
SEEK_HOLE, the returned value should be >= the passed offset, or < 0 in
the case of error:
LSEEK(2):
off_t lseek(int fd, off_t offset, int whence);
[...]
SEEK_HOLE
Adjust the file offset to the next hole in the file greater
than or equal to offset. If offset points into the middle of
a hole, then the file offset is set to offset. If there is no
hole past offset, then the file offset is adjusted to the end
of the file (i.e., there is an implicit hole at the end of
any file).
[...]
RETURN VALUE
Upon successful completion, lseek() returns the resulting
offset location as measured in bytes from the beginning of the
file. On error, the value (off_t) -1 is returned and errno is
set to indicate the error
However, occasionally glfs_lseek() for SEEK_HOLE/DATA will return a
value less than the passed offset, yet greater than zero.
For instance, here are example values observed from this call:
offs = glfs_lseek(s->fd, start, SEEK_HOLE);
if (offs < 0) {
return -errno; /* D1 and (H3 or H4) */
}
start == 7608336384
offs == 7607877632
This causes QEMU to abort on the assert test. When this value is
returned, errno is also 0.
This is a known bug reported against glusterfs:
https://bugzilla.redhat.com/show_bug.cgi?id=1425293
Although this is being fixed in gluster, we should still work around it
in QEMU, given that multiple released versions of gluster behave this
way.
This patch treats the return case of (offs < start) the same as if an
error value other than ENXIO is returned; we will assume we learned
nothing, and there are no holes in the file.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Message-id: 87c0140e9407c08f6e74b04131b610f2e27c014c.1495560397.git.jcody@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-05-23 19:27:50 +02:00
|
|
|
|
|
|
|
if (offs < start) {
|
|
|
|
/* This is not a valid return by lseek(). We are safe to just return
|
|
|
|
* -EIO in this case, and we'll treat it like D4. Unfortunately some
|
|
|
|
* versions of the gluster server will return offs < start, so an assert
|
|
|
|
* here will unnecessarily abort QEMU. */
|
|
|
|
return -EIO;
|
|
|
|
}
|
2016-03-10 19:38:00 +01:00
|
|
|
|
|
|
|
if (offs > start) {
|
|
|
|
/* D2: in hole, next data at offs */
|
|
|
|
*hole = start;
|
|
|
|
*data = offs;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* D1: in data, end not yet known */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* SEEK_HOLE cases:
|
|
|
|
* H1. offs == start: start is in a hole
|
|
|
|
* If this happens here, a hole has been dug behind our back
|
|
|
|
* since the previous lseek().
|
|
|
|
* H2. offs > start: either start is in data, next hole at offs,
|
|
|
|
* or start is in trailing hole, EOF at offs
|
|
|
|
* Linux treats trailing holes like any other hole: offs ==
|
|
|
|
* start. Solaris seeks to EOF instead: offs > start (blech).
|
|
|
|
* If that happens here, a hole has been dug behind our back
|
|
|
|
* since the previous lseek().
|
|
|
|
* H3. offs < 0, errno = ENXIO: start is beyond EOF
|
|
|
|
* If this happens, the file has been truncated behind our
|
|
|
|
* back since we opened it. Treat it like a trailing hole.
|
|
|
|
* H4. offs < 0, errno != ENXIO: we learned nothing
|
|
|
|
* Pretend we know nothing at all, i.e. "forget" about D1.
|
|
|
|
*/
|
|
|
|
offs = glfs_lseek(s->fd, start, SEEK_HOLE);
|
|
|
|
if (offs < 0) {
|
|
|
|
return -errno; /* D1 and (H3 or H4) */
|
|
|
|
}
|
2017-05-23 19:27:50 +02:00
|
|
|
|
|
|
|
if (offs < start) {
|
|
|
|
/* This is not a valid return by lseek(). We are safe to just return
|
|
|
|
* -EIO in this case, and we'll treat it like H4. Unfortunately some
|
|
|
|
* versions of the gluster server will return offs < start, so an assert
|
|
|
|
* here will unnecessarily abort QEMU. */
|
|
|
|
return -EIO;
|
|
|
|
}
|
2016-03-10 19:38:00 +01:00
|
|
|
|
|
|
|
if (offs > start) {
|
|
|
|
/*
|
|
|
|
* D1 and H2: either in data, next hole at offs, or it was in
|
|
|
|
* data but is now in a trailing hole. In the latter case,
|
|
|
|
* all bets are off. Treating it as if there was data all
|
|
|
|
* the way to EOF is safe, so simply do that.
|
|
|
|
*/
|
|
|
|
*data = start;
|
|
|
|
*hole = offs;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* D1 and H1 */
|
|
|
|
return -EBUSY;
|
2016-10-07 23:48:12 +02:00
|
|
|
#endif
|
|
|
|
|
|
|
|
exit:
|
|
|
|
return -ENOTSUP;
|
2016-03-10 19:38:00 +01:00
|
|
|
}
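Editorial note: find_allocation() bails out early when s->supports_seek_data
is false. That flag is populated when the image is opened; a minimal sketch
of how such a probe can look (helper name and exact handling assumed, not
taken from this listing):

/* Sketch only: does the backing filesystem understand SEEK_DATA? */
static bool gluster_sketch_probe_seek_data(struct glfs_fd *fd)
{
#if defined SEEK_DATA && defined SEEK_HOLE
    /* ENXIO just means there is no data at or after offset 0 (empty file),
     * which still proves SEEK_DATA is understood; EINVAL/ENOTSUP mean the
     * filesystem does not support it. */
    return glfs_lseek(fd, 0, SEEK_DATA) >= 0 || errno == ENXIO;
#else
    return false;
#endif
}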
|
|
|
|
|
|
|
|
/*
|
2018-02-13 21:26:45 +01:00
|
|
|
* Returns the allocation status of the specified offset.
|
2016-03-10 19:38:00 +01:00
|
|
|
*
|
2018-02-13 21:26:45 +01:00
|
|
|
* The block layer guarantees 'offset' and 'bytes' are within bounds.
|
2016-03-10 19:38:00 +01:00
|
|
|
*
|
2018-02-13 21:26:45 +01:00
|
|
|
* 'pnum' is set to the number of bytes (including and immediately following
|
|
|
|
* the specified offset) that are known to be in the same
|
2016-03-10 19:38:00 +01:00
|
|
|
* allocated/unallocated state.
|
|
|
|
*
|
2021-08-12 10:41:47 +02:00
|
|
|
* 'bytes' is a soft cap for 'pnum'. If the information comes at no extra cost, 'pnum' may
|
|
|
|
* well exceed it.
|
2016-03-10 19:38:00 +01:00
|
|
|
*
|
2018-02-13 21:26:45 +01:00
|
|
|
* (Based on raw_co_block_status() from file-posix.c.)
|
2016-03-10 19:38:00 +01:00
|
|
|
*/
|
2018-02-13 21:26:45 +01:00
|
|
|
static int coroutine_fn qemu_gluster_co_block_status(BlockDriverState *bs,
|
|
|
|
bool want_zero,
|
|
|
|
int64_t offset,
|
|
|
|
int64_t bytes,
|
|
|
|
int64_t *pnum,
|
|
|
|
int64_t *map,
|
|
|
|
BlockDriverState **file)
|
2016-03-10 19:38:00 +01:00
|
|
|
{
|
|
|
|
BDRVGlusterState *s = bs->opaque;
|
2018-02-13 21:26:45 +01:00
|
|
|
off_t data = 0, hole = 0;
|
2016-03-10 19:38:00 +01:00
|
|
|
int ret = -EINVAL;
|
|
|
|
|
2021-08-05 16:36:03 +02:00
|
|
|
assert(QEMU_IS_ALIGNED(offset | bytes, bs->bl.request_alignment));
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
if (!s->fd) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2018-02-13 21:26:45 +01:00
|
|
|
if (!want_zero) {
|
|
|
|
*pnum = bytes;
|
|
|
|
*map = offset;
|
|
|
|
*file = bs;
|
|
|
|
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
|
2016-03-10 19:38:00 +01:00
|
|
|
}
|
|
|
|
|
2018-02-13 21:26:45 +01:00
|
|
|
ret = find_allocation(bs, offset, &data, &hole);
|
2016-03-10 19:38:00 +01:00
|
|
|
if (ret == -ENXIO) {
|
|
|
|
/* Trailing hole */
|
2018-02-13 21:26:45 +01:00
|
|
|
*pnum = bytes;
|
2016-03-10 19:38:00 +01:00
|
|
|
ret = BDRV_BLOCK_ZERO;
|
|
|
|
} else if (ret < 0) {
|
|
|
|
/* No info available, so pretend there are no holes */
|
2018-02-13 21:26:45 +01:00
|
|
|
*pnum = bytes;
|
2016-03-10 19:38:00 +01:00
|
|
|
ret = BDRV_BLOCK_DATA;
|
2018-02-13 21:26:45 +01:00
|
|
|
} else if (data == offset) {
|
|
|
|
/* On a data extent, compute bytes to the end of the extent,
|
2016-03-10 19:38:00 +01:00
|
|
|
* possibly including a partial sector at EOF. */
|
2021-08-12 10:41:47 +02:00
|
|
|
*pnum = hole - offset;
|
2021-08-05 16:36:03 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We are not allowed to return partial sectors, though, so
|
|
|
|
* round up if necessary.
|
|
|
|
*/
|
|
|
|
if (!QEMU_IS_ALIGNED(*pnum, bs->bl.request_alignment)) {
|
2023-01-13 21:42:04 +01:00
|
|
|
int64_t file_length = qemu_gluster_co_getlength(bs);
|
2021-08-05 16:36:03 +02:00
|
|
|
if (file_length > 0) {
|
|
|
|
/* Ignore errors, this is just a safeguard */
|
|
|
|
assert(hole == file_length);
|
|
|
|
}
|
|
|
|
*pnum = ROUND_UP(*pnum, bs->bl.request_alignment);
|
|
|
|
}
|
|
|
|
|
2016-03-10 19:38:00 +01:00
|
|
|
ret = BDRV_BLOCK_DATA;
|
|
|
|
} else {
|
2018-02-13 21:26:45 +01:00
|
|
|
/* On a hole, compute bytes to the beginning of the next extent. */
|
|
|
|
assert(hole == offset);
|
2021-08-12 10:41:47 +02:00
|
|
|
*pnum = data - offset;
|
2016-03-10 19:38:00 +01:00
|
|
|
ret = BDRV_BLOCK_ZERO;
|
|
|
|
}
|
|
|
|
|
2018-02-13 21:26:45 +01:00
|
|
|
*map = offset;
|
2016-03-10 19:38:00 +01:00
|
|
|
*file = bs;
|
|
|
|
|
2018-02-13 21:26:45 +01:00
|
|
|
return ret | BDRV_BLOCK_OFFSET_VALID;
|
2016-03-10 19:38:00 +01:00
|
|
|
}
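Editorial note: the driver only reports one contiguous extent per call; the
generic block layer stitches the extents together. A minimal sketch of how a
caller might walk the resulting allocation map through the public
bdrv_block_status() helper (loop and output format assumed, not taken from
this listing):

/* Sketch only: print each data/hole extent of an opened node. */
static void gluster_sketch_dump_map(BlockDriverState *bs, int64_t length)
{
    int64_t offset = 0;

    while (offset < length) {
        int64_t pnum, map;
        BlockDriverState *file;
        int ret = bdrv_block_status(bs, offset, length - offset,
                                    &pnum, &map, &file);

        if (ret < 0 || pnum == 0) {
            break; /* error, or no forward progress */
        }
        printf("%" PRId64 "..%" PRId64 ": %s\n", offset, offset + pnum,
               (ret & BDRV_BLOCK_ZERO) ? "zero/hole" : "data");
        offset += pnum;
    }
}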
|
|
|
|
|
|
|
|
|
2019-02-01 20:29:25 +01:00
|
|
|
static const char *const gluster_strong_open_opts[] = {
|
|
|
|
GLUSTER_OPT_VOLUME,
|
|
|
|
GLUSTER_OPT_PATH,
|
|
|
|
GLUSTER_OPT_TYPE,
|
|
|
|
GLUSTER_OPT_SERVER_PATTERN,
|
|
|
|
GLUSTER_OPT_HOST,
|
|
|
|
GLUSTER_OPT_PORT,
|
|
|
|
GLUSTER_OPT_TO,
|
|
|
|
GLUSTER_OPT_IPV4,
|
|
|
|
GLUSTER_OPT_IPV6,
|
|
|
|
GLUSTER_OPT_SOCKET,
|
|
|
|
|
|
|
|
NULL
|
|
|
|
};
|
|
|
|
|
2012-09-27 16:00:32 +02:00
|
|
|
static BlockDriver bdrv_gluster = {
|
|
|
|
.format_name = "gluster",
|
|
|
|
.protocol_name = "gluster",
|
|
|
|
.instance_size = sizeof(BDRVGlusterState),
|
|
|
|
.bdrv_file_open = qemu_gluster_open,
|
2014-02-17 17:11:12 +01:00
|
|
|
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
|
|
|
|
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
|
|
|
|
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
|
2012-09-27 16:00:32 +02:00
|
|
|
.bdrv_close = qemu_gluster_close,
|
2018-01-31 16:27:38 +01:00
|
|
|
.bdrv_co_create = qemu_gluster_co_create,
|
2018-01-18 13:43:45 +01:00
|
|
|
.bdrv_co_create_opts = qemu_gluster_co_create_opts,
|
2023-01-13 21:42:04 +01:00
|
|
|
.bdrv_co_getlength = qemu_gluster_co_getlength,
|
2023-01-13 21:42:07 +01:00
|
|
|
.bdrv_co_get_allocated_file_size = qemu_gluster_co_get_allocated_file_size,
|
block: Convert .bdrv_truncate callback to coroutine_fn
bdrv_truncate() is an operation that can block (even for quite a long
time, depending on the PreallocMode) in I/O paths that shouldn't block.
Convert it to a coroutine_fn so that we have the infrastructure for
drivers to make their .bdrv_co_truncate implementation asynchronous.
This change could potentially introduce new race conditions because
bdrv_truncate() isn't necessarily executed atomically any more. Whether
this is a problem needs to be evaluated for each block driver that
supports truncate:
* file-posix/win32, gluster, iscsi, nfs, rbd, ssh, sheepdog: The
protocol drivers are trivially safe because they don't actually yield
yet, so there is no change in behaviour.
* copy-on-read, crypto, raw-format: Essentially just filter drivers that
pass the request to a child node, no problem.
* qcow2: The implementation modifies metadata, so it needs to hold
s->lock to be safe with concurrent I/O requests. In order to avoid
double locking, this requires pulling the locking out into
preallocate_co() and using qcow2_write_caches() instead of
bdrv_flush().
* qed: Does a single header update, this is fine without locking.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2018-06-21 17:54:35 +02:00
|
|
|
.bdrv_co_truncate = qemu_gluster_co_truncate,
|
2013-12-21 10:21:24 +01:00
|
|
|
.bdrv_co_readv = qemu_gluster_co_readv,
|
|
|
|
.bdrv_co_writev = qemu_gluster_co_writev,
|
|
|
|
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
|
2013-07-16 18:17:42 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_DISCARD
|
2016-07-16 01:23:00 +02:00
|
|
|
.bdrv_co_pdiscard = qemu_gluster_co_pdiscard,
|
2013-12-21 10:21:25 +01:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
2016-06-01 23:10:08 +02:00
|
|
|
.bdrv_co_pwrite_zeroes = qemu_gluster_co_pwrite_zeroes,
|
2013-07-16 18:17:42 +02:00
|
|
|
#endif
|
2018-02-13 21:26:45 +01:00
|
|
|
.bdrv_co_block_status = qemu_gluster_co_block_status,
|
2019-03-28 11:52:27 +01:00
|
|
|
.bdrv_refresh_limits = qemu_gluster_refresh_limits,
|
2014-06-05 11:20:54 +02:00
|
|
|
.create_opts = &qemu_gluster_create_opts,
|
2019-02-01 20:29:25 +01:00
|
|
|
.strong_runtime_opts = gluster_strong_open_opts,
|
2012-09-27 16:00:32 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static BlockDriver bdrv_gluster_tcp = {
|
|
|
|
.format_name = "gluster",
|
|
|
|
.protocol_name = "gluster+tcp",
|
|
|
|
.instance_size = sizeof(BDRVGlusterState),
|
|
|
|
.bdrv_file_open = qemu_gluster_open,
|
2014-02-17 17:11:12 +01:00
|
|
|
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
|
|
|
|
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
|
|
|
|
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
|
2012-09-27 16:00:32 +02:00
|
|
|
.bdrv_close = qemu_gluster_close,
|
2018-01-31 16:27:38 +01:00
|
|
|
.bdrv_co_create = qemu_gluster_co_create,
|
2018-01-18 13:43:45 +01:00
|
|
|
.bdrv_co_create_opts = qemu_gluster_co_create_opts,
|
2023-01-13 21:42:04 +01:00
|
|
|
.bdrv_co_getlength = qemu_gluster_co_getlength,
|
2023-01-13 21:42:07 +01:00
|
|
|
.bdrv_co_get_allocated_file_size = qemu_gluster_co_get_allocated_file_size,
|
2018-06-21 17:54:35 +02:00
|
|
|
.bdrv_co_truncate = qemu_gluster_co_truncate,
|
2013-12-21 10:21:24 +01:00
|
|
|
.bdrv_co_readv = qemu_gluster_co_readv,
|
|
|
|
.bdrv_co_writev = qemu_gluster_co_writev,
|
|
|
|
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
|
2013-07-16 18:17:42 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_DISCARD
|
2016-07-16 01:23:00 +02:00
|
|
|
.bdrv_co_pdiscard = qemu_gluster_co_pdiscard,
|
2013-12-21 10:21:25 +01:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
2016-06-01 23:10:08 +02:00
|
|
|
.bdrv_co_pwrite_zeroes = qemu_gluster_co_pwrite_zeroes,
|
2013-07-16 18:17:42 +02:00
|
|
|
#endif
|
2018-02-13 21:26:45 +01:00
|
|
|
.bdrv_co_block_status = qemu_gluster_co_block_status,
|
2019-03-28 11:52:27 +01:00
|
|
|
.bdrv_refresh_limits = qemu_gluster_refresh_limits,
|
2014-06-05 11:20:54 +02:00
|
|
|
.create_opts = &qemu_gluster_create_opts,
|
2019-02-01 20:29:25 +01:00
|
|
|
.strong_runtime_opts = gluster_strong_open_opts,
|
2012-09-27 16:00:32 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static BlockDriver bdrv_gluster_unix = {
|
|
|
|
.format_name = "gluster",
|
|
|
|
.protocol_name = "gluster+unix",
|
|
|
|
.instance_size = sizeof(BDRVGlusterState),
|
|
|
|
.bdrv_file_open = qemu_gluster_open,
|
2014-02-17 17:11:12 +01:00
|
|
|
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
|
|
|
|
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
|
|
|
|
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
|
2012-09-27 16:00:32 +02:00
|
|
|
.bdrv_close = qemu_gluster_close,
|
2018-01-31 16:27:38 +01:00
|
|
|
.bdrv_co_create = qemu_gluster_co_create,
|
2018-01-18 13:43:45 +01:00
|
|
|
.bdrv_co_create_opts = qemu_gluster_co_create_opts,
|
2023-01-13 21:42:04 +01:00
|
|
|
.bdrv_co_getlength = qemu_gluster_co_getlength,
|
2023-01-13 21:42:07 +01:00
|
|
|
.bdrv_co_get_allocated_file_size = qemu_gluster_co_get_allocated_file_size,
|
2018-06-21 17:54:35 +02:00
|
|
|
.bdrv_co_truncate = qemu_gluster_co_truncate,
|
2013-12-21 10:21:24 +01:00
|
|
|
.bdrv_co_readv = qemu_gluster_co_readv,
|
|
|
|
.bdrv_co_writev = qemu_gluster_co_writev,
|
|
|
|
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
|
2013-07-16 18:17:42 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_DISCARD
|
2016-07-16 01:23:00 +02:00
|
|
|
.bdrv_co_pdiscard = qemu_gluster_co_pdiscard,
|
2013-12-21 10:21:25 +01:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
2016-06-01 23:10:08 +02:00
|
|
|
.bdrv_co_pwrite_zeroes = qemu_gluster_co_pwrite_zeroes,
|
2013-07-16 18:17:42 +02:00
|
|
|
#endif
|
2018-02-13 21:26:45 +01:00
|
|
|
.bdrv_co_block_status = qemu_gluster_co_block_status,
|
2019-03-28 11:52:27 +01:00
|
|
|
.bdrv_refresh_limits = qemu_gluster_refresh_limits,
|
2014-06-05 11:20:54 +02:00
|
|
|
.create_opts = &qemu_gluster_create_opts,
|
2019-02-01 20:29:25 +01:00
|
|
|
.strong_runtime_opts = gluster_strong_open_opts,
|
2012-09-27 16:00:32 +02:00
|
|
|
};
|
|
|
|
|
2016-07-19 18:57:31 +02:00
|
|
|
/* rdma is deprecated (actually never supported for volfile fetch).
|
|
|
|
* Let's maintain it for protocol compatibility, to make sure things
|
|
|
|
* won't break immediately. For now, gluster+rdma will fall back to gluster+tcp
|
|
|
|
* protocol with a warning.
|
|
|
|
* TODO: remove gluster+rdma interface support
|
|
|
|
*/
|
2012-09-27 16:00:32 +02:00
|
|
|
static BlockDriver bdrv_gluster_rdma = {
|
|
|
|
.format_name = "gluster",
|
|
|
|
.protocol_name = "gluster+rdma",
|
|
|
|
.instance_size = sizeof(BDRVGlusterState),
|
|
|
|
.bdrv_file_open = qemu_gluster_open,
|
2014-02-17 17:11:12 +01:00
|
|
|
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
|
|
|
|
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
|
|
|
|
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
|
2012-09-27 16:00:32 +02:00
|
|
|
.bdrv_close = qemu_gluster_close,
|
2018-01-31 16:27:38 +01:00
|
|
|
.bdrv_co_create = qemu_gluster_co_create,
|
2018-01-18 13:43:45 +01:00
|
|
|
.bdrv_co_create_opts = qemu_gluster_co_create_opts,
|
2023-01-13 21:42:04 +01:00
|
|
|
.bdrv_co_getlength = qemu_gluster_co_getlength,
|
2023-01-13 21:42:07 +01:00
|
|
|
.bdrv_co_get_allocated_file_size = qemu_gluster_co_get_allocated_file_size,
|
2018-06-21 17:54:35 +02:00
|
|
|
.bdrv_co_truncate = qemu_gluster_co_truncate,
|
2013-12-21 10:21:24 +01:00
|
|
|
.bdrv_co_readv = qemu_gluster_co_readv,
|
|
|
|
.bdrv_co_writev = qemu_gluster_co_writev,
|
|
|
|
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
|
2013-07-16 18:17:42 +02:00
|
|
|
#ifdef CONFIG_GLUSTERFS_DISCARD
|
2016-07-16 01:23:00 +02:00
|
|
|
.bdrv_co_pdiscard = qemu_gluster_co_pdiscard,
|
2013-12-21 10:21:25 +01:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_GLUSTERFS_ZEROFILL
|
2016-06-01 23:10:08 +02:00
|
|
|
.bdrv_co_pwrite_zeroes = qemu_gluster_co_pwrite_zeroes,
|
2013-07-16 18:17:42 +02:00
|
|
|
#endif
|
2018-02-13 21:26:45 +01:00
|
|
|
.bdrv_co_block_status = qemu_gluster_co_block_status,
|
2019-03-28 11:52:27 +01:00
|
|
|
.bdrv_refresh_limits = qemu_gluster_refresh_limits,
|
2014-06-05 11:20:54 +02:00
|
|
|
.create_opts = &qemu_gluster_create_opts,
|
2019-02-01 20:29:25 +01:00
|
|
|
.strong_runtime_opts = gluster_strong_open_opts,
|
2012-09-27 16:00:32 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static void bdrv_gluster_init(void)
|
|
|
|
{
|
|
|
|
bdrv_register(&bdrv_gluster_rdma);
|
|
|
|
bdrv_register(&bdrv_gluster_unix);
|
|
|
|
bdrv_register(&bdrv_gluster_tcp);
|
|
|
|
bdrv_register(&bdrv_gluster);
|
|
|
|
}
|
|
|
|
|
|
|
|
block_init(bdrv_gluster_init);
|