docs/system/virtio-pmem.rst: Fix minor style issues

The virtio-pmem documentation has some minor style issues we hadn't
noticed since we weren't rendering it in our docs:

 * Sphinx doesn't complain about overlong title-underlining the
   way it complains about too-short underlining, but it looks odd;
   make the underlines of section headers the right length
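   (see the rst sketch at the end of this list)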

 * Indent of paragraphs makes them render as blockquotes;
   remove the indent so they just render as normal text

 * Leading 'o' isn't rst markup, so it just renders as a literal
   "o"; reformat as a subsection heading instead

 * "QEMU" in the document title and section headings are a bit
   odd and unnecessary since this is the QEMU manual; delete
   or rephrase them

 * There's no need to specify what QEMU version the device first
   appeared in.
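
For illustration, the rst behaviour behind the first three fixes (a
generic sketch, not text from the patch itself):

    Section title
    -------------    underline should match the title length; too short
                     is a Sphinx warning, overlong is legal but looks odd

       An indented paragraph renders as a blockquote.

    A flush-left paragraph renders as normal text.

    o not markup     a bare "o" renders as literal text; rst bullets
                     use "-" or "*", so a line like this is better
                     recast as a subsection heading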

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
commit c5d7cfdaac
parent 71266bb4e9
Peter Maydell 2020-11-12 14:40:36 +00:00
1 changed file with 30 additions and 30 deletions


@@ -1,38 +1,37 @@
-========================
-QEMU virtio pmem
-========================
+===========
+virtio pmem
+===========

- This document explains the setup and usage of the virtio pmem device
- which is available since QEMU v4.1.0.
+This document explains the setup and usage of the virtio pmem device.

- The virtio pmem device is a paravirtualized persistent memory device
- on regular (i.e non-NVDIMM) storage.
+The virtio pmem device is a paravirtualized persistent memory device
+on regular (i.e non-NVDIMM) storage.

 Usecase
---------
+-------

- Virtio pmem allows to bypass the guest page cache and directly use
- host page cache. This reduces guest memory footprint as the host can
- make efficient memory reclaim decisions under memory pressure.
+Virtio pmem allows to bypass the guest page cache and directly use
+host page cache. This reduces guest memory footprint as the host can
+make efficient memory reclaim decisions under memory pressure.

-o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
+How does virtio-pmem compare to the nvdimm emulation?
+-----------------------------------------------------

- NVDIMM emulation on regular (i.e. non-NVDIMM) host storage does not
- persist the guest writes as there are no defined semantics in the device
- specification. The virtio pmem device provides guest write persistence
- on non-NVDIMM host storage.
+NVDIMM emulation on regular (i.e. non-NVDIMM) host storage does not
+persist the guest writes as there are no defined semantics in the device
+specification. The virtio pmem device provides guest write persistence
+on non-NVDIMM host storage.

 virtio pmem usage
 -----------------

- A virtio pmem device backed by a memory-backend-file can be created on
- the QEMU command line as in the following example::
+A virtio pmem device backed by a memory-backend-file can be created on
+the QEMU command line as in the following example::

  -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
  -device virtio-pmem-pci,memdev=mem1,id=nv1

- where:
+where:

 - "object memory-backend-file,id=mem1,share,mem-path=<image>, size=<image size>"
   creates a backend file with the specified size.
@@ -40,8 +39,8 @@ virtio pmem usage
 - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
   pci device whose storage is provided by above memory backend device.

- Multiple virtio pmem devices can be created if multiple pairs of "-object"
- and "-device" are provided.
+Multiple virtio pmem devices can be created if multiple pairs of "-object"
+and "-device" are provided.

 Hotplug
 -------
@@ -59,17 +58,18 @@ the guest::
 Guest Data Persistence
 ----------------------

- Guest data persistence on non-NVDIMM requires guest userspace applications
- to perform fsync/msync. This is different from a real nvdimm backend where
- no additional fsync/msync is required. This is to persist guest writes in
- host backing file which otherwise remains in host page cache and there is
- risk of losing the data in case of power failure.
+Guest data persistence on non-NVDIMM requires guest userspace applications
+to perform fsync/msync. This is different from a real nvdimm backend where
+no additional fsync/msync is required. This is to persist guest writes in
+host backing file which otherwise remains in host page cache and there is
+risk of losing the data in case of power failure.

- With virtio pmem device, MAP_SYNC mmap flag is not supported. This provides
- a hint to application to perform fsync for write persistence.
+With virtio pmem device, MAP_SYNC mmap flag is not supported. This provides
+a hint to application to perform fsync for write persistence.

 Limitations
------------
+-----------

 - Real nvdimm device backend is not supported.
 - virtio pmem hotunplug is not supported.
 - ACPI NVDIMM features like regions/namespaces are not supported.
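
As a guest-side illustration of the "Guest Data Persistence" rule in the
diff above, here is a minimal sketch in C: map a file that lives on a
filesystem backed by the virtio-pmem device and persist a write with an
explicit msync. The mount point and file name are hypothetical, and the
snippet illustrates the documented rule rather than being code shipped
with QEMU:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical file on a virtio-pmem backed filesystem */
        int fd = open("/mnt/virtio_pmem/data", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (ftruncate(fd, 4096) < 0) {
            perror("ftruncate");
            return 1;
        }
        /* MAP_SYNC is not supported on virtio-pmem, so map without it */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        strcpy(p, "hello");
        /* explicit msync pushes the write out to the host backing file */
        if (msync(p, 4096, MS_SYNC) < 0) {
            perror("msync");
            return 1;
        }
        munmap(p, 4096);
        close(fd);
        return 0;
    }

An application that wants to detect this case can attempt the mapping
with MAP_SHARED_VALIDATE | MAP_SYNC first; on virtio-pmem that mmap
fails, which is the hint mentioned above to fall back to explicit
msync/fsync.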