docs/specs/tpm: reST-ify TPM documentation

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20200121152935.649898-7-stefanb@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Marc-André Lureau 2020-01-21 10:29:35 -05:00 committed by David Gibson
parent 942e7954c8
commit 6e8a3ff6ed
3 changed files with 504 additions and 445 deletions

@@ -13,3 +13,4 @@ Contents:
ppc-xive
ppc-spapr-xive
acpi_hw_reduced_hotplug
tpm

docs/specs/tpm.rst (new file)

@@ -0,0 +1,503 @@
===============
QEMU TPM Device
===============
Guest-side hardware interface
=============================
TIS interface
-------------
The QEMU TPM emulation implements a TPM TIS hardware interface
following the Trusted Computing Group's specification "TCG PC Client
Specific TPM Interface Specification (TIS)", Specification Version
1.3, 21 March 2013 (see the `TIS specification`_, or a later version
of it).
The TIS interface makes a memory mapped IO region in the area
0xfed40000-0xfed44fff available to the guest operating system.
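This region covers the five TIS localities, which are spaced 0x1000
bytes apart per the TIS specification. As a minimal sketch (the helper
name below is illustrative, not a QEMU function), a register address
is formed from the locality number and the register offset like this:

.. code-block:: c

   #include <stdint.h>

   #define TPM_TIS_BASE_ADDRESS  0xfed40000u
   #define TPM_TIS_LOCALITY_SIZE 0x1000u   /* one locality per 4 KiB page */

   /* Address of a TIS register (offset within a locality) for locality 0..4. */
   static inline uint32_t tpm_tis_reg_addr(unsigned locality, uint32_t reg_offset)
   {
       return TPM_TIS_BASE_ADDRESS + locality * TPM_TIS_LOCALITY_SIZE + reg_offset;
   }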
QEMU files related to TPM TIS interface:
- ``hw/tpm/tpm_tis.c``
- ``hw/tpm/tpm_tis.h``
CRB interface
-------------
QEMU also implements a TPM CRB interface following the Trusted
Computing Group's specification "TCG PC Client Platform TPM Profile
(PTP) Specification", Family "2.0", Level 00 Revision 01.03 v22, May
22, 2017 (see the `CRB specification`_, or a later version of it).
The CRB interface makes a memory mapped IO region in the area
0xfed40000-0xfed40fff (1 locality) available to the guest
operating system.
QEMU files related to TPM CRB interface:
- ``hw/tpm/tpm_crb.c``
SPAPR interface
---------------
pSeries (ppc64) machines offer a ``tpm-spapr`` device model.
QEMU files related to the SPAPR interface:
- ``hw/tpm/tpm_spapr.c``
fw_cfg interface
================
The bios/firmware may read the ``"etc/tpm/config"`` fw_cfg entry for
configuring the guest appropriately.
The entry of 6 bytes has the following content, in little-endian:
.. code-block:: c

   #define TPM_VERSION_UNSPEC 0
   #define TPM_VERSION_1_2 1
   #define TPM_VERSION_2_0 2

   #define TPM_PPI_VERSION_NONE 0
   #define TPM_PPI_VERSION_1_30 1

   struct FwCfgTPMConfig {
       uint32_t tpmppi_address;   /* PPI memory location */
       uint8_t tpm_version;       /* TPM version */
       uint8_t tpmppi_version;    /* PPI version */
   };
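For illustration, a minimal sketch of decoding this entry once the 6
bytes have been read from the ``"etc/tpm/config"`` fw_cfg file. The
struct and helper names are illustrative assumptions; real firmware
such as SeaBIOS uses its own fw_cfg access routines.

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical decoder for the 6-byte little-endian "etc/tpm/config" blob. */
   struct TPMConfig {
       uint32_t tpmppi_address;
       uint8_t  tpm_version;
       uint8_t  tpmppi_version;
   };

   static struct TPMConfig decode_tpm_config(const uint8_t buf[6])
   {
       struct TPMConfig cfg;

       /* 32-bit PPI address, little-endian */
       cfg.tpmppi_address = (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
                            ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);
       cfg.tpm_version    = buf[4];   /* TPM_VERSION_* */
       cfg.tpmppi_version = buf[5];   /* TPM_PPI_VERSION_* */
       return cfg;
   }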
ACPI interface
==============
The TPM device is defined with ACPI ID "PNP0C31". QEMU builds an SSDT
and passes it into the guest through the fw_cfg device. The device
description contains the base address of the TIS interface, 0xfed40000,
and the size of the MMIO area (0x5000). In case a TPM2 is used by
QEMU, a TPM2 ACPI table is also provided. The device is described as
operating in polling mode rather than interrupt mode, primarily because
no unused IRQ could be found.
To support measurement logs written by the firmware, e.g. SeaBIOS, a
TCPA table is implemented. This table provides a 64 KB buffer into
which the firmware can write its log. For TPM 2, only a more recent
revision of the TPM2 table provides support for measurement logs, so
a TCPA table does not need to be created.
The TCPA and TPM2 ACPI tables follow the Trusted Computing Group
specification "TCG ACPI Specification" Family "1.2" and "2.0", Level
00 Revision 00.37 (see the `ACPI specification`_, or a later version
of it).
ACPI PPI Interface
------------------
QEMU supports the Physical Presence Interface (PPI) for TPM 1.2 and
TPM 2. This interface requires ACPI and firmware support (see the
`PPI specification`_).
PPI enables a system administrator (root) to request a modification to
the TPM upon reboot. The PPI specification defines the operation
requests and the actions the firmware has to take. The system
administrator passes the operation request number to the firmware
through an ACPI interface which writes this number to a memory
location that the firmware knows. Upon reboot, the firmware finds the
number and sends commands to the TPM. The firmware writes the TPM
result code and the operation request number to a memory location that
ACPI can read from and pass the result on to the administrator.
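For example, on a Linux guest whose kernel exposes the TPM driver's
PPI sysfs files, an operation can be requested roughly as sketched
below; the sysfs path and the operation number (5, "Clear" in the PPI
operation summary tables) are assumptions to verify against the guest
kernel and the PPI specification.

.. code-block:: c

   #include <stdio.h>

   /*
    * Sketch: ask the firmware to run PPI operation 5 ("Clear" in the PPI
    * operation summary tables) on the next reboot, via the Linux guest's
    * TPM PPI sysfs interface.  Path and operation number are illustrative.
    */
   int main(void)
   {
       FILE *f = fopen("/sys/class/tpm/tpm0/ppi/request", "w");

       if (!f) {
           perror("open ppi/request");
           return 1;
       }
       fprintf(f, "5\n");
       fclose(f);
       return 0;
   }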
The PPI specification defines a set of mandatory and optional
operations for the firmware to implement. The ACPI interface also
allows an administrator to list the supported operations. The ACPI
code is generated by QEMU, yet the firmware needs to implement
support on a per-operation basis, and different firmware
implementations may support a different subset. Therefore, QEMU
introduces a virtual memory device for PPI where the firmware can
indicate which operations it supports, and ACPI can enable the
supported ones and disable all others. This interface lies in main
memory and has the following layout:
+-------------+--------+--------+-------------------------------------------+
| Field | Length | Offset | Description |
+=============+========+========+===========================================+
| ``func`` | 0x100 | 0x000 | Firmware sets values for each supported |
| | | | operation. See defined values below. |
+-------------+--------+--------+-------------------------------------------+
| ``ppin`` | 0x1 | 0x100 | SMI interrupt to use. Set by firmware. |
| | | | Not supported. |
+-------------+--------+--------+-------------------------------------------+
| ``ppip`` | 0x4 | 0x101 | ACPI function index to pass to SMM code. |
| | | | Set by ACPI. Not supported. |
+-------------+--------+--------+-------------------------------------------+
| ``pprp`` | 0x4 | 0x105 | Result of last executed operation. Set by |
| | | | firmware. See function index 5 for values.|
+-------------+--------+--------+-------------------------------------------+
| ``pprq`` | 0x4 | 0x109 | Operation request number to execute. See |
| | | | 'Physical Presence Interface Operation |
| | | | Summary' tables in specs. Set by ACPI. |
+-------------+--------+--------+-------------------------------------------+
| ``pprm`` | 0x4 | 0x10d | Operation request optional parameter. |
| | | | Values depend on operation. Set by ACPI. |
+-------------+--------+--------+-------------------------------------------+
| ``lppr`` | 0x4 | 0x111 | Last executed operation request number. |
| | | | Copied from pprq field by firmware. |
+-------------+--------+--------+-------------------------------------------+
| ``fret`` | 0x4 | 0x115 | Result code from SMM function. |
| | | | Not supported. |
+-------------+--------+--------+-------------------------------------------+
| ``res1`` | 0x40 | 0x119 | Reserved for future use |
+-------------+--------+--------+-------------------------------------------+
|``next_step``| 0x1 | 0x159 | Operation to execute after reboot by |
| | | | firmware. Used by firmware. |
+-------------+--------+--------+-------------------------------------------+
| ``movv`` | 0x1 | 0x15a | Memory overwrite variable |
+-------------+--------+--------+-------------------------------------------+
The following values are supported for the ``func`` field. They
correspond to the values used by ACPI function index 8.
+----------+-------------------------------------------------------------+
| Value | Description |
+==========+=============================================================+
| 0 | Operation is not implemented. |
+----------+-------------------------------------------------------------+
| 1 | Operation is only accessible through firmware. |
+----------+-------------------------------------------------------------+
| 2 | Operation is blocked for OS by firmware configuration. |
+----------+-------------------------------------------------------------+
| 3 | Operation is allowed and physically present user required. |
+----------+-------------------------------------------------------------+
| 4 | Operation is allowed and physically present user is not |
| | required. |
+----------+-------------------------------------------------------------+
The location of the table is given by the fw_cfg ``tpmppi_address``
field. The PPI memory region size is 0x400 (``TPM_PPI_ADDR_SIZE``) to
leave enough room for future updates.
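As a reading aid, the layout above can be mirrored by a packed C
struct along these lines. The struct name is illustrative; QEMU's own
definitions live under ``include/hw/acpi/tpm.h``.

.. code-block:: c

   #include <stdint.h>

   /* Illustrative mirror of the PPI virtual memory device layout (0x15b bytes). */
   struct tpm_ppi_layout {
       uint8_t  func[0x100];   /* 0x000: per-operation support, set by firmware    */
       uint8_t  ppin;          /* 0x100: SMI interrupt to use (not supported)       */
       uint32_t ppip;          /* 0x101: ACPI function index for SMM (not supported)*/
       uint32_t pprp;          /* 0x105: result of last executed operation          */
       uint32_t pprq;          /* 0x109: operation request number                   */
       uint32_t pprm;          /* 0x10d: optional parameter for the request         */
       uint32_t lppr;          /* 0x111: last executed operation request number     */
       uint32_t fret;          /* 0x115: result code from SMM (not supported)       */
       uint8_t  res1[0x40];    /* 0x119: reserved for future use                    */
       uint8_t  next_step;     /* 0x159: operation to execute after reboot          */
       uint8_t  movv;          /* 0x15a: memory overwrite variable                  */
   } __attribute__((packed));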
QEMU files related to TPM ACPI tables:
- ``hw/i386/acpi-build.c``
- ``include/hw/acpi/tpm.h``
TPM backend devices
===================
The TPM implementation is split into two parts, frontend and
backend. The frontend part is the hardware interface, such as the TPM
TIS interface described earlier, and the other part is the TPM backend
interface. The backend interfaces implement the interaction with a TPM
device, which may be a physical or an emulated device. The split
between the front- and backend devices allows a frontend to be
connected with any available backend. This enables the TIS interface
to be used with the passthrough backend or the swtpm backend.
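Purely as an illustration of this split (the names below are
hypothetical, not QEMU's actual types; the real interface lives in
``include/sysemu/tpm_backend.h``), a backend can be pictured as a
small table of operations that any frontend drives without knowing
what sits behind it:

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>

   /* Hypothetical backend interface: passthrough and emulator backends
    * would provide the same operations, so a TIS, CRB or SPAPR frontend
    * can hand request buffers to either one. */
   struct tpm_backend_ops {
       int  (*startup)(void *opaque);
       int  (*deliver_request)(void *opaque,
                               const uint8_t *cmd, size_t cmd_len,
                               uint8_t *resp, size_t resp_size);
       void (*cancel)(void *opaque);
   };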
QEMU files related to TPM backends:
- ``backends/tpm.c``
- ``include/sysemu/tpm_backend.h``
- ``include/sysemu/tpm_backend_int.h``
The QEMU TPM passthrough device
-------------------------------
When QEMU is run on Linux as the host operating system, it is
possible to make the hardware TPM device available to a single QEMU
guest. In this case the user must make sure that no other program is
using the device, e.g. ``/dev/tpm0``, before trying to start QEMU
with it.
The passthrough driver uses the host's TPM device for sending TPM
commands to and receiving responses from it. In addition, it accesses
the TPM device's sysfs entry to support command cancellation. Since
none of the state of a hardware TPM can be migrated between hosts,
virtual machine migration is disabled when the TPM passthrough driver
is used.
Since the host's TPM device will already be initialized by the host's
firmware, certain commands, e.g. ``TPM_Startup()``, sent by the
virtual firmware for device initialization, will fail. In this case
the firmware should not use the TPM.
Sharing the device with the host is generally not a recommended usage
scenario for a TPM device. The primary reason for this is that two
operating systems can then access the device's single set of
resources, such as platform configuration registers
(PCRs). Applications or kernel security subsystems, such as the Linux
Integrity Measurement Architecture (IMA), are not expecting to share
PCRs.
QEMU files related to the TPM passthrough device:
- ``hw/tpm/tpm_passthrough.c``
- ``hw/tpm/tpm_util.c``
- ``hw/tpm/tpm_util.h``
Command line to start QEMU with the TPM passthrough device using the host's
hardware TPM ``/dev/tpm0``:
.. code-block:: console

   qemu-system-x86_64 -display sdl -accel kvm \
     -m 1024 -boot d -bios bios-256k.bin -boot menu=on \
     -tpmdev passthrough,id=tpm0,path=/dev/tpm0 \
     -device tpm-tis,tpmdev=tpm0 test.img
The following commands should result in similar output inside the VM
with a Linux kernel that either has the TPM TIS driver built-in or
available as a module:
.. code-block:: console

   # dmesg | grep -i tpm
   [ 0.711310] tpm_tis 00:06: 1.2 TPM (device-id 0x1, rev-id 1)
   # dmesg | grep TCPA
   [ 0.000000] ACPI: TCPA 0x0000000003FFD191C 000032 (v02 BOCHS \
               BXPCTCPA 0000001 BXPC 00000001)
   # ls -l /dev/tpm*
   crw-------. 1 root root 10, 224 Jul 11 10:11 /dev/tpm0
   # find /sys/devices/ | grep pcrs$ | xargs cat
   PCR-00: 35 4E 3B CE 23 9F 38 59 ...
   ...
   PCR-23: 00 00 00 00 00 00 00 00 ...
The QEMU TPM emulator device
----------------------------
The TPM emulator device uses an external TPM emulator called 'swtpm'
for sending TPM commands to and receiving responses from it. The swtpm
program must have been started before trying to access it through the
TPM emulator with QEMU.
The TPM emulator implements a command channel for transferring TPM
commands and responses as well as a control channel over which control
commands can be sent (see the `SWTPM protocol`_ specification).
The control channel serves the purpose of resetting, initializing, and
migrating the TPM state, among other things.
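Purely as an illustration, the sketch below sends one control command
over the UnixIO control socket used in the examples further down. The
4-byte big-endian framing and the ``CMD_GET_CAPABILITY`` code (assumed
here to be 1) are assumptions that should be verified against the
`SWTPM protocol`_ documentation; the response is not interpreted.

.. code-block:: c

   #include <arpa/inet.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <sys/un.h>
   #include <unistd.h>

   /* Sketch: send one control command to swtpm's UnixIO control socket.
    * The command code is an assumption (CMD_GET_CAPABILITY per the SWTPM
    * protocol documentation); verify it before relying on it. */
   int main(void)
   {
       struct sockaddr_un addr = { .sun_family = AF_UNIX };
       uint32_t cmd = htonl(1);           /* assumed CMD_GET_CAPABILITY */
       unsigned char resp[64];
       ssize_t n;
       int fd = socket(AF_UNIX, SOCK_STREAM, 0);

       strncpy(addr.sun_path, "/tmp/mytpm1/swtpm-sock", sizeof(addr.sun_path) - 1);
       if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
           perror("connect");
           return 1;
       }
       write(fd, &cmd, sizeof(cmd));      /* 4-byte big-endian command code */
       n = read(fd, resp, sizeof(resp));  /* response layout is command-specific */
       printf("received %zd response byte(s)\n", n);
       close(fd);
       return 0;
   }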
The swtpm program behaves like a hardware TPM and therefore needs to
be initialized by the firmware running inside the QEMU virtual
machine. One necessary step for initializing the device is to send
the TPM_Startup command to it. SeaBIOS, for example, has been
instrumented to initialize a TPM 1.2 or TPM 2 device using this
command.
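For reference, a sketch of the raw ``TPM_Startup`` requests the
firmware sends for this step; the byte layouts follow the TPM 1.2 and
TPM 2.0 command specifications and are shown only to illustrate the
initialization step.

.. code-block:: c

   /* TPM 1.2: TPM_Startup(ST_CLEAR)
    * tag=TPM_TAG_RQU_COMMAND (0x00C1), paramSize=12,
    * ordinal=TPM_ORD_Startup (0x00000099), startupType=TPM_ST_CLEAR (0x0001) */
   static const unsigned char tpm12_startup_clear[] = {
       0x00, 0xC1, 0x00, 0x00, 0x00, 0x0C,
       0x00, 0x00, 0x00, 0x99, 0x00, 0x01
   };

   /* TPM 2.0: TPM2_Startup(TPM_SU_CLEAR)
    * tag=TPM_ST_NO_SESSIONS (0x8001), commandSize=12,
    * commandCode=TPM_CC_Startup (0x00000144), startupType=TPM_SU_CLEAR (0x0000) */
   static const unsigned char tpm2_startup_clear[] = {
       0x80, 0x01, 0x00, 0x00, 0x00, 0x0C,
       0x00, 0x00, 0x01, 0x44, 0x00, 0x00
   };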
QEMU files related to the TPM emulator device:
- ``hw/tpm/tpm_emulator.c``
- ``hw/tpm/tpm_util.c``
- ``hw/tpm/tpm_util.h``
The following commands start the swtpm with a UnixIO control channel over
a socket interface. They do not need to be run as root.
.. code-block:: console

   mkdir /tmp/mytpm1
   swtpm socket --tpmstate dir=/tmp/mytpm1 \
     --ctrl type=unixio,path=/tmp/mytpm1/swtpm-sock \
     --log level=20
Command line to start QEMU with the TPM emulator device communicating
with the swtpm (x86):
.. code-block:: console

   qemu-system-x86_64 -display sdl -accel kvm \
     -m 1024 -boot d -bios bios-256k.bin -boot menu=on \
     -chardev socket,id=chrtpm,path=/tmp/mytpm1/swtpm-sock \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-tis,tpmdev=tpm0 test.img
In case a pSeries machine is emulated, use the following command line:
.. code-block:: console

   qemu-system-ppc64 -display sdl -machine pseries,accel=kvm \
     -m 1024 -bios slof.bin -boot menu=on \
     -nodefaults -device VGA -device pci-ohci -device usb-kbd \
     -chardev socket,id=chrtpm,path=/tmp/mytpm1/swtpm-sock \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-spapr,tpmdev=tpm0 \
     -device spapr-vscsi,id=scsi0,reg=0x00002000 \
     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0 \
     -drive file=test.img,format=raw,if=none,id=drive-virtio-disk0
In case SeaBIOS is used as firmware, it should show the TPM menu item
after entering the menu with 'ESC'.
.. code-block:: console

   Select boot device:
   1. DVD/CD [ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD]
   [...]
   5. Legacy option rom
   t. TPM Configuration
The following commands should result in similar output inside the VM
with a Linux kernel that either has the TPM TIS driver built-in or
available as a module:
.. code-block:: console

   # dmesg | grep -i tpm
   [ 0.711310] tpm_tis 00:06: 1.2 TPM (device-id 0x1, rev-id 1)
   # dmesg | grep TCPA
   [ 0.000000] ACPI: TCPA 0x0000000003FFD191C 000032 (v02 BOCHS \
               BXPCTCPA 0000001 BXPC 00000001)
   # ls -l /dev/tpm*
   crw-------. 1 root root 10, 224 Jul 11 10:11 /dev/tpm0
   # find /sys/devices/ | grep pcrs$ | xargs cat
   PCR-00: 35 4E 3B CE 23 9F 38 59 ...
   ...
   PCR-23: 00 00 00 00 00 00 00 00 ...
Migration with the TPM emulator
===============================
The TPM emulator supports the following types of virtual machine
migration:
- VM save / restore (migration into a file)
- Network migration
- Snapshotting (migration into storage like QCOW2 or QED)
The following command sequences can be used to test VM save / restore.
In a 1st terminal, start an instance of swtpm using the following command:
.. code-block:: console

   mkdir /tmp/mytpm1
   swtpm socket --tpmstate dir=/tmp/mytpm1 \
     --ctrl type=unixio,path=/tmp/mytpm1/swtpm-sock \
     --log level=20 --tpm2
In a 2nd terminal start the VM:
.. code-block:: console

   qemu-system-x86_64 -display sdl -accel kvm \
     -m 1024 -boot d -bios bios-256k.bin -boot menu=on \
     -chardev socket,id=chrtpm,path=/tmp/mytpm1/swtpm-sock \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-tis,tpmdev=tpm0 \
     -monitor stdio \
     test.img
Verify that the attached TPM is working as expected using applications
inside the VM.
To store the state of the VM use the following command in the QEMU
monitor in the 2nd terminal:
.. code-block:: console

   (qemu) migrate "exec:cat > testvm.bin"
   (qemu) quit
At this point a file called ``testvm.bin`` should exist and the swtpm
and QEMU processes should have ended.
To test 'VM restore' you have to start the swtpm with the same
parameters as before. If a TPM 2 was previously used (``--tpm2``), the
``--tpm2`` option must again be passed on the command line.
In the 1st terminal restart the swtpm with the same command line as
before:
.. code-block:: console

   swtpm socket --tpmstate dir=/tmp/mytpm1 \
     --ctrl type=unixio,path=/tmp/mytpm1/swtpm-sock \
     --log level=20 --tpm2
In the 2nd terminal restore the state of the VM using the additional
'-incoming' option.
.. code-block:: console

   qemu-system-x86_64 -display sdl -accel kvm \
     -m 1024 -boot d -bios bios-256k.bin -boot menu=on \
     -chardev socket,id=chrtpm,path=/tmp/mytpm1/swtpm-sock \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-tis,tpmdev=tpm0 \
     -incoming "exec:cat < testvm.bin" \
     test.img
Troubleshooting migration
-------------------------
There are several reasons why migration may fail. In case of problems,
please ensure that the command lines adhere to the following rules
and, if possible, that identical versions of QEMU and swtpm are used
at all times.
VM save and restore:
- QEMU command line parameters should be identical apart from the
'-incoming' option on VM restore
- swtpm command line parameters should be identical
VM migration to 'localhost':
- QEMU command line parameters should be identical apart from the
'-incoming' option on the destination side
- swtpm command line parameters should point to two different
directories on the source and destination swtpm (--tpmstate dir=...)
(especially if different versions of libtpms were to be used on the
same machine).
VM migration across the network:
- QEMU command line parameters should be identical apart from the
'-incoming' option on the destination side
- swtpm command line parameters should be identical
VM Snapshotting:
- QEMU command line parameters should be identical
- swtpm command line parameters should be identical
Besides that, migration failure reasons on the swtpm level may include
the following:
- the versions of the swtpm on the source and destination sides are
incompatible
- downgrading of TPM state may not be supported
- the source and destination libtpms were compiled with different
compile-time options and the destination side refuses to accept the
state
- different migration keys are used on the source and destination side
and the destination side cannot decrypt the migrated state
(swtpm ... --migration-key ... )
.. _TIS specification:
   https://trustedcomputinggroup.org/pc-client-work-group-pc-client-specific-tpm-interface-specification-tis/

.. _CRB specification:
   https://trustedcomputinggroup.org/resource/pc-client-platform-tpm-profile-ptp-specification/

.. _ACPI specification:
   https://trustedcomputinggroup.org/tcg-acpi-specification/

.. _PPI specification:
   https://trustedcomputinggroup.org/resource/tcg-physical-presence-interface-specification/

.. _SWTPM protocol:
   https://github.com/stefanberger/swtpm/blob/master/man/man3/swtpm_ioctls.pod
