Refactor _do_accept() into _do_start_server() and _do_accept(). As of
this commit, the former calls the latter, but in subsequent commits
they'll be split apart.
(So please forgive the misnomer for _do_start_server(); it will live up
to its name shortly, and the docstring will be updated then too. I'm
just cutting down on some churn.)
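Roughly, the intermediate shape is this (just a sketch; the signatures and bodies shown here are illustrative, not the actual implementation):

    class AsyncProtocol:                 # hypothetical stand-in
        async def _do_start_server(self, address, ssl=None):
            # Misnomer for now: this still performs the accept step too.
            await self._do_accept(address, ssl)

        async def _do_accept(self, address, ssl=None):
            # bind/listen/accept all still happen in here; later commits
            # move the bind/listen portion up into _do_start_server().
            ...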
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-7-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
I would really like to keep this under 1000 lines, I promise. Doesn't
look like it's gonna happen.
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-6-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
As part of disentangling the monolithic nature of _do_accept(), split
out the incoming callback to prepare for factoring out the "wait for a
peer" step. Namely, this means using an event signal we can wait on from
outside of this method.
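In rough strokes, the pattern looks like this (a sketch only; the class, attribute, and method names are illustrative):

    import asyncio

    class Server:                        # illustrative only
        def __init__(self):
            self._accepted = asyncio.Event()
            self._reader = None
            self._writer = None

        async def _incoming(self, reader, writer):
            # Connection callback invoked by asyncio's server machinery:
            # record the streams and signal anyone waiting for a peer.
            self._reader, self._writer = reader, writer
            self._accepted.set()

        async def _wait_for_peer(self):
            # Waiting for a peer is now decoupled from the callback itself.
            await self._accepted.wait()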
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
These two methods attempted to entirely envelop the logic of
establishing a connection to a peer from start to finish. However, we need to
break apart the incoming connection step into more granular steps. We
will no longer be able to reasonably constrain the logic inside of these
helper functions.
So, remove them - with _session_guard(), they no longer serve a real
purpose.
Although the public API doesn't change, the internal API does. Now that
there are no intermediary methods between e.g. connect() and
_do_connect(), there's no hook where the runstate is set. As a result,
the test suite changes a little to cope with the new semantics of
_do_accept() and _do_connect().
Lastly, take some pieces of the now-deleted docstrings and move
them up to the public interface level. They were a little more detailed,
and it won't hurt to keep them.
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Previously, I had a method named "accept()" that, under the hood, calls
bind(2), listen(2) *and* accept(2). I meant this as a simplification and
counterpart to the one-shot "connect()" method.
This is confusing to readers who expect accept() to mean *just*
accept(2). Since I need to split apart the "accept()" method into
multiple methods anyway (one of which strongly resembles accept(2)), it
feels pertinent to rename this method *now*.
Rename this all-in-one method "start_server_and_accept()" instead.
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
In _new_session, there's a fairly complex except clause that's used to
give semantic errors to callers of accept() and connect(). We need to
create a new two-step replacement for accept(), so factoring out this
piece of logic will be useful.
Bolster the comments and docstring here to try and demystify what's
going on in this fairly delicate piece of Python magic.
(If we were using Python 3.7+, this would be an @asynccontextmanager. We
don't have that very nice piece of magic, however, so this must take an
Awaitable to manage the Exception contexts properly. We pay the price
for platform compatibility.)
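A minimal sketch of the idea (the real _session_guard() lives on the protocol class and its error taxonomy is richer; ConnectError's constructor arguments as shown are an assumption):

    class ConnectError(Exception):        # stand-in for the real class
        def __init__(self, emsg, exc):
            super().__init__(f"{emsg}: {exc!s}")
            self.exc = exc

    async def _session_guard(coro, emsg):
        # Await one step of session establishment; on ordinary failure,
        # disconnect/cleanup (elided here) and re-raise a semantic error.
        try:
            await coro
        except BaseException as err:
            # cleanup would happen here, regardless of exception type
            if isinstance(err, Exception):
                raise ConnectError(emsg, err) from err
            raise  # SystemExit/KeyboardInterrupt pass through unwrapped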
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220225205948.3693480-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Setuptools v60 and later include a bundled copy of distutils, a
deprecated standard library module scheduled for removal in future
versions of Python. Setuptools v60 can only be installed on Python 3.7
and later.
Python has a distutils.sysconfig.get_python_lib() function that returns
'/usr/lib/pythonX.Y' on posix systems. RPM-based systems actually use
'/usr/lib64/pythonX.Y' instead, so Fedora patches stdlib distutils for
Python 3.7 and Python 3.8 to return the correct value.
Python 3.9 and later introduce a sys.platlibdir property, which returns
the correct value on RPM-based systems.
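For illustration (assuming an RPM-based system; the exact values depend on the installation):

    import sys
    from distutils import sysconfig

    # Fedora's patched stdlib distutils reports the lib64-based path here;
    # setuptools' bundled copy does not, which is what trips up pylint.
    print(sysconfig.get_python_lib(plat_specific=True))

    # Python 3.9+ exposes the platform library directory directly:
    if sys.version_info >= (3, 9):
        print(sys.platlibdir)   # 'lib64' on RPM-based systems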
Switching to a distutils package not provided by Fedora on Python 3.7
and 3.8 introduces a regression in distutils.sysconfig.get_python_lib()
that ultimately causes pylint to emit false positives, because it can no
longer find the system source libraries.
Many Python tools are fairly aggressive about updating setuptools
packages, and so even though this package is a fair bit newer than
Python 3.7/3.8, it's not entirely unreasonable for a given user to have
such a modern package with a fairly old Python interpreter.
Updates to Python 3.7 and Python 3.8 are being produced for Fedora which
will fix the problem on up-to-date systems. Until then, we can force the
loading of platform-provided distutils when running the pylint
test. This is the least-invasive yet most comprehensive fix.
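Concretely, this amounts to something like the following when invoking pylint (SETUPTOOLS_USE_DISTUTILS is setuptools' own switch; how it is wired into the test harness here, and the pylint target shown, are only a sketch):

    import os
    import subprocess

    env = dict(os.environ)
    # Ask setuptools >= 60 to defer to the platform's stdlib distutils,
    # which Fedora patches to report the correct paths.
    env['SETUPTOOLS_USE_DISTUTILS'] = 'stdlib'
    subprocess.run(['pylint', 'qemu/'], env=env, check=True)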
References:
https://github.com/pypa/setuptools/pull/2896
https://github.com/PyCQA/pylint/issues/5704
https://github.com/pypa/distutils/issues/110
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20220204221804.2047468-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
When invoking setup.py directly, the default behavior for 'install' is
to run the bdist_egg installation hook, which is ... actually deprecated
by setuptools. It doesn't seem to work quite right anymore.
By contrast, 'pip install' will invoke the bdist_wheel hook
instead. This leads to differences in behavior for the two approaches. I
advocate using pip in the documentation in this directory, but the
'setup.py' invocation, which has been the norm in the Python world for a
long time, may deceptively appear to work at first glance.
Add an error message that will save a bit of time and frustration
that points the user towards using the supported installation
invocation.
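A minimal sketch of such a guard (the real message and detection logic in setup.py may differ):

    import sys
    import setuptools

    if sys.argv[1:2] == ['install']:
        sys.exit(
            "Installing directly via setup.py is not supported; "
            "please use 'pip install .' instead."
        )

    setuptools.setup()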
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Message-id: 20220207213039.2278569-1-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
When running QMP commands with very large response payloads, it is often
not easy to spot the info you want. If we can save the response to a
file then tools like 'grep' or 'jq' can be used to extract information.
For convenience of processing, we merge the QMP command and response
dictionaries together:
{
    "arguments": {},
    "execute": "query-kvm",
    "return": {
        "enabled": false,
        "present": true
    }
}
Example usage
$ ./scripts/qmp/qmp-shell-wrap -l q.log -p -- ./build/qemu-system-x86_64 -display none
Welcome to the QMP low-level shell!
Connected
(QEMU) query-kvm
{
    "return": {
        "enabled": false,
        "present": true
    }
}
(QEMU) query-mice
{
    "return": [
        {
            "absolute": false,
            "current": true,
            "index": 2,
            "name": "QEMU PS/2 Mouse"
        }
    ]
}
$ jq --slurp '. | to_entries[] | select(.value.execute == "query-kvm") |
.value.return.enabled' < q.log
false
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220128161157.36261-3-berrange@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
With the current 'qmp-shell' tool developers must first spawn QEMU with
a suitable -qmp arg and then spawn qmp-shell in a separate terminal
pointing to the right socket.
With 'qmp-shell-wrap' developers can ignore QMP sockets entirely and
just pass the QEMU command and arguments they want. The program will
listen on a UNIX socket and tell QEMU to connect QMP to that.
For example, this:
# qmp-shell-wrap -- qemu-system-x86_64 -display none
is roughly equivalent to running:
# qemu-system-x86_64 -display none -qmp qmp-shell-1234 &
# qmp-shell qmp-shell-1234
Except that 'qmp-shell-wrap' switches the socket peers around so that
it is the UNIX socket server and QEMU is the socket client. This makes
QEMU reliably go away when qmp-shell-wrap exits, closing the server
socket.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20220128161157.36261-2-berrange@redhat.com
[Edited for rebase. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
The synchronous QMP library would bind to the server address during
__init__(). The new library delays this to the accept() call, because
binding occurs inside of the call to start_[unix_]server(), which is an
async method -- so it cannot happen during __init__ anymore.
Python 3.7+ adds the ability to create the server (and thus the bind()
call) and begin the active listening in separate steps, but we don't
have that functionality in 3.6, our current minimum.
Therefore ... Add a temporary workaround that allows the synchronous
version of the client to bind the socket in advance, guaranteeing that
there will be a UNIX socket in the filesystem ready for the QEMU client
to connect to without a race condition.
(Yes, it's a bit ugly. Fixing it more nicely will have to wait until our
minimum Python version is 3.7+.)
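The shape of the workaround, roughly (a sketch that assumes the pre-bound socket is later handed to asyncio via its sock= argument; the actual plumbing differs in detail):

    import asyncio
    import socket

    def bind_unix_socket(path: str) -> socket.socket:
        # Synchronously create and bind the UNIX socket so that the file
        # exists on disk before QEMU is told to connect to it.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.bind(path)
        return sock

    async def start_server(sock: socket.socket) -> asyncio.AbstractServer:
        # Listening and accepting can then happen later, inside the async
        # machinery, without re-binding (and without a race against QEMU).
        return await asyncio.start_unix_server(lambda r, w: None, sock=sock)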
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20220201041134.1237016-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
We need a slightly newer version of mypy in order to use some features
of the asyncio server functions in the next commit.
(Note: pipenv is not really suited to upgrading individual packages; I
need to replace this tool with something better for the task. For now,
the miscellaneous updates not related to the mypy upgrade are simply
beyond my control. It's on my list to take care of soon.)
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20220201041134.1237016-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
This allows us to pack in some extra information about the failure,
which guarantees that if the caller did not *intentionally* cause a
failure (by capturing this Exception), some pretty good clues will be
printed at the bottom of the traceback information.
This will help make failures in the event of a non-negative return code
more obvious when they go unhandled; the current behavior in
_post_shutdown() is to print a warning message only in the event of
signal-based terminations (for negative return codes).
(Note: In Python, catching BaseException instead of Exception catches a
broader array of exception types, including SystemExit and
KeyboardInterrupt. We do not want to "wrap" such exceptions as a
VMLaunchFailure, because that will 'downgrade' the exception from a
BaseException to a regular Exception. We do, however, want to perform
cleanup in either case, so catch on the broadest scope and
wrap-and-re-raise only in the more targeted scope.)
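In outline, the pattern is (a sketch; the real VMLaunchFailure records more context and lives alongside QEMUMachine):

    class VMLaunchFailure(Exception):
        """Simplified stand-in; the real class records more detail."""
        def __init__(self, exitcode, command, exc):
            super().__init__(f"exitcode={exitcode}, command={command!r}: {exc}")
            self.exitcode = exitcode
            self.command = command
            self.exc = exc

    def guarded_launch(do_launch, cleanup, exitcode, command):
        try:
            do_launch()
        except BaseException as exc:
            cleanup()                       # always clean up ...
            if isinstance(exc, Exception):  # ... but only wrap Exceptions,
                raise VMLaunchFailure(exitcode(), command, exc) from exc
            raise                           # so BaseExceptions keep their class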
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20220201041134.1237016-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
QEMU versions prior to the "oob" capability *also* can't accept the
"enable" keyword argument at all. Fix the handshake process with older
QEMU versions.
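The negotiation then looks roughly like this (a sketch; the helper name and greeting handling are illustrative):

    def negotiate_arguments(greeting: dict) -> dict:
        # Ask for OOB only when the server advertised it; when it did not,
        # omit "enable" entirely -- pre-"oob" QEMUs reject even an empty list.
        capabilities = greeting.get('QMP', {}).get('capabilities', [])
        if 'oob' in capabilities:
            return {'enable': ['oob']}
        return {}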
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20220201041134.1237016-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
In order to upload a QMP package to PyPI, I want to remove any scripts
that I am not 100% confident I want to support upstream, beyond our
castle walls.
Move most of our QMP utilities into the utils package so we can split
them out from the PyPI upload.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
We have a replacement for async QMP, but it doesn't have feature parity
yet. For now, then, port the old tool onto the new backend.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Async QMP always raises a "ConnectError" on any connection error which
houses the cause in a second exception. We can check if this root cause
was python's ConnectionError to determine a fairly similar condition to
the original error check here.
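In practice the check is along these lines (a sketch; the attribute holding the root cause is assumed to be named 'exc'):

    from qemu.aqmp import ConnectError   # re-exported at the package root

    def try_connect(qmp, socket_path) -> bool:
        """Return False when the failure was an ordinary connection error."""
        try:
            qmp.connect(socket_path)
        except ConnectError as err:
            # The wrapped root cause is assumed to live on 'err.exc'.
            if isinstance(err.exc, ConnectionError):
                return False
            raise
        return True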
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Cleanup related to commit ccd3b3b811, "qemu-option: warn for
short-form boolean options".
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This is in preparation for renaming qemu.aqmp to qemu.qmp. I should
have done this from the very beginning, but it's a convenient time
to make sure this churn is taken care of.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
It's a commonly needed definition; it can be re-exported by the root.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Copy the remaining type definitions from QMP into the qemu.aqmp.legacy
module. Now, users that require the legacy interface don't need to
import anything other than the qemu.aqmp.legacy wrapper.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
This exception can be injected into any await statement. If we are
canceled via timeout, we want to clear the pending execution record on
our way out.
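The fix boils down to making the cleanup unconditional on the way out, roughly (names are illustrative):

    async def execute_with_cleanup(send, wait_for_reply, pending, msg_id):
        # A CancelledError (e.g. from a timeout) can be injected into either
        # await below, so the pending-execution record is cleared in a
        # finally block on the way out, cancelled or not.
        pending[msg_id] = None
        try:
            await send(msg_id)
            return await wait_for_reply(msg_id)
        finally:
            del pending[msg_id]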
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
asyncio can complain *very* loudly if you forget to back out of things
gracefully before the garbage collector starts destroying objects that
contain live references to asyncio Tasks.
The usual fix is just to remember to call aqmp.disconnect(), but for the
sake of the legacy wrapper and quick, one-off scripts where a graceful
shutdown is not necessarily of paramount importance, add a courtesy
cleanup that will trigger prior to seeing screenfuls of confusing
asyncio tracebacks.
Note that we can't *always* save you from yourself; depending on when
the GC runs, you might just seriously be out of luck. The best we can do
in this case is to gently remind you to clean up after yourself.
(Still much better than multiple pages of incomprehensible python
warnings for the crime of forgetting to put your toys away.)
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
The old legacy runner no longer seems to work with output logging, so we
can't see failure logs when a test case fails. The new runner doesn't
(seem to) support Coverage.py yet, but seeing error output is a more
important feature.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Message-id: 20220119193916.4138217-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
setuptools is a package that replaces the python stdlib 'distutils'. It
is generally installed by all venv-creating tools "by default". It isn't
actually needed at runtime for the qemu package, so our own setup.cfg
does not mention it as a dependency.
However, tox will create virtual environments that include it, and will
upgrade it to the very latest version. The 'venv' tool will also include
whichever version your host system happens to have.
Unfortunately, setuptools version 60.0.0 and above include a hack to
forcibly overwrite python's built-in distutils. The pylint tool that we
use to run code analysis checks on this package relies on distutils and
suffers regressions when setuptools >= 60.0.0 is present at all, see
https://github.com/PyCQA/pylint/issues/5704
Instruct tox and the 'check-dev' targets to avoid setuptools packages
that are too new, for now. Pipenv is unaffected, because setuptools 60
does not offer Python 3.6 support, and our pipenv config is pinned
against Python 3.6.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Tested-by: Cleber Rosa <crosa@redhat.com>
Message-id: 20220121005221.142236-1-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Mypy 0.930, released Dec 22, changes the way argparse objects are
considered. Crafting a definition that works under Python 3.6 and an
older mypy alongside newer versions simultaneously is ... difficult,
so... eh. Stub it out with an 'Any' definition to get the CI moving
again.
Oh well.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Message-id: 20220110191349.1841027-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
0.920 (Released 2021-12-15) is not entirely happy with the
way that I was defining _FutureT:
    qemu/aqmp/protocol.py:601: error: Item "object" of the upper bound
    "Optional[Future[Any]]" of type variable "_FutureT" has no attribute
    "done"
Update it with something a little mechanically simpler that works better
across a wider array of mypy versions.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20220110191349.1841027-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
pylint's dependency astroid appears to have bugs in 2.9.1 and 2.9.2 (Dec
31 and Jan 3) that appear to erroneously expect the qemu namespace to
have an __init__.py file. astroid 2.9.3 (Jan 9) avoids that problem, but
appears to not understand a relative import within a namespace package.
Update the relative import - it was worth changing anyway, because these
packages will eventually be packaged and distributed separately.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Beraldo Leal <bleal@redhat.com>
Message-id: 20220110191349.1841027-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
3.6 doesn't play keepaway with the socket object, so we don't need to go
fishing for it on this version. In fact, so long as 'sendmsg' is still
available, it's probably preferable to just use that method and only go
fishing for forbidden details when we absolutely have to.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-8-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
In the case that the QEMU process actually launches -- but then dies so
quickly that we can't establish a QMP connection to it -- QEMUMachine
currently calls _post_shutdown() assuming that it never launched the VM
process.
This isn't true, though: it "merely" may have failed to establish a QMP
connection and the process is in the middle of its own exit path.
If we don't wait for the subprocess, the caller may get a bogus `None`
return for .exitcode(). This behavior was observed from
device-crash-test; after the switch to Async QMP, the timings were
changed such that it was now seemingly possible to witness the failure
of "vm.launch()" *prior* to the exitcode becoming available.
The semantic of the `_launched` property is changed in this
patch. Instead of representing the condition "launch() executed
successfully", it will now represent "has forked a child process
successfully". This way, wait() when called in the exit path won't
become a no-op.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-6-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
No need to clear them only to set them later.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
If you create two instances of QEMUMachine(), they'll both create the
same nickname by default -- which is not that helpful.
Luckily, they'll both create unique temporary directories ... but due to
user configuration, they may share logging and sockfile directories,
meaning two instances can collide. The Python logging will also be quite
confusing, with no differentiation between the two instances.
Add an instance disambiguator (The memory address of the instance) to
the default nickname to foolproof this in all cases.
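Something along these lines (a sketch; the real nickname format differs):

    class Machine:                      # illustrative stand-in
        def __init__(self, binary, name=None):
            # id(self) is unique for each live instance, so two machines
            # created with default names no longer share a nickname (or,
            # by extension, default log and sockfile names).
            self._name = name or f"{type(self).__name__.lower()}-{id(self):x}"
            self._binary = binary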
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
It doesn't matter if it was the user or the class itself that specified
where the sockfile should be created; the fact is that if we are using a
sockfile here, we created it and we can clean it up.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Analogous to temp_dir and log_dir, add a sock_dir property that defaults
to @temp_dir -- instead of base_temp_dir -- when the user hasn't
overridden the sock dir value in the initializer.
This gives us a much more unique directory to put sockfiles in by default.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-id: 20211118204620.1897674-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
When ConnectError is used to wrap an Exception that was initialized
without an error message, we are treated to a traceback with a rubbish
line like this:
... ConnectError: Failed to establish session:
Correct this to use the name of an exception as a fallback message:
... ConnectError: Failed to establish session: EOFError
Better!
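The fallback is as simple as (sketch):

    def describe(exc: BaseException) -> str:
        # str(EOFError()) is empty; fall back to the class name so the
        # final line of the traceback is never blank.
        return str(exc) or type(exc).__name__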
Signed-off-by: John Snow <jsnow@redhat.com>
Reported-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20211111143719.2162525-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
If we receive ConnectionResetError (ECONNRESET) while attempting to
perform capabilities negotiation -- prior to the establishment of the
async reader/writer tasks -- the disconnect function is not aware that
we are in an error pathway.
As a result, when attempting to close the StreamWriter, we'll see the
same ConnectionResetError that caused us to initiate a disconnect in the
first place, which will cause the disconnect task itself to fail, which
emits a CRITICAL logging event.
I still don't know if there's a smarter way to check to see if an
exception received at this point is "the same" exception as the one that
caused the initial disconnect, but for now the problem can be avoided by
improving the error pathway detection in the exit path.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20211111143719.2162525-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
In the discussion about renaming the `tests/acceptance` directory [1],
the conclusion was that the folders inside `tests` are related to the
framework running the tests and not directly related to the type of the
tests.
This changes the folder to `tests/avocado` and adjusts the Makefile, the
CI-related files, and the documentation.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-05/msg06553.html
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20211105155354.154864-3-willianr@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Swap out the synchronous QEMUMonitorProtocol from qemu.qmp with the sync
wrapper from qemu.aqmp instead.
Add an escape hatch in the form of the environment variable
QEMU_PYTHON_LEGACY_QMP which allows you to cajole QEMUMachine into using
the old implementation, proving that both implementations work
concurrently.
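The escape hatch amounts to an import-time switch, roughly (a sketch; the real selection logic in machine.py is more involved):

    import os

    if os.environ.get('QEMU_PYTHON_LEGACY_QMP'):
        from qemu.qmp import QEMUMonitorProtocol          # old implementation
    else:
        from qemu.aqmp.legacy import QEMUMonitorProtocol  # sync wrapper over aqmp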
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-id: 20211026175612.4127598-9-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
This is a wrapper around the async QMPClient that mimics the old,
synchronous QEMUMonitorProtocol class. It is designed to be
interchangeable with the old implementation.
It does not, however, attempt to mimic Exception compatibility.
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-id: 20211026175612.4127598-8-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
The scary message interferes with the iotests output. Coincidentally,
if iotests still works after removing this, that's good evidence that we
don't really need to scare people away from using it.
Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-id: 20211026175612.4127598-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
To use the AQMP backend, Machine just needs to be a little more diligent
about what happens when closing a QMP connection. The operation is no
longer a freebie in the async world; it may return errors encountered in
the async bottom half on incoming message receipt, etc.
(AQMP's disconnect, ultimately, serves as the quiescence point where all
async contexts are gathered together, and any final errors reported at
that point.)
Because async QMP continues to check for messages asynchronously, it's
almost certain that the loop will have exited due to EOF after
issuing the last 'quit' command. That error will ultimately be bubbled
up when attempting to close the QMP connection. The manager class here
then is free to discard it -- if it was expected.
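In outline (a sketch; 'quit_issued' stands in for however the manager tracks that state):

    def close_qmp(qmp, quit_issued: bool) -> None:
        try:
            qmp.close()
        except Exception:
            # EOF (or similar) surfacing here is expected if we already told
            # QEMU to quit; anything else is a genuine problem.
            if not quit_issued:
                raise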
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20211026175612.4127598-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
If we spy on the QMP commands instead, we don't need callers to remember
to pass it. Seems like a fair trade-off.
The one slightly weird bit is overloading this instance variable for
wait(), where we use it to mean "don't issue the qmp 'quit'
command". This means that wait() will "fail" if the QEMU process does
not terminate of its own accord.
In most cases, we probably already issued 'quit' -- some iotests do
this -- but in others, we may be waiting for QEMU to
terminate for some other reason, such as a test wherein we tell the
guest (directly) to shut down.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20211026175612.4127598-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Run mypy and pylint on the iotests files directly from the Python CI
test infrastructure. This ensures that any accidental breakages to the
qemu.[qmp|aqmp|machine|utils] packages will be caught by that test
suite.
It also ensures that these linters are run with well-known versions and
test against a wide variety of python versions, which helps to find
accidental cross-version python compatibility issues.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-id: 20211019144918.3159078-15-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
It's not used anymore.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210923004938.3999963-11-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
It turns out you can do this directly from Python ... and because of
this, you don't need to worry about setting the inheritability of the
fds or spawning another process.
Doing this is helpful because it allows QEMUMonitorProtocol to keep its
file descriptor and socket object as private implementation
details. /that/ is helpful in turn because it allows me to write a
compatible, alternative implementation.
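Passing a file descriptor over a UNIX socket from pure Python looks roughly like this (the SCM_RIGHTS incantation is standard; how the socket object is obtained is exactly the detail being kept private):

    import array
    import socket

    def send_fd(sock: socket.socket, fd: int) -> None:
        # One byte of ordinary data plus SCM_RIGHTS ancillary data carrying
        # the descriptor; no helper process or fd-inheritance games needed.
        sock.sendmsg(
            [b'\x00'],
            [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array('i', [fd]))])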
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210923004938.3999963-10-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
All callers in the tree *already* clear the events after a call to
get_events(). Do it automatically instead and update callsites to remove
the manual clear call.
These semantics are quite a bit easier to emulate with async QMP, and
nobody appears to be abusing some emergent properties of what happens if
you decide not to clear them, so let's dial down to the dumber, simpler
thing.
Specifically: callers of clear_events() right after a call to
get_events() are more likely expressing their desire not to see any
events they just retrieved, whereas callers of clear_events() unrelated
to a recent call to pull_event()/get_events() are likely expressing
their desire to
simply drop *all* pending events straight onto the floor. In the sync
world, this is safe enough; in the async world it's nearly impossible to
promise that nothing happens between getting and clearing the
events.
Making the retrieval also clear the queue is vastly simpler.
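Call sites reduce to the simpler form (a sketch; method names follow the ones used above):

    def drain_events(qmp):
        # Retrieval now also empties the queue; the explicit
        # clear_events() call that used to follow is no longer needed.
        return qmp.get_events(wait=False)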
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210923004938.3999963-9-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>