Use the defined symbol rather than hardcoding the value to
check whether the TMC buffer is full.
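A minimal sketch of the intent; the TMC_STS offset and the TMC_STS_FULL
macro name are taken as assumptions, not necessarily the driver's exact
code:

	#define TMC_STS_FULL	BIT(0)

	/* a named bit instead of the magic number 0x1 */
	bool full = readl_relaxed(drvdata->base + TMC_STS) & TMC_STS_FULL;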
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch cleans up the peripheral id table for different ETMv4
implementations.
As per the Cortex-A53 TRM, the ETM has the following id values:
Register	Value	Offset
Peripheral ID0	0x5D	0xFE0
Peripheral ID1	0xB9	0xFE4
Peripheral ID2	0x4B	0xFE8
Peripheral ID3	0x00	0xFEC
where PID2 has the following format:
[7:4] Revision
[3] JEDEC 0b1 res1. Indicates a JEP106 identity code is used
[2:0] DES_1 0b011 ARM Limited. This is bits[6:4] of JEP106 ID code
The existing table entry checks only bits [1:0], which is not
sufficient. Fix it to match bits [3:0], just like the other entries do.
While at it, correct the comment for the A57 and A53 entries.
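A sketch of what the corrected A53 entry could look like, with the id
assembled from the PID bytes quoted above (PID0=0x5D, PID1=0xB9,
PID2[3:0]=0xB) and the mask widened to also cover bits [3:0] of PID2;
the exact table lives in the etm4x driver and is only approximated here:

	static struct amba_id etm4_ids[] = {
		{			/* ETM 4.0 - Cortex-A53 */
			.id	= 0x000bb95d,
			.mask	= 0x000fffff,
			.data	= "ETM 4.0",
		},
		{ 0, 0 },
	};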
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
At present the ETF or ETR gives out the entire device
buffer, even if little or no trace data is available.
This patch limits the trace data given out to the actual
trace data collected.
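A rough sketch of the idea, assuming a drvdata->len field that records
the number of bytes actually collected (names here are assumptions):

	/* clamp a read at offset *ppos to the collected trace,
	 * not to the full buffer size */
	if (*ppos + len > drvdata->len)
		len = drvdata->len - *ppos;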
Cc: mathieu.poirier@linaro.org
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This is a cleanup patch.
coresight_device->conns holds an array of pointers to the devices
connected to the OUT ports of a component. Sinks, e.g. ETR, do not
have an OUT port (nr_outport = 0), as they stream the trace to
memory via AXI.
At coresight_register() we do:

	conns = kcalloc(csdev->nr_outport, sizeof(*conns), GFP_KERNEL);
	if (!conns) {
		ret = -ENOMEM;
		goto err_kzalloc_conns;
	}
For ETR, since the total size requested from kcalloc is zero, the return
value is ZERO_SIZE_PTR (!= NULL). Hence csdev->conns = ZERO_SIZE_PTR,
which cannot be verified later to contain a valid pointer. The code which
accesses csdev->conns is bounded by the csdev->nr_outport check,
hence we don't try to dereference the ZERO_SIZE_PTR. This patch cleans
up the csdev->conns initialisation to make sure we initialise it
properly (i.e., either NULL or a valid conns array).
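A sketch of the cleaned-up initialisation in coresight_register(),
following the description above: only allocate when there are OUT ports,
otherwise leave csdev->conns NULL.

	struct coresight_connection *conns = NULL;

	if (csdev->nr_outport) {
		conns = kcalloc(csdev->nr_outport, sizeof(*conns), GFP_KERNEL);
		if (!conns) {
			ret = -ENOMEM;
			goto err_kzalloc_conns;
		}
	}
	csdev->conns = conns;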
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch cleans up the error handling path for tmc_probe
as a side effect of the removal of the spurious dma_free_coherent().
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit de5461970b ("coresight: tmc: allocating memory when needed")
removed the static allocation of the buffer for the trace data in ETR mode
in tmc_probe. However, it failed to remove the dma_free_coherent() in
tmc_probe when the probe fails due to other reasons. This patch gets
rid of the incorrect dma_free_coherent() call.
Fixes: de5461970b ("coresight: tmc: allocating memory when needed")
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
etm4_trace_id is not guaranteed to be executed on the CPU whose ETM is
being accessed. This leads to an exception similar to the one below if
the CPU whose ETM is being accessed is in a deeper idle state. So it
must be executed on the CPU whose ETM is being accessed.
Unhandled fault: synchronous external abort (0x96000210) at 0xffff000008db4040
Internal error: : 96000210 [#1] PREEMPT SMP
Modules linked in:
CPU: 5 PID: 5979 Comm: etm.sh Not tainted 4.7.0-rc3 #159
Hardware name: ARM Juno development board (r2) (DT)
task: ffff80096dd34b00 ti: ffff80096dfe4000 task.ti: ffff80096dfe4000
PC is at etm4_trace_id+0x5c/0x90
LR is at etm4_trace_id+0x3c/0x90
Call trace:
etm4_trace_id+0x5c/0x90
coresight_id_match+0x78/0xa8
bus_for_each_dev+0x60/0xa0
coresight_enable+0xc0/0x1b8
enable_source_store+0x3c/0x70
dev_attr_store+0x18/0x28
sysfs_kf_write+0x48/0x58
kernfs_fop_write+0x14c/0x1e0
__vfs_write+0x1c/0x100
vfs_write+0xa0/0x1b8
SyS_write+0x44/0xa0
el0_svc_naked+0x24/0x28
However, TRCTRACEIDR is not guaranteed to hold the previously programmed
trace id if the CPU enters deeper idle states. Further, the trace id that
is computed in etm4_init_trace_id is programmed into TRCTRACEIDR only in
etm4_enable_hw, which happens much later in the sequence, after
coresight_id_match is executed from enable_source_store.
This patch simplifies etm4_trace_id by returning the stashed trace id
value similar to etm4_cpu_id.
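A minimal sketch of the simplified callback; the drvdata field name
(trcid) is assumed:

	static int etm4_trace_id(struct coresight_device *csdev)
	{
		struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);

		/* return the stashed trace id, no hardware access needed */
		return drvdata->trcid;
	}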
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The CoreSight STM device allows direct mapping of the channel regions to
userspace for zero-copy writing. To support this ability, the STM
framework provides a hook, 'mmio_addr'; this patch implements that
hook for CoreSight STM.
This patch also adds a field to 'channel_space' to save the physical
base address of the channel region, which the mmap operation needs to know.
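A sketch of what the hook implementation might look like; the
channel-stride macro and the new 'phys' field name are assumptions:

	static phys_addr_t stm_mmio_addr(struct stm_data *stm_data,
					 unsigned int master,
					 unsigned int channel,
					 unsigned int nr_chans)
	{
		struct stm_drvdata *drvdata = container_of(stm_data,
							   struct stm_drvdata,
							   stm);
		/* physical base of the channel region + channel offset */
		return drvdata->chs.phys + channel * BYTES_PER_CHANNEL;
	}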
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If the addition of the coresight devices gets deferred, then there is a
window before child_name is populated by of_get_coresight_platform_data
from the respective component driver's probe, and an attempt to access
it from coresight_orphan_match results in a kernel NULL pointer
dereference, as below:
Unable to handle kernel NULL pointer dereference at virtual address 0x0
Internal error: Oops: 96000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1038 Comm: kworker/0:1 Not tainted 4.7.0-rc3 #124
Hardware name: ARM Juno development board (r2) (DT)
Workqueue: events amba_deferred_retry_func
PC is at strcmp+0x1c/0x160
LR is at coresight_orphan_match+0x7c/0xd0
Call trace:
strcmp+0x1c/0x160
bus_for_each_dev+0x60/0xa0
coresight_register+0x264/0x2e0
tmc_probe+0x130/0x310
amba_probe+0xd4/0x1c8
driver_probe_device+0x22c/0x418
__device_attach_driver+0xbc/0x158
bus_for_each_drv+0x58/0x98
__device_attach+0xc4/0x160
device_initial_probe+0x10/0x18
bus_probe_device+0x94/0xa0
device_add+0x344/0x580
amba_device_try_add+0x194/0x238
amba_deferred_retry_func+0x48/0xd0
process_one_work+0x118/0x378
worker_thread+0x48/0x498
kthread+0xd0/0xe8
ret_from_fork+0x10/0x40
This patch adds a check for a non-NULL conn->child_name before
accessing it.
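A sketch of the guard inside the coresight_orphan_match() loop
(surrounding code abridged, names approximate):

	for (i = 0; i < i_csdev->nr_outport; i++) {
		conn = &i_csdev->conns[i];

		/* a deferred device may not have child_name set up yet */
		if (!conn->child_name)
			continue;

		if (!strcmp(dev_name(&csdev->dev), conn->child_name))
			conn->child_dev = csdev;
	}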
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reports for available memory should use the si_mem_available() value.
The previous freeram value does not include available page cache memory.
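For reference, si_mem_available() returns an estimate in pages, so a byte
count would be derived roughly like this (a sketch, not the driver's
exact code):

	unsigned long long avail_bytes =
		(unsigned long long)si_mem_available() << PAGE_SHIFT;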
Signed-off-by: Alex Ng <alexng@messages.microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
lockdep reports possible circular locking dependency when udev is used
for memory onlining:
systemd-udevd/3996 is trying to acquire lock:
((memory_chain).rwsem){++++.+}, at: [<ffffffff810d137e>] __blocking_notifier_call_chain+0x4e/0xc0
but task is already holding lock:
(&dm_device.ha_region_mutex){+.+.+.}, at: [<ffffffffa015382e>] hv_memory_notifier+0x5e/0xc0 [hv_balloon]
...
which is probably a false positive because we take and release
ha_region_mutex from the memory notifier chain depending on the arg. No
real deadlocks were reported so far (though I'm not really sure about
preemptible kernels...) but we don't really need to hold the mutex
for so long. We use it to protect ha_region_list (and its members) and the
num_pages_onlined counter. None of these operations requires us to sleep
and nothing is slow, so switch to a spinlock with interrupts disabled.
While at it, replace list_for_each with list_for_each_entry, as we
actually need entries in all these cases, and drop the meaningless
list_empty() checks.
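A sketch of the resulting locking pattern (the spinlock name is assumed):

	unsigned long flags;

	spin_lock_irqsave(&dm_device.ha_lock, flags);
	list_for_each_entry(has, &dm_device.ha_region_list, list) {
		/* walk/update ha_region_list and num_pages_onlined;
		 * nothing in here sleeps or is slow */
	}
	spin_unlock_irqrestore(&dm_device.ha_lock, flags);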
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With the recently introduced in-kernel memory onlining
(MEMORY_HOTPLUG_DEFAULT_ONLINE) there is no point in waiting for pages
to come online in the driver, so we can get rid of the waiting.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
I'm observing the following hot add requests from the WS2012 host:
hot_add_req: start_pfn = 0x108200 count = 330752
hot_add_req: start_pfn = 0x158e00 count = 193536
hot_add_req: start_pfn = 0x188400 count = 239616
As the host doesn't specify hot add regions, we try to create a
128Mb-aligned region covering the first request: we create the 0x108000 -
0x160000 region and we add 0x108000 - 0x158e00 memory. The second request
passes the pfn_covered() check, we enlarge the region to 0x108000 -
0x190000 and add 0x158e00 - 0x188200 memory. The problem emerges with the
third request as it starts at 0x188400, so there is a 0x200 gap which is
not covered. As the end of our region is now 0x190000, it again passes the
pfn_covered() check, where we just adjust covered_end_pfn and make it
0x188400 instead of 0x188200, which means that we'll try to online the
0x188200-0x188400 pages, but these pages were never assigned to us and we
crash.
We can't react to such requests by creating new hot add regions as it may
happen that the whole suggested range falls into the previously identified
128Mb-aligned area, so we'll end up adding nothing or creating intersecting
regions, and our current logic doesn't allow that. Instead, create a list of
such 'gaps' and check for them in the page online callback.
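A sketch of the gap bookkeeping described above; the structure and helper
names are assumptions:

	struct hv_hotadd_gap {
		struct list_head list;
		unsigned long start_pfn;
		unsigned long end_pfn;
	};

	/* in the page online callback: skip pfns falling into a recorded gap */
	static bool pfn_is_backed(struct hv_hotadd_state *has, unsigned long pfn)
	{
		struct hv_hotadd_gap *gap;

		list_for_each_entry(gap, &has->gap_list, list) {
			if (pfn >= gap->start_pfn && pfn < gap->end_pfn)
				return false;
		}
		return true;
	}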
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Windows 2012 (non-R2) does not specify a hot add region in hot add requests
and the logic in hot_add_req() tries to find a 128Mb-aligned region
covering the request. It may also happen that the host's requests are not
128Mb aligned and the created ha_region will start before the first
specified PFN. We can't online these non-present pages but we don't
remember the real start of the region.
This is a regression introduced by the commit 5abbbb75d7 ("Drivers: hv:
hv_balloon: don't lose memory when onlining order is not natural"). While
the idea of keeping the 'moving window' was wrong (as there is no guarantee
that hot add requests come ordered) we should still keep track of
covered_start_pfn. This is not a revert, the logic is different.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The KVP daemon does fork()/exec() (with popen()) so we need to close our
fds to avoid sharing them with child processes. The immediate implication
of not doing so that I see is SELinux complaining about 'ip' trying to
access '/dev/vmbus/hv_kvp'.
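One possible way to do this from the daemon side (a sketch, not
necessarily the daemon's exact code): mark the device fd close-on-exec so
popen()'d children such as 'ip' never inherit it.

	int fd = open("/dev/vmbus/hv_kvp", O_RDWR | O_CLOEXEC);
	/* or, for an fd that is already open: */
	fcntl(fd, F_SETFD, FD_CLOEXEC);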
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On Hyper-V, performance-critical channels use the monitor
mechanism to signal the host when the guest posts messages
for the host. This mechanism minimizes the hypervisor intercepts
and also makes the host more efficient in that each time the
host is woken up, it processes a batch of messages as opposed to
just one. The goal here is to improve throughput, and this is at
the expense of increased latency.
Implement a mechanism to let the client driver decide if latency
is important.
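A sketch of such a knob; the field and helper names are illustrative
assumptions:

	/* a client driver can mark its channel as latency sensitive so that
	 * signaling bypasses the monitor batching described above */
	static inline void set_low_latency_mode(struct vmbus_channel *c)
	{
		c->low_latency = true;
	}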
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The current delay between retries is unnecessarily high and is negatively
affecting the time it takes to boot the system.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For synthetic NIC channels, enable explicit signaling policy as netvsc wants to
explicitly control when the host is to be signaled.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There is a rare race when we remove an entry from the global list
hv_context.percpu_list[cpu] in hv_process_channel_removal() ->
percpu_channel_deq() -> list_del(): at this time, if vmbus_on_event() ->
process_chn_event() -> pcpu_relid2channel() is trying to query the list,
we can get a kernel fault.
Similarly, we also have the issue in the code path: vmbus_process_offer() ->
percpu_channel_enq().
We can resolve the issue by disabling the tasklet when updating the list.
The patch also moves vmbus_release_relid() to a later place where
the channel has been removed from the per-cpu and the global lists.
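A sketch of the pattern; the per-cpu event DPC tasklet pointer here is
illustrative:

	tasklet_disable(tasklet);	/* stop vmbus_on_event() racing us */
	percpu_channel_deq(channel);	/* list_del() is now safe */
	tasklet_enable(tasklet);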
Reported-by: Rolf Neugebauer <rolf.neugebauer@docker.com>
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Background: the userspace daemon registration protocol for Hyper-V
utilities drivers has two steps:
1) the daemon writes its own version to the kernel
2) the kernel reads it and replies with the module version
at this point we consider the handshake procedure to be completed and we
do hv_poll_channel(), transitioning the utility device to the HVUTIL_READY
state. At this point we're ready to handle messages from the kernel.
When hvutil_transport is in HVUTIL_TRANSPORT_CHARDEV mode we have a
single buffer for the outgoing message. hvutil_transport_send() puts the
message into this buffer, and until the buffer is cleared by hvt_op_read(),
all subsequent calls return -EFAULT. The host<->guest protocol guarantees
there is no more than one request at a time, and we will not get new
requests until we reply to the previous one, so this single message buffer
is enough.
Now to the race. When we finish the negotiation procedure and send the
kernel module version to userspace with hvutil_transport_send(), it goes
into the above-mentioned buffer, and if the daemon is slow enough to read
it from there, we can get a collision when a request from the host comes:
we won't be able to put anything into the buffer, so the request will be
lost. To solve the issue we need to know when the negotiation is really
done (when the version message is read by the daemon) and transition to
the HVUTIL_READY state after this happens. Implement a callback on read
to support this.
Old-style netlink communication is not affected by the change; we don't
really know when these messages are delivered, but we don't have a single
message buffer there.
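A sketch of the idea, with an assumed on-read callback parameter on the
send path (signatures and names here are assumptions):

	static void kvp_register_done(void)
	{
		/* the daemon has actually read our version reply, so the
		 * handshake is complete and we may accept host requests */
		kvp_transaction.state = HVUTIL_READY;
	}

	/* when replying with the module version: */
	hvutil_transport_send(hvt, &version_msg, sizeof(version_msg),
			      kvp_register_done);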
Reported-by: Barry Davis <barry_davis@stormagic.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
vmbus_teardown_gpadl() can result in an infinite wait when it is called on
the 5-second timeout path in vmbus_open(). The issue is caused by the fact
that the gpadl teardown operation won't ever succeed for an opened channel
and the timeout isn't always enough. As a guest, we can always trust the
host to respond to our request (and there is nothing we can do if it
doesn't).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In some cases create_gpadl_header() allocates submessages but we never
free them.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We use messagecount only once in vmbus_establish_gpadl() to check if
it is safe to iterate through the submsglist. We can just initialize
the list header in all cases in create_gpadl_header() instead.
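A sketch of the change in create_gpadl_header(): always hand the caller a
valid (possibly empty) list so it can simply be iterated.

	INIT_LIST_HEAD(&msgheader->submsglist);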
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When we crash from NMI context (e.g. after NMI injection from host when
'sysctl -w kernel.unknown_nmi_panic=1' is set) we hit
kernel BUG at mm/vmalloc.c:1530!
as vfree() is denied. While the issue could be solved with an in_nmi()
check instead, I opted for skipping vfree() on all sorts of crashes to
reduce the amount of work which can cause subsequent crashes. We don't
really need to free anything on crash.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joe Perches points out [1] that this pattern isn't currently safe.
This driver doesn't really need the zeroing semantic anyway;
by restructuring the code slightly we can initialize all the
fields of the structure up front instead.
[1] https://lkml.kernel.org/r/1469729491.3998.58.camel@perches.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Replace the explicit computation of the vma page count with a call to
vma_pages().
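For reference, vma_pages() is exactly the open-coded computation it
replaces:

	unsigned long nr_pages = vma_pages(vma);
	/* == (vma->vm_end - vma->vm_start) >> PAGE_SHIFT */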
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When building with W=1, the __scif_rma_destroy_tcw function
causes a harmless warning about an argument variable that is
modified but not used:
drivers/misc/mic/scif/scif_dma.c: In function ‘__scif_rma_destroy_tcw’:
drivers/misc/mic/scif/scif_dma.c:118:27: error: parameter ‘ep’ set but not used [-Werror=unused-but-set-parameter]
In this case, we can just remove the argument, since all callers
are in the same file.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The device lock was unnecessarily obtained in the bus rescan work before
the amthif client search. That causes incorrect lock ordering and a task
hang:
...
[88004.613213] INFO: task kworker/1:14:21832 blocked for more than 120 seconds.
...
[88004.645934] Workqueue: events mei_cl_bus_rescan_work
...
The correct lock order is
cl_bus_lock
device_lock
me_clients_rwsem
Move device_lock into the amthif init function, which is called
after me_clients_rwsem is released.
This fixes a regression introduced by:
commit 025fb792ba ("mei: split amthif client init from end of clients enumeration")
Cc: <stable@vger.kernel.org> # 4.6+
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mei_amthif_read has only one difference from mei_read: it does not
call mei_read_start().
Make mei_read_start return immediately for the amthif client and drop the
special mei_amthif_read function.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The FW supports only one pending read per host client; in order to
support issuing of consecutive reads, the driver queues read requests
internally and sends them to the firmware after the pending one has
completed.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Enclose the boilerplate code of allocating a control/hbm command cb
and enqueueing it onto ctrl_wr.list in a convenient wrapper,
mei_cl_enqueue_ctrl_wr_cb().
This is a preparatory patch for enabling consecutive reads.
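A sketch of the wrapper's shape; the exact signature and list names are
assumptions based on the description above:

	struct mei_cl_cb *mei_cl_enqueue_ctrl_wr_cb(struct mei_cl *cl,
						    size_t length,
						    enum mei_cb_file_ops fop_type,
						    const struct file *fp)
	{
		struct mei_cl_cb *cb;

		cb = mei_cl_alloc_cb(cl, length, fop_type, fp);
		if (!cb)
			return NULL;

		/* enqueue onto the control write list for later sending */
		list_add_tail(&cb->list, &cl->dev->ctrl_wr_list.list);
		return cb;
	}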
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With the introduction of the receive flow control credits prefixed with
rx_, we add a tx_ prefix to the variables and functions used for tracking
the transmit flow control credits.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use the RX flow control counter in the host client structure to
track the number of simultaneous outstanding reads.
This eliminates searching the queues and lays the groundwork for
enabling parallel reads.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The read callbacks for the fixed address clients, which don't have flow
control, are now built on the receive path. In order to have a single
allocation place, we remove the allocation from the read request.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The read callback is always prepared with an MTU-sized buffer and the FW
can't send more than the MTU in one message.
Checking for buffer existence and using krealloc to increase the receive
buffer size are therefore redundant and may be safely discarded.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Open code mei_clear_lists into its only caller, mei_amthif_release,
and drop the unused parameter 'dev' from the mei_clear_list function.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The fixed address clients do not work with flow control, and the
RX packet callback was allocated upon TX in anticipation of a
following RX. This won't work for clients with unsolicited RX. Rather
than preparing a read callback upon a write, we allocate one directly on
the receive path if one doesn't exist.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Store the file associated with a client in the host client structure.
This enables dropping the special amthif client file pointer from struct
mei_device, and it is also a preparation for changing the way RX
packets are allocated for fixed_address clients.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the read cb to the completion queue if a read finds out that the
client is not connected. This expedites waking a user space reader on an
error condition.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The correct errno on client disconnection is -ENODEV, not -EBUSY.
Cc: <stable@vger.kernel.org> #4.3+
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the course of the read flow we want to wait for read completion only
if the read queue is empty.
However, calling list_empty(&cl->rd_completed) is a duplication, as the
same check is performed by mei_cl_read_cb() and the waiting is skipped
if it returns non-NULL.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In mei_hbm_cl_hdr the buf argument was not described.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Schedule a link reset if runtime suspend or resume fails.
Set an active runtime PM state on link reset
to clear a runtime PM error, if present.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mei_io_cb_alloc_buf has a single caller: mei_cl_alloc_cb. After amthif
stopped using it, the code can be integrated into the caller and the
function can be dropped.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use the mei_cl_alloc_cb wrapper instead of open-coded steps.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Incorporate the mei_amthif_send_cmd code into its only caller:
mei_amthif_run_next_cmd
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently the poll function bails out early for the amthif client and
ignores requests for async event notifications.
Move async event processing before amthif to enable async event
notifications on the amthif client.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
iamthif_current_cb was used for request cancellation in the amthif code.
Now a canceled request is discarded only at the end of processing,
so the variable has lost its purpose and can be safely removed.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>