SCSI for-linus on 20161222

This is mostly stuff which missed the initial pull.  There's a new
 driver: qedi, and some ufs, ibmvscsis and ncr5380 updates, plus some
 assorted driver fixes, and also a fix for the bug where, if a device
 goes into a blocked state between configuration and sysfs device add
 (which can be a long time under async probing), it would become
 permanently blocked.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com>

Merge tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull late SCSI updates from James Bottomley:
 "This is mostly stuff which missed the initial pull.

   There's a new driver: qedi, and some ufs, ibmvscsis and ncr5380
   updates, plus some assorted driver fixes, and also a fix for the bug
   where, if a device goes into a blocked state between configuration
   and sysfs device add (which can be a long time under async probing),
   it would become permanently blocked"
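
A rough sketch of that race in midlayer terms (the function names below
are real scan/sysfs entry points, but the sequence is a simplified
illustration, not the actual patch):

/*
 * Illustrative only -- one way the window can play out:
 *
 *   scsi_probe_and_add_lun()       LUN configured, sdev created
 *       ... long delay while async probing runs ...
 *   scsi_internal_device_block()   transport event: SDEV_BLOCK set and
 *                                  the request queue stopped
 *   scsi_sysfs_add_sdev()          sysfs device_add, state forced to
 *                                  running
 *   scsi_target_unblock()          skips the device (it is no longer
 *                                  in SDEV_BLOCK), so the stopped
 *                                  queue is never restarted
 */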

* tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (30 commits)
  scsi: avoid a permanent stop of the scsi device's request queue
  scsi: mpt3sas: Recognize and act on iopriority info
  scsi: qla2xxx: Fix Target mode handling with Multiqueue changes.
  scsi: qla2xxx: Add Block Multi Queue functionality.
  scsi: qla2xxx: Add multiple queue pair functionality.
  scsi: qla2xxx: Utilize pci_alloc_irq_vectors/pci_free_irq_vectors calls.
  scsi: qla2xxx: Only allow operational MBX to proceed during RESET.
  scsi: hpsa: remove memory allocate failure message
  scsi: Update 3ware driver email addresses
  scsi: zfcp: fix rport unblock race with LUN recovery
  scsi: zfcp: do not trace pure benign residual HBA responses at default level
  scsi: zfcp: fix use-after-"free" in FC ingress path after TMF
  scsi: libcxgbi: return error if interface is not up
  scsi: cxgb4i: libcxgbi: add missing module_put()
  scsi: cxgb4i: libcxgbi: cxgb4: add T6 iSCSI completion feature
  scsi: cxgb4i: libcxgbi: add active open cmd for T6 adapters
  scsi: cxgb4i: use cxgb4_tp_smt_idx() to get smt_idx
  scsi: qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  scsi: aacraid: remove wildcard for series 9 controllers
  scsi: ibmvscsi: add write memory barrier to CRQ processing
  ...
Linus Torvalds 2016-12-23 10:36:19 -08:00
commit f290cbacb6
64 changed files with 9676 additions and 701 deletions


@ -6,17 +6,15 @@ NCR53c400 extensions (c) 1994,1995,1996 Kevin Lentin
This file documents the NCR53c400 extensions by Kevin Lentin and some
enhancements to the NCR5380 core.
This driver supports both NCR5380 and NCR53c400 cards in port or memory
mapped modes. Currently this driver can only support one of those mapping
modes at a time but it does support both of these chips at the same time.
The next release of this driver will support port & memory mapped cards at
the same time. It should be able to handle multiple different cards in the
same machine.
This driver supports NCR5380 and NCR53c400 and compatible cards in port or
memory mapped modes.
The drivers/scsi/Makefile has an override in it for the most common
NCR53c400 card, the Trantor T130B in its default configuration:
Port: 0x350
IRQ : 5
Use of an interrupt is recommended, if supported by the board, as this will
allow targets to disconnect and thereby improve SCSI bus utilization.
If the irq parameter is 254 or is omitted entirely, the driver will probe
for the correct IRQ line automatically. If the irq parameter is 0 or 255
then no IRQ will be used.
The NCR53c400 does not support DMA but it does have Pseudo-DMA which is
supported by the driver.
@ -47,22 +45,24 @@ These old-style parameters can support only one card:
dtc_3181e=1 to set up for a Domex Technology Corp 3181E board
hp_c2502=1 to set up for a Hewlett Packard C2502 board
e.g.
OLD: modprobe g_NCR5380 ncr_irq=5 ncr_addr=0x350 ncr_5380=1
NEW: modprobe g_NCR5380 irq=5 base=0x350 card=0
for a port mapped NCR5380 board or
E.g. Trantor T130B in its default configuration:
modprobe g_NCR5380 irq=5 base=0x350 card=1
or alternatively, using the old syntax,
modprobe g_NCR5380 ncr_irq=5 ncr_addr=0x350 ncr_53c400=1
OLD: modprobe g_NCR5380 ncr_irq=255 ncr_addr=0xc8000 ncr_53c400=1
NEW: modprobe g_NCR5380 irq=255 base=0xc8000 card=1
for a memory mapped NCR53C400 board with interrupts disabled or
E.g. a port mapped NCR5380 board, driver to probe for IRQ:
modprobe g_NCR5380 base=0x350 card=0
or alternatively,
modprobe g_NCR5380 ncr_addr=0x350 ncr_5380=1
NEW: modprobe g_NCR5380 irq=0,7 base=0x240,0x300 card=3,4
for two cards: DTC3181 (in non-PnP mode) at 0x240 with no IRQ
and HP C2502 at 0x300 with IRQ 7
(255 should be specified for no or DMA interrupt, 254 to autoprobe for an
IRQ line if overridden on the command line.)
E.g. a memory mapped NCR53C400 board with no IRQ:
modprobe g_NCR5380 irq=255 base=0xc8000 card=1
or alternatively,
modprobe g_NCR5380 ncr_irq=255 ncr_addr=0xc8000 ncr_53c400=1
E.g. two cards, DTC3181 (in non-PnP mode) at 0x240 with no IRQ
and HP C2502 at 0x300 with IRQ 7:
modprobe g_NCR5380 irq=0,7 base=0x240,0x300 card=3,4
Kevin Lentin
K.Lentin@cs.monash.edu.au


@ -143,7 +143,7 @@ S: Maintained
F: drivers/net/ethernet/3com/typhoon*
3WARE SAS/SATA-RAID SCSI DRIVERS (3W-XXXX, 3W-9XXX, 3W-SAS)
M: Adam Radford <linuxraid@lsi.com>
M: Adam Radford <aradford@gmail.com>
L: linux-scsi@vger.kernel.org
W: http://www.lsi.com
S: Supported
@ -10136,6 +10136,12 @@ F: drivers/net/ethernet/qlogic/qed/
F: include/linux/qed/
F: drivers/net/ethernet/qlogic/qede/
QLOGIC QL41xxx ISCSI DRIVER
M: QLogic-Storage-Upstream@cavium.com
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/qedi/
QNX4 FILESYSTEM
M: Anders Larsen <al@alarsen.net>
W: http://www.alarsen.net/linux/qnx4fs/


@ -76,6 +76,7 @@ enum {
CPL_PASS_ESTABLISH = 0x41,
CPL_RX_DATA_DDP = 0x42,
CPL_PASS_ACCEPT_REQ = 0x44,
CPL_RX_ISCSI_CMP = 0x45,
CPL_TRACE_PKT_T5 = 0x48,
CPL_RX_ISCSI_DDP = 0x49,
@ -934,6 +935,18 @@ struct cpl_iscsi_data {
__u8 status;
};
struct cpl_rx_iscsi_cmp {
union opcode_tid ot;
__be16 pdu_len_ddp;
__be16 len;
__be32 seq;
__be16 urg;
__u8 rsvd;
__u8 status;
__be32 ulp_crc;
__be32 ddpvld;
};
struct cpl_tx_data_iso {
__be32 op_to_scsi;
__u8 reserved1;


@ -289,11 +289,12 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
/**
* zfcp_dbf_rec_run - trace event related to running recovery
* zfcp_dbf_rec_run_lvl - trace event related to running recovery
* @level: trace level to be used for event
* @tag: identifier for event
* @erp: erp_action running
*/
void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
void zfcp_dbf_rec_run_lvl(int level, char *tag, struct zfcp_erp_action *erp)
{
struct zfcp_dbf *dbf = erp->adapter->dbf;
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
@ -319,10 +320,20 @@ void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
else
rec->u.run.rec_count = atomic_read(&erp->adapter->erp_counter);
debug_event(dbf->rec, 1, rec, sizeof(*rec));
debug_event(dbf->rec, level, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->rec_lock, flags);
}
/**
* zfcp_dbf_rec_run - trace event related to running recovery
* @tag: identifier for event
* @erp: erp_action running
*/
void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
{
zfcp_dbf_rec_run_lvl(1, tag, erp);
}
/**
* zfcp_dbf_rec_run_wka - trace wka port event with info like running recovery
* @tag: identifier for event


@ -2,7 +2,7 @@
* zfcp device driver
* debug feature declarations
*
* Copyright IBM Corp. 2008, 2015
* Copyright IBM Corp. 2008, 2016
*/
#ifndef ZFCP_DBF_H
@ -283,6 +283,30 @@ struct zfcp_dbf {
struct zfcp_dbf_scsi scsi_buf;
};
/**
* zfcp_dbf_hba_fsf_resp_suppress - true if we should not trace by default
* @req: request that has been completed
*
* Returns true if FCP response with only benign residual under count.
*/
static inline
bool zfcp_dbf_hba_fsf_resp_suppress(struct zfcp_fsf_req *req)
{
struct fsf_qtcb *qtcb = req->qtcb;
u32 fsf_stat = qtcb->header.fsf_status;
struct fcp_resp *fcp_rsp;
u8 rsp_flags, fr_status;
if (qtcb->prefix.qtcb_type != FSF_IO_COMMAND)
return false; /* not an FCP response */
fcp_rsp = (struct fcp_resp *)&qtcb->bottom.io.fcp_rsp;
rsp_flags = fcp_rsp->fr_flags;
fr_status = fcp_rsp->fr_status;
return (fsf_stat == FSF_FCP_RSP_AVAILABLE) &&
(rsp_flags == FCP_RESID_UNDER) &&
(fr_status == SAM_STAT_GOOD);
}
static inline
void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req)
{
@ -304,7 +328,9 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req)
zfcp_dbf_hba_fsf_resp("fs_perr", 1, req);
} else if (qtcb->header.fsf_status != FSF_GOOD) {
zfcp_dbf_hba_fsf_resp("fs_ferr", 1, req);
zfcp_dbf_hba_fsf_resp("fs_ferr",
zfcp_dbf_hba_fsf_resp_suppress(req)
? 5 : 1, req);
} else if ((req->fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) ||
(req->fsf_command == FSF_QTCB_OPEN_LUN)) {
@ -388,4 +414,15 @@ void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag)
_zfcp_dbf_scsi(tmp_tag, 1, scmnd, NULL);
}
/**
* zfcp_dbf_scsi_nullcmnd() - trace NULLify of SCSI command in dev/tgt-reset.
* @scmnd: SCSI command that was NULLified.
* @fsf_req: request that owned @scmnd.
*/
static inline void zfcp_dbf_scsi_nullcmnd(struct scsi_cmnd *scmnd,
struct zfcp_fsf_req *fsf_req)
{
_zfcp_dbf_scsi("scfc__1", 3, scmnd, fsf_req);
}
#endif /* ZFCP_DBF_H */


@ -3,7 +3,7 @@
*
* Error Recovery Procedures (ERP).
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#define KMSG_COMPONENT "zfcp"
@ -1204,6 +1204,62 @@ static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action)
}
}
/**
* zfcp_erp_try_rport_unblock - unblock rport if no more/new recovery
* @port: zfcp_port whose fc_rport we should try to unblock
*/
static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
{
unsigned long flags;
struct zfcp_adapter *adapter = port->adapter;
int port_status;
struct Scsi_Host *shost = adapter->scsi_host;
struct scsi_device *sdev;
write_lock_irqsave(&adapter->erp_lock, flags);
port_status = atomic_read(&port->status);
if ((port_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
(port_status & (ZFCP_STATUS_COMMON_ERP_INUSE |
ZFCP_STATUS_COMMON_ERP_FAILED)) != 0) {
/* new ERP of severity >= port triggered elsewhere meanwhile or
* local link down (adapter erp_failed but not clear unblock)
*/
zfcp_dbf_rec_run_lvl(4, "ertru_p", &port->erp_action);
write_unlock_irqrestore(&adapter->erp_lock, flags);
return;
}
spin_lock(shost->host_lock);
__shost_for_each_device(sdev, shost) {
struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
int lun_status;
if (zsdev->port != port)
continue;
/* LUN under port of interest */
lun_status = atomic_read(&zsdev->status);
if ((lun_status & ZFCP_STATUS_COMMON_ERP_FAILED) != 0)
continue; /* unblock rport despite failed LUNs */
/* LUN recovery not given up yet [maybe follow-up pending] */
if ((lun_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
(lun_status & ZFCP_STATUS_COMMON_ERP_INUSE) != 0) {
/* LUN blocked:
* not yet unblocked [LUN recovery pending]
* or meanwhile blocked [new LUN recovery triggered]
*/
zfcp_dbf_rec_run_lvl(4, "ertru_l", &zsdev->erp_action);
spin_unlock(shost->host_lock);
write_unlock_irqrestore(&adapter->erp_lock, flags);
return;
}
}
/* now port has no child or all children have completed recovery,
* and no ERP of severity >= port was meanwhile triggered elsewhere
*/
zfcp_scsi_schedule_rport_register(port);
spin_unlock(shost->host_lock);
write_unlock_irqrestore(&adapter->erp_lock, flags);
}
static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
{
struct zfcp_adapter *adapter = act->adapter;
@ -1214,6 +1270,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
case ZFCP_ERP_ACTION_REOPEN_LUN:
if (!(act->status & ZFCP_STATUS_ERP_NO_REF))
scsi_device_put(sdev);
zfcp_erp_try_rport_unblock(port);
break;
case ZFCP_ERP_ACTION_REOPEN_PORT:
@ -1224,7 +1281,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
*/
if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
if (result == ZFCP_ERP_SUCCEEDED)
zfcp_scsi_schedule_rport_register(port);
zfcp_erp_try_rport_unblock(port);
/* fall through */
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
put_device(&port->dev);


@ -3,7 +3,7 @@
*
* External function declarations.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#ifndef ZFCP_EXT_H
@ -35,6 +35,8 @@ extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
struct zfcp_port *, struct scsi_device *, u8, u8);
extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
extern void zfcp_dbf_rec_run_lvl(int level, char *tag,
struct zfcp_erp_action *erp);
extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);


@ -3,7 +3,7 @@
*
* Interface to the FSF support functions.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#ifndef FSF_H
@ -78,6 +78,7 @@
#define FSF_APP_TAG_CHECK_FAILURE 0x00000082
#define FSF_REF_TAG_CHECK_FAILURE 0x00000083
#define FSF_ADAPTER_STATUS_AVAILABLE 0x000000AD
#define FSF_FCP_RSP_AVAILABLE 0x000000AF
#define FSF_UNKNOWN_COMMAND 0x000000E2
#define FSF_UNKNOWN_OP_SUBTYPE 0x000000E3
#define FSF_INVALID_COMMAND_OPTION 0x000000E5


@ -4,7 +4,7 @@
* Data structure and helper functions for tracking pending FSF
* requests.
*
* Copyright IBM Corp. 2009
* Copyright IBM Corp. 2009, 2016
*/
#ifndef ZFCP_REQLIST_H
@ -180,4 +180,32 @@ static inline void zfcp_reqlist_move(struct zfcp_reqlist *rl,
spin_unlock_irqrestore(&rl->lock, flags);
}
/**
* zfcp_reqlist_apply_for_all() - apply a function to every request.
* @rl: the requestlist that contains the target requests.
* @f: the function to apply to each request; the first parameter of the
* function will be the target-request; the second parameter is the same
* pointer as given with the argument @data.
* @data: freely chosen argument; passed through to @f as second parameter.
*
* Uses :c:macro:`list_for_each_entry` to iterate over the lists in the hash-
* table (not a 'safe' variant, so don't modify the list).
*
* Holds @rl->lock over the entire request-iteration.
*/
static inline void
zfcp_reqlist_apply_for_all(struct zfcp_reqlist *rl,
void (*f)(struct zfcp_fsf_req *, void *), void *data)
{
struct zfcp_fsf_req *req;
unsigned long flags;
unsigned int i;
spin_lock_irqsave(&rl->lock, flags);
for (i = 0; i < ZFCP_REQ_LIST_BUCKETS; i++)
list_for_each_entry(req, &rl->buckets[i], list)
f(req, data);
spin_unlock_irqrestore(&rl->lock, flags);
}
#endif /* ZFCP_REQLIST_H */


@ -3,7 +3,7 @@
*
* Interface to Linux SCSI midlayer.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#define KMSG_COMPONENT "zfcp"
@ -88,9 +88,7 @@ int zfcp_scsi_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scpnt)
}
if (unlikely(!(status & ZFCP_STATUS_COMMON_UNBLOCKED))) {
/* This could be either
* open LUN pending: this is temporary, will result in
* open LUN or ERP_FAILED, so retry command
/* This could be
* call to rport_delete pending: mimic retry from
* fc_remote_port_chkready until rport is BLOCKED
*/
@ -209,6 +207,57 @@ static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt)
return retval;
}
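/* Criteria for matching pending FSF requests against a TMF's reset scope. */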
struct zfcp_scsi_req_filter {
u8 tmf_scope;
u32 lun_handle;
u32 port_handle;
};
static void zfcp_scsi_forget_cmnd(struct zfcp_fsf_req *old_req, void *data)
{
struct zfcp_scsi_req_filter *filter =
(struct zfcp_scsi_req_filter *)data;
/* already aborted - prevent side-effects - or not a SCSI command */
if (old_req->data == NULL || old_req->fsf_command != FSF_QTCB_FCP_CMND)
return;
/* (tmf_scope == FCP_TMF_TGT_RESET || tmf_scope == FCP_TMF_LUN_RESET) */
if (old_req->qtcb->header.port_handle != filter->port_handle)
return;
if (filter->tmf_scope == FCP_TMF_LUN_RESET &&
old_req->qtcb->header.lun_handle != filter->lun_handle)
return;
zfcp_dbf_scsi_nullcmnd((struct scsi_cmnd *)old_req->data, old_req);
old_req->data = NULL;
}
static void zfcp_scsi_forget_cmnds(struct zfcp_scsi_dev *zsdev, u8 tm_flags)
{
struct zfcp_adapter *adapter = zsdev->port->adapter;
struct zfcp_scsi_req_filter filter = {
.tmf_scope = FCP_TMF_TGT_RESET,
.port_handle = zsdev->port->handle,
};
unsigned long flags;
if (tm_flags == FCP_TMF_LUN_RESET) {
filter.tmf_scope = FCP_TMF_LUN_RESET;
filter.lun_handle = zsdev->lun_handle;
}
/*
* abort_lock protects (struct zfcp_fsf_req *)->data against concurrent
* processing in the abort function and the normal command handler
*/
write_lock_irqsave(&adapter->abort_lock, flags);
zfcp_reqlist_apply_for_all(adapter->req_list, zfcp_scsi_forget_cmnd,
&filter);
write_unlock_irqrestore(&adapter->abort_lock, flags);
}
static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
{
struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device);
@ -241,8 +290,10 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) {
zfcp_dbf_scsi_devreset("fail", scpnt, tm_flags);
retval = FAILED;
} else
} else {
zfcp_dbf_scsi_devreset("okay", scpnt, tm_flags);
zfcp_scsi_forget_cmnds(zfcp_sdev, tm_flags);
}
zfcp_fsf_req_free(fsf_req);
return retval;


@ -1,8 +1,8 @@
/*
3w-9xxx.c -- 3ware 9000 Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Modifications By: Tom Couch <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Tom Couch
Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
Copyright (C) 2010 LSI Corporation.
@ -41,10 +41,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
Note: This version of the driver does not contain a bundled firmware
image.


@ -1,8 +1,8 @@
/*
3w-9xxx.h -- 3ware 9000 Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Modifications By: Tom Couch <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Tom Couch
Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
Copyright (C) 2010 LSI Corporation.
@ -41,10 +41,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
*/
#ifndef _3W_9XXX_H


@ -1,7 +1,7 @@
/*
3w-sas.c -- LSI 3ware SAS/SATA-RAID Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Copyright (C) 2009 LSI Corporation.
@ -43,10 +43,7 @@
LSI 3ware 9750 6Gb/s SAS/SATA-RAID
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
History
-------


@ -1,7 +1,7 @@
/*
3w-sas.h -- LSI 3ware SAS/SATA-RAID Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Copyright (C) 2009 LSI Corporation.
@ -39,10 +39,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
*/
#ifndef _3W_SAS_H


@ -1,7 +1,7 @@
/*
3w-xxxx.c -- 3ware Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Joel Jacobson <linux@3ware.com>
Arnaldo Carvalho de Melo <acme@conectiva.com.br>
Brad Strand <linux@3ware.com>
@ -47,10 +47,9 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
History
-------


@ -1,7 +1,7 @@
/*
3w-xxxx.h -- 3ware Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Joel Jacobson <linux@3ware.com>
Arnaldo Carvalho de Melo <acme@conectiva.com.br>
Brad Strand <linux@3ware.com>
@ -45,7 +45,8 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
aradford@gmail.com
For more information, goto:
http://www.lsi.com


@ -1233,6 +1233,7 @@ config SCSI_QLOGICPTI
source "drivers/scsi/qla2xxx/Kconfig"
source "drivers/scsi/qla4xxx/Kconfig"
source "drivers/scsi/qedi/Kconfig"
config SCSI_LPFC
tristate "Emulex LightPulse Fibre Channel Support"


@ -131,6 +131,7 @@ obj-$(CONFIG_PS3_ROM) += ps3rom.o
obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_CXGB4_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_BNX2_ISCSI) += libiscsi.o bnx2i/
obj-$(CONFIG_QEDI) += libiscsi.o qedi/
obj-$(CONFIG_BE2ISCSI) += libiscsi.o be2iscsi/
obj-$(CONFIG_SCSI_ESAS2R) += esas2r/
obj-$(CONFIG_SCSI_PMCRAID) += pmcraid.o


@ -97,9 +97,6 @@
* and macros and include this file in your driver.
*
* These macros control options :
* AUTOPROBE_IRQ - if defined, the NCR5380_probe_irq() function will be
* defined.
*
* AUTOSENSE - if defined, REQUEST SENSE will be performed automatically
* for commands that return with a CHECK CONDITION status.
*
@ -127,9 +124,7 @@
* NCR5380_dma_residual - residual byte count
*
* The generic driver is initialized by calling NCR5380_init(instance),
* after setting the appropriate host specific fields and ID. If the
* driver wishes to autoprobe for an IRQ line, the NCR5380_probe_irq(instance,
* possible) function may be used.
* after setting the appropriate host specific fields and ID.
*/
#ifndef NCR5380_io_delay
@ -351,76 +346,6 @@ static void NCR5380_print_phase(struct Scsi_Host *instance)
}
#endif
static int probe_irq;
/**
* probe_intr - helper for IRQ autoprobe
* @irq: interrupt number
* @dev_id: unused
* @regs: unused
*
* Set a flag to indicate the IRQ in question was received. This is
* used by the IRQ probe code.
*/
static irqreturn_t probe_intr(int irq, void *dev_id)
{
probe_irq = irq;
return IRQ_HANDLED;
}
/**
* NCR5380_probe_irq - find the IRQ of an NCR5380
* @instance: NCR5380 controller
* @possible: bitmask of ISA IRQ lines
*
* Autoprobe for the IRQ line used by the NCR5380 by triggering an IRQ
* and then looking to see what interrupt actually turned up.
*/
static int __maybe_unused NCR5380_probe_irq(struct Scsi_Host *instance,
int possible)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned long timeout;
int trying_irqs, i, mask;
for (trying_irqs = 0, i = 1, mask = 2; i < 16; ++i, mask <<= 1)
if ((mask & possible) && (request_irq(i, &probe_intr, 0, "NCR-probe", NULL) == 0))
trying_irqs |= mask;
timeout = jiffies + msecs_to_jiffies(250);
probe_irq = NO_IRQ;
/*
* A interrupt is triggered whenever BSY = false, SEL = true
* and a bit set in the SELECT_ENABLE_REG is asserted on the
* SCSI bus.
*
* Note that the bus is only driven when the phase control signals
* (I/O, C/D, and MSG) match those in the TCR, so we must reset that
* to zero.
*/
NCR5380_write(TARGET_COMMAND_REG, 0);
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
NCR5380_write(OUTPUT_DATA_REG, hostdata->id_mask);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_DATA | ICR_ASSERT_SEL);
while (probe_irq == NO_IRQ && time_before(jiffies, timeout))
schedule_timeout_uninterruptible(1);
NCR5380_write(SELECT_ENABLE_REG, 0);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
for (i = 1, mask = 2; i < 16; ++i, mask <<= 1)
if (trying_irqs & mask)
free_irq(i, NULL);
return probe_irq;
}
/**
* NCR5380_info - report driver and host information
* @instance: relevant scsi host instance


@ -199,16 +199,6 @@
#define PHASE_SR_TO_TCR(phase) ((phase) >> 2)
/*
* These are "special" values for the irq and dma_channel fields of the
* Scsi_Host structure
*/
#define DMA_NONE 255
#define IRQ_AUTO 254
#define DMA_AUTO 254
#define PORT_AUTO 0xffff /* autoprobe io port for 53c400a */
#ifndef NO_IRQ
#define NO_IRQ 0
#endif
@ -290,7 +280,6 @@ static void NCR5380_print(struct Scsi_Host *instance);
#define NCR5380_dprint_phase(flg, arg) do {} while (0)
#endif
static int NCR5380_probe_irq(struct Scsi_Host *instance, int possible);
static int NCR5380_init(struct Scsi_Host *instance, int flags);
static int NCR5380_maybe_reset_bus(struct Scsi_Host *);
static void NCR5380_exit(struct Scsi_Host *instance);


@ -160,7 +160,6 @@ static const struct pci_device_id aac_pci_tbl[] = {
{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
{ 0,}
};
MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
@ -239,7 +238,6 @@ static struct aac_driver_ident aac_drivers[] = {
{ aac_src_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 6 (Tupelo) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 7 (Denali) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 8 */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC } /* Adaptec PMC Series 9 */
};
/**


@ -189,7 +189,6 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(csk->cdev);
int t4 = is_t4(lldi->adapter_type);
int wscale = cxgbi_sock_compute_wscale(csk->mss_idx);
unsigned long long opt0;
unsigned int opt2;
@ -232,7 +231,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
} else {
} else if (is_t5(lldi->adapter_type)) {
struct cpl_t5_act_open_req *req =
(struct cpl_t5_act_open_req *)skb->head;
u32 isn = (prandom_u32() & ~7UL) - 1;
@ -260,12 +259,45 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
} else {
struct cpl_t6_act_open_req *req =
(struct cpl_t6_act_open_req *)skb->head;
u32 isn = (prandom_u32() & ~7UL) - 1;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
qid_atid));
req->local_port = csk->saddr.sin_port;
req->peer_port = csk->daddr.sin_port;
req->local_ip = csk->saddr.sin_addr.s_addr;
req->peer_ip = csk->daddr.sin_addr.s_addr;
req->opt0 = cpu_to_be64(opt0);
req->params = cpu_to_be64(FILTER_TUPLE_V(
cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
req->rsvd = cpu_to_be32(isn);
opt2 |= T5_ISS_VALID;
opt2 |= RX_FC_DISABLE_F;
opt2 |= T5_OPT_2_VALID_F;
req->opt2 = cpu_to_be32(opt2);
req->rsvd2 = cpu_to_be32(0);
req->opt3 = cpu_to_be32(0);
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk t6 0x%p, %pI4:%u-%pI4:%u, atid %d, qid %u.\n",
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
}
set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
pr_info_ipaddr("t%d csk 0x%p,%u,0x%lx,%u, rss_qid %u.\n",
(&csk->saddr), (&csk->daddr), t4 ? 4 : 5, csk,
(&csk->saddr), (&csk->daddr),
CHELSIO_CHIP_VERSION(lldi->adapter_type), csk,
csk->state, csk->flags, csk->atid, csk->rss_qid);
cxgb4_l2t_send(csk->cdev->ports[csk->port_id], skb, csk->l2t);
@ -276,7 +308,6 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(csk->cdev);
int t4 = is_t4(lldi->adapter_type);
int wscale = cxgbi_sock_compute_wscale(csk->mss_idx);
unsigned long long opt0;
unsigned int opt2;
@ -294,10 +325,9 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
opt2 = RX_CHANNEL_V(0) |
RSS_QUEUE_VALID_F |
RX_FC_DISABLE_F |
RSS_QUEUE_V(csk->rss_qid);
if (t4) {
if (is_t4(lldi->adapter_type)) {
struct cpl_act_open_req6 *req =
(struct cpl_act_open_req6 *)skb->head;
@ -322,7 +352,7 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
req->params = cpu_to_be32(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t));
} else {
} else if (is_t5(lldi->adapter_type)) {
struct cpl_t5_act_open_req6 *req =
(struct cpl_t5_act_open_req6 *)skb->head;
@ -345,12 +375,41 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
req->params = cpu_to_be64(FILTER_TUPLE_V(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
} else {
struct cpl_t6_act_open_req6 *req =
(struct cpl_t6_act_open_req6 *)skb->head;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ6,
qid_atid));
req->local_port = csk->saddr6.sin6_port;
req->peer_port = csk->daddr6.sin6_port;
req->local_ip_hi = *(__be64 *)(csk->saddr6.sin6_addr.s6_addr);
req->local_ip_lo = *(__be64 *)(csk->saddr6.sin6_addr.s6_addr +
8);
req->peer_ip_hi = *(__be64 *)(csk->daddr6.sin6_addr.s6_addr);
req->peer_ip_lo = *(__be64 *)(csk->daddr6.sin6_addr.s6_addr +
8);
req->opt0 = cpu_to_be64(opt0);
opt2 |= RX_FC_DISABLE_F;
opt2 |= T5_OPT_2_VALID_F;
req->opt2 = cpu_to_be32(opt2);
req->params = cpu_to_be64(FILTER_TUPLE_V(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
req->rsvd2 = cpu_to_be32(0);
req->opt3 = cpu_to_be32(0);
}
set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
pr_info("t%d csk 0x%p,%u,0x%lx,%u, [%pI6]:%u-[%pI6]:%u, rss_qid %u.\n",
t4 ? 4 : 5, csk, csk->state, csk->flags, csk->atid,
CHELSIO_CHIP_VERSION(lldi->adapter_type), csk, csk->state,
csk->flags, csk->atid,
&csk->saddr6.sin6_addr, ntohs(csk->saddr.sin_port),
&csk->daddr6.sin6_addr, ntohs(csk->daddr.sin_port),
csk->rss_qid);
@ -742,7 +801,7 @@ static void do_act_establish(struct cxgbi_device *cdev, struct sk_buff *skb)
(&csk->saddr), (&csk->daddr),
atid, tid, csk, csk->state, csk->flags, rcv_isn);
module_put(THIS_MODULE);
module_put(cdev->owner);
cxgbi_sock_get(csk);
csk->tid = tid;
@ -891,7 +950,7 @@ static void do_act_open_rpl(struct cxgbi_device *cdev, struct sk_buff *skb)
if (is_neg_adv(status))
goto rel_skb;
module_put(THIS_MODULE);
module_put(cdev->owner);
if (status && status != CPL_ERR_TCAM_FULL &&
status != CPL_ERR_CONN_EXIST &&
@ -1173,6 +1232,101 @@ rel_skb:
__kfree_skb(skb);
}
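/* Handle CPL_ISCSI_DATA: queue the pdu payload skb and mark the
 * pending header skb (if any) with SKCBF_RX_DATA.
 */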
static void do_rx_iscsi_data(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
struct cpl_iscsi_hdr *cpl = (struct cpl_iscsi_hdr *)skb->data;
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
struct sk_buff *lskb;
u32 tid = GET_TID(cpl);
u16 pdu_len_ddp = be16_to_cpu(cpl->pdu_len_ddp);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
pr_err("can't find conn. for tid %u.\n", tid);
goto rel_skb;
}
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, tid %u, skb 0x%p,%u, 0x%x.\n",
csk, csk->state, csk->flags, csk->tid, skb,
skb->len, pdu_len_ddp);
spin_lock_bh(&csk->lock);
if (unlikely(csk->state >= CTP_PASSIVE_CLOSE)) {
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p,%u,0x%lx,%u, bad state.\n",
csk, csk->state, csk->flags, csk->tid);
if (csk->state != CTP_ABORTING)
goto abort_conn;
else
goto discard;
}
cxgbi_skcb_tcp_seq(skb) = be32_to_cpu(cpl->seq);
cxgbi_skcb_flags(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(*cpl));
__pskb_trim(skb, ntohs(cpl->len));
if (!csk->skb_ulp_lhdr)
csk->skb_ulp_lhdr = skb;
lskb = csk->skb_ulp_lhdr;
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DATA);
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p data, 0x%p.\n",
csk, csk->state, csk->flags, skb, lskb);
__skb_queue_tail(&csk->receive_queue, skb);
spin_unlock_bh(&csk->lock);
return;
abort_conn:
send_abort_req(csk);
discard:
spin_unlock_bh(&csk->lock);
rel_skb:
__kfree_skb(skb);
}
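/* Translate ddpvld status bits into skb flags: header/data digest
 * errors, pad errors, and whether the payload was directly placed.
 */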
static void
cxgb4i_process_ddpvld(struct cxgbi_sock *csk,
struct sk_buff *skb, u32 ddpvld)
{
if (ddpvld & (1 << CPL_RX_DDP_STATUS_HCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, hcrc bad 0x%lx.\n",
csk, skb, ddpvld, cxgbi_skcb_flags(skb));
cxgbi_skcb_set_flag(skb, SKCBF_RX_HCRC_ERR);
}
if (ddpvld & (1 << CPL_RX_DDP_STATUS_DCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, dcrc bad 0x%lx.\n",
csk, skb, ddpvld, cxgbi_skcb_flags(skb));
cxgbi_skcb_set_flag(skb, SKCBF_RX_DCRC_ERR);
}
if (ddpvld & (1 << CPL_RX_DDP_STATUS_PAD_SHIFT)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, status 0x%x, pad bad.\n",
csk, skb, ddpvld);
cxgbi_skcb_set_flag(skb, SKCBF_RX_PAD_ERR);
}
if ((ddpvld & (1 << CPL_RX_DDP_STATUS_DDP_SHIFT)) &&
!cxgbi_skcb_test_flag(skb, SKCBF_RX_DATA)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, 0x%x, data ddp'ed.\n",
csk, skb, ddpvld);
cxgbi_skcb_set_flag(skb, SKCBF_RX_DATA_DDPD);
}
}
static void do_rx_data_ddp(struct cxgbi_device *cdev,
struct sk_buff *skb)
{
@ -1182,7 +1336,7 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
unsigned int tid = GET_TID(rpl);
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
unsigned int status = ntohl(rpl->ddpvld);
u32 ddpvld = be32_to_cpu(rpl->ddpvld);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
@ -1192,7 +1346,7 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p,0x%x, lhdr 0x%p.\n",
csk, csk->state, csk->flags, skb, status, csk->skb_ulp_lhdr);
csk, csk->state, csk->flags, skb, ddpvld, csk->skb_ulp_lhdr);
spin_lock_bh(&csk->lock);
@ -1220,29 +1374,8 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
pr_info("tid 0x%x, RX_DATA_DDP pdulen %u != %u.\n",
csk->tid, ntohs(rpl->len), cxgbi_skcb_rx_pdulen(lskb));
if (status & (1 << CPL_RX_DDP_STATUS_HCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, hcrc bad 0x%lx.\n",
csk, lskb, status, cxgbi_skcb_flags(lskb));
cxgbi_skcb_set_flag(lskb, SKCBF_RX_HCRC_ERR);
}
if (status & (1 << CPL_RX_DDP_STATUS_DCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, dcrc bad 0x%lx.\n",
csk, lskb, status, cxgbi_skcb_flags(lskb));
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DCRC_ERR);
}
if (status & (1 << CPL_RX_DDP_STATUS_PAD_SHIFT)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, status 0x%x, pad bad.\n",
csk, lskb, status);
cxgbi_skcb_set_flag(lskb, SKCBF_RX_PAD_ERR);
}
if ((status & (1 << CPL_RX_DDP_STATUS_DDP_SHIFT)) &&
!cxgbi_skcb_test_flag(lskb, SKCBF_RX_DATA)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, 0x%x, data ddp'ed.\n",
csk, lskb, status);
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DATA_DDPD);
}
cxgb4i_process_ddpvld(csk, lskb, ddpvld);
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lskb 0x%p, f 0x%lx.\n",
csk, lskb, cxgbi_skcb_flags(lskb));
@ -1260,6 +1393,98 @@ rel_skb:
__kfree_skb(skb);
}
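/* Handle CPL_RX_ISCSI_CMP, the completion that T6 generates for the
 * last pdu of a sequence; pair it with any payload skb already queued.
 */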
static void
do_rx_iscsi_cmp(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
struct cpl_rx_iscsi_cmp *rpl = (struct cpl_rx_iscsi_cmp *)skb->data;
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
struct sk_buff *data_skb = NULL;
u32 tid = GET_TID(rpl);
u32 ddpvld = be32_to_cpu(rpl->ddpvld);
u32 seq = be32_to_cpu(rpl->seq);
u16 pdu_len_ddp = be16_to_cpu(rpl->pdu_len_ddp);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
pr_err("can't find connection for tid %u.\n", tid);
goto rel_skb;
}
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p,0x%x, lhdr 0x%p, len %u, "
"pdu_len_ddp %u, status %u.\n",
csk, csk->state, csk->flags, skb, ddpvld, csk->skb_ulp_lhdr,
ntohs(rpl->len), pdu_len_ddp, rpl->status);
spin_lock_bh(&csk->lock);
if (unlikely(csk->state >= CTP_PASSIVE_CLOSE)) {
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p,%u,0x%lx,%u, bad state.\n",
csk, csk->state, csk->flags, csk->tid);
if (csk->state != CTP_ABORTING)
goto abort_conn;
else
goto discard;
}
cxgbi_skcb_tcp_seq(skb) = seq;
cxgbi_skcb_flags(skb) = 0;
cxgbi_skcb_rx_pdulen(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(*rpl));
__pskb_trim(skb, be16_to_cpu(rpl->len));
csk->rcv_nxt = seq + pdu_len_ddp;
if (csk->skb_ulp_lhdr) {
data_skb = skb_peek(&csk->receive_queue);
if (!data_skb ||
!cxgbi_skcb_test_flag(data_skb, SKCBF_RX_DATA)) {
pr_err("Error! freelist data not found 0x%p, tid %u\n",
data_skb, tid);
goto abort_conn;
}
__skb_unlink(data_skb, &csk->receive_queue);
cxgbi_skcb_set_flag(skb, SKCBF_RX_DATA);
__skb_queue_tail(&csk->receive_queue, skb);
__skb_queue_tail(&csk->receive_queue, data_skb);
} else {
__skb_queue_tail(&csk->receive_queue, skb);
}
csk->skb_ulp_lhdr = NULL;
cxgbi_skcb_set_flag(skb, SKCBF_RX_HDR);
cxgbi_skcb_set_flag(skb, SKCBF_RX_STATUS);
cxgbi_skcb_set_flag(skb, SKCBF_RX_ISCSI_COMPL);
cxgbi_skcb_rx_ddigest(skb) = be32_to_cpu(rpl->ulp_crc);
cxgb4i_process_ddpvld(csk, skb, ddpvld);
log_debug(1 << CXGBI_DBG_PDU_RX, "csk 0x%p, skb 0x%p, f 0x%lx.\n",
csk, skb, cxgbi_skcb_flags(skb));
cxgbi_conn_pdu_ready(csk);
spin_unlock_bh(&csk->lock);
return;
abort_conn:
send_abort_req(csk);
discard:
spin_unlock_bh(&csk->lock);
rel_skb:
__kfree_skb(skb);
}
static void do_fw4_ack(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
@ -1382,7 +1607,6 @@ static int init_act_open(struct cxgbi_sock *csk)
void *daddr;
unsigned int step;
unsigned int size, size6;
int t4 = is_t4(lldi->adapter_type);
unsigned int linkspeed;
unsigned int rcv_winf, snd_winf;
@ -1428,12 +1652,15 @@ static int init_act_open(struct cxgbi_sock *csk)
cxgb4_clip_get(ndev, (const u32 *)&csk->saddr6.sin6_addr, 1);
#endif
if (t4) {
if (is_t4(lldi->adapter_type)) {
size = sizeof(struct cpl_act_open_req);
size6 = sizeof(struct cpl_act_open_req6);
} else {
} else if (is_t5(lldi->adapter_type)) {
size = sizeof(struct cpl_t5_act_open_req);
size6 = sizeof(struct cpl_t5_act_open_req6);
} else {
size = sizeof(struct cpl_t6_act_open_req);
size6 = sizeof(struct cpl_t6_act_open_req6);
}
if (csk->csk_family == AF_INET)
@ -1452,8 +1679,8 @@ static int init_act_open(struct cxgbi_sock *csk)
csk->mtu = dst_mtu(csk->dst);
cxgb4_best_mtu(lldi->mtus, csk->mtu, &csk->mss_idx);
csk->tx_chan = cxgb4_port_chan(ndev);
/* SMT two entries per row */
csk->smac_idx = ((cxgb4_port_viid(ndev) & 0x7F)) << 1;
csk->smac_idx = cxgb4_tp_smt_idx(lldi->adapter_type,
cxgb4_port_viid(ndev));
step = lldi->ntxq / lldi->nchan;
csk->txq_idx = cxgb4_port_idx(ndev) * step;
step = lldi->nrxq / lldi->nchan;
@ -1486,7 +1713,11 @@ static int init_act_open(struct cxgbi_sock *csk)
csk->mtu, csk->mss_idx, csk->smac_idx);
/* must wait for either a act_open_rpl or act_open_establish */
try_module_get(THIS_MODULE);
if (!try_module_get(cdev->owner)) {
pr_err("%s, try_module_get failed.\n", ndev->name);
goto rel_resource;
}
cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
if (csk->csk_family == AF_INET)
send_act_open_req(csk, skb, csk->l2t);
@ -1521,10 +1752,11 @@ static cxgb4i_cplhandler_func cxgb4i_cplhandlers[NUM_CPL_CMDS] = {
[CPL_CLOSE_CON_RPL] = do_close_con_rpl,
[CPL_FW4_ACK] = do_fw4_ack,
[CPL_ISCSI_HDR] = do_rx_iscsi_hdr,
[CPL_ISCSI_DATA] = do_rx_iscsi_hdr,
[CPL_ISCSI_DATA] = do_rx_iscsi_data,
[CPL_SET_TCB_RPL] = do_set_tcb_rpl,
[CPL_RX_DATA_DDP] = do_rx_data_ddp,
[CPL_RX_ISCSI_DDP] = do_rx_data_ddp,
[CPL_RX_ISCSI_CMP] = do_rx_iscsi_cmp,
[CPL_RX_DATA] = do_rx_data,
};
@ -1794,10 +2026,12 @@ static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
cdev->nports = lldi->nports;
cdev->mtus = lldi->mtus;
cdev->nmtus = NMTUS;
cdev->rx_credit_thres = cxgb4i_rx_credit_thres;
cdev->rx_credit_thres = (CHELSIO_CHIP_VERSION(lldi->adapter_type) <=
CHELSIO_T5) ? cxgb4i_rx_credit_thres : 0;
cdev->skb_tx_rsvd = CXGB4I_TX_HEADER_LEN;
cdev->skb_rx_extra = sizeof(struct cpl_iscsi_hdr);
cdev->itp = &cxgb4i_iscsi_transport;
cdev->owner = THIS_MODULE;
cdev->pfvf = FW_VIID_PFN_G(cxgb4_port_viid(lldi->ports[0]))
<< FW_VIID_PFN_S;


@ -642,6 +642,12 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr)
n->dev->name, ndev->name, mtu);
}
if (!(ndev->flags & IFF_UP) || !netif_carrier_ok(ndev)) {
pr_info("%s interface not up.\n", ndev->name);
err = -ENETDOWN;
goto rel_neigh;
}
cdev = cxgbi_device_find_by_netdev(ndev, &port);
if (!cdev) {
pr_info("dst %pI4, %s, NOT cxgbi device.\n",
@ -736,6 +742,12 @@ static struct cxgbi_sock *cxgbi_check_route6(struct sockaddr *dst_addr)
}
ndev = n->dev;
if (!(ndev->flags & IFF_UP) || !netif_carrier_ok(ndev)) {
pr_info("%s interface not up.\n", ndev->name);
err = -ENETDOWN;
goto rel_rt;
}
if (ipv6_addr_is_multicast(&daddr6->sin6_addr)) {
pr_info("multi-cast route %pI6 port %u, dev %s.\n",
daddr6->sin6_addr.s6_addr,
@ -896,6 +908,7 @@ EXPORT_SYMBOL_GPL(cxgbi_sock_fail_act_open);
void cxgbi_sock_act_open_req_arp_failure(void *handle, struct sk_buff *skb)
{
struct cxgbi_sock *csk = (struct cxgbi_sock *)skb->sk;
struct module *owner = csk->cdev->owner;
log_debug(1 << CXGBI_DBG_SOCK, "csk 0x%p,%u,0x%lx,%u.\n",
csk, (csk)->state, (csk)->flags, (csk)->tid);
@ -906,6 +919,8 @@ void cxgbi_sock_act_open_req_arp_failure(void *handle, struct sk_buff *skb)
spin_unlock_bh(&csk->lock);
cxgbi_sock_put(csk);
__kfree_skb(skb);
module_put(owner);
}
EXPORT_SYMBOL_GPL(cxgbi_sock_act_open_req_arp_failure);
@ -1574,6 +1589,25 @@ static int skb_read_pdu_bhs(struct iscsi_conn *conn, struct sk_buff *skb)
return -EIO;
}
if (cxgbi_skcb_test_flag(skb, SKCBF_RX_ISCSI_COMPL) &&
cxgbi_skcb_test_flag(skb, SKCBF_RX_DATA_DDPD)) {
/* If completion flag is set and data is directly
* placed in to the host memory then update
* task->exp_datasn to the datasn in completion
* iSCSI hdr as T6 adapter generates completion only
* for the last pdu of a sequence.
*/
itt_t itt = ((struct iscsi_data *)skb->data)->itt;
struct iscsi_task *task = iscsi_itt_to_ctask(conn, itt);
u32 data_sn = be32_to_cpu(((struct iscsi_data *)
skb->data)->datasn);
if (task && task->sc) {
struct iscsi_tcp_task *tcp_task = task->dd_data;
tcp_task->exp_datasn = data_sn;
}
}
return read_pdu_skb(conn, skb, 0, 0);
}
@ -1627,15 +1661,15 @@ static void csk_return_rx_credits(struct cxgbi_sock *csk, int copied)
csk->rcv_wup, cdev->rx_credit_thres,
csk->rcv_win);
if (!cdev->rx_credit_thres)
return;
if (csk->state != CTP_ESTABLISHED)
return;
credits = csk->copied_seq - csk->rcv_wup;
if (unlikely(!credits))
return;
if (unlikely(cdev->rx_credit_thres == 0))
return;
must_send = credits + 16384 >= csk->rcv_win;
if (must_send || credits >= cdev->rx_credit_thres)
csk->rcv_wup += cdev->csk_send_rx_credits(csk, credits);


@ -207,6 +207,7 @@ enum cxgbi_skcb_flags {
SKCBF_RX_HDR, /* received pdu header */
SKCBF_RX_DATA, /* received pdu payload */
SKCBF_RX_STATUS, /* received ddp status */
SKCBF_RX_ISCSI_COMPL, /* received iscsi completion */
SKCBF_RX_DATA_DDPD, /* pdu payload ddp'd */
SKCBF_RX_HCRC_ERR, /* header digest error */
SKCBF_RX_DCRC_ERR, /* data digest error */
@ -467,6 +468,7 @@ struct cxgbi_device {
struct pci_dev *pdev;
struct dentry *debugfs_root;
struct iscsi_transport *itp;
struct module *owner;
unsigned int pfvf;
unsigned int rx_credit_thres;


@ -37,7 +37,7 @@
#define MAX_CARDS 8
/* old-style parameters for compatibility */
static int ncr_irq;
static int ncr_irq = -1;
static int ncr_addr;
static int ncr_5380;
static int ncr_53c400;
@ -52,9 +52,9 @@ module_param(ncr_53c400a, int, 0);
module_param(dtc_3181e, int, 0);
module_param(hp_c2502, int, 0);
static int irq[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
static int irq[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
module_param_array(irq, int, NULL, 0);
MODULE_PARM_DESC(irq, "IRQ number(s)");
MODULE_PARM_DESC(irq, "IRQ number(s) (0=none, 254=auto [default])");
static int base[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
module_param_array(base, int, NULL, 0);
@ -67,6 +67,56 @@ MODULE_PARM_DESC(card, "card type (0=NCR5380, 1=NCR53C400, 2=NCR53C400A, 3=DTC31
MODULE_ALIAS("g_NCR5380_mmio");
MODULE_LICENSE("GPL");
static void g_NCR5380_trigger_irq(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
/*
* An interrupt is triggered whenever BSY = false, SEL = true
* and a bit set in the SELECT_ENABLE_REG is asserted on the
* SCSI bus.
*
* Note that the bus is only driven when the phase control signals
* (I/O, C/D, and MSG) match those in the TCR.
*/
NCR5380_write(TARGET_COMMAND_REG,
PHASE_SR_TO_TCR(NCR5380_read(STATUS_REG) & PHASE_MASK));
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
NCR5380_write(OUTPUT_DATA_REG, hostdata->id_mask);
NCR5380_write(INITIATOR_COMMAND_REG,
ICR_BASE | ICR_ASSERT_DATA | ICR_ASSERT_SEL);
msleep(1);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_write(SELECT_ENABLE_REG, 0);
NCR5380_write(TARGET_COMMAND_REG, 0);
}
/**
* g_NCR5380_probe_irq - find the IRQ of a NCR5380 or equivalent
* @instance: SCSI host instance
*
* Autoprobe for the IRQ line used by the card by triggering an IRQ
* and then looking to see what interrupt actually turned up.
*/
static int g_NCR5380_probe_irq(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int irq_mask, irq;
NCR5380_read(RESET_PARITY_INTERRUPT_REG);
irq_mask = probe_irq_on();
g_NCR5380_trigger_irq(instance);
irq = probe_irq_off(irq_mask);
NCR5380_read(RESET_PARITY_INTERRUPT_REG);
if (irq <= 0)
return NO_IRQ;
return irq;
}
/*
* Configure I/O address of 53C400A or DTC436 by writing magic numbers
* to ports 0x779 and 0x379.
@ -81,14 +131,33 @@ static void magic_configure(int idx, u8 irq, u8 magic[])
outb(magic[3], 0x379);
outb(magic[4], 0x379);
/* allowed IRQs for HP C2502 */
if (irq != 2 && irq != 3 && irq != 4 && irq != 5 && irq != 7)
irq = 0;
if (irq == 9)
irq = 2;
if (idx >= 0 && idx <= 7)
cfg = 0x80 | idx | (irq << 4);
outb(cfg, 0x379);
}
static irqreturn_t legacy_empty_irq_handler(int irq, void *dev_id)
{
return IRQ_HANDLED;
}
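/* Return the first IRQ in irq_table (terminated by -1) that can be
 * grabbed, probing each one with a brief no-op request_irq().
 */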
static int legacy_find_free_irq(int *irq_table)
{
while (*irq_table != -1) {
if (!request_irq(*irq_table, legacy_empty_irq_handler,
IRQF_PROBE_SHARED, "Test IRQ",
(void *)irq_table)) {
free_irq(*irq_table, (void *) irq_table);
return *irq_table;
}
irq_table++;
}
return -1;
}
static unsigned int ncr_53c400a_ports[] = {
0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
};
@ -101,6 +170,9 @@ static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
static u8 hp_c2502_magic[] = { /* HP C2502 */
0x0f, 0x22, 0xf0, 0x20, 0x80
};
static int hp_c2502_irqs[] = {
9, 5, 7, 3, 4, -1
};
static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
struct device *pdev, int base, int irq, int board)
@ -248,6 +320,13 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
}
}
/* Check for vacant slot */
NCR5380_write(MODE_REG, 0);
if (NCR5380_read(MODE_REG) != 0) {
ret = -ENODEV;
goto out_unregister;
}
ret = NCR5380_init(instance, flags | FLAG_LATE_DMA_SETUP);
if (ret)
goto out_unregister;
@ -262,29 +341,57 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
NCR5380_maybe_reset_bus(instance);
if (irq != IRQ_AUTO)
instance->irq = irq;
else
instance->irq = NCR5380_probe_irq(instance, 0xffff);
/* Compatibility with documented NCR5380 kernel parameters */
if (instance->irq == 255)
instance->irq = NO_IRQ;
if (irq == 255 || irq == 0)
irq = NO_IRQ;
else if (irq == -1)
irq = IRQ_AUTO;
if (instance->irq != NO_IRQ) {
/* set IRQ for HP C2502 */
if (board == BOARD_HP_C2502)
magic_configure(port_idx, instance->irq, magic);
if (request_irq(instance->irq, generic_NCR5380_intr,
0, "NCR5380", instance)) {
printk(KERN_WARNING "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
instance->irq = NO_IRQ;
if (board == BOARD_HP_C2502) {
int *irq_table = hp_c2502_irqs;
int board_irq = -1;
switch (irq) {
case NO_IRQ:
board_irq = 0;
break;
case IRQ_AUTO:
board_irq = legacy_find_free_irq(irq_table);
break;
default:
while (*irq_table != -1)
if (*irq_table++ == irq)
board_irq = irq;
}
if (board_irq <= 0) {
board_irq = 0;
irq = NO_IRQ;
}
magic_configure(port_idx, board_irq, magic);
}
if (instance->irq == NO_IRQ) {
printk(KERN_INFO "scsi%d : interrupts not enabled. for better interactive performance,\n", instance->host_no);
printk(KERN_INFO "scsi%d : please jumper the board for a free IRQ.\n", instance->host_no);
if (irq == IRQ_AUTO) {
instance->irq = g_NCR5380_probe_irq(instance);
if (instance->irq == NO_IRQ)
shost_printk(KERN_INFO, instance, "no irq detected\n");
} else {
instance->irq = irq;
if (instance->irq == NO_IRQ)
shost_printk(KERN_INFO, instance, "no irq provided\n");
}
if (instance->irq != NO_IRQ) {
if (request_irq(instance->irq, generic_NCR5380_intr,
0, "NCR5380", instance)) {
shost_printk(KERN_INFO, instance,
"irq %d denied\n", instance->irq);
instance->irq = NO_IRQ;
} else {
shost_printk(KERN_INFO, instance,
"irq %d acquired\n", instance->irq);
}
}
ret = scsi_add_host(instance, pdev);
@ -597,7 +704,7 @@ static int __init generic_NCR5380_init(void)
int ret = 0;
/* compatibility with old-style parameters */
if (irq[0] == 0 && base[0] == 0 && card[0] == -1) {
if (irq[0] == -1 && base[0] == 0 && card[0] == -1) {
irq[0] = ncr_irq;
base[0] = ncr_addr;
if (ncr_5380)


@ -51,4 +51,6 @@
#define BOARD_DTC3181E 3
#define BOARD_HP_C2502 4
#define IRQ_AUTO 254
#endif /* GENERIC_NCR5380_H */


@ -1557,10 +1557,9 @@ static void hpsa_monitor_offline_device(struct ctlr_info *h,
/* Device is not on the list, add it. */
device = kmalloc(sizeof(*device), GFP_KERNEL);
if (!device) {
dev_warn(&h->pdev->dev, "out of memory in %s\n", __func__);
if (!device)
return;
}
memcpy(device->scsi3addr, scsi3addr, sizeof(device->scsi3addr));
spin_lock_irqsave(&h->offline_device_lock, flags);
list_add_tail(&device->offline_list, &h->offline_device_list);
@ -2142,17 +2141,15 @@ static int hpsa_alloc_sg_chain_blocks(struct ctlr_info *h)
h->cmd_sg_list = kzalloc(sizeof(*h->cmd_sg_list) * h->nr_cmds,
GFP_KERNEL);
if (!h->cmd_sg_list) {
dev_err(&h->pdev->dev, "Failed to allocate SG list\n");
if (!h->cmd_sg_list)
return -ENOMEM;
}
for (i = 0; i < h->nr_cmds; i++) {
h->cmd_sg_list[i] = kmalloc(sizeof(*h->cmd_sg_list[i]) *
h->chainsize, GFP_KERNEL);
if (!h->cmd_sg_list[i]) {
dev_err(&h->pdev->dev, "Failed to allocate cmd SG\n");
if (!h->cmd_sg_list[i])
goto clean;
}
}
return 0;
@ -3454,11 +3451,8 @@ static void hpsa_get_sas_address(struct ctlr_info *h, unsigned char *scsi3addr,
struct bmic_sense_subsystem_info *ssi;
ssi = kzalloc(sizeof(*ssi), GFP_KERNEL);
if (ssi == NULL) {
dev_warn(&h->pdev->dev,
"%s: out of memory\n", __func__);
if (!ssi)
return;
}
rc = hpsa_bmic_sense_subsystem_information(h,
scsi3addr, 0, ssi, sizeof(*ssi));
@ -4335,8 +4329,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
currentsd[i] = kzalloc(sizeof(*currentsd[i]), GFP_KERNEL);
if (!currentsd[i]) {
dev_warn(&h->pdev->dev, "out of memory at %s:%d\n",
__FILE__, __LINE__);
h->drv_req_rescan = 1;
goto out;
}
@ -8597,14 +8589,12 @@ static int hpsa_luns_changed(struct ctlr_info *h)
*/
if (!h->lastlogicals)
goto out;
return rc;
logdev = kzalloc(sizeof(*logdev), GFP_KERNEL);
if (!logdev) {
dev_warn(&h->pdev->dev,
"Out of memory, can't track lun changes.\n");
goto out;
}
if (!logdev)
return rc;
if (hpsa_scsi_do_report_luns(h, 1, logdev, sizeof(*logdev), 0)) {
dev_warn(&h->pdev->dev,
"report luns failed, can't track lun changes.\n");
@ -8998,11 +8988,8 @@ static void hpsa_disable_rld_caching(struct ctlr_info *h)
return;
options = kzalloc(sizeof(*options), GFP_KERNEL);
if (!options) {
dev_err(&h->pdev->dev,
"Error: failed to disable rld caching, during alloc.\n");
if (!options)
return;
}
c = cmd_alloc(h);


@ -95,6 +95,7 @@ static int fast_fail = 1;
static int client_reserve = 1;
static char partition_name[97] = "UNKNOWN";
static unsigned int partition_number = -1;
static LIST_HEAD(ibmvscsi_head);
static struct scsi_transport_template *ibmvscsi_transport_template;
@ -232,6 +233,7 @@ static void ibmvscsi_task(void *data)
while ((crq = crq_queue_next_crq(&hostdata->queue)) != NULL) {
ibmvscsi_handle_crq(crq, hostdata);
crq->valid = VIOSRP_CRQ_FREE;
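/* order the store marking this entry free before further processing */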
wmb();
}
vio_enable_interrupts(vdev);
@ -240,6 +242,7 @@ static void ibmvscsi_task(void *data)
vio_disable_interrupts(vdev);
ibmvscsi_handle_crq(crq, hostdata);
crq->valid = VIOSRP_CRQ_FREE;
wmb();
} else {
done = 1;
}
@ -992,7 +995,7 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
if (unlikely(rsp->opcode != SRP_RSP)) {
if (printk_ratelimit())
dev_warn(evt_struct->hostdata->dev,
"bad SRP RSP type %d\n", rsp->opcode);
"bad SRP RSP type %#02x\n", rsp->opcode);
}
if (cmnd) {
@ -2270,6 +2273,7 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
}
dev_set_drvdata(&vdev->dev, hostdata);
list_add_tail(&hostdata->host_list, &ibmvscsi_head);
return 0;
add_srp_port_failed:
@ -2291,6 +2295,7 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
static int ibmvscsi_remove(struct vio_dev *vdev)
{
struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
list_del(&hostdata->host_list);
unmap_persist_bufs(hostdata);
release_event_pool(&hostdata->pool, hostdata);
ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,


@ -90,6 +90,7 @@ struct event_pool {
/* all driver data associated with a host adapter */
struct ibmvscsi_host_data {
struct list_head host_list;
atomic_t request_limit;
int client_migrated;
int reset_crq;


@ -402,6 +402,9 @@ struct MPT3SAS_DEVICE {
u8 block;
u8 tlr_snoop_check;
u8 ignore_delay_remove;
/* Iopriority Command Handling */
u8 ncq_prio_enable;
};
#define MPT3_CMD_NOT_USED 0x8000 /* free */
@ -1458,4 +1461,7 @@ mpt3sas_setup_direct_io(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid);
/* NCQ Prio Handling Check */
bool scsih_ncq_prio_supp(struct scsi_device *sdev);
#endif /* MPT3SAS_BASE_H_INCLUDED */


@ -3325,8 +3325,6 @@ static DEVICE_ATTR(diag_trigger_mpi, S_IRUGO | S_IWUSR,
/*********** diagnostic trigger support *** END ****************************/
/*****************************************/
struct device_attribute *mpt3sas_host_attrs[] = {
@ -3402,9 +3400,50 @@ _ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
/**
* _ctl_device_ncq_prio_enable_show - send prioritized io commands to device
* @dev - pointer to embedded device
* @buf - the buffer returned
*
* A sysfs 'read/write' sdev attribute, only works with SATA
*/
static ssize_t
_ctl_device_ncq_prio_enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
struct MPT3SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
return snprintf(buf, PAGE_SIZE, "%d\n",
sas_device_priv_data->ncq_prio_enable);
}
static ssize_t
_ctl_device_ncq_prio_enable_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct scsi_device *sdev = to_scsi_device(dev);
struct MPT3SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
bool ncq_prio_enable = 0;
if (kstrtobool(buf, &ncq_prio_enable))
return -EINVAL;
if (!scsih_ncq_prio_supp(sdev))
return -EINVAL;
sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
return strlen(buf);
}
static DEVICE_ATTR(sas_ncq_prio_enable, S_IRUGO | S_IWUSR,
_ctl_device_ncq_prio_enable_show,
_ctl_device_ncq_prio_enable_store);
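/* Exposed per scsi_device, e.g. (path is illustrative):
 *   echo 1 > /sys/class/scsi_device/<h:c:t:l>/device/sas_ncq_prio_enable
 */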
struct device_attribute *mpt3sas_dev_attrs[] = {
&dev_attr_sas_address,
&dev_attr_sas_device_handle,
&dev_attr_sas_ncq_prio_enable,
NULL,
};


@ -4053,6 +4053,8 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
struct MPT3SAS_DEVICE *sas_device_priv_data;
struct MPT3SAS_TARGET *sas_target_priv_data;
struct _raid_device *raid_device;
struct request *rq = scmd->request;
int class;
Mpi2SCSIIORequest_t *mpi_request;
u32 mpi_control;
u16 smid;
@ -4115,7 +4117,12 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
/* set tags */
mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
/* NCQ Prio supported, make sure control indicated high priority */
if (sas_device_priv_data->ncq_prio_enable) {
class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
if (class == IOPRIO_CLASS_RT)
mpi_control |= 1 << MPI2_SCSIIO_CONTROL_CMDPRI_SHIFT;
}
/* Make sure Device is not raid volume.
* We do not expose raid functionality to upper layer for warpdrive.
*/
@ -9099,6 +9106,31 @@ scsih_pci_mmio_enabled(struct pci_dev *pdev)
return PCI_ERS_RESULT_RECOVERED;
}
/**
* scsih_ncq_prio_supp - Check for NCQ command priority support
* @sdev: scsi device struct
*
* This is called when a user indicates they would like to enable
* ncq command priorities. This works only on SATA devices.
*/
bool scsih_ncq_prio_supp(struct scsi_device *sdev)
{
unsigned char *buf;
bool ncq_prio_supp = false;
if (!scsi_device_supports_vpd(sdev))
return ncq_prio_supp;
buf = kmalloc(SCSI_VPD_PG_LEN, GFP_KERNEL);
if (!buf)
return ncq_prio_supp;
if (!scsi_get_vpd_page(sdev, 0x89, buf, SCSI_VPD_PG_LEN))
ncq_prio_supp = (buf[213] >> 4) & 1;
kfree(buf);
return ncq_prio_supp;
}
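The qcmd hunk above raises the MPI2 command priority only for requests whose block-layer priority class is IOPRIO_CLASS_RT. A sketch of requesting that class from userspace via the raw ioprio_set syscall; the constants mirror include/uapi/linux/ioprio.h.
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_RT		1
#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
	/* RT class, level 0 (highest), for the calling process; I/O it
	 * issues then reaches scsih_qcmd() with class IOPRIO_CLASS_RT.
	 */
	return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0));
}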
/*
* The pci device ids are defined in mpi/mpi2_cnfg.h.
*/

drivers/scsi/qedi/Kconfig

@ -0,0 +1,10 @@
config QEDI
tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
depends on PCI && SCSI
depends on QED
select SCSI_ISCSI_ATTRS
select QED_LL2
select QED_ISCSI
---help---
This driver supports iSCSI offload for the QLogic FastLinQ
41000 Series Converged Network Adapters.


@ -0,0 +1,5 @@
obj-$(CONFIG_QEDI) := qedi.o
qedi-y := qedi_main.o qedi_iscsi.o qedi_fw.o qedi_sysfs.o \
qedi_dbg.o
qedi-$(CONFIG_DEBUG_FS) += qedi_debugfs.o

drivers/scsi/qedi/qedi.h

@ -0,0 +1,364 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_H_
#define _QEDI_H_
#define __PREVENT_QED_HSI__
#include <scsi/scsi_transport_iscsi.h>
#include <scsi/libiscsi.h>
#include <scsi/scsi_host.h>
#include <linux/uio_driver.h>
#include "qedi_hsi.h"
#include <linux/qed/qed_if.h>
#include "qedi_dbg.h"
#include <linux/qed/qed_iscsi_if.h>
#include <linux/qed/qed_ll2_if.h>
#include "qedi_version.h"
#define QEDI_MODULE_NAME "qedi"
struct qedi_endpoint;
/*
* PCI function probe defines
*/
#define QEDI_MODE_NORMAL 0
#define QEDI_MODE_RECOVERY 1
#define ISCSI_WQE_SET_PTU_INVALIDATE 1
#define QEDI_MAX_ISCSI_TASK 4096
#define QEDI_MAX_TASK_NUM 0x0FFF
#define QEDI_MAX_ISCSI_CONNS_PER_HBA 1024
#define QEDI_ISCSI_MAX_BDS_PER_CMD 256 /* Firmware max BDs is 256 */
#define MAX_OUSTANDING_TASKS_PER_CON 1024
#define QEDI_MAX_BD_LEN 0xffff
#define QEDI_BD_SPLIT_SZ 0x1000
#define QEDI_PAGE_SIZE 4096
#define QEDI_FAST_SGE_COUNT 4
/* MAX Length for cached SGL */
#define MAX_SGLEN_FOR_CACHESGL ((1U << 16) - 1)
#define MAX_NUM_MSIX_PF 8
#define MIN_NUM_CPUS_MSIX(x) min((x)->msix_count, num_online_cpus())
#define QEDI_LOCAL_PORT_MIN 60000
#define QEDI_LOCAL_PORT_MAX 61024
#define QEDI_LOCAL_PORT_RANGE (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
#define QEDI_LOCAL_PORT_INVALID 0xffff
#define TX_RX_RING 16
#define RX_RING (TX_RX_RING - 1)
#define LL2_SINGLE_BUF_SIZE 0x400
#define QEDI_PAGE_ALIGN(addr) ALIGN(addr, QEDI_PAGE_SIZE)
#define QEDI_PAGE_MASK (~((QEDI_PAGE_SIZE) - 1))
#define QEDI_PATH_HANDLE 0xFE0000000UL
struct qedi_uio_ctrl {
/* meta data */
u32 uio_hsi_version;
/* user writes */
u32 host_tx_prod;
u32 host_rx_cons;
u32 host_rx_bd_cons;
u32 host_tx_pkt_len;
u32 host_rx_cons_cnt;
/* driver writes */
u32 hw_tx_cons;
u32 hw_rx_prod;
u32 hw_rx_bd_prod;
u32 hw_rx_prod_cnt;
/* other */
u8 mac_addr[6];
u8 reserve[2];
};
struct qedi_rx_bd {
u32 rx_pkt_index;
u32 rx_pkt_len;
u16 vlan_id;
};
#define QEDI_RX_DESC_CNT (QEDI_PAGE_SIZE / sizeof(struct qedi_rx_bd))
#define QEDI_MAX_RX_DESC_CNT (QEDI_RX_DESC_CNT - 1)
#define QEDI_NUM_RX_BD (QEDI_RX_DESC_CNT * 1)
#define QEDI_MAX_RX_BD (QEDI_NUM_RX_BD - 1)
#define QEDI_NEXT_RX_IDX(x) ((((x) & (QEDI_MAX_RX_DESC_CNT)) == \
(QEDI_MAX_RX_DESC_CNT - 1)) ? \
(x) + 2 : (x) + 1)
struct qedi_uio_dev {
struct uio_info qedi_uinfo;
u32 uio_dev;
struct list_head list;
u32 ll2_ring_size;
void *ll2_ring;
u32 ll2_buf_size;
void *ll2_buf;
void *rx_pkt;
void *tx_pkt;
struct qedi_ctx *qedi;
struct pci_dev *pdev;
void *uctrl;
};
/* List to maintain the skb pointers */
struct skb_work_list {
struct list_head list;
struct sk_buff *skb;
u16 vlan_id;
};
/* Queue sizes in number of elements */
#define QEDI_SQ_SIZE MAX_OUSTANDING_TASKS_PER_CON
#define QEDI_CQ_SIZE 2048
#define QEDI_CMDQ_SIZE QEDI_MAX_ISCSI_TASK
#define QEDI_PROTO_CQ_PROD_IDX 0
struct qedi_glbl_q_params {
u64 hw_p_cq; /* Completion queue PBL */
u64 hw_p_rq; /* Request queue PBL */
u64 hw_p_cmdq; /* Command queue PBL */
};
struct global_queue {
union iscsi_cqe *cq;
dma_addr_t cq_dma;
u32 cq_mem_size;
u32 cq_cons_idx; /* Completion queue consumer index */
void *cq_pbl;
dma_addr_t cq_pbl_dma;
u32 cq_pbl_size;
};
struct qedi_fastpath {
struct qed_sb_info *sb_info;
u16 sb_id;
#define QEDI_NAME_SIZE 16
char name[QEDI_NAME_SIZE];
struct qedi_ctx *qedi;
};
/* Used to pass fastpath information needed to process CQEs */
struct qedi_io_work {
struct list_head list;
struct iscsi_cqe_solicited cqe;
u16 que_idx;
};
/**
* struct iscsi_cid_queue - Per adapter iscsi cid queue
*
* @cid_que_base: queue base memory
* @cid_que: queue memory pointer
* @cid_q_prod_idx: producer index
* @cid_q_cons_idx: consumer index
* @cid_q_max_idx: max index. used to detect wrap around condition
* @cid_free_cnt: queue size
* @conn_cid_tbl: iscsi cid to conn structure mapping table
*
* Per adapter iSCSI CID Queue
*/
struct iscsi_cid_queue {
void *cid_que_base;
u32 *cid_que;
u32 cid_q_prod_idx;
u32 cid_q_cons_idx;
u32 cid_q_max_idx;
u32 cid_free_cnt;
struct qedi_conn **conn_cid_tbl;
};
struct qedi_portid_tbl {
spinlock_t lock; /* Port id lock */
u16 start;
u16 max;
u16 next;
unsigned long *table;
};
struct qedi_itt_map {
__le32 itt;
struct qedi_cmd *p_cmd;
};
/* I/O tracing entry */
#define QEDI_IO_TRACE_SIZE 2048
struct qedi_io_log {
#define QEDI_IO_TRACE_REQ 0
#define QEDI_IO_TRACE_RSP 1
u8 direction;
u16 task_id;
u32 cid;
u32 port_id; /* Remote port fabric ID */
int lun;
u8 op; /* SCSI CDB */
u8 lba[4];
unsigned int bufflen; /* SCSI buffer length */
unsigned int sg_count; /* Number of SG elements */
u8 fast_sgs; /* number of fast sgls */
u8 slow_sgs; /* number of slow sgls */
u8 cached_sgs; /* number of cached sgls */
int result; /* Result passed back to mid-layer */
unsigned long jiffies; /* Time stamp when I/O logged */
int refcount; /* Reference count for task id */
unsigned int blk_req_cpu; /* CPU that the task is queued on by
* blk layer
*/
unsigned int req_cpu; /* CPU that the task is queued on */
unsigned int intr_cpu; /* Interrupt CPU that the task is received on */
unsigned int blk_rsp_cpu;/* CPU that task is actually processed and
* returned to blk layer
*/
bool cached_sge;
bool slow_sge;
bool fast_sge;
};
/* Number of entries in BDQ */
#define QEDI_BDQ_NUM 256
#define QEDI_BDQ_BUF_SIZE 256
/* DMA coherent buffers for BDQ */
struct qedi_bdq_buf {
void *buf_addr;
dma_addr_t buf_dma;
};
/* Main port level struct */
struct qedi_ctx {
struct qedi_dbg_ctx dbg_ctx;
struct Scsi_Host *shost;
struct pci_dev *pdev;
struct qed_dev *cdev;
struct qed_dev_iscsi_info dev_info;
struct qed_int_info int_info;
struct qedi_glbl_q_params *p_cpuq;
struct global_queue **global_queues;
/* uio declaration */
struct qedi_uio_dev *udev;
struct list_head ll2_skb_list;
spinlock_t ll2_lock; /* Light L2 lock */
spinlock_t hba_lock; /* per port lock */
struct task_struct *ll2_recv_thread;
unsigned long flags;
#define UIO_DEV_OPENED 1
#define QEDI_IOTHREAD_WAKE 2
#define QEDI_IN_RECOVERY 5
#define QEDI_IN_OFFLINE 6
u8 mac[ETH_ALEN];
u32 src_ip[4];
u8 ip_type;
/* Physical address of above array */
dma_addr_t hw_p_cpuq;
struct qedi_bdq_buf bdq[QEDI_BDQ_NUM];
void *bdq_pbl;
dma_addr_t bdq_pbl_dma;
size_t bdq_pbl_mem_size;
void *bdq_pbl_list;
dma_addr_t bdq_pbl_list_dma;
u8 bdq_pbl_list_num_entries;
void __iomem *bdq_primary_prod;
void __iomem *bdq_secondary_prod;
u16 bdq_prod_idx;
u16 rq_num_entries;
u32 msix_count;
u32 max_sqes;
u8 num_queues;
u32 max_active_conns;
struct iscsi_cid_queue cid_que;
struct qedi_endpoint **ep_tbl;
struct qedi_portid_tbl lcl_port_tbl;
/* Rx fast path intr context */
struct qed_sb_info *sb_array;
struct qedi_fastpath *fp_array;
struct qed_iscsi_tid tasks;
#define QEDI_LINK_DOWN 0
#define QEDI_LINK_UP 1
atomic_t link_state;
#define QEDI_RESERVE_TASK_ID 0
#define MAX_ISCSI_TASK_ENTRIES 4096
#define QEDI_INVALID_TASK_ID (MAX_ISCSI_TASK_ENTRIES + 1)
unsigned long task_idx_map[MAX_ISCSI_TASK_ENTRIES / BITS_PER_LONG];
struct qedi_itt_map *itt_map;
u16 tid_reuse_count[QEDI_MAX_ISCSI_TASK];
struct qed_pf_params pf_params;
struct workqueue_struct *tmf_thread;
struct workqueue_struct *offload_thread;
u16 ll2_mtu;
struct workqueue_struct *dpc_wq;
spinlock_t task_idx_lock; /* To protect gbl context */
s32 last_tidx_alloc;
s32 last_tidx_clear;
struct qedi_io_log io_trace_buf[QEDI_IO_TRACE_SIZE];
spinlock_t io_trace_lock; /* protect trace log buf */
u16 io_trace_idx;
unsigned int intr_cpu;
u32 cached_sgls;
bool use_cached_sge;
u32 slow_sgls;
bool use_slow_sge;
u32 fast_sgls;
bool use_fast_sge;
atomic_t num_offloads;
};
struct qedi_work {
struct list_head list;
struct qedi_ctx *qedi;
union iscsi_cqe cqe;
u16 que_idx;
bool is_solicited;
};
struct qedi_percpu_s {
struct task_struct *iothread;
struct list_head work_list;
spinlock_t p_work_lock; /* Per cpu worker lock */
};
static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
{
return (info->blocks[tid / info->num_tids_per_block] +
(tid % info->num_tids_per_block) * info->size);
}
#define QEDI_U64_HI(val) ((u32)(((u64)(val)) >> 32))
#define QEDI_U64_LO(val) ((u32)(((u64)(val)) & 0xffffffff))
#endif /* _QEDI_H_ */
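A standalone sketch of the qedi_get_task_mem() arithmetic: a TID resolves to a block index plus an offset within that block. The info struct below is a stand-in for qed_iscsi_tid, which the qed core supplies.
#include <stdio.h>

struct fake_tid_info {
	char *blocks[4];		/* base address of each task block */
	unsigned int num_tids_per_block;
	unsigned int size;		/* per-task context size */
};

static void *get_task_mem(struct fake_tid_info *info, unsigned int tid)
{
	return info->blocks[tid / info->num_tids_per_block] +
	       (tid % info->num_tids_per_block) * info->size;
}

int main(void)
{
	static char block0[1024], block1[1024];
	struct fake_tid_info info = {
		.blocks = { block0, block1 },
		.num_tids_per_block = 8,
		.size = 128,
	};

	/* TID 9 lands in block 1 at offset 128 */
	printf("tid 9 -> block1+%td\n",
	       (char *)get_task_mem(&info, 9) - block1);
	return 0;
}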


@ -0,0 +1,143 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi_dbg.h"
#include <linux/vmalloc.h>
void
qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
/* strscpy() stops at the source NUL; a fixed-length memcpy() here
 * could read past the end of short function names.
 */
strscpy(nfunc, func, sizeof(nfunc));
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (likely(qedi) && likely(qedi->pdev))
pr_err("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_err("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strscpy(nfunc, func, sizeof(nfunc));
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (!(qedi_dbg_log & QEDI_LOG_WARN))
return;
if (likely(qedi) && likely(qedi->pdev))
pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strscpy(nfunc, func, sizeof(nfunc));
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (!(qedi_dbg_log & QEDI_LOG_NOTICE))
return;
if (likely(qedi) && likely(qedi->pdev))
pr_notice("[%s]:[%s:%d]:%d: %pV",
dev_name(&qedi->pdev->dev), nfunc, line,
qedi->host_no, &vaf);
else
pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
u32 level, const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strscpy(nfunc, func, sizeof(nfunc));
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (!(qedi_dbg_log & level))
return;
if (likely(qedi) && likely(qedi->pdev))
pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
int
qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
int ret = 0;
for (; iter->name; iter++) {
ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
iter->attr);
if (ret)
pr_err("Unable to create sysfs %s attr, err(%d).\n",
iter->name, ret);
}
return ret;
}
void
qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
for (; iter->name; iter++)
sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
}
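A sketch, under assumptions, of a caller's table for qedi_create_sysfs_attr(); the attribute name and the bin_attribute itself are illustrative, not taken from the driver.
/* Kernel fragment: the iteration in qedi_create_sysfs_attr() stops at
 * the first entry with a NULL name.
 */
static struct bin_attribute sysfs_ctx_attr = {
	.attr = { .name = "ctx_stats", .mode = 0444 },	/* hypothetical */
	.size = 0,
	.read = NULL,	/* a real attribute would supply a read handler */
};

static struct sysfs_bin_attrs ctx_attr_list[] = {
	{ "ctx_stats", &sysfs_ctx_attr },
	{ NULL, NULL },
};

/* registration/teardown, e.g. from probe/remove:
 *	qedi_create_sysfs_attr(shost, ctx_attr_list);
 *	...
 *	qedi_remove_sysfs_attr(shost, ctx_attr_list);
 */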


@ -0,0 +1,144 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_DBG_H_
#define _QEDI_DBG_H_
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/compiler.h>
#include <linux/string.h>
#include <linux/version.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <scsi/scsi_transport.h>
#include <scsi/scsi_transport_iscsi.h>
#include <linux/fs.h>
#define __PREVENT_QED_HSI__
#include <linux/qed/common_hsi.h>
#include <linux/qed/qed_if.h>
extern uint qedi_dbg_log;
/* Debug print level definitions */
#define QEDI_LOG_DEFAULT 0x1 /* Set default logging mask */
#define QEDI_LOG_INFO 0x2 /* Informational logs,
* MAC address, WWPN, WWNN
*/
#define QEDI_LOG_DISC 0x4 /* Init, discovery, rport */
#define QEDI_LOG_LL2 0x8 /* LL2, VLAN logs */
#define QEDI_LOG_CONN 0x10 /* Connection setup, cleanup */
#define QEDI_LOG_EVT 0x20 /* Events, link, mtu */
#define QEDI_LOG_TIMER 0x40 /* Timer events */
#define QEDI_LOG_MP_REQ 0x80 /* Middle Path (MP) logs */
#define QEDI_LOG_SCSI_TM 0x100 /* SCSI Aborts, Task Mgmt */
#define QEDI_LOG_UNSOL 0x200 /* unsolicited event logs */
#define QEDI_LOG_IO 0x400 /* scsi cmd, completion */
#define QEDI_LOG_MQ 0x800 /* Multi Queue logs */
#define QEDI_LOG_BSG 0x1000 /* BSG logs */
#define QEDI_LOG_DEBUGFS 0x2000 /* debugFS logs */
#define QEDI_LOG_LPORT 0x4000 /* lport logs */
#define QEDI_LOG_ELS 0x8000 /* ELS logs */
#define QEDI_LOG_NPIV 0x10000 /* NPIV logs */
#define QEDI_LOG_SESS 0x20000 /* Connection setup, cleanup */
#define QEDI_LOG_UIO 0x40000 /* iSCSI UIO logs */
#define QEDI_LOG_TID 0x80000 /* FW TID context acquire,
* free
*/
#define QEDI_TRACK_TID 0x100000 /* Track TID state. To be
* enabled only at module load
* and not run-time.
*/
#define QEDI_TRACK_CMD_LIST 0x300000 /* Track active cmd list nodes,
* done with reference to TID,
* hence TRACK_TID also enabled.
*/
#define QEDI_LOG_NOTICE 0x40000000 /* Notice logs */
#define QEDI_LOG_WARN 0x80000000 /* Warning logs */
/* Debug context structure */
struct qedi_dbg_ctx {
unsigned int host_no;
struct pci_dev *pdev;
#ifdef CONFIG_DEBUG_FS
struct dentry *bdf_dentry;
#endif
};
#define QEDI_ERR(pdev, fmt, ...) \
qedi_dbg_err(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_WARN(pdev, fmt, ...) \
qedi_dbg_warn(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_NOTICE(pdev, fmt, ...) \
qedi_dbg_notice(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_INFO(pdev, level, fmt, ...) \
qedi_dbg_info(pdev, __func__, __LINE__, level, fmt, \
## __VA_ARGS__)
void qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
u32 info, const char *fmt, ...);
struct Scsi_Host;
struct sysfs_bin_attrs {
char *name;
struct bin_attribute *attr;
};
int qedi_create_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
void qedi_remove_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
#ifdef CONFIG_DEBUG_FS
/* DebugFS related code */
struct qedi_list_of_funcs {
char *oper_str;
ssize_t (*oper_func)(struct qedi_dbg_ctx *qedi);
};
struct qedi_debugfs_ops {
char *name;
struct qedi_list_of_funcs *qedi_funcs;
};
#define qedi_dbg_fileops(drv, ops) \
{ \
.owner = THIS_MODULE, \
.open = simple_open, \
.read = drv##_dbg_##ops##_cmd_read, \
.write = drv##_dbg_##ops##_cmd_write \
}
/* Used for debugfs sequential files */
#define qedi_dbg_fileops_seq(drv, ops) \
{ \
.owner = THIS_MODULE, \
.open = drv##_dbg_##ops##_open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
void qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
struct qedi_debugfs_ops *dops,
const struct file_operations *fops);
void qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi);
void qedi_dbg_init(char *drv_name);
void qedi_dbg_exit(void);
#endif /* CONFIG_DEBUG_FS */
#endif /* _QEDI_DBG_H_ */
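A sketch of how the qedi_dbg_log mask gates the QEDI_INFO() macro. The context instance is illustrative, and exposing the mask as a module parameter is an assumption noted in the comments.
/* Kernel fragment: QEDI_INFO() is gated by the mask, QEDI_ERR() is
 * unconditional.
 */
static struct qedi_dbg_ctx dbg_example;

static void example(void)
{
	/* Emitted only when bit 0x10 (QEDI_LOG_CONN) is set in
	 * qedi_dbg_log, e.g. via a module parameter (assumed here).
	 */
	QEDI_INFO(&dbg_example, QEDI_LOG_CONN,
		  "connection setup for cid %u\n", 0x5a);
	/* Always emitted, no mask check. */
	QEDI_ERR(&dbg_example, "offload failed, rc %d\n", -12);
}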


@ -0,0 +1,244 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi.h"
#include "qedi_dbg.h"
#include <linux/uaccess.h>
#include <linux/debugfs.h>
#include <linux/module.h>
int do_not_recover;
static struct dentry *qedi_dbg_root;
void
qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
struct qedi_debugfs_ops *dops,
const struct file_operations *fops)
{
char host_dirname[32];
struct dentry *file_dentry = NULL;
sprintf(host_dirname, "host%u", qedi->host_no);
qedi->bdf_dentry = debugfs_create_dir(host_dirname, qedi_dbg_root);
if (!qedi->bdf_dentry)
return;
while (dops) {
if (!(dops->name))
break;
file_dentry = debugfs_create_file(dops->name, 0600,
qedi->bdf_dentry, qedi,
fops);
if (!file_dentry) {
QEDI_INFO(qedi, QEDI_LOG_DEBUGFS,
"Debugfs entry %s creation failed\n",
dops->name);
debugfs_remove_recursive(qedi->bdf_dentry);
return;
}
dops++;
fops++;
}
}
void
qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi)
{
debugfs_remove_recursive(qedi->bdf_dentry);
qedi->bdf_dentry = NULL;
}
void
qedi_dbg_init(char *drv_name)
{
qedi_dbg_root = debugfs_create_dir(drv_name, NULL);
if (!qedi_dbg_root)
QEDI_INFO(NULL, QEDI_LOG_DEBUGFS, "Init of debugfs failed\n");
}
void
qedi_dbg_exit(void)
{
debugfs_remove_recursive(qedi_dbg_root);
qedi_dbg_root = NULL;
}
static ssize_t
qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
{
if (!do_not_recover)
do_not_recover = 1;
QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
do_not_recover);
return 0;
}
static ssize_t
qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
{
if (do_not_recover)
do_not_recover = 0;
QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
do_not_recover);
return 0;
}
static struct qedi_list_of_funcs qedi_dbg_do_not_recover_ops[] = {
{ "enable", qedi_dbg_do_not_recover_enable },
{ "disable", qedi_dbg_do_not_recover_disable },
{ NULL, NULL }
};
struct qedi_debugfs_ops qedi_debugfs_ops[] = {
{ "gbl_ctx", NULL },
{ "do_not_recover", qedi_dbg_do_not_recover_ops},
{ "io_trace", NULL },
{ NULL, NULL }
};
static ssize_t
qedi_dbg_do_not_recover_cmd_write(struct file *filp, const char __user *buffer,
				  size_t count, loff_t *ppos)
{
	size_t cnt = 0;
	size_t len;
	char cmd[16];
	struct qedi_dbg_ctx *qedi_dbg =
			(struct qedi_dbg_ctx *)filp->private_data;
	struct qedi_list_of_funcs *lof = qedi_dbg_do_not_recover_ops;
	if (*ppos)
		return 0;
	/* Parse from a kernel copy; the incoming pointer is userspace
	 * memory and must not be handed to strncmp() directly.
	 */
	len = min(count, sizeof(cmd) - 1);
	if (copy_from_user(cmd, buffer, len))
		return -EFAULT;
	cmd[len] = '\0';
	for (; lof->oper_str; lof++) {
		if (!strncmp(lof->oper_str, cmd, strlen(lof->oper_str))) {
			cnt = lof->oper_func(qedi_dbg);
			break;
		}
	}
	return count - cnt;
}
static ssize_t
qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
				 size_t count, loff_t *ppos)
{
	char buf[32];
	int len;
	/* Format into a kernel buffer and let simple_read_from_buffer()
	 * handle the copy-out and *ppos bookkeeping; sprintf()ing straight
	 * into the __user pointer is not safe.
	 */
	len = scnprintf(buf, sizeof(buf), "do_not_recover=%d\n",
			do_not_recover);
	return simple_read_from_buffer(buffer, count, ppos, buf, len);
}
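A userspace sketch of driving the do_not_recover node; the debugfs mount point and host number are illustrative.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical path: <debugfs>/<drv_name>/host<N>/do_not_recover */
	const char *node = "/sys/kernel/debug/qedi/host0/do_not_recover";
	int fd = open(node, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Matched against the oper_str prefixes "enable"/"disable". */
	if (write(fd, "enable", 6) != 6)
		perror("write");
	close(fd);
	return 0;
}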
static int
qedi_gbl_ctx_show(struct seq_file *s, void *unused)
{
struct qedi_fastpath *fp = NULL;
struct qed_sb_info *sb_info = NULL;
struct status_block *sb = NULL;
struct global_queue *que = NULL;
int id;
u16 prod_idx;
struct qedi_ctx *qedi = s->private;
unsigned long flags;
seq_puts(s, " DUMP CQ CONTEXT:\n");
for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
spin_lock_irqsave(&qedi->hba_lock, flags);
seq_printf(s, "=========FAST CQ PATH [%d] ==========\n", id);
fp = &qedi->fp_array[id];
sb_info = fp->sb_info;
sb = sb_info->sb_virt;
prod_idx = (sb->pi_array[QEDI_PROTO_CQ_PROD_IDX] &
STATUS_BLOCK_PROD_INDEX_MASK);
seq_printf(s, "SB PROD IDX: %d\n", prod_idx);
que = qedi->global_queues[fp->sb_id];
seq_printf(s, "DRV CONS IDX: %d\n", que->cq_cons_idx);
seq_printf(s, "CQ complete host memory: %d\n", fp->sb_id);
seq_puts(s, "=========== END ==================\n\n\n");
spin_unlock_irqrestore(&qedi->hba_lock, flags);
}
return 0;
}
static int
qedi_dbg_gbl_ctx_open(struct inode *inode, struct file *file)
{
struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
dbg_ctx);
return single_open(file, qedi_gbl_ctx_show, qedi);
}
static int
qedi_io_trace_show(struct seq_file *s, void *unused)
{
int id, idx = 0;
struct qedi_ctx *qedi = s->private;
struct qedi_io_log *io_log;
unsigned long flags;
seq_puts(s, " DUMP IO LOGS:\n");
spin_lock_irqsave(&qedi->io_trace_lock, flags);
idx = qedi->io_trace_idx;
for (id = 0; id < QEDI_IO_TRACE_SIZE; id++) {
io_log = &qedi->io_trace_buf[idx];
seq_printf(s, "iodir-%d:", io_log->direction);
seq_printf(s, "tid-0x%x:", io_log->task_id);
seq_printf(s, "cid-0x%x:", io_log->cid);
seq_printf(s, "lun-%d:", io_log->lun);
seq_printf(s, "op-0x%02x:", io_log->op);
seq_printf(s, "0x%02x%02x%02x%02x:", io_log->lba[0],
io_log->lba[1], io_log->lba[2], io_log->lba[3]);
seq_printf(s, "buflen-%d:", io_log->bufflen);
seq_printf(s, "sgcnt-%d:", io_log->sg_count);
seq_printf(s, "res-0x%08x:", io_log->result);
seq_printf(s, "jif-%lu:", io_log->jiffies);
seq_printf(s, "blk_req_cpu-%d:", io_log->blk_req_cpu);
seq_printf(s, "req_cpu-%d:", io_log->req_cpu);
seq_printf(s, "intr_cpu-%d:", io_log->intr_cpu);
seq_printf(s, "blk_rsp_cpu-%d\n", io_log->blk_rsp_cpu);
idx++;
if (idx == QEDI_IO_TRACE_SIZE)
idx = 0;
}
spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
return 0;
}
static int
qedi_dbg_io_trace_open(struct inode *inode, struct file *file)
{
struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
dbg_ctx);
return single_open(file, qedi_io_trace_show, qedi);
}
const struct file_operations qedi_dbg_fops[] = {
qedi_dbg_fileops_seq(qedi, gbl_ctx),
qedi_dbg_fileops(qedi, do_not_recover),
qedi_dbg_fileops_seq(qedi, io_trace),
{ NULL, NULL },
};

drivers/scsi/qedi/qedi_fw.c (diff suppressed: file too large)


@ -0,0 +1,73 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_GBL_H_
#define _QEDI_GBL_H_
#include "qedi_iscsi.h"
extern uint qedi_io_tracing;
extern int do_not_recover;
extern struct scsi_host_template qedi_host_template;
extern struct iscsi_transport qedi_iscsi_transport;
extern const struct qed_iscsi_ops *qedi_ops;
extern struct qedi_debugfs_ops qedi_debugfs_ops;
extern const struct file_operations qedi_dbg_fops;
extern struct device_attribute *qedi_shost_attrs[];
int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
struct iscsi_task *mtask);
int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
struct iscsi_task *task,
char *datap, int data_len, int unsol);
int qedi_iscsi_send_ioreq(struct iscsi_task *task);
int qedi_get_task_idx(struct qedi_ctx *qedi);
void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
int qedi_iscsi_cleanup_task(struct iscsi_task *task,
bool mark_cmd_node_deleted);
void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt,
struct qedi_cmd *qedi_cmd);
void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
void qedi_process_iscsi_error(struct qedi_endpoint *ep,
struct async_data *data);
void qedi_start_conn_recovery(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn);
struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid);
void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data);
void qedi_mark_device_missing(struct iscsi_cls_session *cls_session);
void qedi_mark_device_available(struct iscsi_cls_session *cls_session);
void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu);
int qedi_recover_all_conns(struct qedi_ctx *qedi);
void qedi_fp_process_cqes(struct qedi_work *work);
int qedi_cleanup_all_io(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn,
struct iscsi_task *task, bool in_recovery);
void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
u16 tid, int8_t direction);
int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl);
void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id);
int qedi_create_sysfs_ctx_attr(struct qedi_ctx *qedi);
void qedi_remove_sysfs_ctx_attr(struct qedi_ctx *qedi);
void qedi_clearsq(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn,
struct iscsi_task *task);
#endif


@ -0,0 +1,52 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef __QEDI_HSI__
#define __QEDI_HSI__
/*
* Add include to common target
*/
#include <linux/qed/common_hsi.h>
/*
* Add include to common storage target
*/
#include <linux/qed/storage_common.h>
/*
* Add include to common TCP target
*/
#include <linux/qed/tcp_common.h>
/*
* Add include to common iSCSI target for both eCore and protocol driver
*/
#include <linux/qed/iscsi_common.h>
/*
* iSCSI CMDQ element
*/
struct iscsi_cmdqe {
__le16 conn_id;
u8 invalid_command;
u8 cmd_hdr_type;
__le32 reserved1[2];
__le32 cmd_payload[13];
};
/*
* iSCSI CMD header type
*/
enum iscsi_cmd_hdr_type {
ISCSI_CMD_HDR_TYPE_BHS_ONLY /* iSCSI BHS with no expected AHS */,
ISCSI_CMD_HDR_TYPE_BHS_W_AHS /* iSCSI BHS with expected AHS */,
ISCSI_CMD_HDR_TYPE_AHS /* iSCSI AHS */,
MAX_ISCSI_CMD_HDR_TYPE
};
#endif /* __QEDI_HSI__ */

(diff suppressed: file too large)


@ -0,0 +1,232 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_ISCSI_H_
#define _QEDI_ISCSI_H_
#include <linux/socket.h>
#include <linux/completion.h>
#include "qedi.h"
#define ISCSI_MAX_SESS_PER_HBA 4096
#define DEF_KA_TIMEOUT 7200000
#define DEF_KA_INTERVAL 10000
#define DEF_KA_MAX_PROBE_COUNT 10
#define DEF_TOS 0
#define DEF_TTL 0xfe
#define DEF_SND_SEQ_SCALE 0
#define DEF_RCV_BUF 0xffff
#define DEF_SND_BUF 0xffff
#define DEF_SEED 0
#define DEF_MAX_RT_TIME 8000
#define DEF_MAX_DA_COUNT 2
#define DEF_SWS_TIMER 1000
#define DEF_MAX_CWND 2
#define DEF_PATH_MTU 1500
#define DEF_MSS 1460
#define DEF_LL2_MTU 1560
#define JUMBO_MTU 9000
#define MIN_MTU 576 /* RFC 791 minimum datagram size */
#define IPV4_HDR_LEN 20
#define IPV6_HDR_LEN 40
#define TCP_HDR_LEN 20
#define TCP_OPTION_LEN 12
#define VLAN_LEN 4
enum {
EP_STATE_IDLE = 0x0,
EP_STATE_ACQRCONN_START = 0x1,
EP_STATE_ACQRCONN_COMPL = 0x2,
EP_STATE_OFLDCONN_START = 0x4,
EP_STATE_OFLDCONN_COMPL = 0x8,
EP_STATE_DISCONN_START = 0x10,
EP_STATE_DISCONN_COMPL = 0x20,
EP_STATE_CLEANUP_START = 0x40,
EP_STATE_CLEANUP_CMPL = 0x80,
EP_STATE_TCP_FIN_RCVD = 0x100,
EP_STATE_TCP_RST_RCVD = 0x200,
EP_STATE_LOGOUT_SENT = 0x400,
EP_STATE_LOGOUT_RESP_RCVD = 0x800,
EP_STATE_CLEANUP_FAILED = 0x1000,
EP_STATE_OFLDCONN_FAILED = 0x2000,
EP_STATE_CONNECT_FAILED = 0x4000,
EP_STATE_DISCONN_TIMEDOUT = 0x8000,
};
struct qedi_conn;
struct qedi_endpoint {
struct qedi_ctx *qedi;
u32 dst_addr[4];
u32 src_addr[4];
u16 src_port;
u16 dst_port;
u16 vlan_id;
u16 pmtu;
u8 src_mac[ETH_ALEN];
u8 dst_mac[ETH_ALEN];
u8 ip_type;
int state;
wait_queue_head_t ofld_wait;
wait_queue_head_t tcp_ofld_wait;
u32 iscsi_cid;
/* identifier of the connection from qed */
u32 handle;
u32 fw_cid;
void __iomem *p_doorbell;
/* Send queue management */
struct iscsi_wqe *sq;
dma_addr_t sq_dma;
u16 sq_prod_idx;
u16 fw_sq_prod_idx;
u16 sq_con_idx;
u32 sq_mem_size;
void *sq_pbl;
dma_addr_t sq_pbl_dma;
u32 sq_pbl_size;
struct qedi_conn *conn;
struct work_struct offload_work;
};
#define QEDI_SQ_WQES_MIN 16
struct qedi_io_bdt {
struct iscsi_sge *sge_tbl;
dma_addr_t sge_tbl_dma;
u16 sge_valid;
};
/**
* struct generic_pdu_resc - login pdu resource structure
*
* @req_buf: driver buffer used to stage payload associated with
* the login request
* @req_dma_addr: dma address for iscsi login request payload buffer
* @req_buf_size: actual login request payload length
* @req_wr_ptr: pointer into login request buffer when next data is
* to be written
* @resp_hdr: iscsi header where iscsi login response header is to
* be recreated
* @resp_buf: buffer to stage login response payload
* @resp_dma_addr: login response payload buffer dma address
* @resp_buf_size: login response payload length
* @resp_wr_ptr: pointer into login response buffer when next data is
* to be written
* @req_bd_tbl: iscsi login request payload BD table
* @req_bd_dma: login request BD table dma address
* @resp_bd_tbl: iscsi login response payload BD table
* @resp_bd_dma: login response BD table dma address
*
* The following structure defines buffer info for generic PDUs such as
* iSCSI Login, Logout and NOP.
*/
struct generic_pdu_resc {
char *req_buf;
dma_addr_t req_dma_addr;
u32 req_buf_size;
char *req_wr_ptr;
struct iscsi_hdr resp_hdr;
char *resp_buf;
dma_addr_t resp_dma_addr;
u32 resp_buf_size;
char *resp_wr_ptr;
char *req_bd_tbl;
dma_addr_t req_bd_dma;
char *resp_bd_tbl;
dma_addr_t resp_bd_dma;
};
struct qedi_conn {
struct iscsi_cls_conn *cls_conn;
struct qedi_ctx *qedi;
struct qedi_endpoint *ep;
struct list_head active_cmd_list;
spinlock_t list_lock; /* internal conn lock */
u32 active_cmd_count;
u32 cmd_cleanup_req;
u32 cmd_cleanup_cmpl;
u32 iscsi_conn_id;
int itt;
int abrt_conn;
#define QEDI_CID_RESERVED 0x5AFF
u32 fw_cid;
/*
* Buffer for login negotiation process
*/
struct generic_pdu_resc gen_pdu;
struct list_head tmf_work_list;
wait_queue_head_t wait_queue;
spinlock_t tmf_work_lock; /* tmf work lock */
unsigned long flags;
#define QEDI_CONN_FW_CLEANUP 1
};
struct qedi_cmd {
struct list_head io_cmd;
bool io_cmd_in_list;
struct iscsi_hdr hdr;
struct qedi_conn *conn;
struct scsi_cmnd *scsi_cmd;
struct scatterlist *sg;
struct qedi_io_bdt io_tbl;
struct iscsi_task_context request;
unsigned char *sense_buffer;
dma_addr_t sense_buffer_dma;
u16 task_id;
/* field populated for tmf work queue */
struct iscsi_task *task;
struct work_struct tmf_work;
int state;
#define CLEANUP_WAIT 1
#define CLEANUP_RECV 2
#define CLEANUP_WAIT_FAILED 3
#define CLEANUP_NOT_REQUIRED 4
#define LUN_RESET_RESPONSE_RECEIVED 5
#define RESPONSE_RECEIVED 6
int type;
#define TYPEIO 1
#define TYPERESET 2
struct qedi_work_map *list_tmf_work;
/* slowpath management */
bool use_slowpath;
struct iscsi_tm_rsp *tmf_resp_buf;
struct qedi_work cqe_work;
};
struct qedi_work_map {
struct list_head list;
struct qedi_cmd *qedi_cmd;
int rtid;
int state;
#define QEDI_WORK_QUEUED 1
#define QEDI_WORK_SCHEDULED 2
#define QEDI_WORK_EXIT 3
struct work_struct *ptr_tmf_work;
};
#define qedi_set_itt(task_id, itt) ((u32)(((task_id) & 0xffff) | ((itt) << 16)))
#define qedi_get_itt(cqe) (cqe.iscsi_hdr.cmd.itt >> 16)
#define QEDI_OFLD_WAIT_STATE(q) ((q)->state == EP_STATE_OFLDCONN_FAILED || \
(q)->state == EP_STATE_OFLDCONN_COMPL)
#endif /* _QEDI_ISCSI_H_ */
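A standalone sketch of the ITT packing behind qedi_set_itt()/qedi_get_itt(): the firmware task id lives in the low 16 bits and the iSCSI-layer ITT in the high 16 bits of the on-wire tag.
#include <stdint.h>
#include <stdio.h>

#define set_itt(task_id, itt) ((uint32_t)(((task_id) & 0xffff) | ((itt) << 16)))

int main(void)
{
	uint32_t wire = set_itt(0x0123, 0x00ab);	/* task 0x123, itt 0xab */

	printf("wire tag      0x%08x\n", wire);		/* 0x00ab0123 */
	printf("recovered itt 0x%04x\n", wire >> 16);	/* qedi_get_itt() */
	printf("fw task id    0x%04x\n", wire & 0xffff);
	return 0;
}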

(diff suppressed: file too large)


@ -0,0 +1,52 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi.h"
#include "qedi_gbl.h"
#include "qedi_iscsi.h"
#include "qedi_dbg.h"
static inline struct qedi_ctx *qedi_dev_to_hba(struct device *dev)
{
struct Scsi_Host *shost = class_to_shost(dev);
return iscsi_host_priv(shost);
}
static ssize_t qedi_show_port_state(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
if (atomic_read(&qedi->link_state) == QEDI_LINK_UP)
return sprintf(buf, "Online\n");
else
return sprintf(buf, "Linkdown\n");
}
static ssize_t qedi_show_speed(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
struct qed_link_output if_link;
qedi_ops->common->get_link(qedi->cdev, &if_link);
return sprintf(buf, "%d Gbit\n", if_link.speed / 1000);
}
static DEVICE_ATTR(port_state, 0444, qedi_show_port_state, NULL);
static DEVICE_ATTR(speed, 0444, qedi_show_speed, NULL);
struct device_attribute *qedi_shost_attrs[] = {
&dev_attr_port_state,
&dev_attr_speed,
NULL
};


@ -0,0 +1,14 @@
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#define QEDI_MODULE_VERSION "8.10.3.0"
#define QEDI_DRIVER_MAJOR_VER 8
#define QEDI_DRIVER_MINOR_VER 10
#define QEDI_DRIVER_REV_VER 3
#define QEDI_DRIVER_ENG_VER 0


@ -1988,9 +1988,9 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
scsi_qla_host_t *base_vha = shost_priv(fc_vport->shost);
scsi_qla_host_t *vha = NULL;
struct qla_hw_data *ha = base_vha->hw;
uint16_t options = 0;
int cnt;
struct req_que *req = ha->req_q_map[0];
struct qla_qpair *qpair;
ret = qla24xx_vport_create_req_sanity_check(fc_vport);
if (ret) {
@ -2075,15 +2075,9 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
qlt_vport_create(vha, ha);
qla24xx_vport_disable(fc_vport, disable);
if (ha->flags.cpu_affinity_enabled) {
req = ha->req_q_map[1];
ql_dbg(ql_dbg_multiq, vha, 0xc000,
"Request queue %p attached with "
"VP[%d], cpu affinity =%d\n",
req, vha->vp_idx, ha->flags.cpu_affinity_enabled);
goto vport_queue;
} else if (ql2xmaxqueues == 1 || !ha->npiv_info)
if (!ql2xmqsupport || !ha->npiv_info)
goto vport_queue;
/* Create a request queue in QoS mode for the vport */
for (cnt = 0; cnt < ha->nvram_npiv_size; cnt++) {
if (memcmp(ha->npiv_info[cnt].port_name, vha->port_name, 8) == 0
@ -2095,20 +2089,20 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
}
if (qos) {
ret = qla25xx_create_req_que(ha, options, vha->vp_idx, 0, 0,
qos);
if (!ret)
qpair = qla2xxx_create_qpair(vha, qos, vha->vp_idx);
if (!qpair)
ql_log(ql_log_warn, vha, 0x7084,
"Can't create request queue for VP[%d]\n",
"Can't create qpair for VP[%d]\n",
vha->vp_idx);
else {
ql_dbg(ql_dbg_multiq, vha, 0xc001,
"Request Que:%d Q0s: %d) created for VP[%d]\n",
ret, qos, vha->vp_idx);
"Queue pair: %d Qos: %d) created for VP[%d]\n",
qpair->id, qos, vha->vp_idx);
ql_dbg(ql_dbg_user, vha, 0x7085,
"Request Que:%d Q0s: %d) created for VP[%d]\n",
ret, qos, vha->vp_idx);
req = ha->req_q_map[ret];
"Queue Pair: %d Qos: %d) created for VP[%d]\n",
qpair->id, qos, vha->vp_idx);
req = qpair->req;
vha->qpair = qpair;
}
}
@ -2162,10 +2156,10 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
clear_bit(vha->vp_idx, ha->vp_idx_map);
mutex_unlock(&ha->vport_lock);
if (vha->req->id && !ha->flags.cpu_affinity_enabled) {
if (qla25xx_delete_req_que(vha, vha->req) != QLA_SUCCESS)
if (vha->qpair->vp_idx == vha->vp_idx) {
if (qla2xxx_delete_qpair(vha, vha->qpair) != QLA_SUCCESS)
ql_log(ql_log_warn, vha, 0x7087,
"Queue delete failed.\n");
"Queue Pair delete failed.\n");
}
ql_log(ql_log_info, vha, 0x7088, "VP[%d] deleted.\n", id);


@ -11,7 +11,7 @@
* ----------------------------------------------------------------------
* | Level | Last Value Used | Holes |
* ----------------------------------------------------------------------
* | Module Init and Probe | 0x0191 | 0x0146 |
* | Module Init and Probe | 0x0193 | 0x0146 |
* | | | 0x015b-0x0160 |
* | | | 0x016e |
* | Mailbox commands | 0x1199 | 0x1193 |
@ -58,7 +58,7 @@
* | | | 0xb13a,0xb142 |
* | | | 0xb13c-0xb140 |
* | | | 0xb149 |
* | MultiQ | 0xc00c | |
* | MultiQ | 0xc010 | |
* | Misc | 0xd301 | 0xd031-0xd0ff |
* | | | 0xd101-0xd1fe |
* | | | 0xd214-0xd2fe |


@ -401,6 +401,7 @@ typedef struct srb {
uint16_t type;
char *name;
int iocbs;
struct qla_qpair *qpair;
union {
struct srb_iocb iocb_cmd;
struct bsg_job *bsg_job;
@ -2719,6 +2720,7 @@ struct isp_operations {
int (*get_flash_version) (struct scsi_qla_host *, void *);
int (*start_scsi) (srb_t *);
int (*start_scsi_mq) (srb_t *);
int (*abort_isp) (struct scsi_qla_host *);
int (*iospace_config)(struct qla_hw_data*);
int (*initialize_adapter)(struct scsi_qla_host *);
@ -2730,8 +2732,10 @@ struct isp_operations {
#define QLA_MSIX_FW_MODE(m) (((m) & (BIT_7|BIT_8|BIT_9)) >> 7)
#define QLA_MSIX_FW_MODE_1(m) (QLA_MSIX_FW_MODE(m) == 1)
#define QLA_MSIX_DEFAULT 0x00
#define QLA_MSIX_RSP_Q 0x01
#define QLA_MSIX_DEFAULT 0x00
#define QLA_MSIX_RSP_Q 0x01
#define QLA_ATIO_VECTOR 0x02
#define QLA_MSIX_QPAIR_MULTIQ_RSP_Q 0x03
#define QLA_MIDX_DEFAULT 0
#define QLA_MIDX_RSP_Q 1
@ -2745,9 +2749,11 @@ struct scsi_qla_host;
struct qla_msix_entry {
int have_irq;
int in_use;
uint32_t vector;
uint16_t entry;
struct rsp_que *rsp;
char name[30];
void *handle;
struct irq_affinity_notify irq_notify;
int cpuid;
};
@ -2872,7 +2878,6 @@ struct rsp_que {
struct qla_msix_entry *msix;
struct req_que *req;
srb_t *status_srb; /* status continuation entry */
struct work_struct q_work;
dma_addr_t dma_fx00;
response_t *ring_fx00;
@ -2909,6 +2914,37 @@ struct req_que {
uint8_t req_pkt[REQUEST_ENTRY_SIZE];
};
/*Queue pair data structure */
struct qla_qpair {
spinlock_t qp_lock;
atomic_t ref_count;
/* distill these fields down to 'online=0/1'
* ha->flags.eeh_busy
* ha->flags.pci_channel_io_perm_failure
* base_vha->loop_state
*/
uint32_t online:1;
/* move vha->flags.difdix_supported here */
uint32_t difdix_supported:1;
uint32_t delete_in_progress:1;
uint16_t id; /* qp number used with FW */
uint16_t num_active_cmd; /* cmds down at firmware */
cpumask_t cpu_mask; /* CPU mask for cpu affinity operation */
uint16_t vp_idx; /* vport ID */
mempool_t *srb_mempool;
/* to do: New driver: move queues to here instead of pointers */
struct req_que *req;
struct rsp_que *rsp;
struct atio_que *atio;
struct qla_msix_entry *msix; /* point to &ha->msix_entries[x] */
struct qla_hw_data *hw;
struct work_struct q_work;
struct list_head qp_list_elem; /* vha->qp_list */
};
/* Place holder for FW buffer parameters */
struct qlfc_fw {
void *fw_buf;
@ -3004,7 +3040,6 @@ struct qla_hw_data {
uint32_t chip_reset_done :1;
uint32_t running_gold_fw :1;
uint32_t eeh_busy :1;
uint32_t cpu_affinity_enabled :1;
uint32_t disable_msix_handshake :1;
uint32_t fcp_prio_enabled :1;
uint32_t isp82xx_fw_hung:1;
@ -3061,10 +3096,15 @@ struct qla_hw_data {
uint8_t mqenable;
struct req_que **req_q_map;
struct rsp_que **rsp_q_map;
struct qla_qpair **queue_pair_map;
unsigned long req_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)];
unsigned long rsp_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)];
unsigned long qpair_qid_map[(QLA_MAX_QUEUES / 8)
/ sizeof(unsigned long)];
uint8_t max_req_queues;
uint8_t max_rsp_queues;
uint8_t max_qpairs;
struct qla_qpair *base_qpair;
struct qla_npiv_entry *npiv_info;
uint16_t nvram_npiv_size;
@ -3328,6 +3368,7 @@ struct qla_hw_data {
struct mutex vport_lock; /* Virtual port synchronization */
spinlock_t vport_slock; /* order is hardware_lock, then vport_slock */
struct mutex mq_lock; /* multi-queue synchronization */
struct completion mbx_cmd_comp; /* Serialize mbx access */
struct completion mbx_intr_comp; /* Used for completion notification */
struct completion dcbx_comp; /* For set port config notification */
@ -3608,6 +3649,7 @@ typedef struct scsi_qla_host {
uint32_t fw_tgt_reported:1;
uint32_t bbcr_enable:1;
uint32_t qpairs_available:1;
} flags;
atomic_t loop_state;
@ -3646,6 +3688,7 @@ typedef struct scsi_qla_host {
#define FX00_TARGET_SCAN 24
#define FX00_CRITEMP_RECOVERY 25
#define FX00_HOST_INFO_RESEND 26
#define QPAIR_ONLINE_CHECK_NEEDED 27
unsigned long pci_flags;
#define PFLG_DISCONNECTED 0 /* PCI device removed */
@ -3704,10 +3747,13 @@ typedef struct scsi_qla_host {
/* List of pending PLOGI acks, protected by hw lock */
struct list_head plogi_ack_list;
struct list_head qp_list;
uint32_t vp_abort_cnt;
struct fc_vport *fc_vport; /* holds fc_vport * for each vport */
uint16_t vp_idx; /* vport ID */
struct qla_qpair *qpair; /* base qpair */
unsigned long vp_flags;
#define VP_IDX_ACQUIRED 0 /* bit no 0 */
@ -3763,6 +3809,23 @@ struct qla_tgt_vp_map {
scsi_qla_host_t *vha;
};
struct qla2_sgx {
dma_addr_t dma_addr; /* OUT */
uint32_t dma_len; /* OUT */
uint32_t tot_bytes; /* IN */
struct scatterlist *cur_sg; /* IN */
/* for book keeping, bzero on initial invocation */
uint32_t bytes_consumed;
uint32_t num_bytes;
uint32_t tot_partial;
/* for debugging */
uint32_t num_sg;
srb_t *sp;
};
/*
* Macros to help code, maintain, etc.
*/
@ -3775,21 +3838,34 @@ struct qla_tgt_vp_map {
(test_bit(ISP_ABORT_NEEDED, &ha->dpc_flags) || \
test_bit(LOOP_RESYNC_NEEDED, &ha->dpc_flags))
#define QLA_VHA_MARK_BUSY(__vha, __bail) do { \
atomic_inc(&__vha->vref_count); \
mb(); \
if (__vha->flags.delete_progress) { \
atomic_dec(&__vha->vref_count); \
__bail = 1; \
} else { \
__bail = 0; \
} \
#define QLA_VHA_MARK_BUSY(__vha, __bail) do { \
atomic_inc(&__vha->vref_count); \
mb(); \
if (__vha->flags.delete_progress) { \
atomic_dec(&__vha->vref_count); \
__bail = 1; \
} else { \
__bail = 0; \
} \
} while (0)
#define QLA_VHA_MARK_NOT_BUSY(__vha) do { \
atomic_dec(&__vha->vref_count); \
#define QLA_VHA_MARK_NOT_BUSY(__vha) \
atomic_dec(&__vha->vref_count); \
#define QLA_QPAIR_MARK_BUSY(__qpair, __bail) do { \
atomic_inc(&__qpair->ref_count); \
mb(); \
if (__qpair->delete_in_progress) { \
atomic_dec(&__qpair->ref_count); \
__bail = 1; \
} else { \
__bail = 0; \
} \
} while (0)
#define QLA_QPAIR_MARK_NOT_BUSY(__qpair) \
atomic_dec(&__qpair->ref_count); \
/*
* qla2x00 local function return status codes
*/


@ -91,12 +91,17 @@ extern int
qla2x00_alloc_outstanding_cmds(struct qla_hw_data *, struct req_que *);
extern int qla2x00_init_rings(scsi_qla_host_t *);
extern uint8_t qla27xx_find_valid_image(struct scsi_qla_host *);
extern struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *,
int, int);
extern int qla2xxx_delete_qpair(struct scsi_qla_host *, struct qla_qpair *);
/*
* Global Data in qla_os.c source file.
*/
extern char qla2x00_version_str[];
extern struct kmem_cache *srb_cachep;
extern int ql2xlogintimeout;
extern int qlport_down_retry;
extern int ql2xplogiabsentdevice;
@ -105,8 +110,7 @@ extern int ql2xfdmienable;
extern int ql2xallocfwdump;
extern int ql2xextended_error_logging;
extern int ql2xiidmaenable;
extern int ql2xmaxqueues;
extern int ql2xmultique_tag;
extern int ql2xmqsupport;
extern int ql2xfwloadbin;
extern int ql2xetsenable;
extern int ql2xshiftctondsd;
@ -172,6 +176,9 @@ extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
extern void qla2x00_disable_board_on_pci_error(struct work_struct *);
extern void qla2x00_sp_compl(void *, void *, int);
extern void qla2xxx_qpair_sp_free_dma(void *, void *);
extern void qla2xxx_qpair_sp_compl(void *, void *, int);
/*
* Global Functions in qla_mid.c source file.
@ -220,6 +227,8 @@ extern uint16_t qla2x00_calc_iocbs_32(uint16_t);
extern uint16_t qla2x00_calc_iocbs_64(uint16_t);
extern void qla2x00_build_scsi_iocbs_32(srb_t *, cmd_entry_t *, uint16_t);
extern void qla2x00_build_scsi_iocbs_64(srb_t *, cmd_entry_t *, uint16_t);
extern void qla24xx_build_scsi_iocbs(srb_t *, struct cmd_type_7 *,
uint16_t, struct req_que *);
extern int qla2x00_start_scsi(srb_t *sp);
extern int qla24xx_start_scsi(srb_t *sp);
int qla2x00_marker(struct scsi_qla_host *, struct req_que *, struct rsp_que *,
@ -227,6 +236,7 @@ int qla2x00_marker(struct scsi_qla_host *, struct req_que *, struct rsp_que *,
extern int qla2x00_start_sp(srb_t *);
extern int qla24xx_dif_start_scsi(srb_t *);
extern int qla2x00_start_bidir(srb_t *, struct scsi_qla_host *, uint32_t);
extern int qla2xxx_dif_start_scsi_mq(srb_t *);
extern unsigned long qla2x00_get_async_timeout(struct scsi_qla_host *);
extern void *qla2x00_alloc_iocbs(scsi_qla_host_t *, srb_t *);
@ -237,7 +247,10 @@ extern int qla24xx_walk_and_build_sglist(struct qla_hw_data *, srb_t *,
uint32_t *, uint16_t, struct qla_tgt_cmd *);
extern int qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *, srb_t *,
uint32_t *, uint16_t, struct qla_tgt_cmd *);
extern int qla24xx_get_one_block_sg(uint32_t, struct qla2_sgx *, uint32_t *);
extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *);
extern int qla24xx_build_scsi_crc_2_iocbs(srb_t *,
struct cmd_type_crc_2 *, uint16_t, uint16_t, uint16_t);
/*
* Global Function Prototypes in qla_mbx.c source file.
@ -468,6 +481,8 @@ qla2x00_get_sp_from_handle(scsi_qla_host_t *, const char *, struct req_que *,
extern void
qla2x00_process_completed_request(struct scsi_qla_host *, struct req_que *,
uint32_t);
extern irqreturn_t
qla2xxx_msix_rsp_q(int irq, void *dev_id);
/*
* Global Function Prototypes in qla_sup.c source file.
@ -603,15 +618,18 @@ extern int qla2x00_dfs_setup(scsi_qla_host_t *);
extern int qla2x00_dfs_remove(scsi_qla_host_t *);
/* Global function prototypes for multi-q */
extern int qla25xx_request_irq(struct rsp_que *);
extern int qla25xx_request_irq(struct qla_hw_data *, struct qla_qpair *,
struct qla_msix_entry *, int);
extern int qla25xx_init_req_que(struct scsi_qla_host *, struct req_que *);
extern int qla25xx_init_rsp_que(struct scsi_qla_host *, struct rsp_que *);
extern int qla25xx_create_req_que(struct qla_hw_data *, uint16_t, uint8_t,
uint16_t, int, uint8_t);
extern int qla25xx_create_rsp_que(struct qla_hw_data *, uint16_t, uint8_t,
uint16_t, int);
uint16_t, struct qla_qpair *);
extern void qla2x00_init_response_q_entries(struct rsp_que *);
extern int qla25xx_delete_req_que(struct scsi_qla_host *, struct req_que *);
extern int qla25xx_delete_rsp_que(struct scsi_qla_host *, struct rsp_que *);
extern int qla25xx_delete_queues(struct scsi_qla_host *);
extern uint16_t qla24xx_rd_req_reg(struct qla_hw_data *, uint16_t);
extern uint16_t qla25xx_rd_req_reg(struct qla_hw_data *, uint16_t);


@ -1769,8 +1769,7 @@ qla2x00_alloc_outstanding_cmds(struct qla_hw_data *ha, struct req_que *req)
if (req->outstanding_cmds)
return QLA_SUCCESS;
if (!IS_FWI2_CAPABLE(ha) || (ha->mqiobase &&
(ql2xmultique_tag || ql2xmaxqueues > 1)))
if (!IS_FWI2_CAPABLE(ha))
req->num_outstanding_cmds = DEFAULT_OUTSTANDING_COMMANDS;
else {
if (ha->cur_fw_xcb_count <= ha->cur_fw_iocb_count)
@ -4248,10 +4247,7 @@ qla2x00_loop_resync(scsi_qla_host_t *vha)
struct req_que *req;
struct rsp_que *rsp;
if (vha->hw->flags.cpu_affinity_enabled)
req = vha->hw->req_q_map[0];
else
req = vha->req;
req = vha->req;
rsp = req->rsp;
clear_bit(ISP_ABORT_RETRY, &vha->dpc_flags);
@ -6040,10 +6036,10 @@ qla24xx_configure_vhba(scsi_qla_host_t *vha)
return -EINVAL;
rval = qla2x00_fw_ready(base_vha);
if (ha->flags.cpu_affinity_enabled)
req = ha->req_q_map[0];
if (vha->qpair)
req = vha->qpair->req;
else
req = vha->req;
req = ha->req_q_map[0];
rsp = req->rsp;
if (rval == QLA_SUCCESS) {
@ -6725,3 +6721,162 @@ qla24xx_update_all_fcp_prio(scsi_qla_host_t *vha)
return ret;
}
struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos, int vp_idx)
{
int rsp_id = 0;
int req_id = 0;
int i;
struct qla_hw_data *ha = vha->hw;
uint16_t qpair_id = 0;
struct qla_qpair *qpair = NULL;
struct qla_msix_entry *msix;
if (!(ha->fw_attributes & BIT_6) || !ha->flags.msix_enabled) {
ql_log(ql_log_warn, vha, 0x00181,
"FW/Driver is not multi-queue capable.\n");
return NULL;
}
if (ql2xmqsupport) {
qpair = kzalloc(sizeof(struct qla_qpair), GFP_KERNEL);
if (qpair == NULL) {
ql_log(ql_log_warn, vha, 0x0182,
"Failed to allocate memory for queue pair.\n");
return NULL;
}
qpair->hw = vha->hw;
/* Assign available que pair id */
mutex_lock(&ha->mq_lock);
qpair_id = find_first_zero_bit(ha->qpair_qid_map, ha->max_qpairs);
if (qpair_id >= ha->max_qpairs) {
mutex_unlock(&ha->mq_lock);
ql_log(ql_log_warn, vha, 0x0183,
"No resources to create additional q pair.\n");
goto fail_qid_map;
}
set_bit(qpair_id, ha->qpair_qid_map);
ha->queue_pair_map[qpair_id] = qpair;
qpair->id = qpair_id;
qpair->vp_idx = vp_idx;
for (i = 0; i < ha->msix_count; i++) {
msix = &ha->msix_entries[i];
if (msix->in_use)
continue;
qpair->msix = msix;
ql_dbg(ql_dbg_multiq, vha, 0xc00f,
"Vector %x selected for qpair\n", msix->vector);
break;
}
if (!qpair->msix) {
ql_log(ql_log_warn, vha, 0x0184,
"Out of MSI-X vectors!.\n");
goto fail_msix;
}
qpair->msix->in_use = 1;
list_add_tail(&qpair->qp_list_elem, &vha->qp_list);
mutex_unlock(&ha->mq_lock);
/* Create response queue first */
rsp_id = qla25xx_create_rsp_que(ha, 0, 0, 0, qpair);
if (!rsp_id) {
ql_log(ql_log_warn, vha, 0x0185,
"Failed to create response queue.\n");
goto fail_rsp;
}
qpair->rsp = ha->rsp_q_map[rsp_id];
/* Create request queue */
req_id = qla25xx_create_req_que(ha, 0, vp_idx, 0, rsp_id, qos);
if (!req_id) {
ql_log(ql_log_warn, vha, 0x0186,
"Failed to create request queue.\n");
goto fail_req;
}
qpair->req = ha->req_q_map[req_id];
qpair->rsp->req = qpair->req;
if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
if (ha->fw_attributes & BIT_4)
qpair->difdix_supported = 1;
}
qpair->srb_mempool = mempool_create_slab_pool(SRB_MIN_REQ, srb_cachep);
if (!qpair->srb_mempool) {
ql_log(ql_log_warn, vha, 0x0191,
"Failed to create srb mempool for qpair %d\n",
qpair->id);
goto fail_mempool;
}
/* Mark as online */
qpair->online = 1;
if (!vha->flags.qpairs_available)
vha->flags.qpairs_available = 1;
ql_dbg(ql_dbg_multiq, vha, 0xc00d,
"Request/Response queue pair created, id %d\n",
qpair->id);
ql_dbg(ql_dbg_init, vha, 0x0187,
"Request/Response queue pair created, id %d\n",
qpair->id);
}
return qpair;
fail_mempool:
fail_req:
qla25xx_delete_rsp_que(vha, qpair->rsp);
fail_rsp:
mutex_lock(&ha->mq_lock);
qpair->msix->in_use = 0;
list_del(&qpair->qp_list_elem);
if (list_empty(&vha->qp_list))
vha->flags.qpairs_available = 0;
fail_msix:
ha->queue_pair_map[qpair_id] = NULL;
clear_bit(qpair_id, ha->qpair_qid_map);
mutex_unlock(&ha->mq_lock);
fail_qid_map:
kfree(qpair);
return NULL;
}
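A sketch of the expected caller flow (compare the vport create path earlier in this series); error handling beyond the NULL check is elided, and the helper name is illustrative.
/* Kernel fragment: create a qpair with a QoS value, stash it on the
 * vha, and tear it down with qla2xxx_delete_qpair() on vport delete.
 */
static int example_attach_qpair(struct scsi_qla_host *vha, int qos)
{
	struct qla_qpair *qpair;

	qpair = qla2xxx_create_qpair(vha, qos, vha->vp_idx);
	if (!qpair)
		return -ENOMEM;	/* no vectors, no queue ids, or no memory */

	vha->qpair = qpair;	/* I/O paths pick req/rsp via sp->qpair */
	return 0;
}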
int qla2xxx_delete_qpair(struct scsi_qla_host *vha, struct qla_qpair *qpair)
{
int ret;
struct qla_hw_data *ha = qpair->hw;
qpair->delete_in_progress = 1;
while (atomic_read(&qpair->ref_count))
msleep(500);
ret = qla25xx_delete_req_que(vha, qpair->req);
if (ret != QLA_SUCCESS)
goto fail;
ret = qla25xx_delete_rsp_que(vha, qpair->rsp);
if (ret != QLA_SUCCESS)
goto fail;
mutex_lock(&ha->mq_lock);
ha->queue_pair_map[qpair->id] = NULL;
clear_bit(qpair->id, ha->qpair_qid_map);
list_del(&qpair->qp_list_elem);
if (list_empty(&vha->qp_list))
vha->flags.qpairs_available = 0;
mempool_destroy(qpair->srb_mempool);
kfree(qpair);
mutex_unlock(&ha->mq_lock);
return QLA_SUCCESS;
fail:
return ret;
}


@ -215,6 +215,36 @@ qla2x00_reset_active(scsi_qla_host_t *vha)
test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags);
}
static inline srb_t *
qla2xxx_get_qpair_sp(struct qla_qpair *qpair, fc_port_t *fcport, gfp_t flag)
{
srb_t *sp = NULL;
uint8_t bail;
QLA_QPAIR_MARK_BUSY(qpair, bail);
if (unlikely(bail))
return NULL;
sp = mempool_alloc(qpair->srb_mempool, flag);
if (!sp)
goto done;
memset(sp, 0, sizeof(*sp));
sp->fcport = fcport;
sp->iocbs = 1;
done:
if (!sp)
QLA_QPAIR_MARK_NOT_BUSY(qpair);
return sp;
}
static inline void
qla2xxx_rel_qpair_sp(struct qla_qpair *qpair, srb_t *sp)
{
mempool_free(sp, qpair->srb_mempool);
QLA_QPAIR_MARK_NOT_BUSY(qpair);
}
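A sketch of the srb lifetime on the qpair path, assuming the usual fill/start/complete flow; callback wiring is elided and the function name is illustrative.
/* Kernel fragment: allocation from the qpair mempool also takes a
 * qpair reference; the release from the completion path drops it.
 */
static int example_issue(struct qla_qpair *qpair, fc_port_t *fcport)
{
	srb_t *sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);

	if (!sp)
		return -EBUSY;	/* qpair deleting or mempool exhausted */

	sp->qpair = qpair;	/* qla2xxx_start_scsi_mq() reads this */
	/* ... fill in command, done/free callbacks, then start ... */

	/* the completion path ends with: */
	qla2xxx_rel_qpair_sp(sp->qpair, sp);
	return 0;
}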
static inline srb_t *
qla2x00_get_sp(scsi_qla_host_t *vha, fc_port_t *fcport, gfp_t flag)
{


@ -12,7 +12,6 @@
#include <scsi/scsi_tcq.h>
static void qla25xx_set_que(srb_t *, struct rsp_que **);
/**
* qla2x00_get_cmd_direction() - Determine control_flag data direction.
* @cmd: SCSI command
@ -143,7 +142,7 @@ qla2x00_prep_cont_type1_iocb(scsi_qla_host_t *vha, struct req_que *req)
return (cont_pkt);
}
static inline int
inline int
qla24xx_configure_prot_mode(srb_t *sp, uint16_t *fw_prot_opts)
{
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
@ -693,10 +692,11 @@ qla24xx_calc_dsd_lists(uint16_t dsds)
* @sp: SRB command to process
* @cmd_pkt: Command type 3 IOCB
* @tot_dsds: Total number of segments to transfer
* @req: pointer to request queue
*/
static inline void
inline void
qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
uint16_t tot_dsds)
uint16_t tot_dsds, struct req_que *req)
{
uint16_t avail_dsds;
uint32_t *cur_dsd;
@ -745,7 +745,7 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
* Five DSDs are available in the Continuation
* Type 1 IOCB.
*/
cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
cont_pkt = qla2x00_prep_cont_type1_iocb(vha, req);
cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
avail_dsds = 5;
}
@ -845,24 +845,7 @@ qla24xx_set_t10dif_tags(srb_t *sp, struct fw_dif_context *pkt,
}
}
struct qla2_sgx {
dma_addr_t dma_addr; /* OUT */
uint32_t dma_len; /* OUT */
uint32_t tot_bytes; /* IN */
struct scatterlist *cur_sg; /* IN */
/* for book keeping, bzero on initial invocation */
uint32_t bytes_consumed;
uint32_t num_bytes;
uint32_t tot_partial;
/* for debugging */
uint32_t num_sg;
srb_t *sp;
};
static int
int
qla24xx_get_one_block_sg(uint32_t blk_sz, struct qla2_sgx *sgx,
uint32_t *partial)
{
@ -1207,7 +1190,7 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
* @cmd_pkt: Command type 3 IOCB
* @tot_dsds: Total number of segments to transfer
*/
static inline int
inline int
qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
uint16_t tot_dsds, uint16_t tot_prot_dsds, uint16_t fw_prot_opts)
{
@ -1436,8 +1419,8 @@ qla24xx_start_scsi(srb_t *sp)
struct qla_hw_data *ha = vha->hw;
/* Setup device pointers. */
qla25xx_set_que(sp, &rsp);
req = vha->req;
rsp = req->rsp;
/* So we know we haven't pci_map'ed anything yet */
tot_dsds = 0;
@ -1523,12 +1506,10 @@ qla24xx_start_scsi(srb_t *sp)
cmd_pkt->byte_count = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
/* Build IOCB segments */
qla24xx_build_scsi_iocbs(sp, cmd_pkt, tot_dsds);
qla24xx_build_scsi_iocbs(sp, cmd_pkt, tot_dsds, req);
/* Set total data segment count. */
cmd_pkt->entry_count = (uint8_t)req_cnt;
/* Specify response queue number where completion should happen */
cmd_pkt->entry_status = (uint8_t) rsp->id;
wmb();
/* Adjust ring index. */
req->ring_index++;
@ -1597,9 +1578,8 @@ qla24xx_dif_start_scsi(srb_t *sp)
}
/* Setup device pointers. */
qla25xx_set_que(sp, &rsp);
req = vha->req;
rsp = req->rsp;
/* So we know we haven't pci_map'ed anything yet */
tot_dsds = 0;
@ -1764,18 +1744,365 @@ queuing_error:
return QLA_FUNCTION_FAILED;
}
static void qla25xx_set_que(srb_t *sp, struct rsp_que **rsp)
/**
* qla2xxx_start_scsi_mq() - Send a SCSI command to the ISP
* @sp: command to send to the ISP
*
* Returns non-zero if a failure occurred, else zero.
*/
static int
qla2xxx_start_scsi_mq(srb_t *sp)
{
int nseg;
unsigned long flags;
uint32_t *clr_ptr;
uint32_t index;
uint32_t handle;
struct cmd_type_7 *cmd_pkt;
uint16_t cnt;
uint16_t req_cnt;
uint16_t tot_dsds;
struct req_que *req = NULL;
struct rsp_que *rsp = NULL;
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
struct qla_hw_data *ha = sp->fcport->vha->hw;
int affinity = cmd->request->cpu;
struct scsi_qla_host *vha = sp->fcport->vha;
struct qla_hw_data *ha = vha->hw;
struct qla_qpair *qpair = sp->qpair;
if (ha->flags.cpu_affinity_enabled && affinity >= 0 &&
affinity < ha->max_rsp_queues - 1)
*rsp = ha->rsp_q_map[affinity + 1];
else
*rsp = ha->rsp_q_map[0];
/* Setup qpair pointers */
rsp = qpair->rsp;
req = qpair->req;
/* So we know we haven't pci_map'ed anything yet */
tot_dsds = 0;
/* Send marker if required */
if (vha->marker_needed != 0) {
if (qla2x00_marker(vha, req, rsp, 0, 0, MK_SYNC_ALL) !=
QLA_SUCCESS)
return QLA_FUNCTION_FAILED;
vha->marker_needed = 0;
}
/* Acquire qpair specific lock */
spin_lock_irqsave(&qpair->qp_lock, flags);
/* Check for room in outstanding command list. */
handle = req->current_outstanding_cmd;
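/*
 * The scan below is circular: it starts just past the last handle
 * used and wraps back to 1 (handle 0 is reserved), so a free slot
 * is normally found in a few iterations.
 */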
for (index = 1; index < req->num_outstanding_cmds; index++) {
handle++;
if (handle == req->num_outstanding_cmds)
handle = 1;
if (!req->outstanding_cmds[handle])
break;
}
if (index == req->num_outstanding_cmds)
goto queuing_error;
/* Map the sg table so we have an accurate count of sg entries needed */
if (scsi_sg_count(cmd)) {
nseg = dma_map_sg(&ha->pdev->dev, scsi_sglist(cmd),
scsi_sg_count(cmd), cmd->sc_data_direction);
if (unlikely(!nseg))
goto queuing_error;
} else
nseg = 0;
tot_dsds = nseg;
req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
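/*
 * If the cached free-entry count looks insufficient, re-read the
 * queue out-pointer (shadow register when available) and recompute
 * the free space, accounting for ring wrap-around.
 */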
if (req->cnt < (req_cnt + 2)) {
cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
RD_REG_DWORD_RELAXED(req->req_q_out);
if (req->ring_index < cnt)
req->cnt = cnt - req->ring_index;
else
req->cnt = req->length -
(req->ring_index - cnt);
if (req->cnt < (req_cnt + 2))
goto queuing_error;
}
/* Build command packet. */
req->current_outstanding_cmd = handle;
req->outstanding_cmds[handle] = sp;
sp->handle = handle;
cmd->host_scribble = (unsigned char *)(unsigned long)handle;
req->cnt -= req_cnt;
cmd_pkt = (struct cmd_type_7 *)req->ring_ptr;
cmd_pkt->handle = MAKE_HANDLE(req->id, handle);
/* Zero out remaining portion of packet. */
/* tagged queuing modifier -- default is TSK_SIMPLE (0). */
clr_ptr = (uint32_t *)cmd_pkt + 2;
memset(clr_ptr, 0, REQUEST_ENTRY_SIZE - 8);
cmd_pkt->dseg_count = cpu_to_le16(tot_dsds);
/* Set NPORT-ID and LUN number*/
cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id);
cmd_pkt->port_id[0] = sp->fcport->d_id.b.al_pa;
cmd_pkt->port_id[1] = sp->fcport->d_id.b.area;
cmd_pkt->port_id[2] = sp->fcport->d_id.b.domain;
cmd_pkt->vp_index = sp->fcport->vha->vp_idx;
int_to_scsilun(cmd->device->lun, &cmd_pkt->lun);
host_to_fcp_swap((uint8_t *)&cmd_pkt->lun, sizeof(cmd_pkt->lun));
cmd_pkt->task = TSK_SIMPLE;
/* Load SCSI command packet. */
memcpy(cmd_pkt->fcp_cdb, cmd->cmnd, cmd->cmd_len);
host_to_fcp_swap(cmd_pkt->fcp_cdb, sizeof(cmd_pkt->fcp_cdb));
cmd_pkt->byte_count = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
/* Build IOCB segments */
qla24xx_build_scsi_iocbs(sp, cmd_pkt, tot_dsds, req);
/* Set total data segment count. */
cmd_pkt->entry_count = (uint8_t)req_cnt;
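/* Make the IOCB globally visible before the ring index update below. */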
wmb();
/* Adjust ring index. */
req->ring_index++;
if (req->ring_index == req->length) {
req->ring_index = 0;
req->ring_ptr = req->ring;
} else
req->ring_ptr++;
sp->flags |= SRB_DMA_VALID;
/* Set chip new ring index. */
WRT_REG_DWORD(req->req_q_in, req->ring_index);
/* Manage unprocessed RIO/ZIO commands in response queue. */
if (vha->flags.process_response_queue &&
rsp->ring_ptr->signature != RESPONSE_PROCESSED)
qla24xx_process_response_queue(vha, rsp);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_SUCCESS;
queuing_error:
if (tot_dsds)
scsi_dma_unmap(cmd);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_FUNCTION_FAILED;
}
/**
* qla2xxx_dif_start_scsi_mq() - Send a SCSI command to the ISP
* @sp: command to send to the ISP
*
* Returns non-zero if a failure occurred, else zero.
*/
int
qla2xxx_dif_start_scsi_mq(srb_t *sp)
{
int nseg;
unsigned long flags;
uint32_t *clr_ptr;
uint32_t index;
uint32_t handle;
uint16_t cnt;
uint16_t req_cnt = 0;
uint16_t tot_dsds;
uint16_t tot_prot_dsds;
uint16_t fw_prot_opts = 0;
struct req_que *req = NULL;
struct rsp_que *rsp = NULL;
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
struct scsi_qla_host *vha = sp->fcport->vha;
struct qla_hw_data *ha = vha->hw;
struct cmd_type_crc_2 *cmd_pkt;
uint32_t status = 0;
struct qla_qpair *qpair = sp->qpair;
#define QDSS_GOT_Q_SPACE BIT_0
/* Check for host side state */
if (!qpair->online) {
cmd->result = DID_NO_CONNECT << 16;
return QLA_INTERFACE_ERROR;
}
if (!qpair->difdix_supported &&
scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) {
cmd->result = DID_NO_CONNECT << 16;
return QLA_INTERFACE_ERROR;
}
/* Only process protection I/O or CDBs longer than 16 bytes in this routine */
if (scsi_get_prot_op(cmd) == SCSI_PROT_NORMAL) {
if (cmd->cmd_len <= 16)
return qla2xxx_start_scsi_mq(sp);
}
/* Setup qpair pointers */
rsp = qpair->rsp;
req = qpair->req;
/* So we know we haven't pci_map'ed anything yet */
tot_dsds = 0;
/* Send marker if required */
if (vha->marker_needed != 0) {
if (qla2x00_marker(vha, req, rsp, 0, 0, MK_SYNC_ALL) !=
QLA_SUCCESS)
return QLA_FUNCTION_FAILED;
vha->marker_needed = 0;
}
/* Acquire ring specific lock */
spin_lock_irqsave(&qpair->qp_lock, flags);
/* Check for room in outstanding command list. */
handle = req->current_outstanding_cmd;
for (index = 1; index < req->num_outstanding_cmds; index++) {
handle++;
if (handle == req->num_outstanding_cmds)
handle = 1;
if (!req->outstanding_cmds[handle])
break;
}
if (index == req->num_outstanding_cmds)
goto queuing_error;
/* Compute number of required data segments */
/* Map the sg table so we have an accurate count of sg entries needed */
if (scsi_sg_count(cmd)) {
nseg = dma_map_sg(&ha->pdev->dev, scsi_sglist(cmd),
scsi_sg_count(cmd), cmd->sc_data_direction);
if (unlikely(!nseg))
goto queuing_error;
else
sp->flags |= SRB_DMA_VALID;
if ((scsi_get_prot_op(cmd) == SCSI_PROT_READ_INSERT) ||
(scsi_get_prot_op(cmd) == SCSI_PROT_WRITE_STRIP)) {
struct qla2_sgx sgx;
uint32_t partial;
memset(&sgx, 0, sizeof(struct qla2_sgx));
sgx.tot_bytes = scsi_bufflen(cmd);
sgx.cur_sg = scsi_sglist(cmd);
sgx.sp = sp;
nseg = 0;
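/*
 * For READ_INSERT/WRITE_STRIP the firmware works on protection-
 * interval blocks, so recount the segments in sector_size chunks.
 */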
while (qla24xx_get_one_block_sg(
cmd->device->sector_size, &sgx, &partial))
nseg++;
}
} else
nseg = 0;
/* number of required data segments */
tot_dsds = nseg;
/* Compute number of required protection segments */
if (qla24xx_configure_prot_mode(sp, &fw_prot_opts)) {
nseg = dma_map_sg(&ha->pdev->dev, scsi_prot_sglist(cmd),
scsi_prot_sg_count(cmd), cmd->sc_data_direction);
if (unlikely(!nseg))
goto queuing_error;
else
sp->flags |= SRB_CRC_PROT_DMA_VALID;
if ((scsi_get_prot_op(cmd) == SCSI_PROT_READ_INSERT) ||
(scsi_get_prot_op(cmd) == SCSI_PROT_WRITE_STRIP)) {
nseg = scsi_bufflen(cmd) / cmd->device->sector_size;
}
} else {
nseg = 0;
}
req_cnt = 1;
/* Total Data and protection sg segment(s) */
tot_prot_dsds = nseg;
tot_dsds += nseg;
if (req->cnt < (req_cnt + 2)) {
cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
RD_REG_DWORD_RELAXED(req->req_q_out);
if (req->ring_index < cnt)
req->cnt = cnt - req->ring_index;
else
req->cnt = req->length -
(req->ring_index - cnt);
if (req->cnt < (req_cnt + 2))
goto queuing_error;
}
status |= QDSS_GOT_Q_SPACE;
/* Build header part of command packet (excluding the OPCODE). */
req->current_outstanding_cmd = handle;
req->outstanding_cmds[handle] = sp;
sp->handle = handle;
cmd->host_scribble = (unsigned char *)(unsigned long)handle;
req->cnt -= req_cnt;
/* Fill-in common area */
cmd_pkt = (struct cmd_type_crc_2 *)req->ring_ptr;
cmd_pkt->handle = MAKE_HANDLE(req->id, handle);
clr_ptr = (uint32_t *)cmd_pkt + 2;
memset(clr_ptr, 0, REQUEST_ENTRY_SIZE - 8);
/* Set NPORT-ID and LUN number*/
cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id);
cmd_pkt->port_id[0] = sp->fcport->d_id.b.al_pa;
cmd_pkt->port_id[1] = sp->fcport->d_id.b.area;
cmd_pkt->port_id[2] = sp->fcport->d_id.b.domain;
int_to_scsilun(cmd->device->lun, &cmd_pkt->lun);
host_to_fcp_swap((uint8_t *)&cmd_pkt->lun, sizeof(cmd_pkt->lun));
/* Total Data and protection segment(s) */
cmd_pkt->dseg_count = cpu_to_le16(tot_dsds);
/* Build IOCB segments and adjust for data protection segments */
if (qla24xx_build_scsi_crc_2_iocbs(sp, (struct cmd_type_crc_2 *)
req->ring_ptr, tot_dsds, tot_prot_dsds, fw_prot_opts) !=
QLA_SUCCESS)
goto queuing_error;
cmd_pkt->entry_count = (uint8_t)req_cnt;
cmd_pkt->timeout = cpu_to_le16(0);
wmb();
/* Adjust ring index. */
req->ring_index++;
if (req->ring_index == req->length) {
req->ring_index = 0;
req->ring_ptr = req->ring;
} else
req->ring_ptr++;
/* Set chip new ring index. */
WRT_REG_DWORD(req->req_q_in, req->ring_index);
/* Manage unprocessed RIO/ZIO commands in response queue. */
if (vha->flags.process_response_queue &&
rsp->ring_ptr->signature != RESPONSE_PROCESSED)
qla24xx_process_response_queue(vha, rsp);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_SUCCESS;
queuing_error:
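/* Undo the handle and ring-space reservation if it was taken. */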
if (status & QDSS_GOT_Q_SPACE) {
req->outstanding_cmds[handle] = NULL;
req->cnt += req_cnt;
}
/* Cleanup will be performed by the caller (queuecommand) */
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_FUNCTION_FAILED;
}
/* Generic Control-SRB manipulation functions. */
@@ -2664,7 +2991,7 @@ sufficient_dsds:
cmd_pkt->byte_count = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
/* Build IOCB segments */
qla24xx_build_scsi_iocbs(sp, cmd_pkt, tot_dsds);
qla24xx_build_scsi_iocbs(sp, cmd_pkt, tot_dsds, req);
/* Set total data segment count. */
cmd_pkt->entry_count = (uint8_t)req_cnt;


@@ -2871,41 +2871,6 @@ out:
return IRQ_HANDLED;
}
static irqreturn_t
qla25xx_msix_rsp_q(int irq, void *dev_id)
{
struct qla_hw_data *ha;
scsi_qla_host_t *vha;
struct rsp_que *rsp;
struct device_reg_24xx __iomem *reg;
unsigned long flags;
uint32_t hccr = 0;
rsp = (struct rsp_que *) dev_id;
if (!rsp) {
ql_log(ql_log_info, NULL, 0x505b,
"%s: NULL response queue pointer.\n", __func__);
return IRQ_NONE;
}
ha = rsp->hw;
vha = pci_get_drvdata(ha->pdev);
/* Clear the interrupt, if enabled, for this response queue */
if (!ha->flags.disable_msix_handshake) {
reg = &ha->iobase->isp24;
spin_lock_irqsave(&ha->hardware_lock, flags);
WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
hccr = RD_REG_DWORD_RELAXED(&reg->hccr);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
if (qla2x00_check_reg32_for_disconnect(vha, hccr))
goto out;
queue_work_on((int) (rsp->id - 1), ha->wq, &rsp->q_work);
out:
return IRQ_HANDLED;
}
static irqreturn_t
qla24xx_msix_default(int irq, void *dev_id)
{
@@ -3002,6 +2967,35 @@ qla24xx_msix_default(int irq, void *dev_id)
return IRQ_HANDLED;
}
irqreturn_t
qla2xxx_msix_rsp_q(int irq, void *dev_id)
{
struct qla_hw_data *ha;
struct qla_qpair *qpair;
struct device_reg_24xx __iomem *reg;
unsigned long flags;
qpair = dev_id;
if (!qpair) {
ql_log(ql_log_info, NULL, 0x505b,
"%s: NULL response queue pointer.\n", __func__);
return IRQ_NONE;
}
ha = qpair->hw;
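/*
 * When MSI-X handshaking is in effect, ack the RISC interrupt here;
 * the actual response-queue processing is deferred to qpair->q_work.
 */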
/* Clear the interrupt, if enabled, for this response queue */
if (unlikely(!ha->flags.disable_msix_handshake)) {
reg = &ha->iobase->isp24;
spin_lock_irqsave(&ha->hardware_lock, flags);
WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
queue_work(ha->wq, &qpair->q_work);
return IRQ_HANDLED;
}
/* Interrupt handling helpers. */
struct qla_init_msix_entry {
@@ -3009,69 +3003,28 @@ struct qla_init_msix_entry {
irq_handler_t handler;
};
static struct qla_init_msix_entry msix_entries[3] = {
{ "qla2xxx (default)", qla24xx_msix_default },
{ "qla2xxx (rsp_q)", qla24xx_msix_rsp_q },
{ "qla2xxx (multiq)", qla25xx_msix_rsp_q },
};
static struct qla_init_msix_entry qla82xx_msix_entries[2] = {
{ "qla2xxx (default)", qla82xx_msix_default },
{ "qla2xxx (rsp_q)", qla82xx_msix_rsp_q },
};
static struct qla_init_msix_entry qla83xx_msix_entries[3] = {
static struct qla_init_msix_entry msix_entries[] = {
{ "qla2xxx (default)", qla24xx_msix_default },
{ "qla2xxx (rsp_q)", qla24xx_msix_rsp_q },
{ "qla2xxx (atio_q)", qla83xx_msix_atio_q },
{ "qla2xxx (qpair_multiq)", qla2xxx_msix_rsp_q },
};
static void
qla24xx_disable_msix(struct qla_hw_data *ha)
{
int i;
struct qla_msix_entry *qentry;
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
for (i = 0; i < ha->msix_count; i++) {
qentry = &ha->msix_entries[i];
if (qentry->have_irq) {
/* un-register irq cpu affinity notification */
irq_set_affinity_notifier(qentry->vector, NULL);
free_irq(qentry->vector, qentry->rsp);
}
}
pci_disable_msix(ha->pdev);
kfree(ha->msix_entries);
ha->msix_entries = NULL;
ha->flags.msix_enabled = 0;
ql_dbg(ql_dbg_init, vha, 0x0042,
"Disabled the MSI.\n");
}
static struct qla_init_msix_entry qla82xx_msix_entries[] = {
{ "qla2xxx (default)", qla82xx_msix_default },
{ "qla2xxx (rsp_q)", qla82xx_msix_rsp_q },
};
static int
qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
{
#define MIN_MSIX_COUNT 2
#define ATIO_VECTOR 2
int i, ret;
struct msix_entry *entries;
struct qla_msix_entry *qentry;
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
entries = kzalloc(sizeof(struct msix_entry) * ha->msix_count,
GFP_KERNEL);
if (!entries) {
ql_log(ql_log_warn, vha, 0x00bc,
"Failed to allocate memory for msix_entry.\n");
return -ENOMEM;
}
for (i = 0; i < ha->msix_count; i++)
entries[i].entry = i;
ret = pci_enable_msix_range(ha->pdev,
entries, MIN_MSIX_COUNT, ha->msix_count);
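/*
 * PCI_IRQ_AFFINITY asks the PCI core to spread the vectors across
 * CPUs, so each queue pair gets a pre-affinitized interrupt.
 */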
ret = pci_alloc_irq_vectors(ha->pdev, MIN_MSIX_COUNT, ha->msix_count,
PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
if (ret < 0) {
ql_log(ql_log_fatal, vha, 0x00c7,
"MSI-X: Failed to enable support, "
@@ -3081,10 +3034,23 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
} else if (ret < ha->msix_count) {
ql_log(ql_log_warn, vha, 0x00c6,
"MSI-X: Failed to enable support "
"-- %d/%d\n Retry with %d vectors.\n",
ha->msix_count, ret, ret);
"with %d vectors, using %d vectors.\n",
ha->msix_count, ret);
ha->msix_count = ret;
ha->max_rsp_queues = ha->msix_count - 1;
/* Recalculate queue values */
if (ha->mqiobase && ql2xmqsupport) {
ha->max_req_queues = ha->msix_count - 1;
/* ATIOQ needs 1 vector. That's 1 less QPair */
if (QLA_TGT_MODE_ENABLED())
ha->max_req_queues--;
ha->max_rsp_queues = ha->max_req_queues;
ha->max_qpairs = ha->max_req_queues - 1;
ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0190,
"Adjusted Max no of queues pairs: %d.\n", ha->max_qpairs);
}
}
ha->msix_entries = kzalloc(sizeof(struct qla_msix_entry) *
ha->msix_count, GFP_KERNEL);
@@ -3098,20 +3064,23 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
for (i = 0; i < ha->msix_count; i++) {
qentry = &ha->msix_entries[i];
qentry->vector = entries[i].vector;
qentry->entry = entries[i].entry;
qentry->vector = pci_irq_vector(ha->pdev, i);
qentry->entry = i;
qentry->have_irq = 0;
qentry->rsp = NULL;
qentry->in_use = 0;
qentry->handle = NULL;
qentry->irq_notify.notify = qla_irq_affinity_notify;
qentry->irq_notify.release = qla_irq_affinity_release;
qentry->cpuid = -1;
}
/* Enable MSI-X vectors for the base queue */
for (i = 0; i < 2; i++) {
for (i = 0; i < (QLA_MSIX_RSP_Q + 1); i++) {
qentry = &ha->msix_entries[i];
qentry->rsp = rsp;
qentry->handle = rsp;
rsp->msix = qentry;
scnprintf(qentry->name, sizeof(qentry->name),
msix_entries[i].name);
if (IS_P3P_TYPE(ha))
ret = request_irq(qentry->vector,
qla82xx_msix_entries[i].handler,
@@ -3123,6 +3092,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
if (ret)
goto msix_register_fail;
qentry->have_irq = 1;
qentry->in_use = 1;
/* Register for CPU affinity notification. */
irq_set_affinity_notifier(qentry->vector, &qentry->irq_notify);
@@ -3142,12 +3112,15 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
* queue.
*/
if (QLA_TGT_MODE_ENABLED() && IS_ATIO_MSIX_CAPABLE(ha)) {
qentry = &ha->msix_entries[ATIO_VECTOR];
qentry->rsp = rsp;
qentry = &ha->msix_entries[QLA_ATIO_VECTOR];
rsp->msix = qentry;
qentry->handle = rsp;
scnprintf(qentry->name, sizeof(qentry->name),
msix_entries[QLA_ATIO_VECTOR].name);
qentry->in_use = 1;
ret = request_irq(qentry->vector,
qla83xx_msix_entries[ATIO_VECTOR].handler,
0, qla83xx_msix_entries[ATIO_VECTOR].name, rsp);
msix_entries[QLA_ATIO_VECTOR].handler,
0, msix_entries[QLA_ATIO_VECTOR].name, rsp);
qentry->have_irq = 1;
}
@@ -3156,7 +3129,7 @@ msix_register_fail:
ql_log(ql_log_fatal, vha, 0x00cb,
"MSI-X: unable to register handler -- %x/%d.\n",
qentry->vector, ret);
qla24xx_disable_msix(ha);
qla2x00_free_irqs(vha);
ha->mqenable = 0;
goto msix_out;
}
@@ -3164,11 +3137,13 @@ msix_register_fail:
/* Enable MSI-X vector for response queue update for queue 0 */
if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
if (ha->msixbase && ha->mqiobase &&
(ha->max_rsp_queues > 1 || ha->max_req_queues > 1))
(ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
ql2xmqsupport))
ha->mqenable = 1;
} else
if (ha->mqiobase
&& (ha->max_rsp_queues > 1 || ha->max_req_queues > 1))
if (ha->mqiobase &&
(ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
ql2xmqsupport))
ha->mqenable = 1;
ql_dbg(ql_dbg_multiq, vha, 0xc005,
"mqiobase=%p, max_rsp_queues=%d, max_req_queues=%d.\n",
@@ -3178,7 +3153,6 @@ msix_register_fail:
ha->mqiobase, ha->max_rsp_queues, ha->max_req_queues);
msix_out:
kfree(entries);
return ret;
}
@@ -3231,7 +3205,7 @@ skip_msix:
!IS_QLA27XX(ha))
goto skip_msi;
ret = pci_enable_msi(ha->pdev);
ret = pci_alloc_irq_vectors(ha->pdev, 1, 1, PCI_IRQ_MSI);
if (!ret) {
ql_dbg(ql_dbg_init, vha, 0x0038,
"MSI: Enabled.\n");
@@ -3276,6 +3250,8 @@ qla2x00_free_irqs(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
struct rsp_que *rsp;
struct qla_msix_entry *qentry;
int i;
/*
* We need to check that ha->rsp_q_map is valid in case we are called
@@ -3285,25 +3261,36 @@ qla2x00_free_irqs(scsi_qla_host_t *vha)
return;
rsp = ha->rsp_q_map[0];
if (ha->flags.msix_enabled)
qla24xx_disable_msix(ha);
else if (ha->flags.msi_enabled) {
free_irq(ha->pdev->irq, rsp);
pci_disable_msi(ha->pdev);
} else
free_irq(ha->pdev->irq, rsp);
if (ha->flags.msix_enabled) {
for (i = 0; i < ha->msix_count; i++) {
qentry = &ha->msix_entries[i];
if (qentry->have_irq) {
irq_set_affinity_notifier(qentry->vector, NULL);
free_irq(pci_irq_vector(ha->pdev, i), qentry->handle);
}
}
kfree(ha->msix_entries);
ha->msix_entries = NULL;
ha->flags.msix_enabled = 0;
ql_dbg(ql_dbg_init, vha, 0x0042,
"Disabled MSI-X.\n");
} else {
free_irq(pci_irq_vector(ha->pdev, 0), rsp);
}
pci_free_irq_vectors(ha->pdev);
}
int qla25xx_request_irq(struct rsp_que *rsp)
int qla25xx_request_irq(struct qla_hw_data *ha, struct qla_qpair *qpair,
struct qla_msix_entry *msix, int vector_type)
{
struct qla_hw_data *ha = rsp->hw;
struct qla_init_msix_entry *intr = &msix_entries[2];
struct qla_msix_entry *msix = rsp->msix;
struct qla_init_msix_entry *intr = &msix_entries[vector_type];
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
int ret;
ret = request_irq(msix->vector, intr->handler, 0, intr->name, rsp);
scnprintf(msix->name, sizeof(msix->name),
"qla2xxx%lu_qpair%d", vha->host_no, qpair->id);
ret = request_irq(msix->vector, intr->handler, 0, msix->name, qpair);
if (ret) {
ql_log(ql_log_fatal, vha, 0x00e6,
"MSI-X: Unable to register handler -- %x/%d.\n",
@@ -3311,7 +3298,7 @@ int qla25xx_request_irq(struct rsp_que *rsp)
return ret;
}
msix->have_irq = 1;
msix->rsp = rsp;
msix->handle = qpair;
return ret;
}
@@ -3324,11 +3311,12 @@ static void qla_irq_affinity_notify(struct irq_affinity_notify *notify,
container_of(notify, struct qla_msix_entry, irq_notify);
struct qla_hw_data *ha;
struct scsi_qla_host *base_vha;
struct rsp_que *rsp = e->handle;
/* user is recommended to set mask to just 1 cpu */
e->cpuid = cpumask_first(mask);
ha = e->rsp->hw;
ha = rsp->hw;
base_vha = pci_get_drvdata(ha->pdev);
ql_dbg(ql_dbg_init, base_vha, 0xffff,
@@ -3352,9 +3340,10 @@ static void qla_irq_affinity_release(struct kref *ref)
container_of(ref, struct irq_affinity_notify, kref);
struct qla_msix_entry *e =
container_of(notify, struct qla_msix_entry, irq_notify);
struct scsi_qla_host *base_vha = pci_get_drvdata(e->rsp->hw->pdev);
struct rsp_que *rsp = e->handle;
struct scsi_qla_host *base_vha = pci_get_drvdata(rsp->hw->pdev);
ql_dbg(ql_dbg_init, base_vha, 0xffff,
"%s: host%ld: vector %d cpu %d \n", __func__,
"%s: host%ld: vector %d cpu %d\n", __func__,
base_vha->host_no, e->vector, e->cpuid);
}


@@ -10,6 +10,43 @@
#include <linux/delay.h>
#include <linux/gfp.h>
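/*
 * Basic ROM/firmware-load mailbox commands that must still be let
 * through while an ISP abort is in progress; any other command is
 * failed early (see the is_rom_cmd() check below).
 */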
struct rom_cmd {
uint16_t cmd;
} rom_cmds[] = {
{ MBC_LOAD_RAM },
{ MBC_EXECUTE_FIRMWARE },
{ MBC_READ_RAM_WORD },
{ MBC_MAILBOX_REGISTER_TEST },
{ MBC_VERIFY_CHECKSUM },
{ MBC_GET_FIRMWARE_VERSION },
{ MBC_LOAD_RISC_RAM },
{ MBC_DUMP_RISC_RAM },
{ MBC_LOAD_RISC_RAM_EXTENDED },
{ MBC_DUMP_RISC_RAM_EXTENDED },
{ MBC_WRITE_RAM_WORD_EXTENDED },
{ MBC_READ_RAM_EXTENDED },
{ MBC_GET_RESOURCE_COUNTS },
{ MBC_SET_FIRMWARE_OPTION },
{ MBC_MID_INITIALIZE_FIRMWARE },
{ MBC_GET_FIRMWARE_STATE },
{ MBC_GET_MEM_OFFLOAD_CNTRL_STAT },
{ MBC_GET_RETRY_COUNT },
{ MBC_TRACE_CONTROL },
};
static int is_rom_cmd(uint16_t cmd)
{
int i;
struct rom_cmd *wc;
for (i = 0; i < ARRAY_SIZE(rom_cmds); i++) {
wc = rom_cmds + i;
if (wc->cmd == cmd)
return 1;
}
return 0;
}
/*
* qla2x00_mailbox_command
@@ -92,6 +129,17 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
return QLA_FUNCTION_TIMEOUT;
}
/* check if ISP abort is active and return cmd with timeout */
if ((test_bit(ABORT_ISP_ACTIVE, &base_vha->dpc_flags) ||
test_bit(ISP_ABORT_RETRY, &base_vha->dpc_flags) ||
test_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags)) &&
!is_rom_cmd(mcp->mb[0])) {
ql_log(ql_log_info, vha, 0x1005,
"Cmd 0x%x aborted with timeout since ISP Abort is pending\n",
mcp->mb[0]);
return QLA_FUNCTION_TIMEOUT;
}
/*
* Wait for active mailbox commands to finish by waiting at most tov
* seconds. This is to serialize actual issuing of mailbox cmds during
@@ -178,6 +226,7 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
WRT_REG_WORD(&reg->isp.hccr, HCCR_SET_HOST_INT);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
wait_time = jiffies;
if (!wait_for_completion_timeout(&ha->mbx_intr_comp,
mcp->tov * HZ)) {
ql_dbg(ql_dbg_mbx, vha, 0x117a,
@@ -186,6 +235,9 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
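/* Warn if the mailbox command took noticeably long (over 5 seconds). */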
if (time_after(jiffies, wait_time + 5 * HZ))
ql_log(ql_log_warn, vha, 0x1015, "cmd=0x%x, waited %d msecs\n",
command, jiffies_to_msecs(jiffies - wait_time));
} else {
ql_dbg(ql_dbg_mbx, vha, 0x1011,
"Cmd=%x Polling Mode.\n", command);
@@ -1194,12 +1246,17 @@ qla2x00_abort_command(srb_t *sp)
fc_port_t *fcport = sp->fcport;
scsi_qla_host_t *vha = fcport->vha;
struct qla_hw_data *ha = vha->hw;
struct req_que *req = vha->req;
struct req_que *req;
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x103b,
"Entered %s.\n", __func__);
if (vha->flags.qpairs_available && sp->qpair)
req = sp->qpair->req;
else
req = vha->req;
spin_lock_irqsave(&ha->hardware_lock, flags);
for (handle = 1; handle < req->num_outstanding_cmds; handle++) {
if (req->outstanding_cmds[handle] == sp)
@@ -2152,10 +2209,10 @@ qla24xx_login_fabric(scsi_qla_host_t *vha, uint16_t loop_id, uint8_t domain,
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1061,
"Entered %s.\n", __func__);
if (ha->flags.cpu_affinity_enabled)
req = ha->req_q_map[0];
if (vha->vp_idx && vha->qpair)
req = vha->qpair->req;
else
req = vha->req;
req = ha->req_q_map[0];
lg = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &lg_dma);
if (lg == NULL) {
@@ -2435,10 +2492,7 @@ qla24xx_fabric_logout(scsi_qla_host_t *vha, uint16_t loop_id, uint8_t domain,
}
memset(lg, 0, sizeof(struct logio_entry_24xx));
if (ql2xmaxqueues > 1)
req = ha->req_q_map[0];
else
req = vha->req;
req = vha->req;
lg->entry_type = LOGINOUT_PORT_IOCB_TYPE;
lg->entry_count = 1;
lg->handle = MAKE_HANDLE(req->id, lg->handle);
@@ -2904,6 +2958,9 @@ qla24xx_abort_command(srb_t *sp)
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x108c,
"Entered %s.\n", __func__);
if (vha->flags.qpairs_available && sp->qpair)
req = sp->qpair->req;
if (ql2xasynctmfenable)
return qla24xx_async_abort_command(sp);
@@ -2984,6 +3041,7 @@ __qla24xx_issue_tmf(char *name, uint32_t type, struct fc_port *fcport,
struct qla_hw_data *ha;
struct req_que *req;
struct rsp_que *rsp;
struct qla_qpair *qpair;
vha = fcport->vha;
ha = vha->hw;
@@ -2992,10 +3050,15 @@ __qla24xx_issue_tmf(char *name, uint32_t type, struct fc_port *fcport,
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1092,
"Entered %s.\n", __func__);
if (ha->flags.cpu_affinity_enabled)
rsp = ha->rsp_q_map[tag + 1];
else
if (vha->vp_idx && vha->qpair) {
/* NPIV port */
qpair = vha->qpair;
rsp = qpair->rsp;
req = qpair->req;
} else {
rsp = req->rsp;
}
tsk = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &tsk_dma);
if (tsk == NULL) {
ql_log(ql_log_warn, vha, 0x1093,


@@ -540,9 +540,10 @@ qla25xx_free_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
uint16_t que_id = rsp->id;
if (rsp->msix && rsp->msix->have_irq) {
free_irq(rsp->msix->vector, rsp);
free_irq(rsp->msix->vector, rsp->msix->handle);
rsp->msix->have_irq = 0;
rsp->msix->rsp = NULL;
rsp->msix->in_use = 0;
rsp->msix->handle = NULL;
}
dma_free_coherent(&ha->pdev->dev, (rsp->length + 1) *
sizeof(response_t), rsp->ring, rsp->dma);
@@ -573,7 +574,7 @@ qla25xx_delete_req_que(struct scsi_qla_host *vha, struct req_que *req)
return ret;
}
static int
int
qla25xx_delete_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
{
int ret = -1;
@@ -596,34 +597,42 @@ qla25xx_delete_queues(struct scsi_qla_host *vha)
struct req_que *req = NULL;
struct rsp_que *rsp = NULL;
struct qla_hw_data *ha = vha->hw;
struct qla_qpair *qpair, *tqpair;
/* Delete request queues */
for (cnt = 1; cnt < ha->max_req_queues; cnt++) {
req = ha->req_q_map[cnt];
if (req && test_bit(cnt, ha->req_qid_map)) {
ret = qla25xx_delete_req_que(vha, req);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00ea,
"Couldn't delete req que %d.\n",
req->id);
return ret;
if (ql2xmqsupport) {
list_for_each_entry_safe(qpair, tqpair, &vha->qp_list,
qp_list_elem)
qla2xxx_delete_qpair(vha, qpair);
} else {
/* Delete request queues */
for (cnt = 1; cnt < ha->max_req_queues; cnt++) {
req = ha->req_q_map[cnt];
if (req && test_bit(cnt, ha->req_qid_map)) {
ret = qla25xx_delete_req_que(vha, req);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00ea,
"Couldn't delete req que %d.\n",
req->id);
return ret;
}
}
}
/* Delete response queues */
for (cnt = 1; cnt < ha->max_rsp_queues; cnt++) {
rsp = ha->rsp_q_map[cnt];
if (rsp && test_bit(cnt, ha->rsp_qid_map)) {
ret = qla25xx_delete_rsp_que(vha, rsp);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00eb,
"Couldn't delete rsp que %d.\n",
rsp->id);
return ret;
}
}
}
}
/* Delete response queues */
for (cnt = 1; cnt < ha->max_rsp_queues; cnt++) {
rsp = ha->rsp_q_map[cnt];
if (rsp && test_bit(cnt, ha->rsp_qid_map)) {
ret = qla25xx_delete_rsp_que(vha, rsp);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00eb,
"Couldn't delete rsp que %d.\n",
rsp->id);
return ret;
}
}
}
return ret;
}
@@ -659,10 +668,10 @@ qla25xx_create_req_que(struct qla_hw_data *ha, uint16_t options,
if (ret != QLA_SUCCESS)
goto que_failed;
mutex_lock(&ha->vport_lock);
mutex_lock(&ha->mq_lock);
que_id = find_first_zero_bit(ha->req_qid_map, ha->max_req_queues);
if (que_id >= ha->max_req_queues) {
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
ql_log(ql_log_warn, base_vha, 0x00db,
"No resources to create additional request queue.\n");
goto que_failed;
@@ -708,7 +717,7 @@ qla25xx_create_req_que(struct qla_hw_data *ha, uint16_t options,
req->req_q_out = &reg->isp25mq.req_q_out;
req->max_q_depth = ha->req_q_map[0]->max_q_depth;
req->out_ptr = (void *)(req->ring + req->length);
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
ql_dbg(ql_dbg_multiq, base_vha, 0xc004,
"ring_ptr=%p ring_index=%d, "
"cnt=%d id=%d max_q_depth=%d.\n",
@@ -724,9 +733,9 @@ qla25xx_create_req_que(struct qla_hw_data *ha, uint16_t options,
if (ret != QLA_SUCCESS) {
ql_log(ql_log_fatal, base_vha, 0x00df,
"%s failed.\n", __func__);
mutex_lock(&ha->vport_lock);
mutex_lock(&ha->mq_lock);
clear_bit(que_id, ha->req_qid_map);
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
goto que_failed;
}
@@ -741,20 +750,20 @@ failed:
static void qla_do_work(struct work_struct *work)
{
unsigned long flags;
struct rsp_que *rsp = container_of(work, struct rsp_que, q_work);
struct qla_qpair *qpair = container_of(work, struct qla_qpair, q_work);
struct scsi_qla_host *vha;
struct qla_hw_data *ha = rsp->hw;
struct qla_hw_data *ha = qpair->hw;
spin_lock_irqsave(&rsp->hw->hardware_lock, flags);
spin_lock_irqsave(&qpair->qp_lock, flags);
vha = pci_get_drvdata(ha->pdev);
qla24xx_process_response_queue(vha, rsp);
spin_unlock_irqrestore(&rsp->hw->hardware_lock, flags);
qla24xx_process_response_queue(vha, qpair->rsp);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
}
/* create response queue */
int
qla25xx_create_rsp_que(struct qla_hw_data *ha, uint16_t options,
uint8_t vp_idx, uint16_t rid, int req)
uint8_t vp_idx, uint16_t rid, struct qla_qpair *qpair)
{
int ret = 0;
struct rsp_que *rsp = NULL;
@@ -779,28 +788,24 @@ qla25xx_create_rsp_que(struct qla_hw_data *ha, uint16_t options,
goto que_failed;
}
mutex_lock(&ha->vport_lock);
mutex_lock(&ha->mq_lock);
que_id = find_first_zero_bit(ha->rsp_qid_map, ha->max_rsp_queues);
if (que_id >= ha->max_rsp_queues) {
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
ql_log(ql_log_warn, base_vha, 0x00e2,
"No resources to create additional request queue.\n");
goto que_failed;
}
set_bit(que_id, ha->rsp_qid_map);
if (ha->flags.msix_enabled)
rsp->msix = &ha->msix_entries[que_id + 1];
else
ql_log(ql_log_warn, base_vha, 0x00e3,
"MSIX not enabled.\n");
rsp->msix = qpair->msix;
ha->rsp_q_map[que_id] = rsp;
rsp->rid = rid;
rsp->vp_idx = vp_idx;
rsp->hw = ha;
ql_dbg(ql_dbg_init, base_vha, 0x00e4,
"queue_id=%d rid=%d vp_idx=%d hw=%p.\n",
"rsp queue_id=%d rid=%d vp_idx=%d hw=%p.\n",
que_id, rsp->rid, rsp->vp_idx, rsp->hw);
/* Use alternate PCI bus number */
if (MSB(rsp->rid))
@@ -812,23 +817,27 @@ qla25xx_create_rsp_que(struct qla_hw_data *ha, uint16_t options,
if (!IS_MSIX_NACK_CAPABLE(ha))
options |= BIT_6;
/* Set option to indicate response queue creation */
options |= BIT_1;
rsp->options = options;
rsp->id = que_id;
reg = ISP_QUE_REG(ha, que_id);
rsp->rsp_q_in = &reg->isp25mq.rsp_q_in;
rsp->rsp_q_out = &reg->isp25mq.rsp_q_out;
rsp->in_ptr = (void *)(rsp->ring + rsp->length);
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
ql_dbg(ql_dbg_multiq, base_vha, 0xc00b,
"options=%x id=%d rsp_q_in=%p rsp_q_out=%p",
"options=%x id=%d rsp_q_in=%p rsp_q_out=%p\n",
rsp->options, rsp->id, rsp->rsp_q_in,
rsp->rsp_q_out);
ql_dbg(ql_dbg_init, base_vha, 0x00e5,
"options=%x id=%d rsp_q_in=%p rsp_q_out=%p",
"options=%x id=%d rsp_q_in=%p rsp_q_out=%p\n",
rsp->options, rsp->id, rsp->rsp_q_in,
rsp->rsp_q_out);
ret = qla25xx_request_irq(rsp);
ret = qla25xx_request_irq(ha, qpair, qpair->msix,
QLA_MSIX_QPAIR_MULTIQ_RSP_Q);
if (ret)
goto que_failed;
@@ -836,19 +845,16 @@ qla25xx_create_rsp_que(struct qla_hw_data *ha, uint16_t options,
if (ret != QLA_SUCCESS) {
ql_log(ql_log_fatal, base_vha, 0x00e7,
"%s failed.\n", __func__);
mutex_lock(&ha->vport_lock);
mutex_lock(&ha->mq_lock);
clear_bit(que_id, ha->rsp_qid_map);
mutex_unlock(&ha->vport_lock);
mutex_unlock(&ha->mq_lock);
goto que_failed;
}
if (req >= 0)
rsp->req = ha->req_q_map[req];
else
rsp->req = NULL;
rsp->req = NULL;
qla2x00_init_response_q_entries(rsp);
if (rsp->hw->wq)
INIT_WORK(&rsp->q_work, qla_do_work);
if (qpair->hw->wq)
INIT_WORK(&qpair->q_work, qla_do_work);
return rsp->id;
que_failed:


@@ -13,6 +13,7 @@
#include <linux/mutex.h>
#include <linux/kobject.h>
#include <linux/slab.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_tcq.h>
#include <scsi/scsicam.h>
#include <scsi/scsi_transport.h>
@@ -30,7 +31,7 @@ static int apidev_major;
/*
* SRB allocation cache
*/
static struct kmem_cache *srb_cachep;
struct kmem_cache *srb_cachep;
/*
* CT6 CTX allocation cache
@@ -143,19 +144,12 @@ MODULE_PARM_DESC(ql2xiidmaenable,
"Enables iIDMA settings "
"Default is 1 - perform iIDMA. 0 - no iIDMA.");
int ql2xmaxqueues = 1;
module_param(ql2xmaxqueues, int, S_IRUGO);
MODULE_PARM_DESC(ql2xmaxqueues,
"Enables MQ settings "
"Default is 1 for single queue. Set it to number "
"of queues in MQ mode.");
int ql2xmultique_tag;
module_param(ql2xmultique_tag, int, S_IRUGO);
MODULE_PARM_DESC(ql2xmultique_tag,
"Enables CPU affinity settings for the driver "
"Default is 0 for no affinity of request and response IO. "
"Set it to 1 to turn on the cpu affinity.");
int ql2xmqsupport = 1;
module_param(ql2xmqsupport, int, S_IRUGO);
MODULE_PARM_DESC(ql2xmqsupport,
"Enable on demand multiple queue pairs support "
"Default is 1 for supported. "
"Set it to 0 to turn off mq qpair support.");
int ql2xfwloadbin;
module_param(ql2xfwloadbin, int, S_IRUGO|S_IWUSR);
@@ -261,6 +255,7 @@ static int qla2xxx_eh_host_reset(struct scsi_cmnd *);
static void qla2x00_clear_drv_active(struct qla_hw_data *);
static void qla2x00_free_device(scsi_qla_host_t *);
static void qla83xx_disable_laser(scsi_qla_host_t *vha);
static int qla2xxx_map_queues(struct Scsi_Host *shost);
struct scsi_host_template qla2xxx_driver_template = {
.module = THIS_MODULE,
@@ -280,6 +275,7 @@ struct scsi_host_template qla2xxx_driver_template = {
.scan_finished = qla2xxx_scan_finished,
.scan_start = qla2xxx_scan_start,
.change_queue_depth = scsi_change_queue_depth,
.map_queues = qla2xxx_map_queues,
.this_id = -1,
.cmd_per_lun = 3,
.use_clustering = ENABLE_CLUSTERING,
@@ -339,6 +335,8 @@ static int qla2x00_mem_alloc(struct qla_hw_data *, uint16_t, uint16_t,
struct req_que **, struct rsp_que **);
static void qla2x00_free_fw_dump(struct qla_hw_data *);
static void qla2x00_mem_free(struct qla_hw_data *);
int qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
struct qla_qpair *qpair);
/* -------------------------------------------------------------------------- */
static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
@@ -360,6 +358,25 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
"Unable to allocate memory for response queue ptrs.\n");
goto fail_rsp_map;
}
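/*
 * In MQ mode, allocate the hwq-to-qpair lookup table plus the base
 * queue pair, which always maps to request/response queue zero.
 */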
if (ql2xmqsupport && ha->max_qpairs) {
ha->queue_pair_map = kcalloc(ha->max_qpairs, sizeof(struct qla_qpair *),
GFP_KERNEL);
if (!ha->queue_pair_map) {
ql_log(ql_log_fatal, vha, 0x0180,
"Unable to allocate memory for queue pair ptrs.\n");
goto fail_qpair_map;
}
ha->base_qpair = kzalloc(sizeof(struct qla_qpair), GFP_KERNEL);
if (ha->base_qpair == NULL) {
ql_log(ql_log_warn, vha, 0x0182,
"Failed to allocate base queue pair memory.\n");
goto fail_base_qpair;
}
ha->base_qpair->req = req;
ha->base_qpair->rsp = rsp;
}
/*
* Make sure we record at least the request and response queue zero in
* case we need to free them if part of the probe fails.
@@ -370,6 +387,11 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
set_bit(0, ha->req_qid_map);
return 1;
fail_base_qpair:
kfree(ha->queue_pair_map);
fail_qpair_map:
kfree(ha->rsp_q_map);
ha->rsp_q_map = NULL;
fail_rsp_map:
kfree(ha->req_q_map);
ha->req_q_map = NULL;
@@ -417,84 +439,45 @@ static void qla2x00_free_queues(struct qla_hw_data *ha)
struct req_que *req;
struct rsp_que *rsp;
int cnt;
unsigned long flags;
spin_lock_irqsave(&ha->hardware_lock, flags);
for (cnt = 0; cnt < ha->max_req_queues; cnt++) {
if (!test_bit(cnt, ha->req_qid_map))
continue;
req = ha->req_q_map[cnt];
clear_bit(cnt, ha->req_qid_map);
ha->req_q_map[cnt] = NULL;
spin_unlock_irqrestore(&ha->hardware_lock, flags);
qla2x00_free_req_que(ha, req);
spin_lock_irqsave(&ha->hardware_lock, flags);
}
spin_unlock_irqrestore(&ha->hardware_lock, flags);
kfree(ha->req_q_map);
ha->req_q_map = NULL;
spin_lock_irqsave(&ha->hardware_lock, flags);
for (cnt = 0; cnt < ha->max_rsp_queues; cnt++) {
if (!test_bit(cnt, ha->rsp_qid_map))
continue;
rsp = ha->rsp_q_map[cnt];
clear_bit(cnt, ha->req_qid_map);
ha->rsp_q_map[cnt] = NULL;
spin_unlock_irqrestore(&ha->hardware_lock, flags);
qla2x00_free_rsp_que(ha, rsp);
spin_lock_irqsave(&ha->hardware_lock, flags);
}
spin_unlock_irqrestore(&ha->hardware_lock, flags);
kfree(ha->rsp_q_map);
ha->rsp_q_map = NULL;
}
static int qla25xx_setup_mode(struct scsi_qla_host *vha)
{
uint16_t options = 0;
int ques, req, ret;
struct qla_hw_data *ha = vha->hw;
if (!(ha->fw_attributes & BIT_6)) {
ql_log(ql_log_warn, vha, 0x00d8,
"Firmware is not multi-queue capable.\n");
goto fail;
}
if (ql2xmultique_tag) {
/* create a request queue for IO */
options |= BIT_7;
req = qla25xx_create_req_que(ha, options, 0, 0, -1,
QLA_DEFAULT_QUE_QOS);
if (!req) {
ql_log(ql_log_warn, vha, 0x00e0,
"Failed to create request queue.\n");
goto fail;
}
ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 1);
vha->req = ha->req_q_map[req];
options |= BIT_1;
for (ques = 1; ques < ha->max_rsp_queues; ques++) {
ret = qla25xx_create_rsp_que(ha, options, 0, 0, req);
if (!ret) {
ql_log(ql_log_warn, vha, 0x00e8,
"Failed to create response queue.\n");
goto fail2;
}
}
ha->flags.cpu_affinity_enabled = 1;
ql_dbg(ql_dbg_multiq, vha, 0xc007,
"CPU affinity mode enabled, "
"no. of response queues:%d no. of request queues:%d.\n",
ha->max_rsp_queues, ha->max_req_queues);
ql_dbg(ql_dbg_init, vha, 0x00e9,
"CPU affinity mode enabled, "
"no. of response queues:%d no. of request queues:%d.\n",
ha->max_rsp_queues, ha->max_req_queues);
}
return 0;
fail2:
qla25xx_delete_queues(vha);
destroy_workqueue(ha->wq);
ha->wq = NULL;
vha->req = ha->req_q_map[0];
fail:
ha->mqenable = 0;
kfree(ha->req_q_map);
kfree(ha->rsp_q_map);
ha->max_req_queues = ha->max_rsp_queues = 1;
return 1;
}
static char *
qla2x00_pci_info_str(struct scsi_qla_host *vha, char *str)
{
@@ -669,7 +652,7 @@ qla2x00_sp_free_dma(void *vha, void *ptr)
qla2x00_rel_sp(sp->fcport->vha, sp);
}
static void
void
qla2x00_sp_compl(void *data, void *ptr, int res)
{
struct qla_hw_data *ha = (struct qla_hw_data *)data;
@@ -693,6 +676,75 @@ qla2x00_sp_compl(void *data, void *ptr, int res)
cmd->scsi_done(cmd);
}
void
qla2xxx_qpair_sp_free_dma(void *vha, void *ptr)
{
srb_t *sp = (srb_t *)ptr;
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
struct qla_hw_data *ha = sp->fcport->vha->hw;
void *ctx = GET_CMD_CTX_SP(sp);
if (sp->flags & SRB_DMA_VALID) {
scsi_dma_unmap(cmd);
sp->flags &= ~SRB_DMA_VALID;
}
if (sp->flags & SRB_CRC_PROT_DMA_VALID) {
dma_unmap_sg(&ha->pdev->dev, scsi_prot_sglist(cmd),
scsi_prot_sg_count(cmd), cmd->sc_data_direction);
sp->flags &= ~SRB_CRC_PROT_DMA_VALID;
}
if (sp->flags & SRB_CRC_CTX_DSD_VALID) {
/* The list is guaranteed to be non-empty here */
qla2x00_clean_dsd_pool(ha, sp, NULL);
sp->flags &= ~SRB_CRC_CTX_DSD_VALID;
}
if (sp->flags & SRB_CRC_CTX_DMA_VALID) {
dma_pool_free(ha->dl_dma_pool, ctx,
((struct crc_context *)ctx)->crc_ctx_dma);
sp->flags &= ~SRB_CRC_CTX_DMA_VALID;
}
if (sp->flags & SRB_FCP_CMND_DMA_VALID) {
struct ct6_dsd *ctx1 = (struct ct6_dsd *)ctx;
dma_pool_free(ha->fcp_cmnd_dma_pool, ctx1->fcp_cmnd,
ctx1->fcp_cmnd_dma);
list_splice(&ctx1->dsd_list, &ha->gbl_dsd_list);
ha->gbl_dsd_inuse -= ctx1->dsd_use_cnt;
ha->gbl_dsd_avail += ctx1->dsd_use_cnt;
mempool_free(ctx1, ha->ctx_mempool);
}
CMD_SP(cmd) = NULL;
qla2xxx_rel_qpair_sp(sp->qpair, sp);
}
void
qla2xxx_qpair_sp_compl(void *data, void *ptr, int res)
{
srb_t *sp = (srb_t *)ptr;
struct scsi_cmnd *cmd = GET_CMD_SP(sp);
cmd->result = res;
if (atomic_read(&sp->ref_count) == 0) {
ql_dbg(ql_dbg_io, sp->fcport->vha, 0x3079,
"SP reference-count to ZERO -- sp=%p cmd=%p.\n",
sp, GET_CMD_SP(sp));
if (ql2xextended_error_logging & ql_dbg_io)
WARN_ON(atomic_read(&sp->ref_count) == 0);
return;
}
if (!atomic_dec_and_test(&sp->ref_count))
return;
qla2xxx_qpair_sp_free_dma(sp->fcport->vha, sp);
cmd->scsi_done(cmd);
}
/* If we are SP1 here, we need to still take and release the host_lock as SP1
* does not have the changes necessary to avoid taking host->host_lock.
*/
@@ -706,12 +758,28 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev);
srb_t *sp;
int rval;
struct qla_qpair *qpair = NULL;
uint32_t tag;
uint16_t hwq;
if (unlikely(test_bit(UNLOADING, &base_vha->dpc_flags))) {
cmd->result = DID_NO_CONNECT << 16;
goto qc24_fail_command;
}
if (ha->mqenable) {
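/*
 * With blk-mq the hardware queue is encoded in the request tag;
 * decode it and dispatch on the matching qpair. NPIV ports use
 * their dedicated qpair instead.
 */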
if (shost_use_blk_mq(vha->host)) {
tag = blk_mq_unique_tag(cmd->request);
hwq = blk_mq_unique_tag_to_hwq(tag);
qpair = ha->queue_pair_map[hwq];
} else if (vha->vp_idx && vha->qpair) {
qpair = vha->qpair;
}
if (qpair)
return qla2xxx_mqueuecommand(host, cmd, qpair);
}
if (ha->flags.eeh_busy) {
if (ha->flags.pci_channel_io_perm_failure) {
ql_dbg(ql_dbg_aer, vha, 0x9010,
@@ -808,6 +876,95 @@ qc24_fail_command:
return 0;
}
/* For MQ supported I/O */
int
qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
struct qla_qpair *qpair)
{
scsi_qla_host_t *vha = shost_priv(host);
fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata;
struct fc_rport *rport = starget_to_rport(scsi_target(cmd->device));
struct qla_hw_data *ha = vha->hw;
struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev);
srb_t *sp;
int rval;
rval = fc_remote_port_chkready(rport);
if (rval) {
cmd->result = rval;
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3076,
"fc_remote_port_chkready failed for cmd=%p, rval=0x%x.\n",
cmd, rval);
goto qc24_fail_command;
}
if (!fcport) {
cmd->result = DID_NO_CONNECT << 16;
goto qc24_fail_command;
}
if (atomic_read(&fcport->state) != FCS_ONLINE) {
if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD ||
atomic_read(&base_vha->loop_state) == LOOP_DEAD) {
ql_dbg(ql_dbg_io, vha, 0x3077,
"Returning DNC, fcport_state=%d loop_state=%d.\n",
atomic_read(&fcport->state),
atomic_read(&base_vha->loop_state));
cmd->result = DID_NO_CONNECT << 16;
goto qc24_fail_command;
}
goto qc24_target_busy;
}
/*
* Return target busy if we've received a non-zero retry_delay_timer
* in a FCP_RSP.
*/
if (fcport->retry_delay_timestamp == 0) {
/* retry delay not set */
} else if (time_after(jiffies, fcport->retry_delay_timestamp))
fcport->retry_delay_timestamp = 0;
else
goto qc24_target_busy;
sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);
if (!sp)
goto qc24_host_busy;
sp->u.scmd.cmd = cmd;
sp->type = SRB_SCSI_CMD;
atomic_set(&sp->ref_count, 1);
CMD_SP(cmd) = (void *)sp;
sp->free = qla2xxx_qpair_sp_free_dma;
sp->done = qla2xxx_qpair_sp_compl;
sp->qpair = qpair;
rval = ha->isp_ops->start_scsi_mq(sp);
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
if (rval == QLA_INTERFACE_ERROR)
goto qc24_fail_command;
goto qc24_host_busy_free_sp;
}
return 0;
qc24_host_busy_free_sp:
qla2xxx_qpair_sp_free_dma(vha, sp);
qc24_host_busy:
return SCSI_MLQUEUE_HOST_BUSY;
qc24_target_busy:
return SCSI_MLQUEUE_TARGET_BUSY;
qc24_fail_command:
cmd->scsi_done(cmd);
return 0;
}
/*
* qla2x00_eh_wait_on_command
* Waits for the command to be returned by the Firmware for some
@@ -1601,7 +1758,6 @@ qla2x00_iospace_config(struct qla_hw_data *ha)
{
resource_size_t pio;
uint16_t msix;
int cpus;
if (pci_request_selected_regions(ha->pdev, ha->bars,
QLA2XXX_DRIVER_NAME)) {
@@ -1658,9 +1814,7 @@ skip_pio:
/* Determine queue resources */
ha->max_req_queues = ha->max_rsp_queues = 1;
if ((ql2xmaxqueues <= 1 && !ql2xmultique_tag) ||
(ql2xmaxqueues > 1 && ql2xmultique_tag) ||
(!IS_QLA25XX(ha) && !IS_QLA81XX(ha)))
if (!ql2xmqsupport || (!IS_QLA25XX(ha) && !IS_QLA81XX(ha)))
goto mqiobase_exit;
ha->mqiobase = ioremap(pci_resource_start(ha->pdev, 3),
@@ -1670,26 +1824,18 @@ skip_pio:
"MQIO Base=%p.\n", ha->mqiobase);
/* Read MSIX vector size of the board */
pci_read_config_word(ha->pdev, QLA_PCI_MSIX_CONTROL, &msix);
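/* The MSI-X control word reports the table size as N-1, hence the +1. */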
ha->msix_count = msix;
ha->msix_count = msix + 1;
/* Max queues are bounded by available msix vectors */
/* queue 0 uses two msix vectors */
if (ql2xmultique_tag) {
cpus = num_online_cpus();
ha->max_rsp_queues = (ha->msix_count - 1 > cpus) ?
(cpus + 1) : (ha->msix_count - 1);
ha->max_req_queues = 2;
} else if (ql2xmaxqueues > 1) {
ha->max_req_queues = ql2xmaxqueues > QLA_MQ_SIZE ?
QLA_MQ_SIZE : ql2xmaxqueues;
ql_dbg_pci(ql_dbg_multiq, ha->pdev, 0xc008,
"QoS mode set, max no of request queues:%d.\n",
ha->max_req_queues);
ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0019,
"QoS mode set, max no of request queues:%d.\n",
ha->max_req_queues);
}
/* MB interrupt uses 1 vector */
ha->max_req_queues = ha->msix_count - 1;
ha->max_rsp_queues = ha->max_req_queues;
/* Queue pair count is the max value minus the base queue pair */
ha->max_qpairs = ha->max_rsp_queues - 1;
ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0188,
"Max no of queues pairs: %d.\n", ha->max_qpairs);
ql_log_pci(ql_log_info, ha->pdev, 0x001a,
"MSI-X vector count: %d.\n", msix);
"MSI-X vector count: %d.\n", ha->msix_count);
} else
ql_log_pci(ql_log_info, ha->pdev, 0x001b,
"BAR 3 not enabled.\n");
@@ -1709,7 +1855,6 @@ static int
qla83xx_iospace_config(struct qla_hw_data *ha)
{
uint16_t msix;
int cpus;
if (pci_request_selected_regions(ha->pdev, ha->bars,
QLA2XXX_DRIVER_NAME)) {
@@ -1761,32 +1906,36 @@ qla83xx_iospace_config(struct qla_hw_data *ha)
/* Read MSIX vector size of the board */
pci_read_config_word(ha->pdev,
QLA_83XX_PCI_MSIX_CONTROL, &msix);
ha->msix_count = msix;
/* Max queues are bounded by available msix vectors */
/* queue 0 uses two msix vectors */
if (ql2xmultique_tag) {
cpus = num_online_cpus();
ha->max_rsp_queues = (ha->msix_count - 1 > cpus) ?
(cpus + 1) : (ha->msix_count - 1);
ha->max_req_queues = 2;
} else if (ql2xmaxqueues > 1) {
ha->max_req_queues = ql2xmaxqueues > QLA_MQ_SIZE ?
QLA_MQ_SIZE : ql2xmaxqueues;
ql_dbg_pci(ql_dbg_multiq, ha->pdev, 0xc00c,
"QoS mode set, max no of request queues:%d.\n",
ha->max_req_queues);
ql_dbg_pci(ql_dbg_init, ha->pdev, 0x011b,
"QoS mode set, max no of request queues:%d.\n",
ha->max_req_queues);
ha->msix_count = msix + 1;
/*
* By default, driver uses at least two msix vectors
* (default & rspq)
*/
if (ql2xmqsupport) {
/* MB interrupt uses 1 vector */
ha->max_req_queues = ha->msix_count - 1;
ha->max_rsp_queues = ha->max_req_queues;
/* ATIOQ needs 1 vector. That's 1 less QPair */
if (QLA_TGT_MODE_ENABLED())
ha->max_req_queues--;
/* Queue pair count is the max value minus
 * the base queue pair */
ha->max_qpairs = ha->max_req_queues - 1;
ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0190,
"Max no of queues pairs: %d.\n", ha->max_qpairs);
}
ql_log_pci(ql_log_info, ha->pdev, 0x011c,
"MSI-X vector count: %d.\n", msix);
"MSI-X vector count: %d.\n", ha->msix_count);
} else
ql_log_pci(ql_log_info, ha->pdev, 0x011e,
"BAR 1 not enabled.\n");
mqiobase_exit:
ha->msix_count = ha->max_rsp_queues + 1;
if (QLA_TGT_MODE_ENABLED())
ha->msix_count++;
qlt_83xx_iospace_config(ha);
@@ -1831,6 +1980,7 @@ static struct isp_operations qla2100_isp_ops = {
.write_optrom = qla2x00_write_optrom_data,
.get_flash_version = qla2x00_get_flash_version,
.start_scsi = qla2x00_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla2x00_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -1869,6 +2019,7 @@ static struct isp_operations qla2300_isp_ops = {
.write_optrom = qla2x00_write_optrom_data,
.get_flash_version = qla2x00_get_flash_version,
.start_scsi = qla2x00_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla2x00_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -1907,6 +2058,7 @@ static struct isp_operations qla24xx_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qla24xx_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla2x00_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -1945,6 +2097,7 @@ static struct isp_operations qla25xx_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qla24xx_dif_start_scsi,
.start_scsi_mq = qla2xxx_dif_start_scsi_mq,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla2x00_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -1983,6 +2136,7 @@ static struct isp_operations qla81xx_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qla24xx_dif_start_scsi,
.start_scsi_mq = qla2xxx_dif_start_scsi_mq,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla2x00_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -2021,6 +2175,7 @@ static struct isp_operations qla82xx_isp_ops = {
.write_optrom = qla82xx_write_optrom_data,
.get_flash_version = qla82xx_get_flash_version,
.start_scsi = qla82xx_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qla82xx_abort_isp,
.iospace_config = qla82xx_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -2059,6 +2214,7 @@ static struct isp_operations qla8044_isp_ops = {
.write_optrom = qla8044_write_optrom_data,
.get_flash_version = qla82xx_get_flash_version,
.start_scsi = qla82xx_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qla8044_abort_isp,
.iospace_config = qla82xx_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -2097,6 +2253,7 @@ static struct isp_operations qla83xx_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qla24xx_dif_start_scsi,
.start_scsi_mq = qla2xxx_dif_start_scsi_mq,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla83xx_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -2135,6 +2292,7 @@ static struct isp_operations qlafx00_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qlafx00_start_scsi,
.start_scsi_mq = NULL,
.abort_isp = qlafx00_abort_isp,
.iospace_config = qlafx00_iospace_config,
.initialize_adapter = qlafx00_initialize_adapter,
@@ -2173,6 +2331,7 @@ static struct isp_operations qla27xx_isp_ops = {
.write_optrom = qla24xx_write_optrom_data,
.get_flash_version = qla24xx_get_flash_version,
.start_scsi = qla24xx_dif_start_scsi,
.start_scsi_mq = qla2xxx_dif_start_scsi_mq,
.abort_isp = qla2x00_abort_isp,
.iospace_config = qla83xx_iospace_config,
.initialize_adapter = qla2x00_initialize_adapter,
@@ -2387,6 +2546,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
uint16_t req_length = 0, rsp_length = 0;
struct req_que *req = NULL;
struct rsp_que *rsp = NULL;
int i;
bars = pci_select_bars(pdev, IORESOURCE_MEM | IORESOURCE_IO);
sht = &qla2xxx_driver_template;
if (pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2422 ||
@@ -2650,6 +2811,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
"Found an ISP%04X irq %d iobase 0x%p.\n",
pdev->device, pdev->irq, ha->iobase);
mutex_init(&ha->vport_lock);
mutex_init(&ha->mq_lock);
init_completion(&ha->mbx_cmd_comp);
complete(&ha->mbx_cmd_comp);
init_completion(&ha->mbx_intr_comp);
@@ -2737,7 +2899,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
host->max_cmd_len, host->max_channel, host->max_lun,
host->transportt, sht->vendor_id);
que_init:
/* Set up the irqs */
ret = qla2x00_request_irqs(ha, rsp);
if (ret)
goto probe_init_failed;
/* Alloc arrays of request and response ring ptrs */
if (!qla2x00_alloc_queues(ha, req, rsp)) {
ql_log(ql_log_fatal, base_vha, 0x003d,
@@ -2746,12 +2912,17 @@ que_init:
goto probe_init_failed;
}
qlt_probe_one_stage1(base_vha, ha);
if (ha->mqenable && shost_use_blk_mq(host)) {
/* number of hardware queues supported by blk/scsi-mq*/
host->nr_hw_queues = ha->max_qpairs;
/* Set up the irqs */
ret = qla2x00_request_irqs(ha, rsp);
if (ret)
goto probe_init_failed;
ql_dbg(ql_dbg_init, base_vha, 0x0192,
"blk/scsi-mq enabled, HW queues = %d.\n", host->nr_hw_queues);
} else
ql_dbg(ql_dbg_init, base_vha, 0x0193,
"blk/scsi-mq disabled.\n");
qlt_probe_one_stage1(base_vha, ha);
pci_save_state(pdev);
@@ -2842,11 +3013,12 @@ que_init:
host->can_queue, base_vha->req,
base_vha->mgmt_svr_loop_id, host->sg_tablesize);
if (ha->mqenable) {
if (qla25xx_setup_mode(base_vha)) {
ql_log(ql_log_warn, base_vha, 0x00ec,
"Failed to create queues, falling back to single queue mode.\n");
goto que_init;
if (ha->mqenable && qla_ini_mode_enabled(base_vha)) {
ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 1);
/* Create start of day qpairs for Block MQ */
if (shost_use_blk_mq(host)) {
for (i = 0; i < ha->max_qpairs; i++)
qla2xxx_create_qpair(base_vha, 5, 0);
}
}
@@ -3115,13 +3287,6 @@ qla2x00_delete_all_vps(struct qla_hw_data *ha, scsi_qla_host_t *base_vha)
static void
qla2x00_destroy_deferred_work(struct qla_hw_data *ha)
{
/* Flush the work queue and remove it */
if (ha->wq) {
flush_workqueue(ha->wq);
destroy_workqueue(ha->wq);
ha->wq = NULL;
}
/* Cancel all work and destroy DPC workqueues */
if (ha->dpc_lp_wq) {
cancel_work_sync(&ha->idc_aen);
@@ -3317,9 +3482,17 @@ qla2x00_free_device(scsi_qla_host_t *vha)
ha->isp_ops->disable_intrs(ha);
}
qla2x00_free_fcports(vha);
qla2x00_free_irqs(vha);
qla2x00_free_fcports(vha);
/* Flush the work queue and remove it */
if (ha->wq) {
flush_workqueue(ha->wq);
destroy_workqueue(ha->wq);
ha->wq = NULL;
}
qla2x00_mem_free(ha);
@@ -4034,6 +4207,7 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
INIT_LIST_HEAD(&vha->qla_sess_op_cmd_list);
INIT_LIST_HEAD(&vha->logo_list);
INIT_LIST_HEAD(&vha->plogi_ack_list);
INIT_LIST_HEAD(&vha->qp_list);
spin_lock_init(&vha->work_lock);
spin_lock_init(&vha->cmd_list_lock);
@@ -5038,8 +5212,8 @@ qla2x00_disable_board_on_pci_error(struct work_struct *work)
base_vha->flags.init_done = 0;
qla25xx_delete_queues(base_vha);
qla2x00_free_irqs(base_vha);
qla2x00_free_fcports(base_vha);
qla2x00_free_irqs(base_vha);
qla2x00_mem_free(ha);
qla82xx_md_free(base_vha);
qla2x00_free_queues(ha);
@@ -5073,6 +5247,8 @@ qla2x00_do_dpc(void *data)
{
scsi_qla_host_t *base_vha;
struct qla_hw_data *ha;
uint32_t online;
struct qla_qpair *qpair;
ha = (struct qla_hw_data *)data;
base_vha = pci_get_drvdata(ha->pdev);
@@ -5334,6 +5510,22 @@ intr_on_check:
ha->isp_ops->beacon_blink(base_vha);
}
/* qpair online check */
if (test_and_clear_bit(QPAIR_ONLINE_CHECK_NEEDED,
&base_vha->dpc_flags)) {
if (ha->flags.eeh_busy ||
ha->flags.pci_channel_io_perm_failure)
online = 0;
else
online = 1;
mutex_lock(&ha->mq_lock);
list_for_each_entry(qpair, &base_vha->qp_list,
qp_list_elem)
qpair->online = online;
mutex_unlock(&ha->mq_lock);
}
if (!IS_QLAFX00(ha))
qla2x00_do_dpc_all_vps(base_vha);
@@ -5676,6 +5868,10 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
switch (state) {
case pci_channel_io_normal:
ha->flags.eeh_busy = 0;
if (ql2xmqsupport) {
set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
qla2xxx_wake_dpc(vha);
}
return PCI_ERS_RESULT_CAN_RECOVER;
case pci_channel_io_frozen:
ha->flags.eeh_busy = 1;
@@ -5689,10 +5885,18 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
pci_disable_device(pdev);
/* Return back all IOs */
qla2x00_abort_all_cmds(vha, DID_RESET << 16);
if (ql2xmqsupport) {
set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
qla2xxx_wake_dpc(vha);
}
return PCI_ERS_RESULT_NEED_RESET;
case pci_channel_io_perm_failure:
ha->flags.pci_channel_io_perm_failure = 1;
qla2x00_abort_all_cmds(vha, DID_NO_CONNECT << 16);
if (ql2xmqsupport) {
set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
qla2xxx_wake_dpc(vha);
}
return PCI_ERS_RESULT_DISCONNECT;
}
return PCI_ERS_RESULT_NEED_RESET;
@@ -5960,6 +6164,13 @@ qla83xx_disable_laser(scsi_qla_host_t *vha)
qla83xx_wr_reg(vha, reg, data);
}
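/*
 * Let blk-mq derive its hardware-queue-to-CPU mapping from the MSI-X
 * affinity masks set up by pci_alloc_irq_vectors(PCI_IRQ_AFFINITY).
 */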
static int qla2xxx_map_queues(struct Scsi_Host *shost)
{
scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
return blk_mq_pci_map_queues(&shost->tag_set, vha->hw->pdev);
}
static const struct pci_error_handlers qla2xxx_err_handler = {
.error_detected = qla2xxx_pci_error_detected,
.mmio_enabled = qla2xxx_pci_mmio_enabled,


@@ -1204,10 +1204,6 @@ int scsi_sysfs_add_sdev(struct scsi_device *sdev)
struct request_queue *rq = sdev->request_queue;
struct scsi_target *starget = sdev->sdev_target;
error = scsi_device_set_state(sdev, SDEV_RUNNING);
if (error)
return error;
error = scsi_target_add(starget);
if (error)
return error;


@@ -23,6 +23,7 @@
#include "unipro.h"
#include "ufs-qcom.h"
#include "ufshci.h"
#include "ufs_quirks.h"
#define UFS_QCOM_DEFAULT_DBG_PRINT_EN \
(UFS_QCOM_DBG_PRINT_REGS_EN | UFS_QCOM_DBG_PRINT_TEST_BUS_EN)
@@ -1031,6 +1032,34 @@ out:
return ret;
}
static int ufs_qcom_quirk_host_pa_saveconfigtime(struct ufs_hba *hba)
{
int err;
u32 pa_vs_config_reg1;
err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_VS_CONFIG_REG1),
&pa_vs_config_reg1);
if (err)
goto out;
/* Allow extension of MSB bits of PA_SaveConfigTime attribute */
err = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CONFIG_REG1),
(pa_vs_config_reg1 | (1 << 12)));
out:
return err;
}
static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba)
{
int err = 0;
if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME)
err = ufs_qcom_quirk_host_pa_saveconfigtime(hba);
return err;
}
static u32 ufs_qcom_get_ufs_hci_version(struct ufs_hba *hba)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
@@ -1194,7 +1223,16 @@ static int ufs_qcom_init(struct ufs_hba *hba)
*/
host->generic_phy = devm_phy_get(dev, "ufsphy");
if (IS_ERR(host->generic_phy)) {
if (host->generic_phy == ERR_PTR(-EPROBE_DEFER)) {
/*
 * The UFS driver might be probed before the phy driver;
 * in that case, return -EPROBE_DEFER so the probe is retried
 * once the phy is available.
 */
err = -EPROBE_DEFER;
dev_warn(dev, "%s: required phy device. hasn't probed yet. err = %d\n",
__func__, err);
goto out_variant_clear;
} else if (IS_ERR(host->generic_phy)) {
err = PTR_ERR(host->generic_phy);
dev_err(dev, "%s: PHY get failed %d\n", __func__, err);
goto out_variant_clear;
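
The explicit ERR_PTR(-EPROBE_DEFER) comparison works because error pointers are just encoded errnos; an equivalent and arguably tidier idiom compares PTR_ERR() instead, as in this sketch (the example_* name is hypothetical):

static int example_get_phy(struct device *dev, struct phy **out)
{
	struct phy *phy = devm_phy_get(dev, "ufsphy");

	if (IS_ERR(phy))
		return PTR_ERR(phy);	/* -EPROBE_DEFER passes straight through */

	*out = phy;
	return 0;
}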
@@ -1432,7 +1470,8 @@ static void ufs_qcom_print_hw_debug_reg_all(struct ufs_hba *hba,
reg = ufs_qcom_get_debug_reg_offset(host, UFS_UFS_DBG_RD_PRDT_RAM);
print_fn(hba, reg, 64, "UFS_UFS_DBG_RD_PRDT_RAM ", priv);
ufshcd_writel(hba, (reg & ~UFS_BIT(17)), REG_UFS_CFG1);
/* clear bit 17 - UTP_DBG_RAMS_EN */
ufshcd_rmwl(hba, UFS_BIT(17), 0, REG_UFS_CFG1);
reg = ufs_qcom_get_debug_reg_offset(host, UFS_DBG_RD_REG_UAWM);
print_fn(hba, reg, 4, "UFS_DBG_RD_REG_UAWM ", priv);
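
Replacing the bare ufshcd_writel() with ufshcd_rmwl() turns a blind overwrite of REG_UFS_CFG1 into a read-modify-write that clears only bit 17 and preserves every other bit. To my reading, the helper in ufshcd.h amounts to the following sketch:

static inline void example_rmwl(struct ufs_hba *hba, u32 mask, u32 val, u32 reg)
{
	u32 tmp = ufshcd_readl(hba, reg);

	tmp &= ~mask;		/* clear the bits covered by the mask */
	tmp |= (val & mask);	/* and set their new value */
	ufshcd_writel(hba, tmp, reg);
}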
@@ -1609,6 +1648,7 @@ static struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.hce_enable_notify = ufs_qcom_hce_enable_notify,
.link_startup_notify = ufs_qcom_link_startup_notify,
.pwr_change_notify = ufs_qcom_pwr_change_notify,
.apply_dev_quirks = ufs_qcom_apply_dev_quirks,
.suspend = ufs_qcom_suspend,
.resume = ufs_qcom_resume,
.dbg_register_dump = ufs_qcom_dump_dbg_regs,


@@ -142,6 +142,7 @@ enum ufs_qcom_phy_init_type {
UFS_QCOM_DBG_PRINT_TEST_BUS_EN)
/* QUniPro Vendor specific attributes */
#define PA_VS_CONFIG_REG1 0x9000
#define DME_VS_CORE_CLK_CTRL 0xD002
/* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */
#define DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT BIT(8)


@@ -134,29 +134,17 @@ struct ufs_dev_fix {
*/
#define UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE (1 << 7)
/*
 * The maximum value of PA_SaveConfigTime is 250 (10 us), but that is not
 * enough for some vendors: a gear switch from PWM to HS may fail even at
 * this maximum. The host controller can issue a gear switch as error
 * recovery, and no software delay helps in that case, so PA_SaveConfigTime
 * must be raised above 32 us per the vendor recommendation.
 */
#define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8)
struct ufs_hba;
void ufs_advertise_fixup_device(struct ufs_hba *hba);
static struct ufs_dev_fix ufs_fixups[] = {
/* UFS card deviations table */
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_NO_FASTAUTO),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_TOSHIBA, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
UFS_FIX(UFS_VENDOR_TOSHIBA, "THGLF2G9C8KBADG",
UFS_DEVICE_QUIRK_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_TOSHIBA, "THGLF2G9D8KBADG",
UFS_DEVICE_QUIRK_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
END_FIX
};
#endif /* UFS_QUIRKS_H_ */


@@ -185,6 +185,30 @@ ufs_get_pm_lvl_to_link_pwr_state(enum ufs_pm_level lvl)
return ufs_pm_lvl_states[lvl].link_state;
}
static struct ufs_dev_fix ufs_fixups[] = {
/* UFS card deviations table */
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_NO_FASTAUTO),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_TOSHIBA, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
UFS_FIX(UFS_VENDOR_TOSHIBA, "THGLF2G9C8KBADG",
UFS_DEVICE_QUIRK_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_TOSHIBA, "THGLF2G9D8KBADG",
UFS_DEVICE_QUIRK_PA_TACTIVATE),
UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ),
UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME),
END_FIX
};
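
END_FIX terminates the table; during device probing the core walks it and ORs every matching entry's quirk into hba->dev_quirks. A sketch of that matching walk, with field names that are assumptions about struct ufs_dev_fix rather than quotes from it (the real walk is ufs_advertise_fixup_device()):

static void example_apply_fixups(struct ufs_hba *hba, u16 vendor_id,
				 const char *model)
{
	struct ufs_dev_fix *f;

	for (f = ufs_fixups; f->quirk; f++) {
		if ((f->card.wmanufacturerid == vendor_id ||
		     f->card.wmanufacturerid == UFS_ANY_VENDOR) &&
		    (!strcmp(f->card.model, UFS_ANY_MODEL) ||
		     !strncmp(f->card.model, model, strlen(f->card.model))))
			hba->dev_quirks |= f->quirk;
	}
}

Moving the table here from ufs_quirks.h also leaves a single definition in ufshcd.c instead of one static copy per file that includes the header.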
static void ufshcd_tmc_handler(struct ufs_hba *hba);
static void ufshcd_async_scan(void *data, async_cookie_t cookie);
static int ufshcd_reset_and_restore(struct ufs_hba *hba);
@@ -288,10 +312,24 @@ int ufshcd_wait_for_register(struct ufs_hba *hba, u32 reg, u32 mask,
*/
static inline u32 ufshcd_get_intr_mask(struct ufs_hba *hba)
{
if (hba->ufs_version == UFSHCI_VERSION_10)
return INTERRUPT_MASK_ALL_VER_10;
else
return INTERRUPT_MASK_ALL_VER_11;
u32 intr_mask = 0;
	switch (hba->ufs_version) {
	case UFSHCI_VERSION_10:
		intr_mask = INTERRUPT_MASK_ALL_VER_10;
		break;
	case UFSHCI_VERSION_11:
	case UFSHCI_VERSION_20:
		/* v1.1 and v2.0 share one mask; the labels fall through */
		intr_mask = INTERRUPT_MASK_ALL_VER_11;
		break;
	case UFSHCI_VERSION_21:
	default:
		/* treat unknown newer versions like v2.1 */
		intr_mask = INTERRUPT_MASK_ALL_VER_21;
	}
return intr_mask;
}
/**
@@ -5199,6 +5237,8 @@ static void ufshcd_tune_unipro_params(struct ufs_hba *hba)
if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE)
ufshcd_quirk_tune_host_pa_tactivate(hba);
ufshcd_vops_apply_dev_quirks(hba);
}
/**
@@ -6667,6 +6707,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
/* Get UFS version supported by the controller */
hba->ufs_version = ufshcd_get_ufs_version(hba);
if ((hba->ufs_version != UFSHCI_VERSION_10) &&
(hba->ufs_version != UFSHCI_VERSION_11) &&
(hba->ufs_version != UFSHCI_VERSION_20) &&
(hba->ufs_version != UFSHCI_VERSION_21))
dev_err(hba->dev, "invalid UFS version 0x%x\n",
hba->ufs_version);
/* Get Interrupt bit mask per version */
hba->intr_mask = ufshcd_get_intr_mask(hba);


@@ -266,7 +266,7 @@ struct ufs_pwr_mode_info {
* @setup_task_mgmt: called before any task management request is issued
*                   so the variant can set up per-request state
* @hibern8_notify: called around hibern8 enter/exit
*                  so the variant can configure hardware for the transition
* @apply_dev_quirks: called to apply device specific quirks
* @suspend: called during host controller PM callback
* @resume: called during host controller PM callback
* @dbg_register_dump: used to dump controller debug information
@@ -293,7 +293,8 @@ struct ufs_hba_variant_ops {
void (*setup_xfer_req)(struct ufs_hba *, int, bool);
void (*setup_task_mgmt)(struct ufs_hba *, int, u8);
void (*hibern8_notify)(struct ufs_hba *, enum uic_cmd_dme,
enum ufs_notify_change_status);
int (*apply_dev_quirks)(struct ufs_hba *);
int (*suspend)(struct ufs_hba *, enum ufs_pm_op);
int (*resume)(struct ufs_hba *, enum ufs_pm_op);
void (*dbg_register_dump)(struct ufs_hba *hba);
@@ -839,6 +840,13 @@ static inline void ufshcd_vops_hibern8_notify(struct ufs_hba *hba,
return hba->vops->hibern8_notify(hba, cmd, status);
}
static inline int ufshcd_vops_apply_dev_quirks(struct ufs_hba *hba)
{
if (hba->vops && hba->vops->apply_dev_quirks)
return hba->vops->apply_dev_quirks(hba);
return 0;
}
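
This NULL-guarded wrapper is the standard pattern for optional variant hooks: core code calls the wrapper unconditionally, and a missing vops table or hook degrades to a successful no-op. An illustrative caller (the example_* name is hypothetical):

static void example_post_link_tuning(struct ufs_hba *hba)
{
	int err = ufshcd_vops_apply_dev_quirks(hba);

	if (err)
		dev_warn(hba->dev, "variant dev quirks failed: %d\n", err);
}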
static inline int ufshcd_vops_suspend(struct ufs_hba *hba, enum ufs_pm_op op)
{
if (hba->vops && hba->vops->suspend)


@@ -72,6 +72,10 @@ enum {
REG_UIC_COMMAND_ARG_1 = 0x94,
REG_UIC_COMMAND_ARG_2 = 0x98,
REG_UIC_COMMAND_ARG_3 = 0x9C,
REG_UFS_CCAP = 0x100,
REG_UFS_CRYPTOCAP = 0x104,
UFSHCI_CRYPTO_REG_SPACE_SIZE = 0x400,
};
/* Controller capability masks */
@@ -275,6 +279,9 @@ enum {
/* Interrupt disable mask for UFSHCI v1.1 */
INTERRUPT_MASK_ALL_VER_11 = 0x31FFF,
/* Interrupt disable mask for UFSHCI v2.1 */
INTERRUPT_MASK_ALL_VER_21 = 0x71FFF,
};
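
The new v2.1 mask differs from the v1.1/v2.0 mask by exactly one bit: 0x71FFF ^ 0x31FFF == 0x40000, i.e. bit 18, which I read as the UFSHCI 2.1 crypto engine fatal error status, consistent with the REG_UFS_CCAP/REG_UFS_CRYPTOCAP registers added above (an assumption from the spec, not stated in this patch). A compile-time check of that relationship, as a sketch:

static inline void example_intr_mask_sanity(void)
{
	/* v2.1 adds exactly one interrupt bit (bit 18) over v1.1/v2.0 */
	BUILD_BUG_ON((INTERRUPT_MASK_ALL_VER_21 ^
		      INTERRUPT_MASK_ALL_VER_11) != BIT(18));
}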
/*