Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates

This series implements the new i40e driver for Intel's upcoming
Intel(R) Ethernet Controller XL710 Family of devices.

V7: many changes resulting from a few review comments:
    use Linux errno types
    change I40E_SUCCESS to 0, standardize returns
    change s32 return values to int
    use void return values where possible
    prefer use of int over i40e_status
V6: rename Kbuild to Makefile
    rename i40e_mem[set|cpy] to regular memset/memcpy
V5: remove sysfs support from this set, will rearchitect
    changes from community comments
V4: addresses remaining community comments, mostly trivial edits.
    major sparse based cleanup of possible endian issues
    removal of most of __func__ references
    sizeof(*var) instead of sizeof(struct ...)
    change 'NULL ==' tests to !NULL
    implement xps
    use kernel bitshift macros (upper_32_bits, etc)
V3: many more individual comments addressed, thanks reviewers!  Many
    other changes due to internal review and development.
V2: each patch has individual comments, in general, feedback from the
    list was applied and addressed. Many changes due to internal review
    and coding as well.
V1: initial send

Let me start by saying thanks: we appreciate the time spent by those of
you who review and comment on this new driver, and we will attempt to
address and respond to all issues brought to our attention.

I tried to break the patches up to ease review; the series should still
apply cleanly and remain bisectable, since the last patch is the one that
adds the driver to the kernel build via CONFIG_I40E.

This driver is for a brand new piece of silicon whose design differs from
other Intel Ethernet silicon, and it therefore needed a new driver.

The hardware has quite a bit of capability and this driver is only
meant to provide basic functionality at first.  Future patches will
continue to add functionality and bug fixes.

This initial release is very early in the product cycle with the intent
of getting initial support into the kernel before users have the
hardware available to purchase.  A software development manual is not
ready yet but will be available when the hardware ships.

The driver development model and interaction with community-submitted
patches *will not be any different* from what we are currently doing
today.  We plan to continue established processes.

An associated i40evf driver has been posted for review.

List of tools we ran in preparation:
way more sparse clean
make W=1, W=2 clean
checkpatch (almost) clean
        total: 1 errors, 4 warnings, 30461 lines checked
        NOTE: Ignored message types: LONG_LINE
        - issues have been addressed and the remainders
          are noise.
codespell clean
smatch (almost) clean with a couple minor warnings
coccicheck clean
namespacecheck clean
allmodconfig clean
ppc64 build clean (untested)

This driver is a team effort, thank you to Joseph Gasparakis,
Shannon Nelson, Anjali Singhai-Jain, Mitch Williams, Neerav
Parikh, Vasu Dev, Kavindya Deegala, Yi Zou, and PJ Waskiewicz.

TODO (known issues)
BQL implementation
finish rtnl_stat64 locking (we have a patch but are still debugging it)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2013-09-11 17:06:49 -04:00
commit e6eddc3345
32 changed files with 30434 additions and 1 deletion


@@ -86,6 +86,8 @@ generic_netlink.txt
- info on Generic Netlink
gianfar.txt
- Gianfar Ethernet Driver.
i40e.txt
- README for the Intel Ethernet Controller XL710 Driver (i40e).
ieee802154.txt
- Linux IEEE 802.15.4 implementation, API and drivers
igb.txt


@@ -0,0 +1,115 @@
Linux Base Driver for the Intel(R) Ethernet Controller XL710 Family
===================================================================
Intel i40e Linux driver.
Copyright(c) 2013 Intel Corporation.
Contents
========
- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support
Identifying Your Adapter
========================
The driver in this release is compatible with the Intel Ethernet
Controller XL710 Family.
For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:
http://support.intel.com/support/network/sb/CS-012904.htm
Enabling the driver
===================
The driver is enabled via the standard kernel configuration system,
using the make command:
make oldconfig/silentoldconfig/menuconfig/etc.
The driver is located in the menu structure at:
-> Device Drivers
-> Network device support (NETDEVICES [=y])
-> Ethernet driver support
-> Intel devices
-> Intel(R) Ethernet Controller XL710 Family
Additional Configurations
=========================
Generic Receive Offload (GRO)
-----------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
been shown to significantly reduce CPU utilization under heavy Rx load by
coalescing Rx traffic into larger chunks of data. GRO is an evolution of
the previously used LRO interface. Unlike LRO, GRO can coalesce protocols
other than TCP, and it is also safe to use with configurations that are
problematic for LRO, namely bridging and iSCSI.
Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.
The latest release of ethtool can be found at:
https://www.kernel.org/pub/software/network/ethtool
Data Center Bridging (DCB)
--------------------------
DCB configuration is not currently supported.
FCoE
----
Fibre Channel over Ethernet (FCoE) hardware offload is not currently
supported.
MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.
When a spoofed packet is detected, the PF driver will send the following
message to the system log (displayed by the "dmesg" command):
Spoof event(s) detected on VF (n)
where n is the VF that attempted the spoofing.
Performance Tuning
==================
An excellent article on performance tuning can be found at:
http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
Known Issues
============
Support
=======
For general information, go to the Intel support website at:
http://support.intel.com
or the Intel Wired Networking project hosted by Sourceforge at:
http://e1000.sourceforge.net
If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sourceforge.net and copy
netdev@vger.kernel.org.


@@ -4346,7 +4346,7 @@ M: Deepak Saxena <dsaxena@plexity.net>
S: Maintained
F: drivers/char/hw_random/ixp4xx-rng.c
INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe/ixgbevf)
INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e)
M: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
M: Jesse Brandeburg <jesse.brandeburg@intel.com>
M: Bruce Allan <bruce.w.allan@intel.com>
@@ -4371,6 +4371,7 @@ F: Documentation/networking/igbvf.txt
F: Documentation/networking/ixgb.txt
F: Documentation/networking/ixgbe.txt
F: Documentation/networking/ixgbevf.txt
F: Documentation/networking/i40e.txt
F: drivers/net/ethernet/intel/
INTEL PRO/WIRELESS 2100, 2200BG, 2915ABG NETWORK CONNECTION SUPPORT


@@ -241,4 +241,22 @@ config IXGBEVF
will be called ixgbevf. MSI-X interrupt support is required
for this driver to work correctly.
config I40E
tristate "Intel(R) Ethernet Controller XL710 Family support"
depends on PCI
---help---
This driver supports Intel(R) Ethernet Controller XL710 Family of
devices. For more information on how to identify your adapter, go
to the Adapter & Driver ID Guide at:
<http://support.intel.com/support/network/adapter/pro100/21397.htm>
For general information and support, go to the Intel support
website at:
<http://support.intel.com>
To compile this driver as a module, choose M here. The module
will be called i40e.
endif # NET_VENDOR_INTEL


@@ -9,4 +9,5 @@ obj-$(CONFIG_IGB) += igb/
obj-$(CONFIG_IGBVF) += igbvf/
obj-$(CONFIG_IXGBE) += ixgbe/
obj-$(CONFIG_IXGBEVF) += ixgbevf/
obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IXGB) += ixgb/


@@ -0,0 +1,44 @@
################################################################################
#
# Intel Ethernet Controller XL710 Family Linux Driver
# Copyright(c) 2013 Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
# version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
#
# The full GNU General Public License is included in this distribution in
# the file called "COPYING".
#
# Contact Information:
# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
#
################################################################################
#
# Makefile for the Intel(R) Ethernet Connection XL710 (i40e.ko) driver
#
obj-$(CONFIG_I40E) += i40e.o
i40e-objs := i40e_main.o \
i40e_ethtool.o \
i40e_adminq.o \
i40e_common.o \
i40e_hmc.o \
i40e_lan_hmc.o \
i40e_nvm.o \
i40e_debugfs.o \
i40e_diag.o \
i40e_txrx.o \
i40e_virtchnl_pf.o


@@ -0,0 +1,558 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_H_
#define _I40E_H_
#include <net/tcp.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/string.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/sctp.h>
#include <linux/pkt_sched.h>
#include <linux/ipv6.h>
#include <linux/version.h>
#include <net/checksum.h>
#include <net/ip6_checksum.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include "i40e_type.h"
#include "i40e_prototype.h"
#include "i40e_virtchnl.h"
#include "i40e_virtchnl_pf.h"
#include "i40e_txrx.h"
/* Useful i40e defaults */
#define I40E_BASE_PF_SEID 16
#define I40E_BASE_VSI_SEID 512
#define I40E_BASE_VEB_SEID 288
#define I40E_MAX_VEB 16
#define I40E_MAX_NUM_DESCRIPTORS 4096
#define I40E_MAX_REGISTER 0x0038FFFF
#define I40E_DEFAULT_NUM_DESCRIPTORS 512
#define I40E_REQ_DESCRIPTOR_MULTIPLE 32
#define I40E_MIN_NUM_DESCRIPTORS 64
#define I40E_MIN_MSIX 2
#define I40E_DEFAULT_NUM_VMDQ_VSI 8 /* max 256 VSIs */
#define I40E_DEFAULT_QUEUES_PER_VMDQ 2 /* max 16 qps */
#define I40E_DEFAULT_QUEUES_PER_VF 4
#define I40E_DEFAULT_QUEUES_PER_TC 1 /* should be a power of 2 */
#define I40E_FDIR_RING 0
#define I40E_FDIR_RING_COUNT 32
#define I40E_MAX_AQ_BUF_SIZE 4096
#define I40E_AQ_LEN 32
#define I40E_AQ_WORK_LIMIT 16
#define I40E_MAX_USER_PRIORITY 8
#define I40E_DEFAULT_MSG_ENABLE 4
#define I40E_NVM_VERSION_LO_SHIFT 0
#define I40E_NVM_VERSION_LO_MASK (0xf << I40E_NVM_VERSION_LO_SHIFT)
#define I40E_NVM_VERSION_MID_SHIFT 4
#define I40E_NVM_VERSION_MID_MASK (0xff << I40E_NVM_VERSION_MID_SHIFT)
#define I40E_NVM_VERSION_HI_SHIFT 12
#define I40E_NVM_VERSION_HI_MASK (0xf << I40E_NVM_VERSION_HI_SHIFT)
/* magic for getting defines into strings */
#define STRINGIFY(foo) #foo
#define XSTRINGIFY(bar) STRINGIFY(bar)
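/* Illustrative sketch (hypothetical define, not from the i40e sources):
 * XSTRINGIFY() expands a numeric define into a string literal at compile
 * time, e.g. for building a version string:
 *
 *   #define EXAMPLE_DRV_MAJOR 1
 *   XSTRINGIFY(EXAMPLE_DRV_MAJOR)   expands to "1"
 *   STRINGIFY(EXAMPLE_DRV_MAJOR)    expands to "EXAMPLE_DRV_MAJOR"
 */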
#ifndef ARCH_HAS_PREFETCH
#define prefetch(X)
#endif
#define I40E_RX_DESC(R, i) \
((ring_is_16byte_desc_enabled(R)) \
? (union i40e_32byte_rx_desc *) \
(&(((union i40e_16byte_rx_desc *)((R)->desc))[i])) \
: (&(((union i40e_32byte_rx_desc *)((R)->desc))[i])))
#define I40E_TX_DESC(R, i) \
(&(((struct i40e_tx_desc *)((R)->desc))[i]))
#define I40E_TX_CTXTDESC(R, i) \
(&(((struct i40e_tx_context_desc *)((R)->desc))[i]))
#define I40E_TX_FDIRDESC(R, i) \
(&(((struct i40e_filter_program_desc *)((R)->desc))[i]))
/* default to trying for four seconds */
#define I40E_TRY_LINK_TIMEOUT (4 * HZ)
/* driver state flags */
enum i40e_state_t {
__I40E_TESTING,
__I40E_CONFIG_BUSY,
__I40E_CONFIG_DONE,
__I40E_DOWN,
__I40E_NEEDS_RESTART,
__I40E_SERVICE_SCHED,
__I40E_ADMINQ_EVENT_PENDING,
__I40E_MDD_EVENT_PENDING,
__I40E_VFLR_EVENT_PENDING,
__I40E_RESET_RECOVERY_PENDING,
__I40E_RESET_INTR_RECEIVED,
__I40E_REINIT_REQUESTED,
__I40E_PF_RESET_REQUESTED,
__I40E_CORE_RESET_REQUESTED,
__I40E_GLOBAL_RESET_REQUESTED,
__I40E_FILTER_OVERFLOW_PROMISC,
};
enum i40e_interrupt_policy {
I40E_INTERRUPT_BEST_CASE,
I40E_INTERRUPT_MEDIUM,
I40E_INTERRUPT_LOWEST
};
struct i40e_lump_tracking {
u16 num_entries;
u16 search_hint;
u16 list[0];
#define I40E_PILE_VALID_BIT 0x8000
};
#define I40E_DEFAULT_ATR_SAMPLE_RATE 20
#define I40E_FDIR_MAX_RAW_PACKET_LOOKUP 512
struct i40e_fdir_data {
u16 q_index;
u8 flex_off;
u8 pctype;
u16 dest_vsi;
u8 dest_ctl;
u8 fd_status;
u16 cnt_index;
u32 fd_id;
u8 *raw_packet;
};
#define I40E_DCB_PRIO_TYPE_STRICT 0
#define I40E_DCB_PRIO_TYPE_ETS 1
#define I40E_DCB_STRICT_PRIO_CREDITS 127
#define I40E_MAX_USER_PRIORITY 8
/* DCB per TC information data structure */
struct i40e_tc_info {
u16 qoffset; /* Queue offset from base queue */
u16 qcount; /* Total Queues */
u8 netdev_tc; /* Netdev TC index if netdev associated */
};
/* TC configuration data structure */
struct i40e_tc_configuration {
u8 numtc; /* Total number of enabled TCs */
u8 enabled_tc; /* TC map */
struct i40e_tc_info tc_info[I40E_MAX_TRAFFIC_CLASS];
};
/* struct that defines the Ethernet device */
struct i40e_pf {
struct pci_dev *pdev;
struct i40e_hw hw;
unsigned long state;
unsigned long link_check_timeout;
struct msix_entry *msix_entries;
u16 num_msix_entries;
bool fc_autoneg_status;
u16 eeprom_version;
u16 num_vmdq_vsis; /* num vmdq pools this pf has set up */
u16 num_vmdq_qps; /* num queue pairs per vmdq pool */
u16 num_vmdq_msix; /* num queue vectors per vmdq pool */
u16 num_req_vfs; /* num VFs requested for this PF */
u16 num_vf_qps; /* num queue pairs per vf */
u16 num_tc_qps; /* num queue pairs per TC */
u16 num_lan_qps; /* num lan queues this pf has set up */
u16 num_lan_msix; /* num queue vectors for the base pf vsi */
u16 rss_size; /* num queues in the RSS array */
u16 rss_size_max; /* HW defined max RSS queues */
u16 fdir_pf_filter_count; /* num of guaranteed filters for this PF */
u8 atr_sample_rate;
enum i40e_interrupt_policy int_policy;
u16 rx_itr_default;
u16 tx_itr_default;
u16 msg_enable;
char misc_int_name[IFNAMSIZ + 9];
u16 adminq_work_limit; /* num of admin receive queue desc to process */
int service_timer_period;
struct timer_list service_timer;
struct work_struct service_task;
u64 flags;
#define I40E_FLAG_RX_CSUM_ENABLED (u64)(1 << 1)
#define I40E_FLAG_MSI_ENABLED (u64)(1 << 2)
#define I40E_FLAG_MSIX_ENABLED (u64)(1 << 3)
#define I40E_FLAG_RX_1BUF_ENABLED (u64)(1 << 4)
#define I40E_FLAG_RX_PS_ENABLED (u64)(1 << 5)
#define I40E_FLAG_RSS_ENABLED (u64)(1 << 6)
#define I40E_FLAG_MQ_ENABLED (u64)(1 << 7)
#define I40E_FLAG_VMDQ_ENABLED (u64)(1 << 8)
#define I40E_FLAG_FDIR_REQUIRES_REINIT (u64)(1 << 9)
#define I40E_FLAG_NEED_LINK_UPDATE (u64)(1 << 10)
#define I40E_FLAG_IN_NETPOLL (u64)(1 << 13)
#define I40E_FLAG_16BYTE_RX_DESC_ENABLED (u64)(1 << 14)
#define I40E_FLAG_CLEAN_ADMINQ (u64)(1 << 15)
#define I40E_FLAG_FILTER_SYNC (u64)(1 << 16)
#define I40E_FLAG_PROCESS_MDD_EVENT (u64)(1 << 18)
#define I40E_FLAG_PROCESS_VFLR_EVENT (u64)(1 << 19)
#define I40E_FLAG_SRIOV_ENABLED (u64)(1 << 20)
#define I40E_FLAG_DCB_ENABLED (u64)(1 << 21)
#define I40E_FLAG_FDIR_ENABLED (u64)(1 << 22)
#define I40E_FLAG_FDIR_ATR_ENABLED (u64)(1 << 23)
#define I40E_FLAG_MFP_ENABLED (u64)(1 << 27)
u16 num_tx_queues;
u16 num_rx_queues;
bool stat_offsets_loaded;
struct i40e_hw_port_stats stats;
struct i40e_hw_port_stats stats_offsets;
u32 tx_timeout_count;
u32 tx_timeout_recovery_level;
unsigned long tx_timeout_last_recovery;
u32 hw_csum_rx_error;
u32 led_status;
u16 corer_count; /* Core reset count */
u16 globr_count; /* Global reset count */
u16 empr_count; /* EMP reset count */
u16 pfr_count; /* PF reset count */
struct mutex switch_mutex;
u16 lan_vsi; /* our default LAN VSI */
u16 lan_veb; /* initial relay, if exists */
#define I40E_NO_VEB 0xffff
#define I40E_NO_VSI 0xffff
u16 next_vsi; /* Next unallocated VSI - 0-based! */
struct i40e_vsi **vsi;
struct i40e_veb *veb[I40E_MAX_VEB];
struct i40e_lump_tracking *qp_pile;
struct i40e_lump_tracking *irq_pile;
/* switch config info */
u16 pf_seid;
u16 main_vsi_seid;
u16 mac_seid;
struct i40e_aqc_get_switch_config_data *sw_config;
struct kobject *switch_kobj;
#ifdef CONFIG_DEBUG_FS
struct dentry *i40e_dbg_pf;
#endif /* CONFIG_DEBUG_FS */
/* sr-iov config info */
struct i40e_vf *vf;
int num_alloc_vfs; /* actual number of VFs allocated */
u32 vf_aq_requests;
/* DCBx/DCBNL capability for PF that indicates
* whether DCBx is managed by firmware or host
* based agent (LLDPAD). Also, indicates what
* flavor of DCBx protocol (IEEE/CEE) is supported
* by the device. For now we're supporting IEEE
* mode only.
*/
u16 dcbx_cap;
u32 fcoe_hmc_filt_num;
u32 fcoe_hmc_cntx_num;
struct i40e_filter_control_settings filter_settings;
};
struct i40e_mac_filter {
struct list_head list;
u8 macaddr[ETH_ALEN];
#define I40E_VLAN_ANY -1
s16 vlan;
u8 counter; /* number of instances of this filter */
bool is_vf; /* filter belongs to a VF */
bool is_netdev; /* filter belongs to a netdev */
bool changed; /* filter needs to be sync'd to the HW */
};
struct i40e_veb {
struct i40e_pf *pf;
u16 idx;
u16 veb_idx; /* index of VEB parent */
u16 seid;
u16 uplink_seid;
u16 stats_idx; /* index of VEB parent */
u8 enabled_tc;
u16 flags;
u16 bw_limit;
u8 bw_max_quanta;
bool is_abs_credits;
u8 bw_tc_share_credits[I40E_MAX_TRAFFIC_CLASS];
u16 bw_tc_limit_credits[I40E_MAX_TRAFFIC_CLASS];
u8 bw_tc_max_quanta[I40E_MAX_TRAFFIC_CLASS];
struct kobject *kobj;
bool stat_offsets_loaded;
struct i40e_eth_stats stats;
struct i40e_eth_stats stats_offsets;
};
/* struct that defines a VSI, associated with a dev */
struct i40e_vsi {
struct net_device *netdev;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
bool netdev_registered;
bool stat_offsets_loaded;
u32 current_netdev_flags;
unsigned long state;
#define I40E_VSI_FLAG_FILTER_CHANGED (1<<0)
#define I40E_VSI_FLAG_VEB_OWNER (1<<1)
unsigned long flags;
struct list_head mac_filter_list;
/* VSI stats */
struct rtnl_link_stats64 net_stats;
struct rtnl_link_stats64 net_stats_offsets;
struct i40e_eth_stats eth_stats;
struct i40e_eth_stats eth_stats_offsets;
u32 tx_restart;
u32 tx_busy;
u32 rx_buf_failed;
u32 rx_page_failed;
/* These are arrays of rings, allocated at run-time */
struct i40e_ring *rx_rings;
struct i40e_ring *tx_rings;
u16 work_limit;
/* high bit set means dynamic, use accessor routines to read/write.
* hardware only supports 2us resolution for the ITR registers.
* these values always store the USER setting, and must be converted
* before programming to a register.
*/
u16 rx_itr_setting;
u16 tx_itr_setting;
u16 max_frame;
u16 rx_hdr_len;
u16 rx_buf_len;
u8 dtype;
/* List of q_vectors allocated to this VSI */
struct i40e_q_vector *q_vectors;
int num_q_vectors;
int base_vector;
u16 seid; /* HW index of this VSI (absolute index) */
u16 id; /* VSI number */
u16 uplink_seid;
u16 base_queue; /* vsi's first queue in hw array */
u16 alloc_queue_pairs; /* Allocated Tx/Rx queues */
u16 num_queue_pairs; /* Used tx and rx pairs */
u16 num_desc;
enum i40e_vsi_type type; /* VSI type, e.g., LAN, FCoE, etc */
u16 vf_id; /* Virtual function ID for SRIOV VSIs */
struct i40e_tc_configuration tc_config;
struct i40e_aqc_vsi_properties_data info;
/* VSI BW limit (absolute across all TCs) */
u16 bw_limit; /* VSI BW Limit (0 = disabled) */
u8 bw_max_quanta; /* Max Quanta when BW limit is enabled */
/* Relative TC credits across VSIs */
u8 bw_ets_share_credits[I40E_MAX_TRAFFIC_CLASS];
/* TC BW limit credits within VSI */
u16 bw_ets_limit_credits[I40E_MAX_TRAFFIC_CLASS];
/* TC BW limit max quanta within VSI */
u8 bw_ets_max_quanta[I40E_MAX_TRAFFIC_CLASS];
struct i40e_pf *back; /* Backreference to associated PF */
u16 idx; /* index in pf->vsi[] */
u16 veb_idx; /* index of VEB parent */
struct kobject *kobj; /* sysfs object */
/* VSI specific handlers */
irqreturn_t (*irq_handler)(int irq, void *data);
} ____cacheline_internodealigned_in_smp;
struct i40e_netdev_priv {
struct i40e_vsi *vsi;
};
/* struct that defines an interrupt vector */
struct i40e_q_vector {
struct i40e_vsi *vsi;
u16 v_idx; /* index in the vsi->q_vector array. */
u16 reg_idx; /* register index of the interrupt */
struct napi_struct napi;
struct i40e_ring_container rx;
struct i40e_ring_container tx;
u8 num_ringpairs; /* total number of ring pairs in vector */
char name[IFNAMSIZ + 9];
cpumask_t affinity_mask;
} ____cacheline_internodealigned_in_smp;
/* lan device */
struct i40e_device {
struct list_head list;
struct i40e_pf *pf;
};
/**
* i40e_fw_version_str - format the FW and NVM version strings
* @hw: ptr to the hardware info
**/
static inline char *i40e_fw_version_str(struct i40e_hw *hw)
{
static char buf[32];
snprintf(buf, sizeof(buf),
"f%d.%d a%d.%d n%02d.%02d.%02d e%08x",
hw->aq.fw_maj_ver, hw->aq.fw_min_ver,
hw->aq.api_maj_ver, hw->aq.api_min_ver,
(hw->nvm.version & I40E_NVM_VERSION_HI_MASK)
>> I40E_NVM_VERSION_HI_SHIFT,
(hw->nvm.version & I40E_NVM_VERSION_MID_MASK)
>> I40E_NVM_VERSION_MID_SHIFT,
(hw->nvm.version & I40E_NVM_VERSION_LO_MASK)
>> I40E_NVM_VERSION_LO_SHIFT,
hw->nvm.eetrack);
return buf;
}
/**
* i40e_netdev_to_pf - Retrieve the PF struct for given netdev
* @netdev: the corresponding netdev
*
* Return the PF struct for the given netdev
**/
static inline struct i40e_pf *i40e_netdev_to_pf(struct net_device *netdev)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
return vsi->back;
}
static inline void i40e_vsi_setup_irqhandler(struct i40e_vsi *vsi,
irqreturn_t (*irq_handler)(int, void *))
{
vsi->irq_handler = irq_handler;
}
/**
* i40e_rx_is_programming_status - check for programming status descriptor
* @qw: the first quad word of the program status descriptor
*
* A value of I40E_RX_PROG_STATUS_DESC_LENGTH in the descriptor length
* field indicates that this is a programming status descriptor for
* flow director or FCoE; otherwise it is a packet descriptor.
**/
static inline bool i40e_rx_is_programming_status(u64 qw)
{
return I40E_RX_PROG_STATUS_DESC_LENGTH ==
(qw >> I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT);
}
/* needed by i40e_ethtool.c */
int i40e_up(struct i40e_vsi *vsi);
void i40e_down(struct i40e_vsi *vsi);
extern const char i40e_driver_name[];
extern const char i40e_driver_version_str[];
void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags);
void i40e_update_stats(struct i40e_vsi *vsi);
void i40e_update_eth_stats(struct i40e_vsi *vsi);
struct rtnl_link_stats64 *i40e_get_vsi_stats_struct(struct i40e_vsi *vsi);
int i40e_fetch_switch_configuration(struct i40e_pf *pf,
bool printconfig);
/* needed by i40e_main.c */
void i40e_add_fdir_filter(struct i40e_fdir_data fdir_data,
struct i40e_ring *tx_ring);
void i40e_add_remove_filter(struct i40e_fdir_data fdir_data,
struct i40e_ring *tx_ring);
void i40e_update_fdir_filter(struct i40e_fdir_data fdir_data,
struct i40e_ring *tx_ring);
int i40e_program_fdir_filter(struct i40e_fdir_data *fdir_data,
struct i40e_pf *pf, bool add);
void i40e_set_ethtool_ops(struct net_device *netdev);
struct i40e_mac_filter *i40e_add_filter(struct i40e_vsi *vsi,
u8 *macaddr, s16 vlan,
bool is_vf, bool is_netdev);
void i40e_del_filter(struct i40e_vsi *vsi, u8 *macaddr, s16 vlan,
bool is_vf, bool is_netdev);
int i40e_sync_vsi_filters(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, u8 type,
u16 uplink, u32 param1);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_lookup(struct i40e_pf *pf, enum i40e_vsi_type type,
struct i40e_vsi *start_vsi);
struct i40e_veb *i40e_veb_setup(struct i40e_pf *pf, u16 flags, u16 uplink_seid,
u16 downlink_seid, u8 enabled_tc);
void i40e_veb_release(struct i40e_veb *veb);
i40e_status i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid);
void i40e_vsi_remove_pvid(struct i40e_vsi *vsi);
void i40e_vsi_reset_stats(struct i40e_vsi *vsi);
void i40e_pf_reset_stats(struct i40e_pf *pf);
#ifdef CONFIG_DEBUG_FS
void i40e_dbg_pf_init(struct i40e_pf *pf);
void i40e_dbg_pf_exit(struct i40e_pf *pf);
void i40e_dbg_init(void);
void i40e_dbg_exit(void);
#else
static inline void i40e_dbg_pf_init(struct i40e_pf *pf) {}
static inline void i40e_dbg_pf_exit(struct i40e_pf *pf) {}
static inline void i40e_dbg_init(void) {}
static inline void i40e_dbg_exit(void) {}
#endif /* CONFIG_DEBUG_FS*/
void i40e_irq_dynamic_enable(struct i40e_vsi *vsi, int vector);
int i40e_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd);
void i40e_vlan_stripping_disable(struct i40e_vsi *vsi);
int i40e_vsi_add_vlan(struct i40e_vsi *vsi, s16 vid);
int i40e_vsi_kill_vlan(struct i40e_vsi *vsi, s16 vid);
struct i40e_mac_filter *i40e_put_mac_in_vlan(struct i40e_vsi *vsi, u8 *macaddr,
bool is_vf, bool is_netdev);
bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi);
struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, u8 *macaddr,
bool is_vf, bool is_netdev);
void i40e_vlan_stripping_enable(struct i40e_vsi *vsi);
#endif /* _I40E_H_ */


@@ -0,0 +1,983 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#include "i40e_status.h"
#include "i40e_type.h"
#include "i40e_register.h"
#include "i40e_adminq.h"
#include "i40e_prototype.h"
/**
* i40e_adminq_init_regs - Initialize AdminQ registers
* @hw: pointer to the hardware structure
*
* This assumes the alloc_asq and alloc_arq functions have already been called
**/
static void i40e_adminq_init_regs(struct i40e_hw *hw)
{
/* set head and tail registers in our local struct */
if (hw->mac.type == I40E_MAC_VF) {
hw->aq.asq.tail = I40E_VF_ATQT1;
hw->aq.asq.head = I40E_VF_ATQH1;
hw->aq.arq.tail = I40E_VF_ARQT1;
hw->aq.arq.head = I40E_VF_ARQH1;
} else {
hw->aq.asq.tail = I40E_PF_ATQT;
hw->aq.asq.head = I40E_PF_ATQH;
hw->aq.arq.tail = I40E_PF_ARQT;
hw->aq.arq.head = I40E_PF_ARQH;
}
}
/**
* i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
{
i40e_status ret_code;
struct i40e_virt_mem mem;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq_mem,
i40e_mem_atq_ring,
(hw->aq.num_asq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
return ret_code;
hw->aq.asq.desc = hw->aq.asq_mem.va;
hw->aq.asq.dma_addr = hw->aq.asq_mem.pa;
ret_code = i40e_allocate_virt_mem(hw, &mem,
(hw->aq.num_asq_entries *
sizeof(struct i40e_asq_cmd_details)));
if (ret_code) {
i40e_free_dma_mem(hw, &hw->aq.asq_mem);
hw->aq.asq_mem.va = NULL;
hw->aq.asq_mem.pa = 0;
return ret_code;
}
hw->aq.asq.details = mem.va;
return ret_code;
}
/**
* i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
{
i40e_status ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq_mem,
i40e_mem_arq_ring,
(hw->aq.num_arq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
return ret_code;
hw->aq.arq.desc = hw->aq.arq_mem.va;
hw->aq.arq.dma_addr = hw->aq.arq_mem.pa;
return ret_code;
}
/**
* i40e_free_adminq_asq - Free Admin Queue send rings
* @hw: pointer to the hardware structure
*
* This assumes the posted send buffers have already been cleaned
* and de-allocated
**/
static void i40e_free_adminq_asq(struct i40e_hw *hw)
{
struct i40e_virt_mem mem;
i40e_free_dma_mem(hw, &hw->aq.asq_mem);
hw->aq.asq_mem.va = NULL;
hw->aq.asq_mem.pa = 0;
mem.va = hw->aq.asq.details;
i40e_free_virt_mem(hw, &mem);
hw->aq.asq.details = NULL;
}
/**
* i40e_free_adminq_arq - Free Admin Queue receive rings
* @hw: pointer to the hardware structure
*
* This assumes the posted receive buffers have already been cleaned
* and de-allocated
**/
static void i40e_free_adminq_arq(struct i40e_hw *hw)
{
i40e_free_dma_mem(hw, &hw->aq.arq_mem);
hw->aq.arq_mem.va = NULL;
hw->aq.arq_mem.pa = 0;
}
/**
* i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_arq_bufs(struct i40e_hw *hw)
{
i40e_status ret_code;
struct i40e_aq_desc *desc;
struct i40e_virt_mem mem;
struct i40e_dma_mem *bi;
int i;
/* We'll be allocating the buffer info memory first, then we can
* allocate the mapped buffers for the event processing
*/
/* buffer_info structures do not need alignment */
ret_code = i40e_allocate_virt_mem(hw, &mem, (hw->aq.num_arq_entries *
sizeof(struct i40e_dma_mem)));
if (ret_code)
goto alloc_arq_bufs;
hw->aq.arq.r.arq_bi = (struct i40e_dma_mem *)mem.va;
/* allocate the mapped buffers */
for (i = 0; i < hw->aq.num_arq_entries; i++) {
bi = &hw->aq.arq.r.arq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
i40e_mem_arq_buf,
hw->aq.arq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
goto unwind_alloc_arq_bufs;
/* now configure the descriptors for use */
desc = I40E_ADMINQ_DESC(hw->aq.arq, i);
desc->flags = cpu_to_le16(I40E_AQ_FLAG_BUF);
if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
desc->flags |= cpu_to_le16(I40E_AQ_FLAG_LB);
desc->opcode = 0;
/* This is in accordance with Admin queue design, there is no
* register for buffer size configuration
*/
desc->datalen = cpu_to_le16((u16)bi->size);
desc->retval = 0;
desc->cookie_high = 0;
desc->cookie_low = 0;
desc->params.external.addr_high =
cpu_to_le32(upper_32_bits(bi->pa));
desc->params.external.addr_low =
cpu_to_le32(lower_32_bits(bi->pa));
desc->params.external.param0 = 0;
desc->params.external.param1 = 0;
}
alloc_arq_bufs:
return ret_code;
unwind_alloc_arq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--)
i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
mem.va = hw->aq.arq.r.arq_bi;
i40e_free_virt_mem(hw, &mem);
return ret_code;
}
/**
* i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_asq_bufs(struct i40e_hw *hw)
{
i40e_status ret_code;
struct i40e_virt_mem mem;
struct i40e_dma_mem *bi;
int i;
/* No mapped memory needed yet, just the buffer info structures */
ret_code = i40e_allocate_virt_mem(hw, &mem, (hw->aq.num_asq_entries *
sizeof(struct i40e_dma_mem)));
if (ret_code)
goto alloc_asq_bufs;
hw->aq.asq.r.asq_bi = (struct i40e_dma_mem *)mem.va;
/* allocate the mapped buffers */
for (i = 0; i < hw->aq.num_asq_entries; i++) {
bi = &hw->aq.asq.r.asq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
i40e_mem_asq_buf,
hw->aq.asq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
goto unwind_alloc_asq_bufs;
}
alloc_asq_bufs:
return ret_code;
unwind_alloc_asq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--)
i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
mem.va = hw->aq.asq.r.asq_bi;
i40e_free_virt_mem(hw, &mem);
return ret_code;
}
/**
* i40e_free_arq_bufs - Free receive queue buffer info elements
* @hw: pointer to the hardware structure
**/
static void i40e_free_arq_bufs(struct i40e_hw *hw)
{
struct i40e_virt_mem mem;
int i;
for (i = 0; i < hw->aq.num_arq_entries; i++)
i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
mem.va = hw->aq.arq.r.arq_bi;
i40e_free_virt_mem(hw, &mem);
}
/**
* i40e_free_asq_bufs - Free send queue buffer info elements
* @hw: pointer to the hardware structure
**/
static void i40e_free_asq_bufs(struct i40e_hw *hw)
{
struct i40e_virt_mem mem;
int i;
/* only unmap if the address is non-NULL */
for (i = 0; i < hw->aq.num_asq_entries; i++)
if (hw->aq.asq.r.asq_bi[i].pa)
i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
/* now free the buffer info list */
mem.va = hw->aq.asq.r.asq_bi;
i40e_free_virt_mem(hw, &mem);
}
/**
* i40e_config_asq_regs - configure ASQ registers
* @hw: pointer to the hardware structure
*
* Configure base address and length registers for the transmit queue
**/
static void i40e_config_asq_regs(struct i40e_hw *hw)
{
if (hw->mac.type == I40E_MAC_VF) {
/* configure the transmit queue */
wr32(hw, I40E_VF_ATQBAH1, upper_32_bits(hw->aq.asq.dma_addr));
wr32(hw, I40E_VF_ATQBAL1, lower_32_bits(hw->aq.asq.dma_addr));
wr32(hw, I40E_VF_ATQLEN1, (hw->aq.num_asq_entries |
I40E_VF_ATQLEN1_ATQENABLE_MASK));
} else {
/* configure the transmit queue */
wr32(hw, I40E_PF_ATQBAH, upper_32_bits(hw->aq.asq.dma_addr));
wr32(hw, I40E_PF_ATQBAL, lower_32_bits(hw->aq.asq.dma_addr));
wr32(hw, I40E_PF_ATQLEN, (hw->aq.num_asq_entries |
I40E_PF_ATQLEN_ATQENABLE_MASK));
}
}
/**
* i40e_config_arq_regs - ARQ register configuration
* @hw: pointer to the hardware structure
*
* Configure base address and length registers for the receive (event queue)
**/
static void i40e_config_arq_regs(struct i40e_hw *hw)
{
if (hw->mac.type == I40E_MAC_VF) {
/* configure the receive queue */
wr32(hw, I40E_VF_ARQBAH1, upper_32_bits(hw->aq.arq.dma_addr));
wr32(hw, I40E_VF_ARQBAL1, lower_32_bits(hw->aq.arq.dma_addr));
wr32(hw, I40E_VF_ARQLEN1, (hw->aq.num_arq_entries |
I40E_VF_ARQLEN1_ARQENABLE_MASK));
} else {
/* configure the receive queue */
wr32(hw, I40E_PF_ARQBAH, upper_32_bits(hw->aq.arq.dma_addr));
wr32(hw, I40E_PF_ARQBAL, lower_32_bits(hw->aq.arq.dma_addr));
wr32(hw, I40E_PF_ARQLEN, (hw->aq.num_arq_entries |
I40E_PF_ARQLEN_ARQENABLE_MASK));
}
/* Update tail in the HW to post pre-allocated buffers */
wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
}
/**
* i40e_init_asq - main initialization routine for ASQ
* @hw: pointer to the hardware structure
*
* This is the main initialization routine for the Admin Send Queue
* Prior to calling this function, drivers *MUST* set the following fields
* in the hw->aq structure:
* - hw->aq.num_asq_entries
* - hw->aq.asq_buf_size
*
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
static i40e_status i40e_init_asq(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
if (hw->aq.asq.count > 0) {
/* queue already initialized */
ret_code = I40E_ERR_NOT_READY;
goto init_adminq_exit;
}
/* verify input for valid configuration */
if ((hw->aq.num_asq_entries == 0) ||
(hw->aq.asq_buf_size == 0)) {
ret_code = I40E_ERR_CONFIG;
goto init_adminq_exit;
}
hw->aq.asq.next_to_use = 0;
hw->aq.asq.next_to_clean = 0;
hw->aq.asq.count = hw->aq.num_asq_entries;
/* allocate the ring memory */
ret_code = i40e_alloc_adminq_asq_ring(hw);
if (ret_code)
goto init_adminq_exit;
/* allocate buffers in the rings */
ret_code = i40e_alloc_asq_bufs(hw);
if (ret_code)
goto init_adminq_free_rings;
/* initialize base registers */
i40e_config_asq_regs(hw);
/* success! */
goto init_adminq_exit;
init_adminq_free_rings:
i40e_free_adminq_asq(hw);
init_adminq_exit:
return ret_code;
}
/**
* i40e_init_arq - initialize ARQ
* @hw: pointer to the hardware structure
*
* The main initialization routine for the Admin Receive (Event) Queue.
* Prior to calling this function, drivers *MUST* set the following fields
* in the hw->aq structure:
* - hw->aq.num_arq_entries
* - hw->aq.arq_buf_size
*
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
static i40e_status i40e_init_arq(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
if (hw->aq.arq.count > 0) {
/* queue already initialized */
ret_code = I40E_ERR_NOT_READY;
goto init_adminq_exit;
}
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
(hw->aq.arq_buf_size == 0)) {
ret_code = I40E_ERR_CONFIG;
goto init_adminq_exit;
}
hw->aq.arq.next_to_use = 0;
hw->aq.arq.next_to_clean = 0;
hw->aq.arq.count = hw->aq.num_arq_entries;
/* allocate the ring memory */
ret_code = i40e_alloc_adminq_arq_ring(hw);
if (ret_code)
goto init_adminq_exit;
/* allocate buffers in the rings */
ret_code = i40e_alloc_arq_bufs(hw);
if (ret_code)
goto init_adminq_free_rings;
/* initialize base registers */
i40e_config_arq_regs(hw);
/* success! */
goto init_adminq_exit;
init_adminq_free_rings:
i40e_free_adminq_arq(hw);
init_adminq_exit:
return ret_code;
}
/**
* i40e_shutdown_asq - shutdown the ASQ
* @hw: pointer to the hardware structure
*
* The main shutdown routine for the Admin Send Queue
**/
static i40e_status i40e_shutdown_asq(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
if (hw->aq.asq.count == 0)
return I40E_ERR_NOT_READY;
/* Stop firmware AdminQ processing */
if (hw->mac.type == I40E_MAC_VF)
wr32(hw, I40E_VF_ATQLEN1, 0);
else
wr32(hw, I40E_PF_ATQLEN, 0);
/* make sure lock is available */
mutex_lock(&hw->aq.asq_mutex);
hw->aq.asq.count = 0; /* to indicate uninitialized queue */
/* free ring buffers */
i40e_free_asq_bufs(hw);
/* free the ring descriptors */
i40e_free_adminq_asq(hw);
mutex_unlock(&hw->aq.asq_mutex);
return ret_code;
}
/**
* i40e_shutdown_arq - shutdown ARQ
* @hw: pointer to the hardware structure
*
* The main shutdown routine for the Admin Receive Queue
**/
static i40e_status i40e_shutdown_arq(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
if (hw->aq.arq.count == 0)
return I40E_ERR_NOT_READY;
/* Stop firmware AdminQ processing */
if (hw->mac.type == I40E_MAC_VF)
wr32(hw, I40E_VF_ARQLEN1, 0);
else
wr32(hw, I40E_PF_ARQLEN, 0);
/* make sure lock is available */
mutex_lock(&hw->aq.arq_mutex);
hw->aq.arq.count = 0; /* to indicate uninitialized queue */
/* free ring buffers */
i40e_free_arq_bufs(hw);
/* free the ring descriptors */
i40e_free_adminq_arq(hw);
mutex_unlock(&hw->aq.arq_mutex);
return ret_code;
}
/**
* i40e_init_adminq - main initialization routine for Admin Queue
* @hw: pointer to the hardware structure
*
* Prior to calling this function, drivers *MUST* set the following fields
* in the hw->aq structure:
* - hw->aq.num_asq_entries
* - hw->aq.num_arq_entries
* - hw->aq.arq_buf_size
* - hw->aq.asq_buf_size
**/
i40e_status i40e_init_adminq(struct i40e_hw *hw)
{
u16 eetrack_lo, eetrack_hi;
i40e_status ret_code;
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
(hw->aq.num_asq_entries == 0) ||
(hw->aq.arq_buf_size == 0) ||
(hw->aq.asq_buf_size == 0)) {
ret_code = I40E_ERR_CONFIG;
goto init_adminq_exit;
}
/* initialize locks */
mutex_init(&hw->aq.asq_mutex);
mutex_init(&hw->aq.arq_mutex);
/* Set up register offsets */
i40e_adminq_init_regs(hw);
/* allocate the ASQ */
ret_code = i40e_init_asq(hw);
if (ret_code)
goto init_adminq_destroy_locks;
/* allocate the ARQ */
ret_code = i40e_init_arq(hw);
if (ret_code)
goto init_adminq_free_asq;
ret_code = i40e_aq_get_firmware_version(hw,
&hw->aq.fw_maj_ver, &hw->aq.fw_min_ver,
&hw->aq.api_maj_ver, &hw->aq.api_min_ver,
NULL);
if (ret_code)
goto init_adminq_free_arq;
if (hw->aq.api_maj_ver != I40E_FW_API_VERSION_MAJOR ||
hw->aq.api_min_ver != I40E_FW_API_VERSION_MINOR) {
ret_code = I40E_ERR_FIRMWARE_API_VERSION;
goto init_adminq_free_arq;
}
i40e_read_nvm_word(hw, I40E_SR_NVM_IMAGE_VERSION, &hw->nvm.version);
i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_LO, &eetrack_lo);
i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_HI, &eetrack_hi);
hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
ret_code = i40e_aq_set_hmc_resource_profile(hw,
I40E_HMC_PROFILE_DEFAULT,
0,
NULL);
ret_code = 0;
/* success! */
goto init_adminq_exit;
init_adminq_free_arq:
i40e_shutdown_arq(hw);
init_adminq_free_asq:
i40e_shutdown_asq(hw);
init_adminq_destroy_locks:
init_adminq_exit:
return ret_code;
}
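/* Illustrative sketch (hypothetical caller, not from the i40e sources):
 * before calling i40e_init_adminq() the driver must size both queues, as
 * noted in the kernel-doc above.  The values here mirror the defaults a
 * PF driver that includes i40e.h could use (I40E_AQ_LEN,
 * I40E_MAX_AQ_BUF_SIZE).
 */
static i40e_status example_adminq_bringup(struct i40e_hw *hw)
{
	hw->aq.num_asq_entries = I40E_AQ_LEN;
	hw->aq.num_arq_entries = I40E_AQ_LEN;
	hw->aq.asq_buf_size = I40E_MAX_AQ_BUF_SIZE;
	hw->aq.arq_buf_size = I40E_MAX_AQ_BUF_SIZE;

	return i40e_init_adminq(hw);
}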
/**
* i40e_shutdown_adminq - shutdown routine for the Admin Queue
* @hw: pointer to the hardware structure
**/
i40e_status i40e_shutdown_adminq(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
i40e_shutdown_asq(hw);
i40e_shutdown_arq(hw);
/* destroy the locks */
return ret_code;
}
/**
* i40e_clean_asq - cleans Admin send queue
* @hw: pointer to the hardware structure
*
* returns the number of free desc
**/
static u16 i40e_clean_asq(struct i40e_hw *hw)
{
struct i40e_adminq_ring *asq = &(hw->aq.asq);
struct i40e_asq_cmd_details *details;
u16 ntc = asq->next_to_clean;
struct i40e_aq_desc desc_cb;
struct i40e_aq_desc *desc;
desc = I40E_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
while (rd32(hw, hw->aq.asq.head) != ntc) {
if (details->callback) {
I40E_ADMINQ_CALLBACK cb_func =
(I40E_ADMINQ_CALLBACK)details->callback;
desc_cb = *desc;
cb_func(hw, &desc_cb);
}
memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
memset((void *)details, 0,
sizeof(struct i40e_asq_cmd_details));
ntc++;
if (ntc == asq->count)
ntc = 0;
desc = I40E_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
}
asq->next_to_clean = ntc;
return I40E_DESC_UNUSED(asq);
}
/**
* i40e_asq_done - check if FW has processed the Admin Send Queue
* @hw: pointer to the hw struct
*
* Returns true if the firmware has processed all descriptors on the
* admin send queue. Returns false if there are still requests pending.
**/
bool i40e_asq_done(struct i40e_hw *hw)
{
/* AQ designers suggest use of head for better
* timing reliability than DD bit
*/
return (rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use);
}
/**
* i40e_asq_send_command - send command to Admin Queue
* @hw: pointer to the hw struct
* @desc: prefilled descriptor describing the command (non DMA mem)
* @buff: buffer to use for indirect commands
* @buff_size: size of buffer for indirect commands
* @cmd_details: pointer to command details structure
*
* This is the main send command driver routine for the Admin Queue send
* queue. It runs the queue, cleans the queue, etc.
**/
i40e_status i40e_asq_send_command(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details)
{
i40e_status status = 0;
struct i40e_dma_mem *dma_buff = NULL;
struct i40e_asq_cmd_details *details;
struct i40e_aq_desc *desc_on_ring;
bool cmd_completed = false;
u16 retval = 0;
if (hw->aq.asq.count == 0) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
"AQTX: Admin queue not initialized.\n");
status = I40E_ERR_QUEUE_EMPTY;
goto asq_send_command_exit;
}
details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
if (cmd_details) {
memcpy(details, cmd_details,
sizeof(struct i40e_asq_cmd_details));
/* If the cmd_details are defined copy the cookie. The
* cpu_to_le32 is not needed here because the data is ignored
* by the FW, only used by the driver
*/
if (details->cookie) {
desc->cookie_high =
cpu_to_le32(upper_32_bits(details->cookie));
desc->cookie_low =
cpu_to_le32(lower_32_bits(details->cookie));
}
} else {
memset(details, 0, sizeof(struct i40e_asq_cmd_details));
}
/* clear requested flags and then set additional flags if defined */
desc->flags &= ~cpu_to_le16(details->flags_dis);
desc->flags |= cpu_to_le16(details->flags_ena);
mutex_lock(&hw->aq.asq_mutex);
if (buff_size > hw->aq.asq_buf_size) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQTX: Invalid buffer size: %d.\n",
buff_size);
status = I40E_ERR_INVALID_SIZE;
goto asq_send_command_error;
}
if (details->postpone && !details->async) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQTX: Async flag not set along with postpone flag");
status = I40E_ERR_PARAM;
goto asq_send_command_error;
}
/* call clean and check queue available function to reclaim the
* descriptors that were processed by FW, the function returns the
* number of desc available
*/
/* the clean function called here could be called in a separate thread
* in case of asynchronous completions
*/
if (i40e_clean_asq(hw) == 0) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQTX: Error queue is full.\n");
status = I40E_ERR_ADMIN_QUEUE_FULL;
goto asq_send_command_error;
}
/* initialize the temp desc pointer with the right desc */
desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
/* if the desc is available copy the temp desc to the right place */
memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc));
/* if buff is not NULL assume indirect command */
if (buff != NULL) {
dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
/* copy the user buff into the respective DMA buff */
memcpy(dma_buff->va, buff, buff_size);
desc_on_ring->datalen = cpu_to_le16(buff_size);
/* Update the address values in the desc with the pa value
* for respective buffer
*/
desc_on_ring->params.external.addr_high =
cpu_to_le32(upper_32_bits(dma_buff->pa));
desc_on_ring->params.external.addr_low =
cpu_to_le32(lower_32_bits(dma_buff->pa));
}
/* bump the tail */
i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring, buff);
(hw->aq.asq.next_to_use)++;
if (hw->aq.asq.next_to_use == hw->aq.asq.count)
hw->aq.asq.next_to_use = 0;
if (!details->postpone)
wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
/* if cmd_details are not defined or async flag is not set,
* we need to wait for desc write back
*/
if (!details->async && !details->postpone) {
u32 total_delay = 0;
u32 delay_len = 10;
do {
/* AQ designers suggest use of head for better
* timing reliability than DD bit
*/
if (i40e_asq_done(hw))
break;
/* ugh! delay while spin_lock */
udelay(delay_len);
total_delay += delay_len;
} while (total_delay < I40E_ASQ_CMD_TIMEOUT);
}
/* if ready, copy the desc back to temp */
if (i40e_asq_done(hw)) {
memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc));
if (buff != NULL)
memcpy(buff, dma_buff->va, buff_size);
retval = le16_to_cpu(desc->retval);
if (retval != 0) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQTX: Command completed with error 0x%X.\n",
retval);
/* strip off FW internal code */
retval &= 0xff;
}
cmd_completed = true;
if ((enum i40e_admin_queue_err)retval == I40E_AQ_RC_OK)
status = 0;
else
status = I40E_ERR_ADMIN_QUEUE_ERROR;
hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval;
}
/* update the error if time out occurred */
if ((!cmd_completed) &&
(!details->async && !details->postpone)) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQTX: Writeback timeout.\n");
status = I40E_ERR_ADMIN_QUEUE_TIMEOUT;
}
asq_send_command_error:
mutex_unlock(&hw->aq.asq_mutex);
asq_send_command_exit:
return status;
}
/**
* i40e_fill_default_direct_cmd_desc - AQ descriptor helper function
* @desc: pointer to the temp descriptor (non DMA mem)
* @opcode: the opcode can be used to decide which flags to turn off or on
*
* Fill the desc with default values
**/
void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode)
{
/* zero out the desc */
memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
desc->opcode = cpu_to_le16(opcode);
desc->flags = cpu_to_le16(I40E_AQ_FLAG_EI | I40E_AQ_FLAG_SI);
}
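/* Illustrative sketch (hypothetical helper, not from the i40e sources):
 * a direct (bufferless) command is built with
 * i40e_fill_default_direct_cmd_desc() and handed to
 * i40e_asq_send_command(); "opcode" stands for any valid admin queue
 * opcode from i40e_adminq_cmd.h.
 */
static i40e_status example_send_direct_cmd(struct i40e_hw *hw, u16 opcode)
{
	struct i40e_aq_desc desc;

	i40e_fill_default_direct_cmd_desc(&desc, opcode);

	/* NULL buffer and zero length: direct command, no DMA payload */
	return i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
}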
/**
* i40e_clean_arq_element
* @hw: pointer to the hw struct
* @e: event info from the receive descriptor, includes any buffers
* @pending: number of events that could be left to process
*
* This function cleans one Admin Receive Queue element and returns
* the contents through e. It can also return how many events are
* left to process through 'pending'
**/
i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
struct i40e_arq_event_info *e,
u16 *pending)
{
i40e_status ret_code = 0;
u16 ntc = hw->aq.arq.next_to_clean;
struct i40e_aq_desc *desc;
struct i40e_dma_mem *bi;
u16 desc_idx;
u16 datalen;
u16 flags;
u16 ntu;
/* take the lock before we start messing with the ring */
mutex_lock(&hw->aq.arq_mutex);
/* set next_to_use to head */
ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK);
if (ntu == ntc) {
/* nothing to do - shouldn't need to update ring's values */
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQRX: Queue is empty.\n");
ret_code = I40E_ERR_ADMIN_QUEUE_NO_WORK;
goto clean_arq_element_out;
}
/* now clean the next descriptor */
desc = I40E_ADMINQ_DESC(hw->aq.arq, ntc);
desc_idx = ntc;
i40e_debug_aq(hw,
I40E_DEBUG_AQ_COMMAND,
(void *)desc,
hw->aq.arq.r.arq_bi[desc_idx].va);
flags = le16_to_cpu(desc->flags);
if (flags & I40E_AQ_FLAG_ERR) {
ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
hw->aq.arq_last_status =
(enum i40e_admin_queue_err)le16_to_cpu(desc->retval);
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
"AQRX: Event received with error 0x%X.\n",
hw->aq.arq_last_status);
} else {
memcpy(&e->desc, desc, sizeof(struct i40e_aq_desc));
datalen = le16_to_cpu(desc->datalen);
e->msg_size = min(datalen, e->msg_size);
if (e->msg_buf != NULL && (e->msg_size != 0))
memcpy(e->msg_buf, hw->aq.arq.r.arq_bi[desc_idx].va,
e->msg_size);
}
/* Restore the original datalen and buffer address in the desc,
* FW updates datalen to indicate the event message
* size
*/
bi = &hw->aq.arq.r.arq_bi[ntc];
desc->datalen = cpu_to_le16((u16)bi->size);
desc->params.external.addr_high = cpu_to_le32(upper_32_bits(bi->pa));
desc->params.external.addr_low = cpu_to_le32(lower_32_bits(bi->pa));
/* set tail = the last cleaned desc index. */
wr32(hw, hw->aq.arq.tail, ntc);
/* ntc is updated to tail + 1 */
ntc++;
if (ntc == hw->aq.num_arq_entries)
ntc = 0;
hw->aq.arq.next_to_clean = ntc;
hw->aq.arq.next_to_use = ntu;
clean_arq_element_out:
/* Set pending if needed, unlock and return */
if (pending != NULL)
*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
mutex_unlock(&hw->aq.arq_mutex);
return ret_code;
}
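/* Illustrative sketch (hypothetical helper, not from the i40e sources):
 * draining pending ARQ events with i40e_clean_arq_element().  "buf" and
 * "buf_size" are a caller-provided message buffer; a real caller would
 * dispatch on the opcode found in event.desc before looping.
 */
static void example_drain_arq(struct i40e_hw *hw, u8 *buf, u16 buf_size)
{
	struct i40e_arq_event_info event;
	u16 pending = 0;

	do {
		event.msg_size = buf_size;	/* capacity in, bytes copied out */
		event.msg_buf = buf;
		if (i40e_clean_arq_element(hw, &event, &pending))
			break;			/* queue empty or error */
	} while (pending);
}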
void i40e_resume_aq(struct i40e_hw *hw)
{
u32 reg = 0;
/* Registers are reset after PF reset */
hw->aq.asq.next_to_use = 0;
hw->aq.asq.next_to_clean = 0;
i40e_config_asq_regs(hw);
reg = hw->aq.num_asq_entries;
if (hw->mac.type == I40E_MAC_VF) {
reg |= I40E_VF_ATQLEN_ATQENABLE_MASK;
wr32(hw, I40E_VF_ATQLEN1, reg);
} else {
reg |= I40E_PF_ATQLEN_ATQENABLE_MASK;
wr32(hw, I40E_PF_ATQLEN, reg);
}
hw->aq.arq.next_to_use = 0;
hw->aq.arq.next_to_clean = 0;
i40e_config_arq_regs(hw);
reg = hw->aq.num_arq_entries;
if (hw->mac.type == I40E_MAC_VF) {
reg |= I40E_VF_ARQLEN1_ARQENABLE_MASK;
wr32(hw, I40E_VF_ARQLEN1, reg);
} else {
reg |= I40E_PF_ARQLEN_ARQENABLE_MASK;
wr32(hw, I40E_PF_ARQLEN, reg);
}
}

View File

@@ -0,0 +1,112 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_ADMINQ_H_
#define _I40E_ADMINQ_H_
#include "i40e_osdep.h"
#include "i40e_adminq_cmd.h"
#define I40E_ADMINQ_DESC(R, i) \
(&(((struct i40e_aq_desc *)((R).desc))[i]))
#define I40E_ADMINQ_DESC_ALIGNMENT 4096
struct i40e_adminq_ring {
void *desc; /* Descriptor ring memory */
void *details; /* ASQ details */
union {
struct i40e_dma_mem *asq_bi;
struct i40e_dma_mem *arq_bi;
} r;
u64 dma_addr; /* Physical address of the ring */
u16 count; /* Number of descriptors */
u16 rx_buf_len; /* Admin Receive Queue buffer length */
/* used for interrupt processing */
u16 next_to_use;
u16 next_to_clean;
/* used for queue tracking */
u32 head;
u32 tail;
};
/* ASQ transaction details */
struct i40e_asq_cmd_details {
void *callback; /* cast from type I40E_ADMINQ_CALLBACK */
u64 cookie;
u16 flags_ena;
u16 flags_dis;
bool async;
bool postpone;
};
#define I40E_ADMINQ_DETAILS(R, i) \
(&(((struct i40e_asq_cmd_details *)((R).details))[i]))
/* ARQ event information */
struct i40e_arq_event_info {
struct i40e_aq_desc desc;
u16 msg_size;
u8 *msg_buf;
};
/* Admin Queue information */
struct i40e_adminq_info {
struct i40e_adminq_ring arq; /* receive queue */
struct i40e_adminq_ring asq; /* send queue */
u16 num_arq_entries; /* receive queue depth */
u16 num_asq_entries; /* send queue depth */
u16 arq_buf_size; /* receive queue buffer size */
u16 asq_buf_size; /* send queue buffer size */
u16 fw_maj_ver; /* firmware major version */
u16 fw_min_ver; /* firmware minor version */
u16 api_maj_ver; /* api major version */
u16 api_min_ver; /* api minor version */
struct mutex asq_mutex; /* Send queue lock */
struct mutex arq_mutex; /* Receive queue lock */
struct i40e_dma_mem asq_mem; /* send queue dynamic memory */
struct i40e_dma_mem arq_mem; /* receive queue dynamic memory */
/* last status values on send and receive queues */
enum i40e_admin_queue_err asq_last_status;
enum i40e_admin_queue_err arq_last_status;
};
/* general information */
#define I40E_AQ_LARGE_BUF 512
#define I40E_ASQ_CMD_TIMEOUT 100000 /* usecs */
void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
#endif /* _I40E_ADMINQ_H_ */

File diff suppressed because it is too large.


@@ -0,0 +1,59 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_ALLOC_H_
#define _I40E_ALLOC_H_
struct i40e_hw;
/* Memory allocation types */
enum i40e_memory_type {
i40e_mem_arq_buf = 0, /* ARQ indirect command buffer */
i40e_mem_asq_buf = 1,
i40e_mem_atq_buf = 2, /* ATQ indirect command buffer */
i40e_mem_arq_ring = 3, /* ARQ descriptor ring */
i40e_mem_atq_ring = 4, /* ATQ descriptor ring */
i40e_mem_pd = 5, /* Page Descriptor */
i40e_mem_bp = 6, /* Backing Page - 4KB */
i40e_mem_bp_jumbo = 7, /* Backing Page - > 4KB */
i40e_mem_reserved
};
/* prototype for functions used for dynamic memory allocation */
i40e_status i40e_allocate_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem,
enum i40e_memory_type type,
u64 size, u32 alignment);
i40e_status i40e_free_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem);
i40e_status i40e_allocate_virt_mem(struct i40e_hw *hw,
struct i40e_virt_mem *mem,
u32 size);
i40e_status i40e_free_virt_mem(struct i40e_hw *hw,
struct i40e_virt_mem *mem);
#endif /* _I40E_ALLOC_H_ */
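A hedged usage sketch of the allocation prototypes above; the example_ helpers are hypothetical, not driver code, and the alignment constant is assumed to come from i40e_adminq.h:

/* Illustrative only -- hypothetical helpers, not driver code.  Assumes
 * i40e_adminq.h is included for I40E_ADMINQ_DESC_ALIGNMENT; sizes are
 * examples.
 */
static i40e_status example_alloc_arq_ring(struct i40e_hw *hw,
                                          struct i40e_dma_mem *ring_mem,
                                          u16 num_entries, u16 desc_size)
{
        return i40e_allocate_dma_mem(hw, ring_mem, i40e_mem_arq_ring,
                                     (u64)num_entries * desc_size,
                                     I40E_ADMINQ_DESC_ALIGNMENT);
}

static i40e_status example_free_arq_ring(struct i40e_hw *hw,
                                         struct i40e_dma_mem *ring_mem)
{
        return i40e_free_dma_mem(hw, ring_mem);
}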

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,131 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#include "i40e_diag.h"
#include "i40e_prototype.h"
/**
* i40e_diag_reg_pattern_test
* @hw: pointer to the hw struct
* @reg: reg to be tested
* @mask: bits to be touched
**/
static i40e_status i40e_diag_reg_pattern_test(struct i40e_hw *hw,
u32 reg, u32 mask)
{
const u32 patterns[] = {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF};
u32 pat, val, orig_val;
int i;
orig_val = rd32(hw, reg);
for (i = 0; i < ARRAY_SIZE(patterns); i++) {
pat = patterns[i];
wr32(hw, reg, (pat & mask));
val = rd32(hw, reg);
if ((val & mask) != (pat & mask)) {
i40e_debug(hw, I40E_DEBUG_DIAG,
"%s: reg pattern test failed - reg 0x%08x pat 0x%08x val 0x%08x\n",
__func__, reg, pat, val);
return I40E_ERR_DIAG_TEST_FAILED;
}
}
wr32(hw, reg, orig_val);
val = rd32(hw, reg);
if (val != orig_val) {
i40e_debug(hw, I40E_DEBUG_DIAG,
"%s: reg restore test failed - reg 0x%08x orig_val 0x%08x val 0x%08x\n",
__func__, reg, orig_val, val);
return I40E_ERR_DIAG_TEST_FAILED;
}
return 0;
}
struct i40e_diag_reg_test_info i40e_reg_list[] = {
/* offset mask elements stride */
{I40E_QTX_CTL(0), 0x0000FFBF, 64, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
{I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)},
{I40E_PFINT_ITRN(0, 0), 0x00000FFF, 64, I40E_PFINT_ITRN(0, 1) - I40E_PFINT_ITRN(0, 0)},
{I40E_PFINT_ITRN(1, 0), 0x00000FFF, 64, I40E_PFINT_ITRN(1, 1) - I40E_PFINT_ITRN(1, 0)},
{I40E_PFINT_ITRN(2, 0), 0x00000FFF, 64, I40E_PFINT_ITRN(2, 1) - I40E_PFINT_ITRN(2, 0)},
{I40E_PFINT_STAT_CTL0, 0x0000000C, 1, 0},
{I40E_PFINT_LNKLST0, 0x00001FFF, 1, 0},
{I40E_PFINT_LNKLSTN(0), 0x000007FF, 511, I40E_PFINT_LNKLSTN(1) - I40E_PFINT_LNKLSTN(0)},
{I40E_QINT_TQCTL(0), 0x000000FF, I40E_QINT_TQCTL_MAX_INDEX + 1, I40E_QINT_TQCTL(1) - I40E_QINT_TQCTL(0)},
{I40E_QINT_RQCTL(0), 0x000000FF, I40E_QINT_RQCTL_MAX_INDEX + 1, I40E_QINT_RQCTL(1) - I40E_QINT_RQCTL(0)},
{I40E_PFINT_ICR0_ENA, 0xF7F20000, 1, 0},
{ 0 }
};
/**
* i40e_diag_reg_test
* @hw: pointer to the hw struct
*
* Perform registers diagnostic test
**/
i40e_status i40e_diag_reg_test(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
u32 reg, mask;
u32 i, j;
for (i = 0; (i40e_reg_list[i].offset != 0) && !ret_code; i++) {
mask = i40e_reg_list[i].mask;
for (j = 0; (j < i40e_reg_list[i].elements) && !ret_code; j++) {
reg = i40e_reg_list[i].offset +
(j * i40e_reg_list[i].stride);
ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
}
}
return ret_code;
}
/**
* i40e_diag_eeprom_test
* @hw: pointer to the hw struct
*
* Perform EEPROM diagnostic test
**/
i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw)
{
i40e_status ret_code;
u16 reg_val;
/* read NVM control word and if NVM valid, validate EEPROM checksum */
ret_code = i40e_read_nvm_word(hw, I40E_SR_NVM_CONTROL_WORD, &reg_val);
if ((!ret_code) &&
((reg_val & I40E_SR_CONTROL_WORD_1_MASK) ==
(0x01 << I40E_SR_CONTROL_WORD_1_SHIFT))) {
ret_code = i40e_validate_nvm_checksum(hw, NULL);
} else {
ret_code = I40E_ERR_DIAG_TEST_FAILED;
}
return ret_code;
}
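A hedged sketch of how a caller, for instance an ethtool self-test handler elsewhere in the driver, might consume the two diagnostic entry points above; the helper name and result layout are assumptions for illustration:

/* Illustrative only -- a hypothetical caller, e.g. an ethtool self-test
 * handler elsewhere in the driver; the result layout is an assumption.
 */
static void example_run_diags(struct i40e_hw *hw, u64 *data)
{
        data[0] = (i40e_diag_reg_test(hw) != 0);
        data[1] = (i40e_diag_eeprom_test(hw) != 0);
}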

View File

@ -0,0 +1,52 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_DIAG_H_
#define _I40E_DIAG_H_
#include "i40e_type.h"
enum i40e_lb_mode {
I40E_LB_MODE_NONE = 0,
I40E_LB_MODE_PHY_LOCAL,
I40E_LB_MODE_PHY_REMOTE,
I40E_LB_MODE_MAC_LOCAL,
};
struct i40e_diag_reg_test_info {
u32 offset; /* the base register */
u32 mask; /* bits that can be tested */
u32 elements; /* number of elements if array */
u32 stride; /* bytes between each element */
};
extern struct i40e_diag_reg_test_info i40e_reg_list[];
i40e_status i40e_diag_reg_test(struct i40e_hw *hw);
i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw);
#endif /* _I40E_DIAG_H_ */

File diff suppressed because it is too large

View File

@ -0,0 +1,366 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#include "i40e_osdep.h"
#include "i40e_register.h"
#include "i40e_status.h"
#include "i40e_alloc.h"
#include "i40e_hmc.h"
#include "i40e_type.h"
/**
* i40e_add_sd_table_entry - Adds a segment descriptor to the table
* @hw: pointer to our hw struct
* @hmc_info: pointer to the HMC configuration information struct
* @sd_index: segment descriptor index to manipulate
* @type: what type of segment descriptor we're manipulating
* @direct_mode_sz: size to alloc in direct mode
**/
i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 sd_index,
enum i40e_sd_entry_type type,
u64 direct_mode_sz)
{
enum i40e_memory_type mem_type __attribute__((unused));
i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
bool dma_mem_alloc_done = false;
struct i40e_dma_mem mem;
u64 alloc_len;
if (NULL == hmc_info->sd_table.sd_entry) {
ret_code = I40E_ERR_BAD_PTR;
hw_dbg(hw, "i40e_add_sd_table_entry: bad sd_entry\n");
goto exit;
}
if (sd_index >= hmc_info->sd_table.sd_cnt) {
ret_code = I40E_ERR_INVALID_SD_INDEX;
hw_dbg(hw, "i40e_add_sd_table_entry: bad sd_index\n");
goto exit;
}
sd_entry = &hmc_info->sd_table.sd_entry[sd_index];
if (!sd_entry->valid) {
if (I40E_SD_TYPE_PAGED == type) {
mem_type = i40e_mem_pd;
alloc_len = I40E_HMC_PAGED_BP_SIZE;
} else {
mem_type = i40e_mem_bp_jumbo;
alloc_len = direct_mode_sz;
}
/* allocate a 4K pd page or 2M backing page */
ret_code = i40e_allocate_dma_mem(hw, &mem, mem_type, alloc_len,
I40E_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
goto exit;
dma_mem_alloc_done = true;
if (I40E_SD_TYPE_PAGED == type) {
ret_code = i40e_allocate_virt_mem(hw,
&sd_entry->u.pd_table.pd_entry_virt_mem,
sizeof(struct i40e_hmc_pd_entry) * 512);
if (ret_code)
goto exit;
sd_entry->u.pd_table.pd_entry =
(struct i40e_hmc_pd_entry *)
sd_entry->u.pd_table.pd_entry_virt_mem.va;
memcpy(&sd_entry->u.pd_table.pd_page_addr, &mem,
sizeof(struct i40e_dma_mem));
} else {
memcpy(&sd_entry->u.bp.addr, &mem,
sizeof(struct i40e_dma_mem));
sd_entry->u.bp.sd_pd_index = sd_index;
}
/* initialize the sd entry */
hmc_info->sd_table.sd_entry[sd_index].entry_type = type;
/* increment the ref count */
I40E_INC_SD_REFCNT(&hmc_info->sd_table);
}
/* Increment backing page reference count */
if (I40E_SD_TYPE_DIRECT == sd_entry->entry_type)
I40E_INC_BP_REFCNT(&sd_entry->u.bp);
exit:
if (ret_code)
if (dma_mem_alloc_done)
i40e_free_dma_mem(hw, &mem);
return ret_code;
}
/**
* i40e_add_pd_table_entry - Adds page descriptor to the specified table
* @hw: pointer to our HW structure
* @hmc_info: pointer to the HMC configuration information structure
* @pd_index: which page descriptor index to manipulate
*
* This function:
* 1. Initializes the pd entry
* 2. Adds pd_entry in the pd_table
* 3. Marks the entry valid in the i40e_hmc_pd_entry structure
* 4. Initializes the pd_entry's ref count to 1
* assumptions:
* 1. The memory for the pd should be pinned down, physically contiguous,
* aligned on a 4K boundary, and zeroed.
* 2. It should be 4K in size.
**/
i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 pd_index)
{
i40e_status ret_code = 0;
struct i40e_hmc_pd_table *pd_table;
struct i40e_hmc_pd_entry *pd_entry;
struct i40e_dma_mem mem;
u32 sd_idx, rel_pd_idx;
u64 *pd_addr;
u64 page_desc;
if (pd_index / I40E_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt) {
ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX;
hw_dbg(hw, "i40e_add_pd_table_entry: bad pd_index\n");
goto exit;
}
/* find corresponding sd */
sd_idx = (pd_index / I40E_HMC_PD_CNT_IN_SD);
if (I40E_SD_TYPE_PAGED !=
hmc_info->sd_table.sd_entry[sd_idx].entry_type)
goto exit;
rel_pd_idx = (pd_index % I40E_HMC_PD_CNT_IN_SD);
pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
pd_entry = &pd_table->pd_entry[rel_pd_idx];
if (!pd_entry->valid) {
/* allocate a 4K backing page */
ret_code = i40e_allocate_dma_mem(hw, &mem, i40e_mem_bp,
I40E_HMC_PAGED_BP_SIZE,
I40E_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
goto exit;
memcpy(&pd_entry->bp.addr, &mem, sizeof(struct i40e_dma_mem));
pd_entry->bp.sd_pd_index = pd_index;
pd_entry->bp.entry_type = I40E_SD_TYPE_PAGED;
/* Set page address and valid bit */
page_desc = mem.pa | 0x1;
pd_addr = (u64 *)pd_table->pd_page_addr.va;
pd_addr += rel_pd_idx;
/* Add the backing page physical address in the pd entry */
memcpy(pd_addr, &page_desc, sizeof(u64));
pd_entry->sd_index = sd_idx;
pd_entry->valid = true;
I40E_INC_PD_REFCNT(pd_table);
}
I40E_INC_BP_REFCNT(&pd_entry->bp);
exit:
return ret_code;
}
/**
* i40e_remove_pd_bp - remove a backing page from a page descriptor
* @hw: pointer to our HW structure
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
* @is_pf: distinguishes a VF from a PF
*
* This function:
* 1. Marks the entry in the pd table (for paged address mode) or in the sd
* table (for direct address mode) invalid.
* 2. Writes to register PMPDINV to invalidate the backing page in FV cache
* 3. Decrements the ref count for the pd_entry
* assumptions:
* 1. Caller can deallocate the memory used by backing storage after this
* function returns.
**/
i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf)
{
i40e_status ret_code = 0;
struct i40e_hmc_pd_entry *pd_entry;
struct i40e_hmc_pd_table *pd_table;
struct i40e_hmc_sd_entry *sd_entry;
u32 sd_idx, rel_pd_idx;
u64 *pd_addr;
/* calculate index */
sd_idx = idx / I40E_HMC_PD_CNT_IN_SD;
rel_pd_idx = idx % I40E_HMC_PD_CNT_IN_SD;
if (sd_idx >= hmc_info->sd_table.sd_cnt) {
ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX;
hw_dbg(hw, "i40e_remove_pd_bp: bad idx\n");
goto exit;
}
sd_entry = &hmc_info->sd_table.sd_entry[sd_idx];
if (I40E_SD_TYPE_PAGED != sd_entry->entry_type) {
ret_code = I40E_ERR_INVALID_SD_TYPE;
hw_dbg(hw, "i40e_remove_pd_bp: wrong sd_entry type\n");
goto exit;
}
/* get the entry and decrease its ref counter */
pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
pd_entry = &pd_table->pd_entry[rel_pd_idx];
I40E_DEC_BP_REFCNT(&pd_entry->bp);
if (pd_entry->bp.ref_cnt)
goto exit;
/* mark the entry invalid */
pd_entry->valid = false;
I40E_DEC_PD_REFCNT(pd_table);
pd_addr = (u64 *)pd_table->pd_page_addr.va;
pd_addr += rel_pd_idx;
memset(pd_addr, 0, sizeof(u64));
if (is_pf)
I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, idx);
else
I40E_INVALIDATE_VF_HMC_PD(hw, sd_idx, idx, hmc_info->hmc_fn_id);
/* free memory here */
ret_code = i40e_free_dma_mem(hw, &(pd_entry->bp.addr));
if (ret_code)
goto exit;
if (!pd_table->ref_cnt)
i40e_free_virt_mem(hw, &pd_table->pd_entry_virt_mem);
exit:
return ret_code;
}
/**
* i40e_prep_remove_sd_bp - Prepares to remove a backing page from a sd entry
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
**/
i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
u32 idx)
{
i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
/* get the entry and decrease its ref counter */
sd_entry = &hmc_info->sd_table.sd_entry[idx];
I40E_DEC_BP_REFCNT(&sd_entry->u.bp);
if (sd_entry->u.bp.ref_cnt) {
ret_code = I40E_ERR_NOT_READY;
goto exit;
}
I40E_DEC_SD_REFCNT(&hmc_info->sd_table);
/* mark the entry invalid */
sd_entry->valid = false;
exit:
return ret_code;
}
/**
* i40e_remove_sd_bp_new - Removes a backing page from a segment descriptor
* @hw: pointer to our hw struct
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
* @is_pf: used to distinguish between VF and PF
**/
i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf)
{
struct i40e_hmc_sd_entry *sd_entry;
i40e_status ret_code = 0;
/* get the entry and decrease its ref counter */
sd_entry = &hmc_info->sd_table.sd_entry[idx];
if (is_pf) {
I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_DIRECT);
} else {
ret_code = I40E_NOT_SUPPORTED;
goto exit;
}
ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.bp.addr));
if (ret_code)
goto exit;
exit:
return ret_code;
}
/**
* i40e_prep_remove_pd_page - Prepares to remove a PD page from sd entry.
* @hmc_info: pointer to the HMC configuration information structure
* @idx: segment descriptor index to find the relevant page descriptor
**/
i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
u32 idx)
{
i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
sd_entry = &hmc_info->sd_table.sd_entry[idx];
if (sd_entry->u.pd_table.ref_cnt) {
ret_code = I40E_ERR_NOT_READY;
goto exit;
}
/* mark the entry invalid */
sd_entry->valid = false;
I40E_DEC_SD_REFCNT(&hmc_info->sd_table);
exit:
return ret_code;
}
/**
* i40e_remove_pd_page_new - Removes a PD page from sd entry.
* @hw: pointer to our hw struct
* @hmc_info: pointer to the HMC configuration information structure
* @idx: segment descriptor index to find the relevant page descriptor
* @is_pf: used to distinguish between VF and PF
**/
i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf)
{
i40e_status ret_code = 0;
struct i40e_hmc_sd_entry *sd_entry;
sd_entry = &hmc_info->sd_table.sd_entry[idx];
if (is_pf) {
I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_PAGED);
} else {
ret_code = I40E_NOT_SUPPORTED;
goto exit;
}
/* free memory here */
ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.pd_table.pd_page_addr));
if (ret_code)
goto exit;
exit:
return ret_code;
}
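To make the ordering of the two add paths above concrete, here is a hedged sketch (not driver code) of backing one paged-mode segment descriptor and then populating the page descriptors it covers; error unwinding is omitted:

/* Illustrative only -- a hypothetical helper, not driver code.  Shows the
 * expected ordering: create the paged-mode SD entry first, then populate
 * the page descriptors it covers.  Error unwinding is omitted.
 */
static i40e_status example_back_one_paged_sd(struct i40e_hw *hw,
                                             struct i40e_hmc_info *hmc_info,
                                             u32 sd_index)
{
        i40e_status ret;
        u32 i;

        ret = i40e_add_sd_table_entry(hw, hmc_info, sd_index,
                                      I40E_SD_TYPE_PAGED,
                                      I40E_HMC_DIRECT_BP_SIZE);
        if (ret)
                return ret;

        for (i = 0; i < I40E_HMC_PD_CNT_IN_SD; i++) {
                ret = i40e_add_pd_table_entry(hw, hmc_info,
                                              sd_index * I40E_HMC_PD_CNT_IN_SD + i);
                if (ret)
                        break;
        }
        return ret;
}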

View File

@ -0,0 +1,245 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_HMC_H_
#define _I40E_HMC_H_
#define I40E_HMC_MAX_BP_COUNT 512
/* forward-declare the HW struct for the compiler */
struct i40e_hw;
#define I40E_HMC_INFO_SIGNATURE 0x484D5347 /* HMSG */
#define I40E_HMC_PD_CNT_IN_SD 512
#define I40E_HMC_DIRECT_BP_SIZE 0x200000 /* 2M */
#define I40E_HMC_PAGED_BP_SIZE 4096
#define I40E_HMC_PD_BP_BUF_ALIGNMENT 4096
#define I40E_FIRST_VF_FPM_ID 16
struct i40e_hmc_obj_info {
u64 base; /* base addr in FPM */
u32 max_cnt; /* max count available for this hmc func */
u32 cnt; /* count of objects driver actually wants to create */
u64 size; /* size in bytes of one object */
};
enum i40e_sd_entry_type {
I40E_SD_TYPE_INVALID = 0,
I40E_SD_TYPE_PAGED = 1,
I40E_SD_TYPE_DIRECT = 2
};
struct i40e_hmc_bp {
enum i40e_sd_entry_type entry_type;
struct i40e_dma_mem addr; /* populate to be used by hw */
u32 sd_pd_index;
u32 ref_cnt;
};
struct i40e_hmc_pd_entry {
struct i40e_hmc_bp bp;
u32 sd_index;
bool valid;
};
struct i40e_hmc_pd_table {
struct i40e_dma_mem pd_page_addr; /* populate to be used by hw */
struct i40e_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */
struct i40e_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
u32 ref_cnt;
u32 sd_index;
};
struct i40e_hmc_sd_entry {
enum i40e_sd_entry_type entry_type;
bool valid;
union {
struct i40e_hmc_pd_table pd_table;
struct i40e_hmc_bp bp;
} u;
};
struct i40e_hmc_sd_table {
struct i40e_virt_mem addr; /* used to track sd_entry allocations */
u32 sd_cnt;
u32 ref_cnt;
struct i40e_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
};
struct i40e_hmc_info {
u32 signature;
/* equals to pci func num for PF and dynamically allocated for VFs */
u8 hmc_fn_id;
u16 first_sd_index; /* index of the first available SD */
/* hmc objects */
struct i40e_hmc_obj_info *hmc_obj;
struct i40e_virt_mem hmc_obj_virt_mem;
struct i40e_hmc_sd_table sd_table;
};
#define I40E_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++)
#define I40E_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++)
#define I40E_INC_BP_REFCNT(bp) ((bp)->ref_cnt++)
#define I40E_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--)
#define I40E_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--)
#define I40E_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--)
/**
* I40E_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
* @hw: pointer to our hw struct
* @pa: physical address
* @sd_index: segment descriptor index
* @type: if sd entry is direct or paged
**/
#define I40E_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \
{ \
u32 val1, val2, val3; \
val1 = (u32)(upper_32_bits(pa)); \
val2 = (u32)(pa) | (I40E_HMC_MAX_BP_COUNT << \
I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \
((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \
(1 << I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \
val3 = (sd_index) | (1 << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \
wr32((hw), I40E_PFHMC_SDDATAHIGH, val1); \
wr32((hw), I40E_PFHMC_SDDATALOW, val2); \
wr32((hw), I40E_PFHMC_SDCMD, val3); \
}
/**
* I40E_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
* @hw: pointer to our hw struct
* @sd_index: segment descriptor index
* @type: if sd entry is direct or paged
**/
#define I40E_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \
{ \
u32 val2, val3; \
val2 = (I40E_HMC_MAX_BP_COUNT << \
I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \
((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \
val3 = (sd_index) | (1 << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \
wr32((hw), I40E_PFHMC_SDDATAHIGH, 0); \
wr32((hw), I40E_PFHMC_SDDATALOW, val2); \
wr32((hw), I40E_PFHMC_SDCMD, val3); \
}
/**
* I40E_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
**/
#define I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \
wr32((hw), I40E_PFHMC_PDINV, \
(((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
#define I40E_INVALIDATE_VF_HMC_PD(hw, sd_idx, pd_idx, hmc_fn_id) \
wr32((hw), I40E_GLHMC_VFPDINV((hmc_fn_id) - I40E_FIRST_VF_FPM_ID), \
(((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
/**
* I40E_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
* @hmc_info: pointer to the HMC configuration information structure
* @type: type of HMC resources we're searching
* @index: starting index for the object
* @cnt: number of objects we're trying to create
* @sd_idx: pointer to return index of the segment descriptor in question
* @sd_limit: pointer to return the segment descriptor index limit (one past the last valid index)
*
* This function calculates the segment descriptor index and index limit
* for the resource defined by i40e_hmc_rsrc_type.
**/
#define I40E_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
{ \
u64 fpm_addr, fpm_limit; \
fpm_addr = (hmc_info)->hmc_obj[(type)].base + \
(hmc_info)->hmc_obj[(type)].size * (index); \
fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
*(sd_idx) = (u32)(fpm_addr / I40E_HMC_DIRECT_BP_SIZE); \
*(sd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_DIRECT_BP_SIZE); \
/* add one more to the limit to correct our range */ \
*(sd_limit) += 1; \
}
/**
* I40E_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
* @hmc_info: pointer to the HMC configuration information struct
* @type: HMC resource type we're examining
* @idx: starting index for the object
* @cnt: number of objects we're trying to create
* @pd_index: pointer to return page descriptor index
* @pd_limit: pointer to return page descriptor index limit
*
* Calculates the page descriptor index and index limit for the resource
* defined by i40e_hmc_rsrc_type.
**/
#define I40E_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
{ \
u64 fpm_adr, fpm_limit; \
fpm_adr = (hmc_info)->hmc_obj[(type)].base + \
(hmc_info)->hmc_obj[(type)].size * (idx); \
fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt); \
*(pd_index) = (u32)(fpm_adr / I40E_HMC_PAGED_BP_SIZE); \
*(pd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_PAGED_BP_SIZE); \
/* add one more to the limit to correct our range */ \
*(pd_limit) += 1; \
}
i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 sd_index,
enum i40e_sd_entry_type type,
u64 direct_mode_sz);
i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 pd_index);
i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf);
i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
u32 idx);
i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf);
i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
u32 idx);
i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf);
#endif /* _I40E_HMC_H_ */
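A hedged sketch of how the index-limit macro above is typically consumed, with a small worked example of its arithmetic in the comments; the helper is hypothetical:

/* Illustrative only -- a hypothetical helper showing how the index-limit
 * macro is consumed.  With I40E_HMC_DIRECT_BP_SIZE == 2MB, an object
 * region spanning FPM addresses 0x0..0x3FFFFF yields sd_idx == 0 and
 * sd_limit == 2 (the macro's +1 turns the last index into a limit).
 */
static void example_walk_sd_range(struct i40e_hmc_info *hmc_info,
                                  u32 type, u32 start_idx, u32 count)
{
        u32 sd_idx, sd_limit;

        I40E_FIND_SD_INDEX_LIMIT(hmc_info, type, start_idx, count,
                                 &sd_idx, &sd_limit);
        for (; sd_idx < sd_limit; sd_idx++) {
                /* back this segment descriptor, e.g. with
                 * i40e_add_sd_table_entry()
                 */
        }
}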

File diff suppressed because it is too large

View File

@ -0,0 +1,169 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_LAN_HMC_H_
#define _I40E_LAN_HMC_H_
/* forward-declare the HW struct for the compiler */
struct i40e_hw;
/* HMC element context information */
/* Rx queue context data */
struct i40e_hmc_obj_rxq {
u16 head;
u8 cpuid;
u64 base;
u16 qlen;
#define I40E_RXQ_CTX_DBUFF_SHIFT 7
u8 dbuff;
#define I40E_RXQ_CTX_HBUFF_SHIFT 6
u8 hbuff;
u8 dtype;
u8 dsize;
u8 crcstrip;
u8 fc_ena;
u8 l2tsel;
u8 hsplit_0;
u8 hsplit_1;
u8 showiv;
u16 rxmax;
u8 tphrdesc_ena;
u8 tphwdesc_ena;
u8 tphdata_ena;
u8 tphhead_ena;
u8 lrxqthresh;
};
/* Tx queue context data */
struct i40e_hmc_obj_txq {
u16 head;
u8 new_context;
u64 base;
u8 fc_ena;
u8 timesync_ena;
u8 fd_ena;
u8 alt_vlan_ena;
u16 thead_wb;
u16 cpuid;
u8 head_wb_ena;
u16 qlen;
u8 tphrdesc_ena;
u8 tphrpacket_ena;
u8 tphwdesc_ena;
u64 head_wb_addr;
u32 crc;
u16 rdylist;
u8 rdylist_act;
};
/* for hsplit_0 field of Rx HMC context */
enum i40e_hmc_obj_rx_hsplit_0 {
I40E_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8,
};
/* fcoe_cntx and fcoe_filt are for debugging purpose only */
struct i40e_hmc_obj_fcoe_cntx {
u32 rsv[32];
};
struct i40e_hmc_obj_fcoe_filt {
u32 rsv[8];
};
/* Context sizes for LAN objects */
enum i40e_hmc_lan_object_size {
I40E_HMC_LAN_OBJ_SZ_8 = 0x3,
I40E_HMC_LAN_OBJ_SZ_16 = 0x4,
I40E_HMC_LAN_OBJ_SZ_32 = 0x5,
I40E_HMC_LAN_OBJ_SZ_64 = 0x6,
I40E_HMC_LAN_OBJ_SZ_128 = 0x7,
I40E_HMC_LAN_OBJ_SZ_256 = 0x8,
I40E_HMC_LAN_OBJ_SZ_512 = 0x9,
};
#define I40E_HMC_L2OBJ_BASE_ALIGNMENT 512
#define I40E_HMC_OBJ_SIZE_TXQ 128
#define I40E_HMC_OBJ_SIZE_RXQ 32
#define I40E_HMC_OBJ_SIZE_FCOE_CNTX 128
#define I40E_HMC_OBJ_SIZE_FCOE_FILT 32
enum i40e_hmc_lan_rsrc_type {
I40E_HMC_LAN_FULL = 0,
I40E_HMC_LAN_TX = 1,
I40E_HMC_LAN_RX = 2,
I40E_HMC_FCOE_CTX = 3,
I40E_HMC_FCOE_FILT = 4,
I40E_HMC_LAN_MAX = 5
};
enum i40e_hmc_model {
I40E_HMC_MODEL_DIRECT_PREFERRED = 0,
I40E_HMC_MODEL_DIRECT_ONLY = 1,
I40E_HMC_MODEL_PAGED_ONLY = 2,
I40E_HMC_MODEL_UNKNOWN,
};
struct i40e_hmc_lan_create_obj_info {
struct i40e_hmc_info *hmc_info;
u32 rsrc_type;
u32 start_idx;
u32 count;
enum i40e_sd_entry_type entry_type;
u64 direct_mode_sz;
};
struct i40e_hmc_lan_delete_obj_info {
struct i40e_hmc_info *hmc_info;
u32 rsrc_type;
u32 start_idx;
u32 count;
};
i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
u32 rxq_num, u32 fcoe_cntx_num,
u32 fcoe_filt_num);
i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
enum i40e_hmc_model model);
i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw);
i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue);
i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue,
struct i40e_hmc_obj_txq *s);
i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
u16 queue);
i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
u16 queue,
struct i40e_hmc_obj_rxq *s);
#endif /* _I40E_LAN_HMC_H_ */
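A hedged sketch of programming one Tx queue's HMC context with the API declared above; the field values are placeholders rather than values mandated by the hardware:

/* Illustrative only -- a hypothetical helper, not driver code; the field
 * values are placeholders rather than values mandated by the hardware.
 */
static i40e_status example_setup_tx_queue_ctx(struct i40e_hw *hw, u16 queue,
                                              u64 ring_base, u16 ring_len)
{
        struct i40e_hmc_obj_txq tx_ctx;
        i40e_status ret;

        memset(&tx_ctx, 0, sizeof(tx_ctx));
        tx_ctx.new_context = 1;
        tx_ctx.base = ring_base;        /* placeholder: ring base address */
        tx_ctx.qlen = ring_len;         /* placeholder: descriptor count */

        ret = i40e_clear_lan_tx_queue_context(hw, queue);
        if (ret)
                return ret;
        return i40e_set_lan_tx_queue_context(hw, queue, &tx_ctx);
}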

File diff suppressed because it is too large

View File

@ -0,0 +1,391 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#include "i40e_prototype.h"
/**
* i40e_init_nvm_ops - Initialize NVM function pointers.
* @hw: pointer to the HW structure.
*
* Sets up the function pointers and the NVM info structure. Should be called
* once per NVM initialization, e.g. inside i40e_init_shared_code().
* Note that the term NVM is used here (and in all functions covered in this
* file) as an equivalent of the FLASH part mapped into the SR; the FLASH is
* always accessed through the Shadow RAM.
**/
i40e_status i40e_init_nvm(struct i40e_hw *hw)
{
struct i40e_nvm_info *nvm = &hw->nvm;
i40e_status ret_code = 0;
u32 fla, gens;
u8 sr_size;
/* The SR size is stored regardless of the nvm programming mode
* as the blank mode may be used in the factory line.
*/
gens = rd32(hw, I40E_GLNVM_GENS);
sr_size = ((gens & I40E_GLNVM_GENS_SR_SIZE_MASK) >>
I40E_GLNVM_GENS_SR_SIZE_SHIFT);
/* Switch to words: the SR size is (1 << sr_size) KB. */
nvm->sr_size = (1 << sr_size) * I40E_SR_WORDS_IN_1KB;
/* Check if we are in the normal or blank NVM programming mode. */
fla = rd32(hw, I40E_GLNVM_FLA);
if (fla & I40E_GLNVM_FLA_LOCKED_MASK) { /* Normal programming mode. */
/* Max NVM timeout. */
nvm->timeout = I40E_MAX_NVM_TIMEOUT;
nvm->blank_nvm_mode = false;
} else { /* Blank programming mode. */
nvm->blank_nvm_mode = true;
ret_code = I40E_ERR_NVM_BLANK_MODE;
hw_dbg(hw, "NVM init error: unsupported blank mode.\n");
}
return ret_code;
}
/**
* i40e_acquire_nvm - Generic request for acquiring the NVM ownership.
* @hw: pointer to the HW structure.
* @access: NVM access type (read or write).
*
* This function will request NVM ownership for reading
* via the proper Admin Command.
**/
i40e_status i40e_acquire_nvm(struct i40e_hw *hw,
enum i40e_aq_resource_access_type access)
{
i40e_status ret_code = 0;
u64 gtime, timeout;
u64 time = 0;
if (hw->nvm.blank_nvm_mode)
goto i40e_i40e_acquire_nvm_exit;
ret_code = i40e_aq_request_resource(hw, I40E_NVM_RESOURCE_ID, access,
0, &time, NULL);
/* Reading the Global Device Timer. */
gtime = rd32(hw, I40E_GLVFGEN_TIMER);
/* Store the timeout. */
hw->nvm.hw_semaphore_timeout = I40E_MS_TO_GTIME(time) + gtime;
if (ret_code) {
/* Set the polling timeout. */
if (time > I40E_MAX_NVM_TIMEOUT)
timeout = I40E_MS_TO_GTIME(I40E_MAX_NVM_TIMEOUT)
+ gtime;
else
timeout = hw->nvm.hw_semaphore_timeout;
/* Poll until the current NVM owner timeouts. */
while (gtime < timeout) {
usleep_range(10000, 20000);
ret_code = i40e_aq_request_resource(hw,
I40E_NVM_RESOURCE_ID,
access, 0, &time,
NULL);
if (!ret_code) {
hw->nvm.hw_semaphore_timeout =
I40E_MS_TO_GTIME(time) + gtime;
break;
}
gtime = rd32(hw, I40E_GLVFGEN_TIMER);
}
if (ret_code) {
hw->nvm.hw_semaphore_timeout = 0;
hw->nvm.hw_semaphore_wait =
I40E_MS_TO_GTIME(time) + gtime;
hw_dbg(hw, "NVM acquire timed out, wait %llu ms before trying again.\n",
time);
}
}
i40e_i40e_acquire_nvm_exit:
return ret_code;
}
/**
* i40e_release_nvm - Generic request for releasing the NVM ownership.
* @hw: pointer to the HW structure.
*
* This function will release NVM resource via the proper Admin Command.
**/
void i40e_release_nvm(struct i40e_hw *hw)
{
if (!hw->nvm.blank_nvm_mode)
i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL);
}
/**
* i40e_poll_sr_srctl_done_bit - Polls the GLNVM_SRCTL done bit.
* @hw: pointer to the HW structure.
*
* Polls the SRCTL Shadow RAM register done bit.
**/
static i40e_status i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw)
{
i40e_status ret_code = I40E_ERR_TIMEOUT;
u32 srctl, wait_cnt;
/* Poll the I40E_GLNVM_SRCTL until the done bit is set. */
for (wait_cnt = 0; wait_cnt < I40E_SRRD_SRCTL_ATTEMPTS; wait_cnt++) {
srctl = rd32(hw, I40E_GLNVM_SRCTL);
if (srctl & I40E_GLNVM_SRCTL_DONE_MASK) {
ret_code = 0;
break;
}
udelay(5);
}
if (ret_code == I40E_ERR_TIMEOUT)
hw_dbg(hw, "Done bit in GLNVM_SRCTL not set");
return ret_code;
}
/**
* i40e_read_nvm_srctl - Reads Shadow RAM.
* @hw: pointer to the HW structure.
* @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF).
* @data: word read from the Shadow RAM.
*
* Reads 16 bit word from the Shadow RAM using the GLNVM_SRCTL register.
**/
static i40e_status i40e_read_nvm_srctl(struct i40e_hw *hw, u16 offset,
u16 *data)
{
i40e_status ret_code = I40E_ERR_TIMEOUT;
u32 sr_reg;
if (offset >= hw->nvm.sr_size) {
hw_dbg(hw, "NVM read error: Offset beyond Shadow RAM limit.\n");
ret_code = I40E_ERR_PARAM;
goto read_nvm_exit;
}
/* Poll the done bit first. */
ret_code = i40e_poll_sr_srctl_done_bit(hw);
if (!ret_code) {
/* Write the address and start reading. */
sr_reg = (u32)(offset << I40E_GLNVM_SRCTL_ADDR_SHIFT) |
(1 << I40E_GLNVM_SRCTL_START_SHIFT);
wr32(hw, I40E_GLNVM_SRCTL, sr_reg);
/* Poll I40E_GLNVM_SRCTL until the done bit is set. */
ret_code = i40e_poll_sr_srctl_done_bit(hw);
if (!ret_code) {
sr_reg = rd32(hw, I40E_GLNVM_SRDATA);
*data = (u16)((sr_reg &
I40E_GLNVM_SRDATA_RDDATA_MASK)
>> I40E_GLNVM_SRDATA_RDDATA_SHIFT);
}
}
if (ret_code)
hw_dbg(hw, "NVM read error: Couldn't access Shadow RAM address: 0x%x\n",
offset);
read_nvm_exit:
return ret_code;
}
/**
* i40e_read_nvm_word - Reads Shadow RAM word.
* @hw: pointer to the HW structure.
* @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF).
* @data: word read from the Shadow RAM.
*
* Reads a 16 bit word from the Shadow RAM. Each read is preceded by taking
* NVM ownership and followed by its release.
**/
i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
u16 *data)
{
i40e_status ret_code = 0;
ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
if (!ret_code) {
ret_code = i40e_read_nvm_srctl(hw, offset, data);
i40e_release_nvm(hw);
}
return ret_code;
}
/**
* i40e_read_nvm_buffer - Reads Shadow RAM buffer.
* @hw: pointer to the HW structure.
* @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF).
* @words: number of words to read (in) &
* number of words read before the NVM ownership timeout (out).
* @data: words read from the Shadow RAM.
*
* Reads 16 bit words (data buffer) from the SR using the i40e_read_nvm_srctl()
* method. The buffer read is preceded by taking NVM ownership and followed
* by its release.
**/
i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
u16 *words, u16 *data)
{
i40e_status ret_code = 0;
u16 index, word;
u32 time;
ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
if (!ret_code) {
/* Loop through the selected region. */
for (word = 0; word < *words; word++) {
index = offset + word;
ret_code = i40e_read_nvm_srctl(hw, index, &data[word]);
if (ret_code)
break;
/* Check that we have not exceeded the semaphore timeout. */
time = rd32(hw, I40E_GLVFGEN_TIMER);
if (time >= hw->nvm.hw_semaphore_timeout) {
ret_code = I40E_ERR_TIMEOUT;
hw_dbg(hw, "NVM read error: timeout.\n");
break;
}
}
/* Update the number of words read from the Shadow RAM. */
*words = word;
/* Release the NVM ownership. */
i40e_release_nvm(hw);
}
return ret_code;
}
/**
* i40e_calc_nvm_checksum - Calculates and returns the checksum
* @hw: pointer to hardware structure
* @checksum: pointer to return the calculated checksum
*
* This function calculates the SW checksum that covers the whole 64kB shadow
* RAM except the VPD and PCIe ALT Auto-load modules. The structure and size
* of the VPD module are customer specific and unknown, so this function skips
* the maximum possible size of the VPD module (1kB).
**/
static i40e_status i40e_calc_nvm_checksum(struct i40e_hw *hw,
u16 *checksum)
{
i40e_status ret_code = 0;
u16 pcie_alt_module = 0;
u16 checksum_local = 0;
u16 vpd_module = 0;
u16 word = 0;
u32 i = 0;
/* read pointer to VPD area */
ret_code = i40e_read_nvm_srctl(hw, I40E_SR_VPD_PTR, &vpd_module);
if (ret_code) {
ret_code = I40E_ERR_NVM_CHECKSUM;
goto i40e_calc_nvm_checksum_exit;
}
/* read pointer to PCIe Alt Auto-load module */
ret_code = i40e_read_nvm_srctl(hw, I40E_SR_PCIE_ALT_AUTO_LOAD_PTR,
&pcie_alt_module);
if (ret_code) {
ret_code = I40E_ERR_NVM_CHECKSUM;
goto i40e_calc_nvm_checksum_exit;
}
/* Calculate SW checksum that covers the whole 64kB shadow RAM
* except the VPD and PCIe ALT Auto-load modules
*/
for (i = 0; i < hw->nvm.sr_size; i++) {
/* Skip Checksum word */
if (i == I40E_SR_SW_CHECKSUM_WORD)
i++;
/* Skip VPD module (convert byte size to word count) */
if (i == (u32)vpd_module) {
i += (I40E_SR_VPD_MODULE_MAX_SIZE / 2);
if (i >= hw->nvm.sr_size)
break;
}
/* Skip PCIe ALT module (convert byte size to word count) */
if (i == (u32)pcie_alt_module) {
i += (I40E_SR_PCIE_ALT_MODULE_MAX_SIZE / 2);
if (i >= hw->nvm.sr_size)
break;
}
ret_code = i40e_read_nvm_srctl(hw, (u16)i, &word);
if (ret_code) {
ret_code = I40E_ERR_NVM_CHECKSUM;
goto i40e_calc_nvm_checksum_exit;
}
checksum_local += word;
}
*checksum = (u16)I40E_SR_SW_CHECKSUM_BASE - checksum_local;
i40e_calc_nvm_checksum_exit:
return ret_code;
}
/**
* i40e_validate_nvm_checksum - Validate EEPROM checksum
* @hw: pointer to hardware structure
* @checksum: calculated checksum
*
* Performs checksum calculation and validates the NVM SW checksum. If the
* caller does not need the checksum, the pointer can be NULL.
**/
i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
u16 *checksum)
{
i40e_status ret_code = 0;
u16 checksum_sr = 0;
u16 checksum_local;
ret_code = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
if (ret_code)
goto i40e_validate_nvm_checksum_exit;
ret_code = i40e_calc_nvm_checksum(hw, &checksum_local);
if (ret_code)
goto i40e_validate_nvm_checksum_free;
/* Do not use i40e_read_nvm_word() because we do not want to take
* the synchronization semaphores twice here.
*/
i40e_read_nvm_srctl(hw, I40E_SR_SW_CHECKSUM_WORD, &checksum_sr);
/* Verify read checksum from EEPROM is the same as
* calculated checksum
*/
if (checksum_local != checksum_sr)
ret_code = I40E_ERR_NVM_CHECKSUM;
/* If the user cares, return the calculated checksum */
if (checksum)
*checksum = checksum_local;
i40e_validate_nvm_checksum_free:
i40e_release_nvm(hw);
i40e_validate_nvm_checksum_exit:
return ret_code;
}
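A hedged usage sketch of the word and buffer read helpers above; the offsets and buffer size are examples, not a documented Shadow RAM layout:

/* Illustrative only -- a hypothetical caller of the read helpers above;
 * the offsets and buffer size are examples, not a documented layout.
 */
static i40e_status example_read_some_sr_words(struct i40e_hw *hw)
{
        u16 data[8];
        u16 words = ARRAY_SIZE(data);
        u16 checksum_word;
        i40e_status ret;

        /* single word: NVM ownership is taken and released internally */
        ret = i40e_read_nvm_word(hw, I40E_SR_SW_CHECKSUM_WORD, &checksum_word);
        if (ret)
                return ret;

        /* buffer: 'words' is updated with the count actually read */
        return i40e_read_nvm_buffer(hw, 0, &words, data);
}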

View File

@ -0,0 +1,82 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_OSDEP_H_
#define _I40E_OSDEP_H_
#include <linux/types.h>
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/tcp.h>
#include <linux/pci.h>
#include <linux/highuid.h>
/* get readq/writeq support for 32 bit kernels, use the low-first version */
#include <asm-generic/io-64-nonatomic-lo-hi.h>
/* This file is the glue between the shared code and
* the actual OS primitives
*/
#define hw_dbg(hw, S, A...) do {} while (0)
#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
#define rd32(a, reg) readl((a)->hw_addr + (reg))
#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg)))
#define rd64(a, reg) readq((a)->hw_addr + (reg))
#define i40e_flush(a) readl((a)->hw_addr + I40E_GLGEN_STAT)
/* memory allocation tracking */
struct i40e_dma_mem {
void *va;
dma_addr_t pa;
u32 size;
} __packed;
#define i40e_allocate_dma_mem(h, m, unused, s, a) \
i40e_allocate_dma_mem_d(h, m, s, a)
#define i40e_free_dma_mem(h, m) i40e_free_dma_mem_d(h, m)
struct i40e_virt_mem {
void *va;
u32 size;
} __packed;
#define i40e_allocate_virt_mem(h, m, s) i40e_allocate_virt_mem_d(h, m, s)
#define i40e_free_virt_mem(h, m) i40e_free_virt_mem_d(h, m)
#define i40e_debug(h, m, s, ...) \
do { \
if (((m) & (h)->debug_mask)) \
pr_info("i40e %02x.%x " s, \
(h)->bus.device, (h)->bus.func, \
##__VA_ARGS__); \
} while (0)
typedef enum i40e_status_code i40e_status;
#endif /* _I40E_OSDEP_H_ */
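A hedged sketch showing the register-access and debug wrappers above in use; the helper is hypothetical, and I40E_GLGEN_STAT / I40E_DEBUG_DIAG come from other headers in this series:

/* Illustrative only -- a hypothetical helper; I40E_GLGEN_STAT and
 * I40E_DEBUG_DIAG are defined in other headers of this series.
 */
static u32 example_touch_register(struct i40e_hw *hw, u32 reg, u32 value)
{
        wr32(hw, reg, value);
        i40e_flush(hw);         /* read-back to flush the posted write */
        i40e_debug(hw, I40E_DEBUG_DIAG, "wrote 0x%08x to reg 0x%08x\n",
                   value, reg);
        return rd32(hw, reg);
}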

View File

@ -0,0 +1,239 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_PROTOTYPE_H_
#define _I40E_PROTOTYPE_H_
#include "i40e_type.h"
#include "i40e_alloc.h"
#include "i40e_virtchnl.h"
/* Prototypes for shared code functions that are not in
* the standard function pointer structures. These are
* mostly because they are needed even before the init
* has happened and will assist in the early SW and FW
* setup.
*/
/* adminq functions */
i40e_status i40e_init_adminq(struct i40e_hw *hw);
i40e_status i40e_shutdown_adminq(struct i40e_hw *hw);
void i40e_adminq_init_ring_data(struct i40e_hw *hw);
i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
struct i40e_arq_event_info *e,
u16 *events_pending);
i40e_status i40e_asq_send_command(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
bool i40e_asq_done(struct i40e_hw *hw);
/* debug function for adminq */
void i40e_debug_aq(struct i40e_hw *hw,
enum i40e_debug_mask mask,
void *desc,
void *buffer);
void i40e_idle_aq(struct i40e_hw *hw);
void i40e_resume_aq(struct i40e_hw *hw);
u32 i40e_led_get(struct i40e_hw *hw);
void i40e_led_set(struct i40e_hw *hw, u32 mode);
/* admin send queue commands */
i40e_status i40e_aq_get_firmware_version(struct i40e_hw *hw,
u16 *fw_major_version, u16 *fw_minor_version,
u16 *api_major_version, u16 *api_minor_version,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_queue_shutdown(struct i40e_hw *hw,
bool unloading);
i40e_status i40e_aq_set_phy_reset(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_link_restart_an(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
bool enable_lse, struct i40e_link_status *link,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_local_advt_reg(struct i40e_hw *hw,
u64 advt_reg,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_send_driver_version(struct i40e_hw *hw,
struct i40e_driver_version *dv,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_add_vsi(struct i40e_hw *hw,
struct i40e_vsi_context *vsi_ctx,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
u16 vsi_id, bool set_filter,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_vsi_params(struct i40e_hw *hw,
struct i40e_vsi_context *vsi_ctx,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
struct i40e_vsi_context *vsi_ctx,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid,
u16 downlink_seid, u8 enabled_tc,
bool default_port, u16 *pveb_seid,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_veb_parameters(struct i40e_hw *hw,
u16 veb_seid, u16 *switch_id, bool *floating,
u16 *statistic_index, u16 *vebs_used,
u16 *vebs_free,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_add_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_remove_macvlan_element_data *mv_list,
u16 count, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_add_vlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_add_remove_vlan_element_data *v_list,
u8 count, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_remove_vlan(struct i40e_hw *hw, u16 vsi_id,
struct i40e_aqc_add_remove_vlan_element_data *v_list,
u8 count, struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid,
u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
struct i40e_aqc_get_switch_config_resp *buf,
u16 buf_size, u16 *start_seid,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
enum i40e_aq_resources_ids resource,
enum i40e_aq_resource_access_type access,
u8 sdp_number, u64 *timeout,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_release_resource(struct i40e_hw *hw,
enum i40e_aq_resources_ids resource,
u8 sdp_number,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer,
u32 offset, u16 length, void *data,
bool last_command,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_discover_capabilities(struct i40e_hw *hw,
void *buff, u16 buff_size, u16 *data_size,
enum i40e_admin_queue_opc list_type_opc,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer,
u32 offset, u16 length, void *data,
bool last_command,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type,
u8 mib_type, void *buff, u16 buff_size,
u16 *local_len, u16 *remote_len,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw,
bool enable_update,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_start_lldp(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_mac_address_write(struct i40e_hw *hw,
u16 flags, u8 *mac_addr,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
enum i40e_aq_hmc_profile profile,
u8 pe_vf_enabled_count,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
u16 seid, u16 credit, u8 max_bw,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, u16 seid,
struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_query_port_ets_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_port_ets_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
struct i40e_asq_cmd_details *cmd_details);
/* i40e_common */
i40e_status i40e_init_shared_code(struct i40e_hw *hw);
i40e_status i40e_pf_reset(struct i40e_hw *hw);
void i40e_clear_pxe_mode(struct i40e_hw *hw);
bool i40e_get_link_status(struct i40e_hw *hw);
i40e_status i40e_get_mac_addr(struct i40e_hw *hw,
u8 *mac_addr);
i40e_status i40e_validate_mac_addr(u8 *mac_addr);
i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
struct i40e_lldp_variables *lldp_cfg);
/* prototype for functions used for NVM access */
i40e_status i40e_init_nvm(struct i40e_hw *hw);
i40e_status i40e_acquire_nvm(struct i40e_hw *hw,
enum i40e_aq_resource_access_type access);
void i40e_release_nvm(struct i40e_hw *hw);
i40e_status i40e_read_nvm_srrd(struct i40e_hw *hw, u16 offset,
u16 *data);
i40e_status i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
u16 *data);
i40e_status i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
u16 *words, u16 *data);
i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
u16 *checksum);
/* prototype for functions used for SW locks */
/* i40e_common for VF drivers*/
void i40e_vf_parse_hw_config(struct i40e_hw *hw,
struct i40e_virtchnl_vf_resource *msg);
i40e_status i40e_vf_reset(struct i40e_hw *hw);
i40e_status i40e_aq_send_msg_to_pf(struct i40e_hw *hw,
enum i40e_virtchnl_ops v_opcode,
i40e_status v_retval,
u8 *msg, u16 msglen,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_set_filter_control(struct i40e_hw *hw,
struct i40e_filter_control_settings *settings);
#endif /* _I40E_PROTOTYPE_H_ */
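A hedged sketch of an early bring-up sequence using a few of the prototypes above; the exact ordering used by the driver core may differ:

/* Illustrative only -- a hypothetical early bring-up sequence; the real
 * probe path in the driver core may order these calls differently.
 */
static i40e_status example_early_init(struct i40e_hw *hw)
{
        u16 fw_maj, fw_min, api_maj, api_min;
        i40e_status ret;

        ret = i40e_init_shared_code(hw);
        if (ret)
                return ret;

        ret = i40e_init_adminq(hw);
        if (ret)
                return ret;

        return i40e_aq_get_firmware_version(hw, &fw_maj, &fw_min,
                                            &api_maj, &api_min, NULL);
}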

File diff suppressed because it is too large

View File

@ -0,0 +1,101 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_STATUS_H_
#define _I40E_STATUS_H_
/* Error Codes */
enum i40e_status_code {
I40E_SUCCESS = 0,
I40E_ERR_NVM = -1,
I40E_ERR_NVM_CHECKSUM = -2,
I40E_ERR_PHY = -3,
I40E_ERR_CONFIG = -4,
I40E_ERR_PARAM = -5,
I40E_ERR_MAC_TYPE = -6,
I40E_ERR_UNKNOWN_PHY = -7,
I40E_ERR_LINK_SETUP = -8,
I40E_ERR_ADAPTER_STOPPED = -9,
I40E_ERR_INVALID_MAC_ADDR = -10,
I40E_ERR_DEVICE_NOT_SUPPORTED = -11,
I40E_ERR_MASTER_REQUESTS_PENDING = -12,
I40E_ERR_INVALID_LINK_SETTINGS = -13,
I40E_ERR_AUTONEG_NOT_COMPLETE = -14,
I40E_ERR_RESET_FAILED = -15,
I40E_ERR_SWFW_SYNC = -16,
I40E_ERR_NO_AVAILABLE_VSI = -17,
I40E_ERR_NO_MEMORY = -18,
I40E_ERR_BAD_PTR = -19,
I40E_ERR_RING_FULL = -20,
I40E_ERR_INVALID_PD_ID = -21,
I40E_ERR_INVALID_QP_ID = -22,
I40E_ERR_INVALID_CQ_ID = -23,
I40E_ERR_INVALID_CEQ_ID = -24,
I40E_ERR_INVALID_AEQ_ID = -25,
I40E_ERR_INVALID_SIZE = -26,
I40E_ERR_INVALID_ARP_INDEX = -27,
I40E_ERR_INVALID_FPM_FUNC_ID = -28,
I40E_ERR_QP_INVALID_MSG_SIZE = -29,
I40E_ERR_QP_TOOMANY_WRS_POSTED = -30,
I40E_ERR_INVALID_FRAG_COUNT = -31,
I40E_ERR_QUEUE_EMPTY = -32,
I40E_ERR_INVALID_ALIGNMENT = -33,
I40E_ERR_FLUSHED_QUEUE = -34,
I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35,
I40E_ERR_INVALID_IMM_DATA_SIZE = -36,
I40E_ERR_TIMEOUT = -37,
I40E_ERR_OPCODE_MISMATCH = -38,
I40E_ERR_CQP_COMPL_ERROR = -39,
I40E_ERR_INVALID_VF_ID = -40,
I40E_ERR_INVALID_HMCFN_ID = -41,
I40E_ERR_BACKING_PAGE_ERROR = -42,
I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
I40E_ERR_INVALID_PBLE_INDEX = -44,
I40E_ERR_INVALID_SD_INDEX = -45,
I40E_ERR_INVALID_PAGE_DESC_INDEX = -46,
I40E_ERR_INVALID_SD_TYPE = -47,
I40E_ERR_MEMCPY_FAILED = -48,
I40E_ERR_INVALID_HMC_OBJ_INDEX = -49,
I40E_ERR_INVALID_HMC_OBJ_COUNT = -50,
I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51,
I40E_ERR_SRQ_ENABLED = -52,
I40E_ERR_ADMIN_QUEUE_ERROR = -53,
I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54,
I40E_ERR_BUF_TOO_SHORT = -55,
I40E_ERR_ADMIN_QUEUE_FULL = -56,
I40E_ERR_ADMIN_QUEUE_NO_WORK = -57,
I40E_ERR_BAD_IWARP_CQE = -58,
I40E_ERR_NVM_BLANK_MODE = -59,
I40E_ERR_NOT_IMPLEMENTED = -60,
I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61,
I40E_ERR_DIAG_TEST_FAILED = -62,
I40E_ERR_NOT_READY = -63,
I40E_NOT_SUPPORTED = -64,
I40E_ERR_FIRMWARE_API_VERSION = -65,
};
#endif /* _I40E_STATUS_H_ */

File diff suppressed because it is too large


@ -0,0 +1,259 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
/* Interrupt Throttling and Rate Limiting (storm control) Goodies */
#define I40E_MAX_ITR 0x07FF
#define I40E_MIN_ITR 0x0001
#define I40E_ITR_USEC_RESOLUTION 2
#define I40E_MAX_IRATE 0x03F
#define I40E_MIN_IRATE 0x001
#define I40E_IRATE_USEC_RESOLUTION 4
#define I40E_ITR_100K 0x0005
#define I40E_ITR_20K 0x0019
#define I40E_ITR_8K 0x003E
#define I40E_ITR_4K 0x007A
#define I40E_ITR_RX_DEF I40E_ITR_8K
#define I40E_ITR_TX_DEF I40E_ITR_4K
#define I40E_ITR_DYNAMIC 0x8000 /* use top bit as a flag */
#define I40E_MIN_INT_RATE 250 /* ~= 1000000 / (I40E_MAX_ITR * 2) */
#define I40E_MAX_INT_RATE 500000 /* == 1000000 / (I40E_MIN_ITR * 2) */
#define I40E_DEFAULT_IRQ_WORK 256
#define ITR_TO_REG(setting) ((setting & ~I40E_ITR_DYNAMIC) >> 1)
#define ITR_IS_DYNAMIC(setting) (!!(setting & I40E_ITR_DYNAMIC))
#define ITR_REG_TO_USEC(itr_reg) (itr_reg << 1)
#define I40E_QUEUE_END_OF_LIST 0x7FF
#define I40E_ITR_NONE 3
#define I40E_RX_ITR 0
#define I40E_TX_ITR 1
#define I40E_PE_ITR 2
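/* Illustrative sketch, not part of the driver: one way the ITR macros above
 * could be used to split a stored setting into its dynamic flag and the raw
 * register value (the flag is stripped and the value is halved because the
 * register counts in I40E_ITR_USEC_RESOLUTION units).  The helper name
 * i40e_example_parse_itr is hypothetical.
 */
static inline void i40e_example_parse_itr(u16 setting, bool *dynamic,
					  u16 *reg_val)
{
	*dynamic = ITR_IS_DYNAMIC(setting);	/* top bit used as a flag */
	*reg_val = ITR_TO_REG(setting);		/* flag cleared, value >> 1 */
}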
/* Supported Rx Buffer Sizes */
#define I40E_RXBUFFER_512 512 /* Used for packet split */
#define I40E_RXBUFFER_2048 2048
#define I40E_RXBUFFER_3072 3072 /* For FCoE MTU of 2158 */
#define I40E_RXBUFFER_4096 4096
#define I40E_RXBUFFER_8192 8192
#define I40E_MAX_RXBUFFER 9728 /* largest size for single descriptor */
/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
* reserve 2 more, and skb_shared_info adds an additional 384 bytes,
* which adds up to 512 bytes of extra data, meaning the smallest
* allocation we could have is 1K.
* i.e. RXBUFFER_512 --> size-1024 slab
*/
#define I40E_RX_HDR_SIZE I40E_RXBUFFER_512
/* How many Rx Buffers do we bundle into one write to the hardware? */
#define I40E_RX_BUFFER_WRITE 16 /* Must be power of 2 */
#define I40E_RX_NEXT_DESC(r, i, n) \
do { \
(i)++; \
if ((i) == (r)->count) \
i = 0; \
(n) = I40E_RX_DESC((r), (i)); \
} while (0)
#define I40E_RX_NEXT_DESC_PREFETCH(r, i, n) \
do { \
I40E_RX_NEXT_DESC((r), (i), (n)); \
prefetch((n)); \
} while (0)
#define i40e_rx_desc i40e_32byte_rx_desc
#define I40E_MIN_TX_LEN 17
#define I40E_MAX_DATA_PER_TXD 16383 /* aka 16kB - 1 */
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)
#define DESC_NEEDED ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
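/* Illustrative sketch, not part of the driver: worked example of the
 * descriptor math above.  A 60000 byte TSO payload needs
 * DIV_ROUND_UP(60000, 16383) = 4 data descriptors before any frags are
 * counted; DESC_NEEDED is the worst case for a fully fragmented skb plus a
 * small fixed reserve.  The helper name is hypothetical.
 */
static inline unsigned int i40e_example_txd_count(unsigned int len)
{
	return TXD_USE_COUNT(len);	/* e.g. 60000 -> 4, 1500 -> 1 */
}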
#define I40E_TX_FLAGS_CSUM (u32)(1)
#define I40E_TX_FLAGS_HW_VLAN (u32)(1 << 1)
#define I40E_TX_FLAGS_SW_VLAN (u32)(1 << 2)
#define I40E_TX_FLAGS_TSO (u32)(1 << 3)
#define I40E_TX_FLAGS_IPV4 (u32)(1 << 4)
#define I40E_TX_FLAGS_IPV6 (u32)(1 << 5)
#define I40E_TX_FLAGS_FCCRC (u32)(1 << 6)
#define I40E_TX_FLAGS_FSO (u32)(1 << 7)
#define I40E_TX_FLAGS_TXSW (u32)(1 << 8)
#define I40E_TX_FLAGS_MAPPED_AS_PAGE (u32)(1 << 9)
#define I40E_TX_FLAGS_VLAN_MASK 0xffff0000
#define I40E_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
#define I40E_TX_FLAGS_VLAN_PRIO_SHIFT 29
#define I40E_TX_FLAGS_VLAN_SHIFT 16
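/* Illustrative sketch, not part of the driver: folding an 802.1Q tag into
 * the upper half of a tx_flags word using the masks above.  The 16-bit tag
 * lands under I40E_TX_FLAGS_VLAN_MASK and its top three bits fall under
 * I40E_TX_FLAGS_VLAN_PRIO_MASK.  The helper name is hypothetical.
 */
static inline u32 i40e_example_vlan_to_tx_flags(u16 vlan_tag)
{
	u32 tx_flags = I40E_TX_FLAGS_HW_VLAN;

	tx_flags |= (u32)vlan_tag << I40E_TX_FLAGS_VLAN_SHIFT;
	return tx_flags;
}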
struct i40e_tx_buffer {
struct sk_buff *skb;
dma_addr_t dma;
unsigned long time_stamp;
u16 length;
u32 tx_flags;
struct i40e_tx_desc *next_to_watch;
unsigned int bytecount;
u16 gso_segs;
u8 mapped_as_page;
};
struct i40e_rx_buffer {
struct sk_buff *skb;
dma_addr_t dma;
struct page *page;
dma_addr_t page_dma;
unsigned int page_offset;
};
struct i40e_tx_queue_stats {
u64 packets;
u64 bytes;
u64 restart_queue;
u64 tx_busy;
u64 completed;
u64 tx_done_old;
};
struct i40e_rx_queue_stats {
u64 packets;
u64 bytes;
u64 non_eop_descs;
u64 alloc_rx_page_failed;
u64 alloc_rx_buff_failed;
};
enum i40e_ring_state_t {
__I40E_TX_FDIR_INIT_DONE,
__I40E_TX_XPS_INIT_DONE,
__I40E_TX_DETECT_HANG,
__I40E_HANG_CHECK_ARMED,
__I40E_RX_PS_ENABLED,
__I40E_RX_LRO_ENABLED,
__I40E_RX_16BYTE_DESC_ENABLED,
};
#define ring_is_ps_enabled(ring) \
test_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
#define set_ring_ps_enabled(ring) \
set_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
#define clear_ring_ps_enabled(ring) \
clear_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
#define check_for_tx_hang(ring) \
test_bit(__I40E_TX_DETECT_HANG, &(ring)->state)
#define set_check_for_tx_hang(ring) \
set_bit(__I40E_TX_DETECT_HANG, &(ring)->state)
#define clear_check_for_tx_hang(ring) \
clear_bit(__I40E_TX_DETECT_HANG, &(ring)->state)
#define ring_is_lro_enabled(ring) \
test_bit(__I40E_RX_LRO_ENABLED, &(ring)->state)
#define set_ring_lro_enabled(ring) \
set_bit(__I40E_RX_LRO_ENABLED, &(ring)->state)
#define clear_ring_lro_enabled(ring) \
clear_bit(__I40E_RX_LRO_ENABLED, &(ring)->state)
#define ring_is_16byte_desc_enabled(ring) \
test_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
#define set_ring_16byte_desc_enabled(ring) \
set_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
#define clear_ring_16byte_desc_enabled(ring) \
clear_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
/* struct that defines a descriptor ring, associated with a VSI */
struct i40e_ring {
void *desc; /* Descriptor ring memory */
struct device *dev; /* Used for DMA mapping */
struct net_device *netdev; /* netdev ring maps to */
union {
struct i40e_tx_buffer *tx_bi;
struct i40e_rx_buffer *rx_bi;
};
unsigned long state;
u16 queue_index; /* Queue number of ring */
u8 dcb_tc; /* Traffic class of ring */
u8 __iomem *tail;
u16 count; /* Number of descriptors */
u16 reg_idx; /* HW register index of the ring */
u16 rx_hdr_len;
u16 rx_buf_len;
u8 dtype;
#define I40E_RX_DTYPE_NO_SPLIT 0
#define I40E_RX_DTYPE_SPLIT_ALWAYS 1
#define I40E_RX_DTYPE_HEADER_SPLIT 2
u8 hsplit;
#define I40E_RX_SPLIT_L2 0x1
#define I40E_RX_SPLIT_IP 0x2
#define I40E_RX_SPLIT_TCP_UDP 0x4
#define I40E_RX_SPLIT_SCTP 0x8
/* used in interrupt processing */
u16 next_to_use;
u16 next_to_clean;
u8 atr_sample_rate;
u8 atr_count;
bool ring_active; /* is ring online or not */
/* stats structs */
union {
struct i40e_tx_queue_stats tx_stats;
struct i40e_rx_queue_stats rx_stats;
};
unsigned int size; /* length of descriptor ring in bytes */
dma_addr_t dma; /* physical address of ring */
struct i40e_vsi *vsi; /* Backreference to associated VSI */
struct i40e_q_vector *q_vector; /* Backreference to associated vector */
} ____cacheline_internodealigned_in_smp;
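/* Illustrative sketch, not part of the driver: the ring state helpers
 * defined above operate on the 'state' word of this struct, e.g. to test
 * whether packet split is in use on an Rx ring.  The helper name is
 * hypothetical.
 */
static inline bool i40e_example_ring_uses_ps(struct i40e_ring *ring)
{
	return ring_is_ps_enabled(ring);	/* tests __I40E_RX_PS_ENABLED */
}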
enum i40e_latency_range {
I40E_LOWEST_LATENCY = 0,
I40E_LOW_LATENCY = 1,
I40E_BULK_LATENCY = 2,
};
struct i40e_ring_container {
#define I40E_MAX_RINGPAIR_PER_VECTOR 8
/* array of pointers to rings */
struct i40e_ring *ring[I40E_MAX_RINGPAIR_PER_VECTOR];
unsigned int total_bytes; /* total bytes processed this int */
unsigned int total_packets; /* total packets processed this int */
u16 count;
enum i40e_latency_range latency_range;
u16 itr;
};
void i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
void i40e_clean_rx_ring(struct i40e_ring *rx_ring);
int i40e_setup_tx_descriptors(struct i40e_ring *tx_ring);
int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring);
void i40e_free_tx_resources(struct i40e_ring *tx_ring);
void i40e_free_rx_resources(struct i40e_ring *rx_ring);
int i40e_napi_poll(struct napi_struct *napi, int budget);

File diff suppressed because it is too large


@ -0,0 +1,368 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_VIRTCHNL_H_
#define _I40E_VIRTCHNL_H_
#include "i40e_type.h"
/* Description:
* This header file describes the VF-PF communication protocol used
* by the various i40e drivers.
*
* Admin queue buffer usage:
* desc->opcode is always i40e_aqc_opc_send_msg_to_pf
* flags, retval, datalen, and data addr are all used normally.
* Firmware copies the cookie fields when sending messages between the PF and
* VF, but uses all other fields internally. Due to this limitation, we
* must send all messages as "indirect", i.e. using an external buffer.
*
* All the VSI indexes are relative to the VF. Each VF can have a maximum of
* three VSIs. All the queue indexes are relative to the VSI. Each VF can
* have a maximum of sixteen queues for all of its VSIs.
*
* The PF is required to return a status code in v_retval for all messages
* except RESET_VF, which does not require any response. The return value is
* of i40e_status_code type, defined in i40e_status.h.
*
* In general, VF driver initialization should roughly follow the order of these
* opcodes. The VF driver must first validate the API version of the PF driver,
* then request a reset, then get resources, then configure queues and
* interrupts. After these operations are complete, the VF driver may start
* its queues, optionally add MAC and VLAN filters, and process traffic.
*/
/* Opcodes for VF-PF communication. These are placed in the v_opcode field
* of the virtchnl_msg structure.
*/
enum i40e_virtchnl_ops {
/* The VF sends requests to the PF for the
* following ops.
*/
I40E_VIRTCHNL_OP_UNKNOWN = 0,
I40E_VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
I40E_VIRTCHNL_OP_RESET_VF,
I40E_VIRTCHNL_OP_GET_VF_RESOURCES,
I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE,
I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE,
I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES,
I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP,
I40E_VIRTCHNL_OP_ENABLE_QUEUES,
I40E_VIRTCHNL_OP_DISABLE_QUEUES,
I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS,
I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS,
I40E_VIRTCHNL_OP_ADD_VLAN,
I40E_VIRTCHNL_OP_DEL_VLAN,
I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
I40E_VIRTCHNL_OP_GET_STATS,
I40E_VIRTCHNL_OP_FCOE,
/* The PF sends status change events to VFs using
* the following op.
*/
I40E_VIRTCHNL_OP_EVENT,
};
/* Virtual channel message descriptor. This overlays the admin queue
* descriptor. All other data is passed in external buffers.
*/
struct i40e_virtchnl_msg {
u8 pad[8]; /* AQ flags/opcode/len/retval fields */
enum i40e_virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
i40e_status v_retval; /* ditto for desc->retval */
u32 vfid; /* used by PF when sending to VF */
};
/* Message descriptions and data structures.*/
/* I40E_VIRTCHNL_OP_VERSION
* VF posts its version number to the PF. PF responds with its version number
* in the same format, along with a return code.
* The PF's reply also carries its major/minor versions in param0 and param1.
* If there is a major version mismatch, then the VF cannot operate.
* If there is a minor version mismatch, then the VF can operate but should
* add a warning to the system log.
*
* This enum element MUST always be specified as == 1, regardless of other
* changes in the API. The PF must always respond to this message without
* error regardless of version mismatch.
*/
#define I40E_VIRTCHNL_VERSION_MAJOR 1
#define I40E_VIRTCHNL_VERSION_MINOR 0
struct i40e_virtchnl_version_info {
u32 major;
u32 minor;
};
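/* Illustrative sketch, not part of the driver: how a VF driver might post
 * its API version to the PF.  i40e_aq_send_msg_to_pf() is prototyped in
 * i40e_prototype.h (not included here); error handling is elided and the
 * helper name is hypothetical.
 */
static inline i40e_status i40e_example_send_api_version(struct i40e_hw *hw)
{
	struct i40e_virtchnl_version_info vvi = {
		.major = I40E_VIRTCHNL_VERSION_MAJOR,
		.minor = I40E_VIRTCHNL_VERSION_MINOR,
	};

	return i40e_aq_send_msg_to_pf(hw, I40E_VIRTCHNL_OP_VERSION, 0,
				      (u8 *)&vvi, sizeof(vvi), NULL);
}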
/* I40E_VIRTCHNL_OP_RESET_VF
* VF sends this request to PF with no parameters
* PF does NOT respond! The VF driver must delay, then poll the VFGEN_RSTAT register
* until reset completion is indicated. The admin queue must be reinitialized
* after this operation.
*
* When reset is complete, PF must ensure that all queues in all VSIs associated
* with the VF are stopped, all queue configurations in the HMC are set to 0,
* and all MAC and VLAN filters (except the default MAC address) on all VSIs
* are cleared.
*/
/* I40E_VIRTCHNL_OP_GET_VF_RESOURCES
* VF sends this request to PF with no parameters
* PF responds with an indirect message containing
* i40e_virtchnl_vf_resource and one or more
* i40e_virtchnl_vsi_resource structures.
*/
struct i40e_virtchnl_vsi_resource {
u16 vsi_id;
u16 num_queue_pairs;
enum i40e_vsi_type vsi_type;
u16 qset_handle;
u8 default_mac_addr[I40E_ETH_LENGTH_OF_ADDRESS];
};
/* VF offload flags */
#define I40E_VIRTCHNL_VF_OFFLOAD_L2 0x00000001
#define I40E_VIRTCHNL_VF_OFFLOAD_FCOE 0x00000004
#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000
struct i40e_virtchnl_vf_resource {
u16 num_vsis;
u16 num_queue_pairs;
u16 max_vectors;
u16 max_mtu;
u32 vf_offload_flags;
u32 max_fcoe_contexts;
u32 max_fcoe_filters;
struct i40e_virtchnl_vsi_resource vsi_res[1];
};
/* I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE
* VF sends this message to set up parameters for one TX queue.
* External data buffer contains one instance of i40e_virtchnl_txq_info.
* PF configures requested queue and returns a status code.
*/
/* Tx queue config info */
struct i40e_virtchnl_txq_info {
u16 vsi_id;
u16 queue_id;
u16 ring_len; /* number of descriptors, multiple of 8 */
u16 headwb_enabled;
u64 dma_ring_addr;
u64 dma_headwb_addr;
};
/* I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE
* VF sends this message to set up parameters for one RX queue.
* External data buffer contains one instance of i40e_virtchnl_rxq_info.
* PF configures requested queue and returns a status code.
*/
/* Rx queue config info */
struct i40e_virtchnl_rxq_info {
u16 vsi_id;
u16 queue_id;
u32 ring_len; /* number of descriptors, multiple of 32 */
u16 hdr_size;
u16 splithdr_enabled;
u32 databuffer_size;
u32 max_pkt_size;
u64 dma_ring_addr;
enum i40e_hmc_obj_rx_hsplit_0 rx_split_pos;
};
/* I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES
* VF sends this message to set parameters for all active TX and RX queues
* associated with the specified VSI.
* PF configures queues and returns status.
* If the number of queues specified is greater than the number of queues
* associated with the VSI, an error is returned and no queues are configured.
*/
struct i40e_virtchnl_queue_pair_info {
/* NOTE: vsi_id and queue_id should be identical for both queues. */
struct i40e_virtchnl_txq_info txq;
struct i40e_virtchnl_rxq_info rxq;
};
struct i40e_virtchnl_vsi_queue_config_info {
u16 vsi_id;
u16 num_queue_pairs;
struct i40e_virtchnl_queue_pair_info qpair[1];
};
/* I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP
* VF uses this message to map vectors to queues.
* The rxq_map and txq_map fields are bitmaps used to indicate which queues
* are to be associated with the specified vector.
* The "other" causes are always mapped to vector 0.
* PF configures interrupt mapping and returns status.
*/
struct i40e_virtchnl_vector_map {
u16 vsi_id;
u16 vector_id;
u16 rxq_map;
u16 txq_map;
u16 rxitr_idx;
u16 txitr_idx;
};
struct i40e_virtchnl_irq_map_info {
u16 num_vectors;
struct i40e_virtchnl_vector_map vecmap[1];
};
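/* Illustrative sketch, not part of the driver: mapping Rx and Tx queue 0
 * of a VF-relative VSI to MSI-X vector 1 with the structures above.  The
 * values and the helper name are arbitrary/hypothetical.
 */
static inline void i40e_example_fill_vecmap(struct i40e_virtchnl_vector_map *vm,
					    u16 vsi_id)
{
	vm->vsi_id = vsi_id;
	vm->vector_id = 1;
	vm->rxq_map = 0x1;	/* bit 0 = Rx queue 0 */
	vm->txq_map = 0x1;	/* bit 0 = Tx queue 0 */
	vm->rxitr_idx = 0;	/* Rx ITR index (arbitrary here) */
	vm->txitr_idx = 1;	/* Tx ITR index (arbitrary here) */
}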
/* I40E_VIRTCHNL_OP_ENABLE_QUEUES
* I40E_VIRTCHNL_OP_DISABLE_QUEUES
* VF sends these messages to enable or disable TX/RX queue pairs.
* The queues fields are bitmaps indicating which queues to act upon.
* (Currently, we only support 16 queues per VF, but we make the field
* u32 to allow for expansion.)
* PF performs requested action and returns status.
*/
struct i40e_virtchnl_queue_select {
u16 vsi_id;
u16 pad;
u32 rx_queues;
u32 tx_queues;
};
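/* Illustrative sketch, not part of the driver: enabling the first two
 * queue pairs of a VF-relative VSI via the bitmap fields above (a set bit
 * selects that queue).  The helper name is hypothetical.
 */
static inline void i40e_example_fill_queue_select(struct i40e_virtchnl_queue_select *vqs,
						  u16 vsi_id)
{
	vqs->vsi_id = vsi_id;
	vqs->rx_queues = 0x3;	/* Rx queues 0 and 1 */
	vqs->tx_queues = 0x3;	/* Tx queues 0 and 1 */
}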
/* I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS
* VF sends this message in order to add one or more unicast or multicast
* address filters for the specified VSI.
* PF adds the filters and returns status.
*/
/* I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS
* VF sends this message in order to remove one or more unicast or multicast
* filters for the specified VSI.
* PF removes the filters and returns status.
*/
struct i40e_virtchnl_ether_addr {
u8 addr[I40E_ETH_LENGTH_OF_ADDRESS];
u8 pad[2];
};
struct i40e_virtchnl_ether_addr_list {
u16 vsi_id;
u16 num_elements;
struct i40e_virtchnl_ether_addr list[1];
};
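/* Illustrative sketch, not part of the driver: the list above is variable
 * length with one element declared in place, so the buffer holding 'n'
 * addresses (n >= 1) is sized as below.  The helper name is hypothetical.
 */
static inline size_t i40e_example_ether_list_size(u16 n)
{
	return sizeof(struct i40e_virtchnl_ether_addr_list) +
	       (n - 1) * sizeof(struct i40e_virtchnl_ether_addr);
}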
/* I40E_VIRTCHNL_OP_ADD_VLAN
* VF sends this message to add one or more VLAN tag filters for receives.
* PF adds the filters and returns status.
* If a port VLAN is configured by the PF, this operation will return an
* error to the VF.
*/
/* I40E_VIRTCHNL_OP_DEL_VLAN
* VF sends this message to remove one or more VLAN tag filters for receives.
* PF removes the filters and returns status.
* If a port VLAN is configured by the PF, this operation will return an
* error to the VF.
*/
struct i40e_virtchnl_vlan_filter_list {
u16 vsi_id;
u16 num_elements;
u16 vlan_id[1];
};
/* I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
* VF sends VSI id and flags.
* PF returns status code in retval.
* Note: we assume that broadcast accept mode is always enabled.
*/
struct i40e_virtchnl_promisc_info {
u16 vsi_id;
u16 flags;
};
#define I40E_FLAG_VF_UNICAST_PROMISC 0x00000001
#define I40E_FLAG_VF_MULTICAST_PROMISC 0x00000002
/* I40E_VIRTCHNL_OP_GET_STATS
* VF sends this message to request stats for the selected VSI. VF uses
* the i40e_virtchnl_queue_select struct to specify the VSI. The queue_id
* field is ignored by the PF.
*
* PF replies with struct i40e_eth_stats in an external buffer.
*/
/* I40E_VIRTCHNL_OP_EVENT
* PF sends this message to inform the VF driver of events that may affect it.
* No direct response is expected from the VF, though it may generate other
* messages in response to this one.
*/
enum i40e_virtchnl_event_codes {
I40E_VIRTCHNL_EVENT_UNKNOWN = 0,
I40E_VIRTCHNL_EVENT_LINK_CHANGE,
I40E_VIRTCHNL_EVENT_RESET_IMPENDING,
I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
};
#define I40E_PF_EVENT_SEVERITY_INFO 0
#define I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM 255
struct i40e_virtchnl_pf_event {
enum i40e_virtchnl_event_codes event;
union {
struct {
enum i40e_aq_link_speed link_speed;
bool link_status;
} link_event;
} event_data;
int severity;
};
/* The following are TBD, not necessary for LAN functionality.
* I40E_VIRTCHNL_OP_FCOE
*/
/* VF reset states - these are written into the RSTAT register:
* I40E_VFGEN_RSTAT1 on the PF
* I40E_VFGEN_RSTAT on the VF
* When the PF initiates a reset, it writes 0
* When the reset is complete, it writes 1
* When the PF detects that the VF has recovered, it writes 2
* VF checks this register periodically to determine if a reset has occurred,
* then polls it to know when the reset is complete.
* If either the PF or VF reads the register while the hardware
* is in a reset state, it will return 0xDEADBEEF, which, when masked,
* will result in 3.
*/
enum i40e_vfr_states {
I40E_VFR_INPROGRESS = 0,
I40E_VFR_COMPLETED,
I40E_VFR_VFACTIVE,
I40E_VFR_UNKNOWN,
};
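/* Illustrative sketch, not part of the driver: how a VF driver might poll
 * for reset completion as described above.  rd32() is the driver's register
 * read helper and I40E_VFGEN_RSTAT is defined in i40e_register.h; the mask,
 * retry count, delay, and helper name are illustrative assumptions.
 */
static inline bool i40e_example_wait_for_vf_reset(struct i40e_hw *hw)
{
	int i;

	for (i = 0; i < 100; i++) {
		u32 rstat = rd32(hw, I40E_VFGEN_RSTAT) & 0x3; /* low bits = state */

		if (rstat == I40E_VFR_COMPLETED || rstat == I40E_VFR_VFACTIVE)
			return true;
		usleep_range(10, 20);	/* assumes linux/delay.h is available */
	}
	return false;
}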
#endif /* _I40E_VIRTCHNL_H_ */

File diff suppressed because it is too large


@ -0,0 +1,120 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
* Copyright(c) 2013 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*
* The full GNU General Public License is included in this distribution in
* the file called "COPYING".
*
* Contact Information:
* e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
* Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*
******************************************************************************/
#ifndef _I40E_VIRTCHNL_PF_H_
#define _I40E_VIRTCHNL_PF_H_
#include "i40e.h"
#define I40E_MAX_MACVLAN_FILTERS 256
#define I40E_MAX_VLAN_FILTERS 256
#define I40E_MAX_VLANID 4095
#define I40E_VIRTCHNL_SUPPORTED_QTYPES 2
#define I40E_DEFAULT_NUM_MDD_EVENTS_ALLOWED 3
#define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED 10
#define I40E_VLAN_PRIORITY_SHIFT 12
#define I40E_VLAN_MASK 0xFFF
#define I40E_PRIORITY_MASK 0x7000
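/* Illustrative sketch, not part of the driver: splitting a combined
 * priority/VLAN word with the masks above, e.g. 0x5005 -> vlan id 5,
 * priority 5.  The helper name is hypothetical.
 */
static inline void i40e_example_split_vlanprio(u16 vlanprio, u16 *vid, u8 *prio)
{
	*vid = vlanprio & I40E_VLAN_MASK;
	*prio = (vlanprio & I40E_PRIORITY_MASK) >> I40E_VLAN_PRIORITY_SHIFT;
}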
/* Various queue ctrls */
enum i40e_queue_ctrl {
I40E_QUEUE_CTRL_UNKNOWN = 0,
I40E_QUEUE_CTRL_ENABLE,
I40E_QUEUE_CTRL_ENABLECHECK,
I40E_QUEUE_CTRL_DISABLE,
I40E_QUEUE_CTRL_DISABLECHECK,
I40E_QUEUE_CTRL_FASTDISABLE,
I40E_QUEUE_CTRL_FASTDISABLECHECK,
};
/* VF states */
enum i40e_vf_states {
I40E_VF_STAT_INIT = 0,
I40E_VF_STAT_ACTIVE,
I40E_VF_STAT_FCOEENA,
I40E_VF_STAT_DISABLED,
};
/* VF capabilities */
enum i40e_vf_capabilities {
I40E_VIRTCHNL_VF_CAP_PRIVILEGE = 0,
I40E_VIRTCHNL_VF_CAP_L2,
};
/* VF information structure */
struct i40e_vf {
struct i40e_pf *pf;
/* vf id in the pf space */
u16 vf_id;
/* all vf vsis connect to the same parent */
enum i40e_switch_element_types parent_type;
/* vf Port Extender (PE) stag if used */
u16 stag;
struct i40e_virtchnl_ether_addr default_lan_addr;
struct i40e_virtchnl_ether_addr default_fcoe_addr;
/* VSI indices - actual VSI pointers are maintained in the PF structure.
* When assigned, these will be non-zero, because VSI 0 is always
* the main LAN VSI for the PF.
*/
u8 lan_vsi_index; /* index into PF struct */
u8 lan_vsi_id; /* ID as used by firmware */
u8 num_queue_pairs; /* num of qps assigned to vf vsis */
u64 num_mdd_events; /* num of mdd events detected */
u64 num_invalid_msgs; /* num of malformed or invalid msgs detected */
u64 num_valid_msgs; /* num of valid msgs detected */
unsigned long vf_caps; /* vf's adv. capabilities */
unsigned long vf_states; /* vf's runtime states */
};
void i40e_free_vfs(struct i40e_pf *pf);
int i40e_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen);
int i40e_vc_process_vflr_event(struct i40e_pf *pf);
int i40e_reset_vf(struct i40e_vf *vf, bool flr);
void i40e_vc_notify_vf_reset(struct i40e_vf *vf);
/* vf configuration related iplink handlers */
int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac);
int i40e_ndo_set_vf_port_vlan(struct net_device *netdev,
int vf_id, u16 vlan_id, u8 qos);
int i40e_ndo_set_vf_bw(struct net_device *netdev, int vf_id, int tx_rate);
int i40e_ndo_get_vf_config(struct net_device *netdev,
int vf_id, struct ifla_vf_info *ivi);
void i40e_vc_notify_link_state(struct i40e_pf *pf);
void i40e_vc_notify_reset(struct i40e_pf *pf);
#endif /* _I40E_VIRTCHNL_PF_H_ */