Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6

* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6: (116 commits)
  sk98lin: planned removal
  AT91: MACB support
  sky2: version 1.12
  sky2: add new chip ids
  sky2: Yukon Extreme support
  sky2: safer transmit timeout
  sky2: TSO support for EC_U
  sky2: use dev_err for error reports
  sky2: add Wake On Lan support
  fix unaligned exception in /drivers/net/wireless/orinoco.c
  Remove unused kernel config option DLCI_COUNT
  z85230: spinlock logic
  mips: declance: Driver model for the PMAD-A
  Spidernet: Rework RX linked list
  NET: turn local_save_flags() + local_irq_disable() into local_irq_save()
  NET-3c59x: turn local_save_flags() + local_irq_disable() into local_irq_save()
  hp100: convert pci_module_init() to pci_register_driver()
  NetXen: Added ethtool support for user level tools.
  NetXen: Firmware crb init changes.
  maintainers: add atl1 maintainers
  ...
commit 7677ced48e
Author: Linus Torvalds
Date:   2007-02-07 19:21:56 -08:00

125 changed files with 25610 additions and 5289 deletions

@@ -333,3 +333,10 @@ Why: Unmaintained for years, superceded by JFFS2 for years.
 Who:	Jeff Garzik <jeff@garzik.org>

 ---------------------------
+What:	sk98lin network driver
+When:	July 2007
+Why:	In kernel tree version of driver is unmaintained. Sk98lin driver
+	replaced by the skge driver.
+Who:	Stephen Hemminger <shemminger@osdl.org>
+
+---------------------------

@@ -598,6 +598,16 @@ M: ecashin@coraid.com
 W:	http://www.coraid.com/support/linux
 S:	Supported

+ATL1 ETHERNET DRIVER
+P:	Jay Cliburn
+M:	jcliburn@gmail.com
+P:	Chris Snook
+M:	csnook@redhat.com
+L:	atl1-devel@lists.sourceforge.net
+W:	http://sourceforge.net/projects/atl1
+W:	http://atl1.sourceforge.net
+S:	Maintained
+
 ATM
 P:	Chas Williams
 M:	chas@cmf.nrl.navy.mil
@@ -2485,6 +2495,12 @@ L: orinoco-devel@lists.sourceforge.net
 W:	http://www.nongnu.org/orinoco/
 S:	Maintained

+PA SEMI ETHERNET DRIVER
+P:	Olof Johansson
+M:	olof@lixom.net
+L:	netdev@vger.kernel.org
+S:	Maintained
+
 PARALLEL PORT SUPPORT
 P:	Phil Blundell
 M:	philb@gnu.org
@@ -2654,7 +2670,7 @@ S: Supported
 PRISM54 WIRELESS DRIVER
 P:	Prism54 Development Team
-M:	prism54-private@prism54.org
+M:	developers@islsm.org
 L:	netdev@vger.kernel.org
 W:	http://prism54.org
 S:	Maintained

@@ -792,8 +792,7 @@ static void poll_vortex(struct net_device *dev)
 {
 	struct vortex_private *vp = netdev_priv(dev);
 	unsigned long flags;
-	local_save_flags(flags);
-	local_irq_disable();
+	local_irq_save(flags);
 	(vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
 	local_irq_restore(flags);
 }

@@ -190,7 +190,7 @@ config MII

 config MACB
 	tristate "Atmel MACB support"
-	depends on NET_ETHERNET && AVR32
+	depends on NET_ETHERNET && (AVR32 || ARCH_AT91SAM9260 || ARCH_AT91SAM9263)
 	select MII
 	help
 	  The Atmel MACB ethernet interface is found on many AT32 and AT91
@@ -235,16 +235,6 @@ config BMAC
 	  To compile this driver as a module, choose M here: the module
 	  will be called bmac.

-config OAKNET
-	tristate "National DP83902AV (Oak ethernet) support"
-	depends on NET_ETHERNET && PPC && BROKEN
-	select CRC32
-	help
-	  Say Y if your machine has this type of Ethernet network card.
-
-	  To compile this driver as a module, choose M here: the module
-	  will be called oaknet.
-
 config ARIADNE
 	tristate "Ariadne support"
 	depends on NET_ETHERNET && ZORRO
@@ -1155,21 +1145,6 @@ config SEEQ8005
 	  <file:Documentation/networking/net-modules.txt>. The module
 	  will be called seeq8005.

-config SKMC
-	tristate "SKnet MCA support"
-	depends on NET_ETHERNET && MCA && BROKEN
-	---help---
-	  These are Micro Channel Ethernet adapters. You need to say Y to "MCA
-	  support" in order to use this driver. Supported cards are the SKnet
-	  Junior MC2 and the SKnet MC2(+). The driver automatically
-	  distinguishes between the two cards. Note that using multiple boards
-	  of different type hasn't been tested with this driver. Say Y if you
-	  have one of these Ethernet adapters.
-
-	  To compile this driver as a module, choose M here and read
-	  <file:Documentation/networking/net-modules.txt>. The module
-	  will be called sk_mca.
-
 config NE2_MCA
 	tristate "NE/2 (ne2000 MCA version) support"
 	depends on NET_ETHERNET && MCA_LEGACY
@@ -1788,6 +1763,18 @@ config LAN_SAA9730
 	  workstations.
 	  See <http://www.semiconductors.philips.com/pip/SAA9730_flyer_1>.

+config SC92031
+	tristate "Silan SC92031 PCI Fast Ethernet Adapter driver (EXPERIMENTAL)"
+	depends on NET_PCI && PCI && EXPERIMENTAL
+	select CRC32
+	---help---
+	  This is a driver for the Fast Ethernet PCI network cards based on
+	  the Silan SC92031 chip (sometimes also called Rsltek 8139D). If you
+	  have one of these, say Y here.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called sc92031. This is recommended.
+
 config NET_POCKET
 	bool "Pocket and portable adapters"
 	depends on NET_ETHERNET && PARPORT
@@ -2392,6 +2379,24 @@ config CHELSIO_T1_NAPI
 	  NAPI is a driver API designed to reduce CPU and interrupt load
 	  when the driver is receiving lots of packets from the card.

+config CHELSIO_T3
+	tristate "Chelsio Communications T3 10Gb Ethernet support"
+	depends on PCI
+	help
+	  This driver supports Chelsio T3-based gigabit and 10Gb Ethernet
+	  adapters.
+
+	  For general information about Chelsio and our products, visit
+	  our website at <http://www.chelsio.com>.
+
+	  For customer support, please visit our customer support page at
+	  <http://www.chelsio.com/support.htm>.
+
+	  Please send feedback to <linux-bugs@chelsio.com>.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called cxgb3.
+
 config EHEA
 	tristate "eHEA Ethernet support"
 	depends on IBMEBUS
@@ -2488,6 +2493,13 @@ config NETXEN_NIC
 	help
 	  This enables the support for NetXen's Gigabit Ethernet card.

+config PASEMI_MAC
+	tristate "PA Semi 1/10Gbit MAC"
+	depends on PPC64 && PCI
+	help
+	  This driver supports the on-chip 1/10Gbit Ethernet controller on
+	  PA Semi's PWRficient line of chips.
+
 endmenu

 source "drivers/net/tokenring/Kconfig"
@@ -2541,6 +2553,7 @@ config DEFXX
 config SKFP
 	tristate "SysKonnect FDDI PCI support"
 	depends on FDDI && PCI
+	select BITREVERSE
 	---help---
 	  Say Y here if you have a SysKonnect FDDI PCI adapter.
 	  The following adapters are supported by this driver:

@@ -6,6 +6,7 @@ obj-$(CONFIG_E1000) += e1000/
 obj-$(CONFIG_IBM_EMAC) += ibm_emac/
 obj-$(CONFIG_IXGB) += ixgb/
 obj-$(CONFIG_CHELSIO_T1) += chelsio/
+obj-$(CONFIG_CHELSIO_T3) += cxgb3/
 obj-$(CONFIG_EHEA) += ehea/
 obj-$(CONFIG_BONDING) += bonding/
 obj-$(CONFIG_GIANFAR) += gianfar_driver.o
@@ -36,8 +37,6 @@ obj-$(CONFIG_CASSINI) += cassini.o
 obj-$(CONFIG_MACE) += mace.o
 obj-$(CONFIG_BMAC) += bmac.o

-obj-$(CONFIG_OAKNET) += oaknet.o 8390.o
-
 obj-$(CONFIG_DGRS) += dgrs.o
 obj-$(CONFIG_VORTEX) += 3c59x.o
 obj-$(CONFIG_TYPHOON) += typhoon.o
@@ -137,7 +136,6 @@ obj-$(CONFIG_AT1700) += at1700.o
 obj-$(CONFIG_EL1) += 3c501.o
 obj-$(CONFIG_EL16) += 3c507.o
 obj-$(CONFIG_ELMC) += 3c523.o
-obj-$(CONFIG_SKMC) += sk_mca.o
 obj-$(CONFIG_IBMLANA) += ibmlana.o
 obj-$(CONFIG_ELMC_II) += 3c527.o
 obj-$(CONFIG_EL3) += 3c509.o
@@ -160,6 +158,7 @@ obj-$(CONFIG_APRICOT) += 82596.o
 obj-$(CONFIG_LASI_82596) += lasi_82596.o
 obj-$(CONFIG_MVME16x_NET) += 82596.o
 obj-$(CONFIG_BVME6000_NET) += 82596.o
+obj-$(CONFIG_SC92031) += sc92031.o

 # This is also a 82596 and should probably be merged
 obj-$(CONFIG_LP486E) += lp486e.o
@@ -196,6 +195,7 @@ obj-$(CONFIG_SMC91X) += smc91x.o
 obj-$(CONFIG_SMC911X) += smc911x.o
 obj-$(CONFIG_DM9000) += dm9000.o
 obj-$(CONFIG_FEC_8XX) += fec_8xx/
+obj-$(CONFIG_PASEMI_MAC) += pasemi_mac.o

 obj-$(CONFIG_MACB) += macb.o

@@ -59,7 +59,6 @@ extern struct net_device *wavelan_probe(int unit);
 extern struct net_device *arlan_probe(int unit);
 extern struct net_device *el16_probe(int unit);
 extern struct net_device *elmc_probe(int unit);
-extern struct net_device *skmca_probe(int unit);
 extern struct net_device *elplus_probe(int unit);
 extern struct net_device *ac3200_probe(int unit);
 extern struct net_device *es_probe(int unit);
@@ -152,9 +151,6 @@ static struct devprobe2 mca_probes[] __initdata = {
 #endif
 #ifdef CONFIG_ELMC_II		/* 3c527 */
 	{mc32_probe, 0},
-#endif
-#ifdef CONFIG_SKMC		/* SKnet Microchannel */
-	{skmca_probe, 0},
 #endif
 	{NULL, 0},
 };

@@ -1334,8 +1334,7 @@ err_no_interrupt:
 static void amd8111e_poll(struct net_device *dev)
 {
 	unsigned long flags;
-	local_save_flags(flags);
-	local_irq_disable();
+	local_irq_save(flags);
 	amd8111e_interrupt(0, dev);
 	local_irq_restore(flags);
 }

@@ -721,7 +721,7 @@ static void b44_recycle_rx(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
 	struct ring_info *src_map, *dest_map;
 	struct rx_header *rh;
 	int dest_idx;
-	u32 ctrl;
+	__le32 ctrl;

 	dest_idx = dest_idx_unmasked & (B44_RX_RING_SIZE - 1);
 	dest_desc = &bp->rx_ring[dest_idx];
@@ -783,7 +783,7 @@ static int b44_rx(struct b44 *bp, int budget)
 					    RX_PKT_BUF_SZ,
 					    PCI_DMA_FROMDEVICE);
 		rh = (struct rx_header *) skb->data;
-		len = cpu_to_le16(rh->len);
+		len = le16_to_cpu(rh->len);
 		if ((len > (RX_PKT_BUF_SZ - bp->rx_offset)) ||
 		    (rh->flags & cpu_to_le16(RX_FLAG_ERRORS))) {
 		drop_it:
@@ -799,7 +799,7 @@ static int b44_rx(struct b44 *bp, int budget)
 			do {
 				udelay(2);
 				barrier();
-				len = cpu_to_le16(rh->len);
+				len = le16_to_cpu(rh->len);
 			} while (len == 0 && i++ < 5);
 			if (len == 0)
 				goto drop_it;
@@ -2061,7 +2061,7 @@ out:
 static int b44_read_eeprom(struct b44 *bp, u8 *data)
 {
 	long i;
-	u16 *ptr = (u16 *) data;
+	__le16 *ptr = (__le16 *) data;

 	for (i = 0; i < 128; i += 2)
 		ptr[i / 2] = cpu_to_le16(readw(bp->regs + 4096 + i));

@@ -308,8 +308,8 @@
 #define  MII_TLEDCTRL_ENABLE	0x0040

 struct dma_desc {
-	u32	ctrl;
-	u32	addr;
+	__le32	ctrl;
+	__le32	addr;
 };

 /* There are only 12 bits in the DMA engine for descriptor offsetting
@@ -327,9 +327,9 @@ struct dma_desc {
 #define RX_COPY_THRESHOLD	256

 struct rx_header {
-	u16	len;
-	u16	flags;
-	u16	pad[12];
+	__le16	len;
+	__le16	flags;
+	__le16	pad[12];
 };

 #define RX_HEADER_LEN	28

@@ -18,6 +18,7 @@
 #include <linux/init.h>
 #include <linux/spinlock.h>
 #include <linux/crc32.h>
+#include <linux/bitrev.h>
 #include <asm/prom.h>
 #include <asm/dbdma.h>
 #include <asm/io.h>
@@ -140,7 +141,6 @@ static unsigned char *bmac_emergency_rxbuf;
 	+ (N_RX_RING + N_TX_RING + 4) * sizeof(struct dbdma_cmd) \
 	+ sizeof(struct sk_buff_head))

-static unsigned char bitrev(unsigned char b);
 static int bmac_open(struct net_device *dev);
 static int bmac_close(struct net_device *dev);
 static int bmac_transmit_packet(struct sk_buff *skb, struct net_device *dev);
@@ -586,18 +586,6 @@ bmac_construct_rxbuff(struct sk_buff *skb, volatile struct dbdma_cmd *cp)
 		     virt_to_bus(addr), 0);
 }

-/* Bit-reverse one byte of an ethernet hardware address. */
-static unsigned char
-bitrev(unsigned char b)
-{
-	int d = 0, i;
-
-	for (i = 0; i < 8; ++i, b >>= 1)
-		d = (d << 1) | (b & 1);
-	return d;
-}
-
 static void
 bmac_init_tx_ring(struct bmac_data *bp)
 {
@@ -1224,8 +1212,8 @@ bmac_get_station_address(struct net_device *dev, unsigned char *ea)
 	{
 		reset_and_select_srom(dev);
 		data = read_srom(dev, i + EnetAddressOffset/2, SROMAddressBits);
-		ea[2*i]   = bitrev(data & 0x0ff);
-		ea[2*i+1] = bitrev((data >> 8) & 0x0ff);
+		ea[2*i]   = bitrev8(data & 0x0ff);
+		ea[2*i+1] = bitrev8((data >> 8) & 0x0ff);
 	}
 }

@@ -1315,7 +1303,7 @@ static int __devinit bmac_probe(struct macio_dev *mdev, const struct of_device_i
 	rev = addr[0] == 0 && addr[1] == 0xA0;
 	for (j = 0; j < 6; ++j)
-		dev->dev_addr[j] = rev? bitrev(addr[j]): addr[j];
+		dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j];

 	/* Enable chip without interrupts for now */
 	bmac_enable_and_reset_chip(dev);

@@ -39,12 +39,9 @@
 #include <linux/if_vlan.h>
 #define BCM_VLAN 1
 #endif
-#ifdef NETIF_F_TSO
 #include <net/ip.h>
 #include <net/tcp.h>
 #include <net/checksum.h>
-#define BCM_TSO 1
-#endif
 #include <linux/workqueue.h>
 #include <linux/crc32.h>
 #include <linux/prefetch.h>
@@ -1728,7 +1725,7 @@ bnx2_tx_int(struct bnx2 *bp)
 		tx_buf = &bp->tx_buf_ring[sw_ring_cons];
 		skb = tx_buf->skb;
-#ifdef BCM_TSO
+
 		/* partial BD completions possible with TSO packets */
 		if (skb_is_gso(skb)) {
 			u16 last_idx, last_ring_idx;
@@ -1744,7 +1741,7 @@ bnx2_tx_int(struct bnx2 *bp)
 					break;
 			}
 		}
-#endif
+
 		pci_unmap_single(bp->pdev, pci_unmap_addr(tx_buf, mapping),
 			skb_headlen(skb), PCI_DMA_TODEVICE);
@@ -4514,7 +4511,6 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		vlan_tag_flags |=
 			(TX_BD_FLAGS_VLAN_TAG | (vlan_tx_tag_get(skb) << 16));
 	}
-#ifdef BCM_TSO
 	if ((mss = skb_shinfo(skb)->gso_size) &&
 		(skb->len > (bp->dev->mtu + ETH_HLEN))) {
 		u32 tcp_opt_len, ip_tcp_len;
@@ -4547,7 +4543,6 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 	}
 	else
-#endif
 	{
 		mss = 0;
 	}
@@ -5544,10 +5539,8 @@ static const struct ethtool_ops bnx2_ethtool_ops = {
 	.set_tx_csum		= ethtool_op_set_tx_csum,
 	.get_sg			= ethtool_op_get_sg,
 	.set_sg			= ethtool_op_set_sg,
-#ifdef BCM_TSO
 	.get_tso		= ethtool_op_get_tso,
 	.set_tso		= bnx2_set_tso,
-#endif
 	.self_test_count	= bnx2_self_test_count,
 	.self_test		= bnx2_self_test,
 	.get_strings		= bnx2_get_strings,
@@ -6104,9 +6097,7 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 #ifdef BCM_VLAN
 	dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
 #endif
-#ifdef BCM_TSO
 	dev->features |= NETIF_F_TSO | NETIF_F_TSO_ECN;
-#endif

 	netif_carrier_off(bp->dev);

@@ -4704,6 +4704,7 @@ static int bond_check_params(struct bond_params *params)
 static struct lock_class_key bonding_netdev_xmit_lock_key;

 /* Create a new bond based on the specified name and bonding parameters.
+ * If name is NULL, obtain a suitable "bond%d" name for us.
  * Caller must NOT hold rtnl_lock; we need to release it here before we
  * set up our sysfs entries.
  */
@@ -4713,7 +4714,8 @@ int bond_create(char *name, struct bond_params *params, struct bonding **newbond
 	int res;

 	rtnl_lock();
-	bond_dev = alloc_netdev(sizeof(struct bonding), name, ether_setup);
+	bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
+				ether_setup);
 	if (!bond_dev) {
 		printk(KERN_ERR DRV_NAME
 		       ": %s: eek! can't alloc netdev!\n",
@@ -4722,6 +4724,12 @@ int bond_create(char *name, struct bond_params *params, struct bonding **newbond
 		goto out_rtnl;
 	}

+	if (!name) {
+		res = dev_alloc_name(bond_dev, "bond%d");
+		if (res < 0)
+			goto out_netdev;
+	}
+
 	/* bond_init() must be called after dev_alloc_name() (for the
 	 * /proc files), but before register_netdevice(), because we
 	 * need to set function pointers.
@@ -4748,14 +4756,19 @@ int bond_create(char *name, struct bond_params *params, struct bonding **newbond

 	rtnl_unlock(); /* allows sysfs registration of net device */
 	res = bond_create_sysfs_entry(bond_dev->priv);
-	goto done;
+	if (res < 0) {
+		rtnl_lock();
+		goto out_bond;
+	}
+
+	return 0;
+
 out_bond:
 	bond_deinit(bond_dev);
 out_netdev:
 	free_netdev(bond_dev);
 out_rtnl:
 	rtnl_unlock();
-done:
 	return res;
 }

@@ -4763,7 +4776,6 @@ static int __init bonding_init(void)
 {
 	int i;
 	int res;
-	char new_bond_name[8];  /* Enough room for 999 bonds at init. */

 	printk(KERN_INFO "%s", version);

@@ -4776,8 +4788,7 @@ static int __init bonding_init(void)
 	bond_create_proc_dir();
 #endif
 	for (i = 0; i < max_bonds; i++) {
-		sprintf(new_bond_name, "bond%d",i);
-		res = bond_create(new_bond_name,&bonding_defaults, NULL);
+		res = bond_create(NULL, &bonding_defaults, NULL);
 		if (res)
 			goto err;
 	}

@@ -1372,6 +1372,21 @@ int bond_create_sysfs(void)
 		return -ENODEV;

 	ret = class_create_file(netdev_class, &class_attr_bonding_masters);
+	/*
+	 * Permit multiple loads of the module by ignoring failures to
+	 * create the bonding_masters sysfs file.  Bonding devices
+	 * created by second or subsequent loads of the module will
+	 * not be listed in, or controllable by, bonding_masters, but
+	 * will have the usual "bonding" sysfs directory.
+	 *
+	 * This is done to preserve backwards compatibility for
+	 * initscripts/sysconfig, which load bonding multiple times to
+	 * configure multiple bonding devices.
+	 */
+	if (ret == -EEXIST) {
+		netdev_class = NULL;
+		return 0;
+	}

 	return ret;

@@ -22,8 +22,8 @@
 #include "bond_3ad.h"
 #include "bond_alb.h"

-#define DRV_VERSION	"3.1.1"
-#define DRV_RELDATE	"September 26, 2006"
+#define DRV_VERSION	"3.1.2"
+#define DRV_RELDATE	"January 20, 2007"
 #define DRV_NAME	"bonding"
 #define DRV_DESCRIPTION	"Ethernet Channel Bonding Driver"

@@ -237,12 +237,13 @@ static inline struct bonding *bond_get_bond_by_slave(struct slave *slave)
 #define BOND_ARP_VALIDATE_ALL		(BOND_ARP_VALIDATE_ACTIVE | \
 					 BOND_ARP_VALIDATE_BACKUP)

-extern inline int slave_do_arp_validate(struct bonding *bond, struct slave *slave)
+static inline int slave_do_arp_validate(struct bonding *bond,
+					struct slave *slave)
 {
 	return bond->params.arp_validate & (1 << slave->state);
 }

-extern inline unsigned long slave_last_rx(struct bonding *bond,
+static inline unsigned long slave_last_rx(struct bonding *bond,
 					  struct slave *slave)
 {
 	if (slave_do_arp_validate(bond, slave))

@@ -324,7 +324,7 @@ struct board_info {
 	unsigned char           mdio_phybaseaddr;
 	struct gmac            *gmac;
 	struct gphy            *gphy;
-	struct mdio_ops        *mdio_ops;
+	struct mdio_ops        *mdio_ops;
 	const char             *desc;
 };

@@ -103,7 +103,7 @@ enum CPL_opcode {
 	CPL_MIGRATE_C2T_RPL   = 0xDD,
 	CPL_ERROR             = 0xD7,

 	/* internal: driver -> TOM */
-	CPL_MSS_CHANGE        = 0xE1
+	CPL_MSS_CHANGE        = 0xE1
 };

@@ -159,8 +159,8 @@ enum {	// TX_PKT_LSO ethernet types
 };

 union opcode_tid {
-	u32 opcode_tid;
-	u8 opcode;
+	u32 opcode_tid;
+	u8 opcode;
 };

@@ -234,7 +234,7 @@ struct cpl_pass_accept_req {
 	u32 local_ip;
 	u32 peer_ip;
 	u32 tos_tid;
-	struct tcp_options tcp_options;
+	struct tcp_options tcp_options;
 	u8 dst_mac[6];
 	u16 vlan_tag;
 	u8 src_mac[6];
@@ -250,12 +250,12 @@ struct cpl_pass_accept_rpl {
 	u32 peer_ip;
 	u32 opt0h;
 	union {
-		u32 opt0l;
-		struct {
-			u8 rsvd[3];
-			u8 status;
-		};
-	};
+		u32 opt0l;
+		struct {
+			u8 rsvd[3];
+			u8 status;
+		};
+	};
 };

 struct cpl_act_open_req {
struct cpl_act_open_req { struct cpl_act_open_req {

@ -69,14 +69,14 @@ static inline void cancel_mac_stats_update(struct adapter *ap)
cancel_delayed_work(&ap->stats_update_task); cancel_delayed_work(&ap->stats_update_task);
} }
#define MAX_CMDQ_ENTRIES 16384 #define MAX_CMDQ_ENTRIES 16384
#define MAX_CMDQ1_ENTRIES 1024 #define MAX_CMDQ1_ENTRIES 1024
#define MAX_RX_BUFFERS 16384 #define MAX_RX_BUFFERS 16384
#define MAX_RX_JUMBO_BUFFERS 16384 #define MAX_RX_JUMBO_BUFFERS 16384
#define MAX_TX_BUFFERS_HIGH 16384U #define MAX_TX_BUFFERS_HIGH 16384U
#define MAX_TX_BUFFERS_LOW 1536U #define MAX_TX_BUFFERS_LOW 1536U
#define MAX_TX_BUFFERS 1460U #define MAX_TX_BUFFERS 1460U
#define MIN_FL_ENTRIES 32 #define MIN_FL_ENTRIES 32
#define DFLT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK | \ #define DFLT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK | \
NETIF_MSG_TIMER | NETIF_MSG_IFDOWN | NETIF_MSG_IFUP |\ NETIF_MSG_TIMER | NETIF_MSG_IFDOWN | NETIF_MSG_IFUP |\
@ -143,7 +143,7 @@ static void link_report(struct port_info *p)
case SPEED_100: s = "100Mbps"; break; case SPEED_100: s = "100Mbps"; break;
} }
printk(KERN_INFO "%s: link up, %s, %s-duplex\n", printk(KERN_INFO "%s: link up, %s, %s-duplex\n",
p->dev->name, s, p->dev->name, s,
p->link_config.duplex == DUPLEX_FULL ? "full" : "half"); p->link_config.duplex == DUPLEX_FULL ? "full" : "half");
} }
@ -233,7 +233,7 @@ static int cxgb_up(struct adapter *adapter)
t1_sge_start(adapter->sge); t1_sge_start(adapter->sge);
t1_interrupts_enable(adapter); t1_interrupts_enable(adapter);
out_err: out_err:
return err; return err;
} }
@ -454,51 +454,21 @@ static void get_stats(struct net_device *dev, struct ethtool_stats *stats,
const struct cmac_statistics *s; const struct cmac_statistics *s;
const struct sge_intr_counts *t; const struct sge_intr_counts *t;
struct sge_port_stats ss; struct sge_port_stats ss;
unsigned int len;
s = mac->ops->statistics_update(mac, MAC_STATS_UPDATE_FULL); s = mac->ops->statistics_update(mac, MAC_STATS_UPDATE_FULL);
*data++ = s->TxOctetsOK; len = sizeof(u64)*(&s->TxFCSErrors + 1 - &s->TxOctetsOK);
*data++ = s->TxOctetsBad; memcpy(data, &s->TxOctetsOK, len);
*data++ = s->TxUnicastFramesOK; data += len;
*data++ = s->TxMulticastFramesOK;
*data++ = s->TxBroadcastFramesOK;
*data++ = s->TxPauseFrames;
*data++ = s->TxFramesWithDeferredXmissions;
*data++ = s->TxLateCollisions;
*data++ = s->TxTotalCollisions;
*data++ = s->TxFramesAbortedDueToXSCollisions;
*data++ = s->TxUnderrun;
*data++ = s->TxLengthErrors;
*data++ = s->TxInternalMACXmitError;
*data++ = s->TxFramesWithExcessiveDeferral;
*data++ = s->TxFCSErrors;
*data++ = s->RxOctetsOK; len = sizeof(u64)*(&s->RxFrameTooLongErrors + 1 - &s->RxOctetsOK);
*data++ = s->RxOctetsBad; memcpy(data, &s->RxOctetsOK, len);
*data++ = s->RxUnicastFramesOK; data += len;
*data++ = s->RxMulticastFramesOK;
*data++ = s->RxBroadcastFramesOK;
*data++ = s->RxPauseFrames;
*data++ = s->RxFCSErrors;
*data++ = s->RxAlignErrors;
*data++ = s->RxSymbolErrors;
*data++ = s->RxDataErrors;
*data++ = s->RxSequenceErrors;
*data++ = s->RxRuntErrors;
*data++ = s->RxJabberErrors;
*data++ = s->RxInternalMACRcvError;
*data++ = s->RxInRangeLengthErrors;
*data++ = s->RxOutOfRangeLengthField;
*data++ = s->RxFrameTooLongErrors;
t1_sge_get_port_stats(adapter->sge, dev->if_port, &ss); t1_sge_get_port_stats(adapter->sge, dev->if_port, &ss);
*data++ = ss.rx_packets; memcpy(data, &ss, sizeof(ss));
*data++ = ss.rx_cso_good; data += sizeof(ss);
*data++ = ss.tx_packets;
*data++ = ss.tx_cso;
*data++ = ss.tx_tso;
*data++ = ss.vlan_xtract;
*data++ = ss.vlan_insert;
t = t1_sge_get_intr_counts(adapter->sge); t = t1_sge_get_intr_counts(adapter->sge);
*data++ = t->rx_drops; *data++ = t->rx_drops;
@ -749,7 +719,7 @@ static int set_sge_param(struct net_device *dev, struct ethtool_ringparam *e)
return -EINVAL; return -EINVAL;
if (adapter->flags & FULL_INIT_DONE) if (adapter->flags & FULL_INIT_DONE)
return -EBUSY; return -EBUSY;
adapter->params.sge.freelQ_size[!jumbo_fl] = e->rx_pending; adapter->params.sge.freelQ_size[!jumbo_fl] = e->rx_pending;
adapter->params.sge.freelQ_size[jumbo_fl] = e->rx_jumbo_pending; adapter->params.sge.freelQ_size[jumbo_fl] = e->rx_jumbo_pending;
@ -764,7 +734,7 @@ static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
struct adapter *adapter = dev->priv; struct adapter *adapter = dev->priv;
adapter->params.sge.rx_coalesce_usecs = c->rx_coalesce_usecs; adapter->params.sge.rx_coalesce_usecs = c->rx_coalesce_usecs;
adapter->params.sge.coalesce_enable = c->use_adaptive_rx_coalesce; adapter->params.sge.coalesce_enable = c->use_adaptive_rx_coalesce;
adapter->params.sge.sample_interval_usecs = c->rate_sample_interval; adapter->params.sge.sample_interval_usecs = c->rate_sample_interval;
t1_sge_set_coalesce_params(adapter->sge, &adapter->params.sge); t1_sge_set_coalesce_params(adapter->sge, &adapter->params.sge);
return 0; return 0;
@ -782,9 +752,9 @@ static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
static int get_eeprom_len(struct net_device *dev) static int get_eeprom_len(struct net_device *dev)
{ {
struct adapter *adapter = dev->priv; struct adapter *adapter = dev->priv;
return t1_is_asic(adapter) ? EEPROM_SIZE : 0; return t1_is_asic(adapter) ? EEPROM_SIZE : 0;
} }
#define EEPROM_MAGIC(ap) \ #define EEPROM_MAGIC(ap) \
@ -848,7 +818,7 @@ static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
u32 val; u32 val;
if (!phy->mdio_read) if (!phy->mdio_read)
return -EOPNOTSUPP; return -EOPNOTSUPP;
phy->mdio_read(adapter, data->phy_id, 0, data->reg_num & 0x1f, phy->mdio_read(adapter, data->phy_id, 0, data->reg_num & 0x1f,
&val); &val);
data->val_out = val; data->val_out = val;
@ -860,7 +830,7 @@ static int t1_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
if (!capable(CAP_NET_ADMIN)) if (!capable(CAP_NET_ADMIN))
return -EPERM; return -EPERM;
if (!phy->mdio_write) if (!phy->mdio_write)
return -EOPNOTSUPP; return -EOPNOTSUPP;
phy->mdio_write(adapter, data->phy_id, 0, data->reg_num & 0x1f, phy->mdio_write(adapter, data->phy_id, 0, data->reg_num & 0x1f,
data->val_in); data->val_in);
break; break;
@@ -879,9 +849,9 @@ static int t1_change_mtu(struct net_device *dev, int new_mtu)
 	struct cmac *mac = adapter->port[dev->if_port].mac;
 	if (!mac->ops->set_mtu)
 		return -EOPNOTSUPP;
 	if (new_mtu < 68)
 		return -EINVAL;
 	if ((ret = mac->ops->set_mtu(mac, new_mtu)))
 		return ret;
 	dev->mtu = new_mtu;

@@ -1211,9 +1181,9 @@ static int __devinit init_one(struct pci_dev *pdev,
 	return 0;
 out_release_adapter_res:
 	t1_free_sw_modules(adapter);
 out_free_dev:
 	if (adapter) {
 		if (adapter->regs)
 			iounmap(adapter->regs);

@@ -1222,7 +1192,7 @@ static int __devinit init_one(struct pci_dev *pdev,
 			free_netdev(adapter->port[i].dev);
 	}
 	pci_release_regions(pdev);
 out_disable_pdev:
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
 	return err;
@@ -1273,28 +1243,27 @@ static int t1_clock(struct adapter *adapter, int mode)
 	int M_MEM_VAL;
 	enum {
 		M_CORE_BITS = 9,
 		T_CORE_VAL = 0,
 		T_CORE_BITS = 2,
 		N_CORE_VAL = 0,
 		N_CORE_BITS = 2,
 		M_MEM_BITS = 9,
 		T_MEM_VAL = 0,
 		T_MEM_BITS = 2,
 		N_MEM_VAL = 0,
 		N_MEM_BITS = 2,
 		NP_LOAD = 1 << 17,
 		S_LOAD_MEM = 1 << 5,
 		S_LOAD_CORE = 1 << 6,
 		S_CLOCK = 1 << 3
 	};
 	if (!t1_is_T1B(adapter))
 		return -ENODEV; /* Can't re-clock this chip. */
-	if (mode & 2) {
+	if (mode & 2)
 		return 0; /* show current mode. */
-	}
 	if ((adapter->t1powersave & 1) == (mode & 1))
 		return -EALREADY; /* ASIC already running in mode. */
@@ -1386,26 +1355,26 @@ static inline void t1_sw_reset(struct pci_dev *pdev)
 static void __devexit remove_one(struct pci_dev *pdev)
 {
 	struct net_device *dev = pci_get_drvdata(pdev);
-	if (dev) {
-		int i;
-		struct adapter *adapter = dev->priv;
-		for_each_port(adapter, i)
-			if (test_bit(i, &adapter->registered_device_map))
-				unregister_netdev(adapter->port[i].dev);
-		t1_free_sw_modules(adapter);
-		iounmap(adapter->regs);
-		while (--i >= 0)
-			if (adapter->port[i].dev)
-				free_netdev(adapter->port[i].dev);
-		pci_release_regions(pdev);
-		pci_disable_device(pdev);
-		pci_set_drvdata(pdev, NULL);
-		t1_sw_reset(pdev);
-	}
+	struct adapter *adapter = dev->priv;
+	int i;
+
+	for_each_port(adapter, i) {
+		if (test_bit(i, &adapter->registered_device_map))
+			unregister_netdev(adapter->port[i].dev);
+	}
+
+	t1_free_sw_modules(adapter);
+	iounmap(adapter->regs);
+
+	while (--i >= 0) {
+		if (adapter->port[i].dev)
+			free_netdev(adapter->port[i].dev);
+	}
+
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+	t1_sw_reset(pdev);
 }

 static struct pci_driver driver = {


@@ -46,14 +46,14 @@ enum {
 };

 /* ELMER0 registers */
 #define A_ELMER0_VERSION	0x100000
 #define A_ELMER0_PHY_CFG	0x100004
 #define A_ELMER0_INT_ENABLE	0x100008
 #define A_ELMER0_INT_CAUSE	0x10000c
 #define A_ELMER0_GPI_CFG	0x100010
 #define A_ELMER0_GPI_STAT	0x100014
 #define A_ELMER0_GPO		0x100018
 #define A_ELMER0_PORT0_MI1_CFG	0x400000
 #define S_MI1_MDI_ENABLE	0
 #define V_MI1_MDI_ENABLE(x)	((x) << S_MI1_MDI_ENABLE)

@@ -111,18 +111,18 @@ enum {
 #define V_MI1_OP_BUSY(x)	((x) << S_MI1_OP_BUSY)
 #define F_MI1_OP_BUSY		V_MI1_OP_BUSY(1U)
 #define A_ELMER0_PORT1_MI1_CFG	0x500000
 #define A_ELMER0_PORT1_MI1_ADDR	0x500004
 #define A_ELMER0_PORT1_MI1_DATA	0x500008
 #define A_ELMER0_PORT1_MI1_OP	0x50000c
 #define A_ELMER0_PORT2_MI1_CFG	0x600000
 #define A_ELMER0_PORT2_MI1_ADDR	0x600004
 #define A_ELMER0_PORT2_MI1_DATA	0x600008
 #define A_ELMER0_PORT2_MI1_OP	0x60000c
 #define A_ELMER0_PORT3_MI1_CFG	0x700000
 #define A_ELMER0_PORT3_MI1_ADDR	0x700004
 #define A_ELMER0_PORT3_MI1_DATA	0x700008
 #define A_ELMER0_PORT3_MI1_OP	0x70000c
 /* Simple bit definition for GPI and GP0 registers. */
 #define ELMER0_GP_BIT0	0x0001


@@ -202,9 +202,9 @@ static void espi_setup_for_pm3393(adapter_t *adapter)
 static void espi_setup_for_vsc7321(adapter_t *adapter)
 {
 	writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN0);
 	writel(0x1f401f4, adapter->regs + A_ESPI_SCH_TOKEN1);
 	writel(0x1f4, adapter->regs + A_ESPI_SCH_TOKEN2);
 	writel(0xa00, adapter->regs + A_ESPI_RX_FIFO_ALMOST_FULL_WATERMARK);
 	writel(0x1ff, adapter->regs + A_ESPI_RX_FIFO_ALMOST_EMPTY_WATERMARK);
 	writel(1, adapter->regs + A_ESPI_CALENDAR_LENGTH);

@@ -247,10 +247,10 @@ int t1_espi_init(struct peespi *espi, int mac_type, int nports)
 		writel(V_OUT_OF_SYNC_COUNT(4) |
 		       V_DIP2_PARITY_ERR_THRES(3) |
 		       V_DIP4_THRES(1), adapter->regs + A_ESPI_MISC_CONTROL);
 		writel(nports == 4 ? 0x200040 : 0x1000080,
 		       adapter->regs + A_ESPI_MAXBURST1_MAXBURST2);
 	} else
 		writel(0x800100, adapter->regs + A_ESPI_MAXBURST1_MAXBURST2);
 	if (mac_type == CHBT_MAC_PM3393)
 		espi_setup_for_pm3393(adapter);

@@ -301,7 +301,8 @@ void t1_espi_set_misc_ctrl(adapter_t *adapter, u32 val)
 {
 	struct peespi *espi = adapter->espi;
-	if (!is_T2(adapter)) return;
+	if (!is_T2(adapter))
+		return;
 	spin_lock(&espi->lock);
 	espi->misc_ctrl = (val & ~MON_MASK) |
 			  (espi->misc_ctrl & MON_MASK);
@@ -340,32 +341,31 @@ u32 t1_espi_get_mon(adapter_t *adapter, u32 addr, u8 wait)
  * compare with t1_espi_get_mon(), it reads espiInTxSop[0 ~ 3] in
  * one shot, since there is no per port counter on the out side.
  */
-int
-t1_espi_get_mon_t204(adapter_t *adapter, u32 *valp, u8 wait)
+int t1_espi_get_mon_t204(adapter_t *adapter, u32 *valp, u8 wait)
 {
 	struct peespi *espi = adapter->espi;
 	u8 i, nport = (u8)adapter->params.nports;
 	if (!wait) {
 		if (!spin_trylock(&espi->lock))
 			return -1;
 	} else
 		spin_lock(&espi->lock);
-	if ( (espi->misc_ctrl & MON_MASK) != F_MONITORED_DIRECTION ) {
+	if ((espi->misc_ctrl & MON_MASK) != F_MONITORED_DIRECTION) {
 		espi->misc_ctrl = (espi->misc_ctrl & ~MON_MASK) |
 				  F_MONITORED_DIRECTION;
 		writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);
 	}
 	for (i = 0 ; i < nport; i++, valp++) {
 		if (i) {
 			writel(espi->misc_ctrl | V_MONITORED_PORT_NUM(i),
 			       adapter->regs + A_ESPI_MISC_CONTROL);
 		}
 		*valp = readl(adapter->regs + A_ESPI_SCH_TOKEN3);
 	}
 	writel(espi->misc_ctrl, adapter->regs + A_ESPI_MISC_CONTROL);
 	spin_unlock(&espi->lock);
 	return 0;
 }
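The `wait` parameter of `t1_espi_get_mon_t204()` above selects between a non-blocking `spin_trylock()` that bails out with -1 on contention and a blocking `spin_lock()`. A minimal userspace sketch of the same entry logic, using a POSIX mutex in place of the kernel spinlock (the `acquire()`/`release()` helpers and the return convention are illustrative, not part of the driver):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the driver's entry logic: a caller that must not sleep tries
 * the lock once and reports -1 on contention; a caller that may sleep
 * simply blocks until the lock is free. */
static int acquire(int wait)
{
	if (!wait) {
		if (pthread_mutex_trylock(&lock) != 0)
			return -1;	/* busy, the caller retries later */
	} else {
		pthread_mutex_lock(&lock);
	}
	return 0;
}

static void release(void)
{
	pthread_mutex_unlock(&lock);
}
```

The non-blocking path matters in the driver because the monitor registers can be polled from contexts that must not spin waiting on another CPU.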


@@ -98,9 +98,9 @@
 #define A_MI0_DATA_INT	0xb10

 /* GMAC registers */
 #define A_GMAC_MACID_LO	0x28
 #define A_GMAC_MACID_HI	0x2c
 #define A_GMAC_CSR	0x30
 #define S_INTERFACE	0
 #define M_INTERFACE	0x3


@@ -42,8 +42,15 @@
 #include "common.h"

-enum { MAC_STATS_UPDATE_FAST, MAC_STATS_UPDATE_FULL };
-enum { MAC_DIRECTION_RX = 1, MAC_DIRECTION_TX = 2 };
+enum {
+	MAC_STATS_UPDATE_FAST,
+	MAC_STATS_UPDATE_FULL
+};
+
+enum {
+	MAC_DIRECTION_RX = 1,
+	MAC_DIRECTION_TX = 2
+};

 struct cmac_statistics {
 	/* Transmit */


@@ -145,48 +145,61 @@ static void disable_port(struct cmac *mac)
 	t1_tpi_write(mac->adapter, REG_PORT_ENABLE, val);
 }

-#define RMON_UPDATE(mac, name, stat_name) \
-	t1_tpi_read((mac)->adapter, MACREG(mac, REG_##name), &val); \
-	(mac)->stats.stat_name += val;
 /*
  * Read the current values of the RMON counters and add them to the cumulative
  * port statistics. The HW RMON counters are cleared by this operation.
  */
 static void port_stats_update(struct cmac *mac)
 {
-	u32 val;
-	/* Rx stats */
-	RMON_UPDATE(mac, RxOctetsTotalOK, RxOctetsOK);
-	RMON_UPDATE(mac, RxOctetsBad, RxOctetsBad);
-	RMON_UPDATE(mac, RxUCPkts, RxUnicastFramesOK);
-	RMON_UPDATE(mac, RxMCPkts, RxMulticastFramesOK);
-	RMON_UPDATE(mac, RxBCPkts, RxBroadcastFramesOK);
-	RMON_UPDATE(mac, RxJumboPkts, RxJumboFramesOK);
-	RMON_UPDATE(mac, RxFCSErrors, RxFCSErrors);
-	RMON_UPDATE(mac, RxAlignErrors, RxAlignErrors);
-	RMON_UPDATE(mac, RxLongErrors, RxFrameTooLongErrors);
-	RMON_UPDATE(mac, RxVeryLongErrors, RxFrameTooLongErrors);
-	RMON_UPDATE(mac, RxPauseMacControlCounter, RxPauseFrames);
-	RMON_UPDATE(mac, RxDataErrors, RxDataErrors);
-	RMON_UPDATE(mac, RxJabberErrors, RxJabberErrors);
-	RMON_UPDATE(mac, RxRuntErrors, RxRuntErrors);
-	RMON_UPDATE(mac, RxShortErrors, RxRuntErrors);
-	RMON_UPDATE(mac, RxSequenceErrors, RxSequenceErrors);
-	RMON_UPDATE(mac, RxSymbolErrors, RxSymbolErrors);
-	/* Tx stats (skip collision stats as we are full-duplex only) */
-	RMON_UPDATE(mac, TxOctetsTotalOK, TxOctetsOK);
-	RMON_UPDATE(mac, TxOctetsBad, TxOctetsBad);
-	RMON_UPDATE(mac, TxUCPkts, TxUnicastFramesOK);
-	RMON_UPDATE(mac, TxMCPkts, TxMulticastFramesOK);
-	RMON_UPDATE(mac, TxBCPkts, TxBroadcastFramesOK);
-	RMON_UPDATE(mac, TxJumboPkts, TxJumboFramesOK);
-	RMON_UPDATE(mac, TxPauseFrames, TxPauseFrames);
-	RMON_UPDATE(mac, TxExcessiveLengthDrop, TxLengthErrors);
-	RMON_UPDATE(mac, TxUnderrun, TxUnderrun);
-	RMON_UPDATE(mac, TxCRCErrors, TxFCSErrors);
+	static struct {
+		unsigned int reg;
+		unsigned int offset;
+	} hw_stats[] = {
+
+#define HW_STAT(name, stat_name) \
+	{ REG_##name, \
+	  (&((struct cmac_statistics *)NULL)->stat_name) - (u64 *)NULL }
+
+		/* Rx stats */
+		HW_STAT(RxOctetsTotalOK, RxOctetsOK),
+		HW_STAT(RxOctetsBad, RxOctetsBad),
+		HW_STAT(RxUCPkts, RxUnicastFramesOK),
+		HW_STAT(RxMCPkts, RxMulticastFramesOK),
+		HW_STAT(RxBCPkts, RxBroadcastFramesOK),
+		HW_STAT(RxJumboPkts, RxJumboFramesOK),
+		HW_STAT(RxFCSErrors, RxFCSErrors),
+		HW_STAT(RxAlignErrors, RxAlignErrors),
+		HW_STAT(RxLongErrors, RxFrameTooLongErrors),
+		HW_STAT(RxVeryLongErrors, RxFrameTooLongErrors),
+		HW_STAT(RxPauseMacControlCounter, RxPauseFrames),
+		HW_STAT(RxDataErrors, RxDataErrors),
+		HW_STAT(RxJabberErrors, RxJabberErrors),
+		HW_STAT(RxRuntErrors, RxRuntErrors),
+		HW_STAT(RxShortErrors, RxRuntErrors),
+		HW_STAT(RxSequenceErrors, RxSequenceErrors),
+		HW_STAT(RxSymbolErrors, RxSymbolErrors),
+
+		/* Tx stats (skip collision stats as we are full-duplex only) */
+		HW_STAT(TxOctetsTotalOK, TxOctetsOK),
+		HW_STAT(TxOctetsBad, TxOctetsBad),
+		HW_STAT(TxUCPkts, TxUnicastFramesOK),
+		HW_STAT(TxMCPkts, TxMulticastFramesOK),
+		HW_STAT(TxBCPkts, TxBroadcastFramesOK),
+		HW_STAT(TxJumboPkts, TxJumboFramesOK),
+		HW_STAT(TxPauseFrames, TxPauseFrames),
+		HW_STAT(TxExcessiveLengthDrop, TxLengthErrors),
+		HW_STAT(TxUnderrun, TxUnderrun),
+		HW_STAT(TxCRCErrors, TxFCSErrors)
+	}, *p = hw_stats;
+	u64 *stats = (u64 *) &mac->stats;
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(hw_stats); i++) {
+		u32 val;
+
+		t1_tpi_read(mac->adapter, MACREG(mac, p->reg), &val);
+		stats[p->offset] += val;
+	}
 }
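The rewritten `port_stats_update()` above replaces a wall of per-counter `RMON_UPDATE()` statements with a register-to-field table walked by a single loop. The driver computes each field's slot with null-pointer arithmetic on `struct cmac_statistics`; the same table-driven idea can be sketched in portable C with standard `offsetof` (the struct, register numbers, and the `demo_read_reg()` stub below are hypothetical stand-ins, not driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for struct cmac_statistics. */
struct demo_stats {
	uint64_t rx_octets;
	uint64_t rx_frames;
	uint64_t tx_octets;
};

/* One table entry: a hardware register index and the u64 slot it feeds,
 * expressed as an offset in units of u64 (as the driver does). */
struct stat_map {
	unsigned int reg;
	unsigned int offset;
};

#define DEMO_STAT(reg, field) \
	{ (reg), offsetof(struct demo_stats, field) / sizeof(uint64_t) }

static const struct stat_map demo_map[] = {
	DEMO_STAT(0, rx_octets),
	DEMO_STAT(1, rx_frames),
	DEMO_STAT(2, tx_octets),
};

/* Stand-in for t1_tpi_read(): fake clear-on-read counter values. */
static uint32_t demo_read_reg(unsigned int reg)
{
	static const uint32_t vals[] = { 100, 7, 42 };
	return vals[reg];
}

/* Accumulate every mapped counter in one loop instead of one macro
 * invocation per counter, which is what the hunk above does. */
static void demo_update(struct demo_stats *s)
{
	uint64_t *stats = (uint64_t *)s;
	size_t i;

	for (i = 0; i < sizeof(demo_map) / sizeof(demo_map[0]); i++)
		stats[demo_map[i].offset] += demo_read_reg(demo_map[i].reg);
}
```

Adding a counter then becomes a one-line table edit rather than another macro call, and the accumulation logic exists in exactly one place.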
 /* No-op interrupt operation as this MAC does not support interrupts */

@@ -273,7 +286,8 @@ static int mac_set_rx_mode(struct cmac *mac, struct t1_rx_mode *rm)
 static int mac_set_mtu(struct cmac *mac, int mtu)
 {
 	/* MAX_FRAME_SIZE inludes header + FCS, mtu doesn't */
-	if (mtu > (MAX_FRAME_SIZE - 14 - 4)) return -EINVAL;
+	if (mtu > (MAX_FRAME_SIZE - 14 - 4))
+		return -EINVAL;
 	t1_tpi_write(mac->adapter, MACREG(mac, REG_MAX_FRAME_SIZE),
 		     mtu + 14 + 4);
 	return 0;

@@ -357,8 +371,8 @@ static void enable_port(struct cmac *mac)
 	val |= (1 << index);
 	t1_tpi_write(adapter, REG_PORT_ENABLE, val);
 	index <<= 2;
 	if (is_T2(adapter)) {
 		/* T204: set the Fifo water level & threshold */
 		t1_tpi_write(adapter, RX_FIFO_HIGH_WATERMARK_BASE + index, 0x740);
 		t1_tpi_write(adapter, RX_FIFO_LOW_WATERMARK_BASE + index, 0x730);

@@ -389,6 +403,10 @@ static int mac_disable(struct cmac *mac, int which)
 	return 0;
 }

+#define RMON_UPDATE(mac, name, stat_name) \
+	t1_tpi_read((mac)->adapter, MACREG(mac, REG_##name), &val); \
+	(mac)->stats.stat_name += val;
 /*
  * This function is called periodically to accumulate the current values of the
  * RMON counters into the port statistics. Since the counters are only 32 bits

@@ -460,10 +478,12 @@ static struct cmac *ixf1010_mac_create(adapter_t *adapter, int index)
 	struct cmac *mac;
 	u32 val;
-	if (index > 9) return NULL;
+	if (index > 9)
+		return NULL;
 	mac = kzalloc(sizeof(*mac) + sizeof(cmac_instance), GFP_KERNEL);
-	if (!mac) return NULL;
+	if (!mac)
+		return NULL;
 	mac->ops = &ixf1010_ops;
 	mac->instance = (cmac_instance *)(mac + 1);


@@ -73,9 +73,8 @@ static int mv88e1xxx_interrupt_enable(struct cphy *cphy)
 		t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
 		elmer |= ELMER0_GP_BIT1;
-		if (is_T2(cphy->adapter)) {
-			elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
-		}
+		if (is_T2(cphy->adapter))
+			elmer |= ELMER0_GP_BIT2 | ELMER0_GP_BIT3 | ELMER0_GP_BIT4;
 		t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
 	}
 	return 0;

@@ -92,9 +91,8 @@ static int mv88e1xxx_interrupt_disable(struct cphy *cphy)
 		t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
 		elmer &= ~ELMER0_GP_BIT1;
-		if (is_T2(cphy->adapter)) {
-			elmer &= ~(ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4);
-		}
+		if (is_T2(cphy->adapter))
+			elmer &= ~(ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4);
 		t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
 	}
 	return 0;

@@ -112,9 +110,8 @@ static int mv88e1xxx_interrupt_clear(struct cphy *cphy)
 	if (t1_is_asic(cphy->adapter)) {
 		t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);
 		elmer |= ELMER0_GP_BIT1;
-		if (is_T2(cphy->adapter)) {
-			elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
-		}
+		if (is_T2(cphy->adapter))
+			elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
 		t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);
 	}
 	return 0;

@@ -300,7 +297,7 @@ static int mv88e1xxx_interrupt_handler(struct cphy *cphy)
 	/*
 	 * Loop until cause reads zero. Need to handle bouncing interrupts.
 	 */
 	while (1) {
 		u32 cause;

@@ -308,15 +305,16 @@ static int mv88e1xxx_interrupt_handler(struct cphy *cphy)
 					MV88E1XXX_INTERRUPT_STATUS_REGISTER,
 					&cause);
 		cause &= INTR_ENABLE_MASK;
-		if (!cause) break;
+		if (!cause)
+			break;
 		if (cause & MV88E1XXX_INTR_LINK_CHNG) {
 			(void) simple_mdio_read(cphy,
 				MV88E1XXX_SPECIFIC_STATUS_REGISTER, &status);
-			if (status & MV88E1XXX_INTR_LINK_CHNG) {
+			if (status & MV88E1XXX_INTR_LINK_CHNG)
 				cphy->state |= PHY_LINK_UP;
-			} else {
+			else {
 				cphy->state &= ~PHY_LINK_UP;
 				if (cphy->state & PHY_AUTONEG_EN)
 					cphy->state &= ~PHY_AUTONEG_RDY;

@@ -360,7 +358,8 @@ static struct cphy *mv88e1xxx_phy_create(adapter_t *adapter, int phy_addr,
 {
 	struct cphy *cphy = kzalloc(sizeof(*cphy), GFP_KERNEL);
-	if (!cphy) return NULL;
+	if (!cphy)
+		return NULL;
 	cphy_init(cphy, adapter, phy_addr, &mv88e1xxx_ops, mdio_ops);

@@ -377,11 +376,11 @@ static struct cphy *mv88e1xxx_phy_create(adapter_t *adapter, int phy_addr,
 	}
 	(void) mv88e1xxx_downshift_set(cphy, 1);	/* Enable downshift */
 	/* LED */
 	if (is_T2(adapter)) {
 		(void) simple_mdio_write(cphy,
 				MV88E1XXX_LED_CONTROL_REGISTER, 0x1);
 	}
 	return cphy;
 }


@@ -10,25 +10,25 @@ static int my3126_reset(struct cphy *cphy, int wait)
 	 * This can be done through registers. It is not required since
 	 * a full chip reset is used.
 	 */
-	return (0);
+	return 0;
 }

 static int my3126_interrupt_enable(struct cphy *cphy)
 {
 	schedule_delayed_work(&cphy->phy_update, HZ/30);
 	t1_tpi_read(cphy->adapter, A_ELMER0_GPO, &cphy->elmer_gpo);
-	return (0);
+	return 0;
 }

 static int my3126_interrupt_disable(struct cphy *cphy)
 {
 	cancel_rearming_delayed_work(&cphy->phy_update);
-	return (0);
+	return 0;
 }

 static int my3126_interrupt_clear(struct cphy *cphy)
 {
-	return (0);
+	return 0;
 }

 #define OFFSET(REG_ADDR)	(REG_ADDR << 2)

@@ -102,7 +102,7 @@ static void my3216_poll(struct work_struct *work)
 static int my3126_set_loopback(struct cphy *cphy, int on)
 {
-	return (0);
+	return 0;
 }

 /* To check the activity LED */

@@ -146,7 +146,7 @@ static int my3126_get_link_status(struct cphy *cphy,
 	if (fc)
 		*fc = PAUSE_RX | PAUSE_TX;
-	return (0);
+	return 0;
 }

 static void my3126_destroy(struct cphy *cphy)

@@ -177,7 +177,7 @@ static struct cphy *my3126_phy_create(adapter_t *adapter,
 	INIT_DELAYED_WORK(&cphy->phy_update, my3216_poll);
 	cphy->bmsr = 0;
-	return (cphy);
+	return cphy;
 }

 /* Chip Reset */

@@ -198,7 +198,7 @@ static int my3126_phy_reset(adapter_t * adapter)
 	val |= 0x8000;
 	t1_tpi_write(adapter, A_ELMER0_GPO, val);
 	udelay(100);
-	return (0);
+	return 0;
 }

 struct gphy t1_my3126_ops = {


@@ -446,17 +446,51 @@ static void pm3393_rmon_update(struct adapter *adapter, u32 offs, u64 *val,
 		*val += 1ull << 40;
 }

-#define RMON_UPDATE(mac, name, stat_name) \
-	pm3393_rmon_update((mac)->adapter, OFFSET(name), \
-			   &(mac)->stats.stat_name, \
-			   (ro & ((name - SUNI1x10GEXP_REG_MSTAT_COUNTER_0_LOW) >> 2)))
 static const struct cmac_statistics *pm3393_update_statistics(struct cmac *mac,
 							      int flag)
 {
-	u64 ro;
-	u32 val0, val1, val2, val3;
+	static struct {
+		unsigned int reg;
+		unsigned int offset;
+	} hw_stats [] = {
+
+#define HW_STAT(name, stat_name) \
+	{ name, (&((struct cmac_statistics *)NULL)->stat_name) - (u64 *)NULL }
+
+		/* Rx stats */
+		HW_STAT(RxOctetsReceivedOK, RxOctetsOK),
+		HW_STAT(RxUnicastFramesReceivedOK, RxUnicastFramesOK),
+		HW_STAT(RxMulticastFramesReceivedOK, RxMulticastFramesOK),
+		HW_STAT(RxBroadcastFramesReceivedOK, RxBroadcastFramesOK),
+		HW_STAT(RxPAUSEMACCtrlFramesReceived, RxPauseFrames),
+		HW_STAT(RxFrameCheckSequenceErrors, RxFCSErrors),
+		HW_STAT(RxFramesLostDueToInternalMACErrors,
+			RxInternalMACRcvError),
+		HW_STAT(RxSymbolErrors, RxSymbolErrors),
+		HW_STAT(RxInRangeLengthErrors, RxInRangeLengthErrors),
+		HW_STAT(RxFramesTooLongErrors , RxFrameTooLongErrors),
+		HW_STAT(RxJabbers, RxJabberErrors),
+		HW_STAT(RxFragments, RxRuntErrors),
+		HW_STAT(RxUndersizedFrames, RxRuntErrors),
+		HW_STAT(RxJumboFramesReceivedOK, RxJumboFramesOK),
+		HW_STAT(RxJumboOctetsReceivedOK, RxJumboOctetsOK),
+
+		/* Tx stats */
+		HW_STAT(TxOctetsTransmittedOK, TxOctetsOK),
+		HW_STAT(TxFramesLostDueToInternalMACTransmissionError,
+			TxInternalMACXmitError),
+		HW_STAT(TxTransmitSystemError, TxFCSErrors),
+		HW_STAT(TxUnicastFramesTransmittedOK, TxUnicastFramesOK),
+		HW_STAT(TxMulticastFramesTransmittedOK, TxMulticastFramesOK),
+		HW_STAT(TxBroadcastFramesTransmittedOK, TxBroadcastFramesOK),
+		HW_STAT(TxPAUSEMACCtrlFramesTransmitted, TxPauseFrames),
+		HW_STAT(TxJumboFramesReceivedOK, TxJumboFramesOK),
+		HW_STAT(TxJumboOctetsReceivedOK, TxJumboOctetsOK)
+	}, *p = hw_stats;
+	u64 ro;
+	u32 val0, val1, val2, val3;
+	u64 *stats = (u64 *) &mac->stats;
+	unsigned int i;

 	/* Snap the counters */
 	pmwrite(mac, SUNI1x10GEXP_REG_MSTAT_CONTROL,
@@ -470,35 +504,14 @@ static const struct cmac_statistics *pm3393_update_statistics(struct cmac *mac,
 	ro = ((u64)val0 & 0xffff) | (((u64)val1 & 0xffff) << 16) |
 		(((u64)val2 & 0xffff) << 32) | (((u64)val3 & 0xffff) << 48);

-	/* Rx stats */
-	RMON_UPDATE(mac, RxOctetsReceivedOK, RxOctetsOK);
-	RMON_UPDATE(mac, RxUnicastFramesReceivedOK, RxUnicastFramesOK);
-	RMON_UPDATE(mac, RxMulticastFramesReceivedOK, RxMulticastFramesOK);
-	RMON_UPDATE(mac, RxBroadcastFramesReceivedOK, RxBroadcastFramesOK);
-	RMON_UPDATE(mac, RxPAUSEMACCtrlFramesReceived, RxPauseFrames);
-	RMON_UPDATE(mac, RxFrameCheckSequenceErrors, RxFCSErrors);
-	RMON_UPDATE(mac, RxFramesLostDueToInternalMACErrors,
-		    RxInternalMACRcvError);
-	RMON_UPDATE(mac, RxSymbolErrors, RxSymbolErrors);
-	RMON_UPDATE(mac, RxInRangeLengthErrors, RxInRangeLengthErrors);
-	RMON_UPDATE(mac, RxFramesTooLongErrors , RxFrameTooLongErrors);
-	RMON_UPDATE(mac, RxJabbers, RxJabberErrors);
-	RMON_UPDATE(mac, RxFragments, RxRuntErrors);
-	RMON_UPDATE(mac, RxUndersizedFrames, RxRuntErrors);
-	RMON_UPDATE(mac, RxJumboFramesReceivedOK, RxJumboFramesOK);
-	RMON_UPDATE(mac, RxJumboOctetsReceivedOK, RxJumboOctetsOK);
-
-	/* Tx stats */
-	RMON_UPDATE(mac, TxOctetsTransmittedOK, TxOctetsOK);
-	RMON_UPDATE(mac, TxFramesLostDueToInternalMACTransmissionError,
-		    TxInternalMACXmitError);
-	RMON_UPDATE(mac, TxTransmitSystemError, TxFCSErrors);
-	RMON_UPDATE(mac, TxUnicastFramesTransmittedOK, TxUnicastFramesOK);
-	RMON_UPDATE(mac, TxMulticastFramesTransmittedOK, TxMulticastFramesOK);
-	RMON_UPDATE(mac, TxBroadcastFramesTransmittedOK, TxBroadcastFramesOK);
-	RMON_UPDATE(mac, TxPAUSEMACCtrlFramesTransmitted, TxPauseFrames);
-	RMON_UPDATE(mac, TxJumboFramesReceivedOK, TxJumboFramesOK);
-	RMON_UPDATE(mac, TxJumboOctetsReceivedOK, TxJumboOctetsOK);
+	for (i = 0; i < ARRAY_SIZE(hw_stats); i++) {
+		unsigned reg = p->reg - SUNI1x10GEXP_REG_MSTAT_COUNTER_0_LOW;
+
+		pm3393_rmon_update((mac)->adapter, OFFSET(p->reg),
+				   stats + p->offset, ro & (reg >> 2));
+	}

 	return &mac->stats;
 }
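The table-driven loop above still relies on `pm3393_rmon_update()` to widen each 40-bit hardware counter into a 64-bit running total, crediting `1ull << 40` when the counter rolled over. A self-contained sketch of that widening step; note that inferring the wrap by comparing against the previously seen low 40 bits is an assumption of this sketch (the driver is told about rollover via its `ro` bitmask instead):

```c
#include <assert.h>
#include <stdint.h>

#define COUNTER_BITS 40
#define COUNTER_MASK ((1ull << COUNTER_BITS) - 1)

/* Fold a fresh 40-bit hardware snapshot into a 64-bit running total.
 * The low 40 bits of *total mirror the hardware counter; the high bits
 * count completed 2^40 periods. */
static void rmon_widen(uint64_t *total, uint64_t snapshot)
{
	uint64_t prev_low = *total & COUNTER_MASK;

	*total = (*total & ~COUNTER_MASK) | (snapshot & COUNTER_MASK);
	if ((snapshot & COUNTER_MASK) < prev_low)	/* counter wrapped */
		*total += 1ull << COUNTER_BITS;
}
```

This only stays correct if the counter is sampled at least once per 2^40 events, which is why the driver updates statistics periodically rather than on demand.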
@@ -534,9 +547,9 @@ static int pm3393_macaddress_set(struct cmac *cmac, u8 ma[6])
 	/* Store local copy */
 	memcpy(cmac->instance->mac_addr, ma, 6);

 	lo  = ((u32) ma[1] << 8) | (u32) ma[0];
 	mid = ((u32) ma[3] << 8) | (u32) ma[2];
 	hi  = ((u32) ma[5] << 8) | (u32) ma[4];

 	/* Disable Rx/Tx MAC before configuring it. */
 	if (enabled)


@@ -71,12 +71,9 @@
 #define SGE_FREEL_REFILL_THRESH	16
 #define SGE_RESPQ_E_N		1024
 #define SGE_INTRTIMER_NRES	1000
-#define SGE_RX_COPY_THRES	256
 #define SGE_RX_SM_BUF_SIZE	1536
 #define SGE_TX_DESC_MAX_PLEN	16384
-# define SGE_RX_DROP_THRES 2
 #define SGE_RESPQ_REPLENISH_THRES (SGE_RESPQ_E_N / 4)

 /*

@@ -85,10 +82,6 @@
  */
 #define TX_RECLAIM_PERIOD (HZ / 4)

-#ifndef NET_IP_ALIGN
-# define NET_IP_ALIGN 2
-#endif
 #define M_CMD_LEN	0x7fffffff
 #define V_CMD_LEN(v)	(v)
 #define G_CMD_LEN(v)	((v) & M_CMD_LEN)
@@ -195,7 +188,7 @@ struct cmdQ {
 	struct cmdQ_e  *entries;	/* HW command descriptor Q */
 	struct cmdQ_ce *centries;	/* SW command context descriptor Q */
 	dma_addr_t     dma_addr;	/* DMA addr HW command descriptor Q */
 	spinlock_t     lock;		/* Lock to protect cmdQ enqueuing */
 };

 struct freelQ {

@@ -241,9 +234,9 @@ struct sched_port {
 /* Per T204 device */
 struct sched {
 	ktime_t         last_updated;	/* last time quotas were computed */
 	unsigned int    max_avail;	/* max bits to be sent to any port */
 	unsigned int    port;		/* port index (round robin ports) */
 	unsigned int    num;		/* num skbs in per port queues */
 	struct sched_port p[MAX_NPORTS];
 	struct tasklet_struct sched_tsk;/* tasklet used to run scheduler */
 };

@@ -259,10 +252,10 @@ static void restart_sched(unsigned long);
  * contention.
  */
 struct sge {
 	struct adapter *adapter;	/* adapter backpointer */
 	struct net_device *netdev;	/* netdevice backpointer */
 	struct freelQ freelQ[SGE_FREELQ_N]; /* buffer free lists */
 	struct respQ respQ;		/* response Q */
 	unsigned long stopped_tx_queues; /* bitmap of suspended Tx queues */
 	unsigned int rx_pkt_pad;	/* RX padding for L2 packets */
 	unsigned int jumbo_fl;		/* jumbo freelist Q index */
@@ -460,7 +453,7 @@ static struct sk_buff *sched_skb(struct sge *sge, struct sk_buff *skb,
 	if (credits < MAX_SKB_FRAGS + 1)
 		goto out;

 again:
 	for (i = 0; i < MAX_NPORTS; i++) {
 		s->port = ++s->port & (MAX_NPORTS - 1);
 		skbq = &s->p[s->port].skbq;

@@ -483,8 +476,8 @@ static struct sk_buff *sched_skb(struct sge *sge, struct sk_buff *skb,
 	if (update-- && sched_update_avail(sge))
 		goto again;

 out:
 	/* If there are more pending skbs, we use the hardware to schedule us
 	 * again.
 	 */
 	if (s->num && !skb) {
@@ -575,11 +568,10 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
 		q->size = p->freelQ_size[i];
 		q->dma_offset = sge->rx_pkt_pad ? 0 : NET_IP_ALIGN;
 		size = sizeof(struct freelQ_e) * q->size;
-		q->entries = (struct freelQ_e *)
-			pci_alloc_consistent(pdev, size, &q->dma_addr);
+		q->entries = pci_alloc_consistent(pdev, size, &q->dma_addr);
 		if (!q->entries)
 			goto err_no_mem;
-		memset(q->entries, 0, size);
 		size = sizeof(struct freelQ_ce) * q->size;
 		q->centries = kzalloc(size, GFP_KERNEL);
 		if (!q->centries)

@@ -613,11 +605,10 @@ static int alloc_rx_resources(struct sge *sge, struct sge_params *p)
 	sge->respQ.size = SGE_RESPQ_E_N;
 	sge->respQ.credits = 0;
 	size = sizeof(struct respQ_e) * sge->respQ.size;
-	sge->respQ.entries = (struct respQ_e *)
+	sge->respQ.entries =
 		pci_alloc_consistent(pdev, size, &sge->respQ.dma_addr);
 	if (!sge->respQ.entries)
 		goto err_no_mem;
-	memset(sge->respQ.entries, 0, size);
 	return 0;

 err_no_mem:
@ -637,20 +628,12 @@ static void free_cmdQ_buffers(struct sge *sge, struct cmdQ *q, unsigned int n)
q->in_use -= n; q->in_use -= n;
ce = &q->centries[cidx]; ce = &q->centries[cidx];
while (n--) { while (n--) {
if (q->sop) { if (likely(pci_unmap_len(ce, dma_len))) {
if (likely(pci_unmap_len(ce, dma_len))) { pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_single(pdev, pci_unmap_len(ce, dma_len),
pci_unmap_addr(ce, dma_addr), PCI_DMA_TODEVICE);
pci_unmap_len(ce, dma_len), if (q->sop)
PCI_DMA_TODEVICE);
q->sop = 0; q->sop = 0;
}
} else {
if (likely(pci_unmap_len(ce, dma_len))) {
pci_unmap_page(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len),
PCI_DMA_TODEVICE);
}
} }
if (ce->skb) { if (ce->skb) {
dev_kfree_skb_any(ce->skb); dev_kfree_skb_any(ce->skb);
@@ -711,11 +694,10 @@ static int alloc_tx_resources(struct sge *sge, struct sge_params *p)
q->stop_thres = 0; q->stop_thres = 0;
spin_lock_init(&q->lock); spin_lock_init(&q->lock);
size = sizeof(struct cmdQ_e) * q->size; size = sizeof(struct cmdQ_e) * q->size;
q->entries = (struct cmdQ_e *) q->entries = pci_alloc_consistent(pdev, size, &q->dma_addr);
pci_alloc_consistent(pdev, size, &q->dma_addr);
if (!q->entries) if (!q->entries)
goto err_no_mem; goto err_no_mem;
memset(q->entries, 0, size);
size = sizeof(struct cmdQ_ce) * q->size; size = sizeof(struct cmdQ_ce) * q->size;
q->centries = kzalloc(size, GFP_KERNEL); q->centries = kzalloc(size, GFP_KERNEL);
if (!q->centries) if (!q->centries)
@@ -770,7 +752,7 @@ void t1_set_vlan_accel(struct adapter *adapter, int on_off)
static void configure_sge(struct sge *sge, struct sge_params *p) static void configure_sge(struct sge *sge, struct sge_params *p)
{ {
struct adapter *ap = sge->adapter; struct adapter *ap = sge->adapter;
writel(0, ap->regs + A_SG_CONTROL); writel(0, ap->regs + A_SG_CONTROL);
setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].size, setup_ring_params(ap, sge->cmdQ[0].dma_addr, sge->cmdQ[0].size,
A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE); A_SG_CMD0BASELWR, A_SG_CMD0BASEUPR, A_SG_CMD0SIZE);
@@ -850,7 +832,6 @@ static void refill_free_list(struct sge *sge, struct freelQ *q)
struct freelQ_e *e = &q->entries[q->pidx]; struct freelQ_e *e = &q->entries[q->pidx];
unsigned int dma_len = q->rx_buffer_size - q->dma_offset; unsigned int dma_len = q->rx_buffer_size - q->dma_offset;
while (q->credits < q->size) { while (q->credits < q->size) {
struct sk_buff *skb; struct sk_buff *skb;
dma_addr_t mapping; dma_addr_t mapping;
@@ -862,6 +843,8 @@ static void refill_free_list(struct sge *sge, struct freelQ *q)
skb_reserve(skb, q->dma_offset); skb_reserve(skb, q->dma_offset);
mapping = pci_map_single(pdev, skb->data, dma_len, mapping = pci_map_single(pdev, skb->data, dma_len,
PCI_DMA_FROMDEVICE); PCI_DMA_FROMDEVICE);
skb_reserve(skb, sge->rx_pkt_pad);
ce->skb = skb; ce->skb = skb;
pci_unmap_addr_set(ce, dma_addr, mapping); pci_unmap_addr_set(ce, dma_addr, mapping);
pci_unmap_len_set(ce, dma_len, dma_len); pci_unmap_len_set(ce, dma_len, dma_len);
@@ -881,7 +864,6 @@ static void refill_free_list(struct sge *sge, struct freelQ *q)
} }
q->credits++; q->credits++;
} }
} }
/* /*
@@ -1041,6 +1023,10 @@ static void recycle_fl_buf(struct freelQ *fl, int idx)
} }
} }
static int copybreak __read_mostly = 256;
module_param(copybreak, int, 0);
MODULE_PARM_DESC(copybreak, "Receive copy threshold");
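The `copybreak` parameter added above drives a common receive-path trade-off: frames shorter than the threshold are copied into a freshly allocated small buffer so the large DMA-mapped buffer can stay on the free list, while longer frames hand the original buffer to the stack. A minimal user-space sketch of that decision (the `struct rx_buf` type and `rx_packet()` helper are invented for illustration, not the driver's API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a receive DMA buffer; a real driver
 * would track a dma_addr_t and an sk_buff here. */
struct rx_buf {
    unsigned char data[2048];
    int recycled;              /* 1 if left on the free list */
};

static int copybreak = 256;    /* same default the patch introduces */

/* Short frames: copy and recycle the big buffer.
 * Long frames (or failed allocation): consume the original buffer. */
static unsigned char *rx_packet(struct rx_buf *b, size_t len, int *copied)
{
    if (len < (size_t)copybreak) {
        unsigned char *skb = malloc(len);
        if (skb) {
            memcpy(skb, b->data, len);
            b->recycled = 1;   /* buffer stays mapped for reuse */
            *copied = 1;
            return skb;
        }
        /* allocation failed: fall through to the zero-copy path */
    }
    *copied = 0;
    return b->data;            /* hand the mapped buffer to the stack */
}
```

The threshold balances the cost of a memcpy for small frames against exhausting (and remapping) large buffers; 256 bytes covers most TCP ACKs and control traffic.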
/** /**
* get_packet - return the next ingress packet buffer * get_packet - return the next ingress packet buffer
* @pdev: the PCI device that received the packet * @pdev: the PCI device that received the packet
@@ -1060,45 +1046,42 @@ static void recycle_fl_buf(struct freelQ *fl, int idx)
* be copied but there is no memory for the copy. * be copied but there is no memory for the copy.
*/ */
static inline struct sk_buff *get_packet(struct pci_dev *pdev, static inline struct sk_buff *get_packet(struct pci_dev *pdev,
struct freelQ *fl, unsigned int len, struct freelQ *fl, unsigned int len)
int dma_pad, int skb_pad,
unsigned int copy_thres,
unsigned int drop_thres)
{ {
struct sk_buff *skb; struct sk_buff *skb;
struct freelQ_ce *ce = &fl->centries[fl->cidx]; const struct freelQ_ce *ce = &fl->centries[fl->cidx];
if (len < copy_thres) { if (len < copybreak) {
skb = alloc_skb(len + skb_pad, GFP_ATOMIC); skb = alloc_skb(len + 2, GFP_ATOMIC);
if (likely(skb != NULL)) { if (!skb)
skb_reserve(skb, skb_pad);
skb_put(skb, len);
pci_dma_sync_single_for_cpu(pdev,
pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len),
PCI_DMA_FROMDEVICE);
memcpy(skb->data, ce->skb->data + dma_pad, len);
pci_dma_sync_single_for_device(pdev,
pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len),
PCI_DMA_FROMDEVICE);
} else if (!drop_thres)
goto use_orig_buf; goto use_orig_buf;
skb_reserve(skb, 2); /* align IP header */
skb_put(skb, len);
pci_dma_sync_single_for_cpu(pdev,
pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len),
PCI_DMA_FROMDEVICE);
memcpy(skb->data, ce->skb->data, len);
pci_dma_sync_single_for_device(pdev,
pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len),
PCI_DMA_FROMDEVICE);
recycle_fl_buf(fl, fl->cidx); recycle_fl_buf(fl, fl->cidx);
return skb; return skb;
} }
if (fl->credits < drop_thres) { use_orig_buf:
if (fl->credits < 2) {
recycle_fl_buf(fl, fl->cidx); recycle_fl_buf(fl, fl->cidx);
return NULL; return NULL;
} }
use_orig_buf:
pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr), pci_unmap_single(pdev, pci_unmap_addr(ce, dma_addr),
pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE); pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE);
skb = ce->skb; skb = ce->skb;
skb_reserve(skb, dma_pad); prefetch(skb->data);
skb_put(skb, len); skb_put(skb, len);
return skb; return skb;
} }
@@ -1137,6 +1120,7 @@ static void unexpected_offload(struct adapter *adapter, struct freelQ *fl)
static inline unsigned int compute_large_page_tx_descs(struct sk_buff *skb) static inline unsigned int compute_large_page_tx_descs(struct sk_buff *skb)
{ {
unsigned int count = 0; unsigned int count = 0;
if (PAGE_SIZE > SGE_TX_DESC_MAX_PLEN) { if (PAGE_SIZE > SGE_TX_DESC_MAX_PLEN) {
unsigned int nfrags = skb_shinfo(skb)->nr_frags; unsigned int nfrags = skb_shinfo(skb)->nr_frags;
unsigned int i, len = skb->len - skb->data_len; unsigned int i, len = skb->len - skb->data_len;
@@ -1343,7 +1327,7 @@ static void restart_sched(unsigned long arg)
while ((skb = sched_skb(sge, NULL, credits)) != NULL) { while ((skb = sched_skb(sge, NULL, credits)) != NULL) {
unsigned int genbit, pidx, count; unsigned int genbit, pidx, count;
count = 1 + skb_shinfo(skb)->nr_frags; count = 1 + skb_shinfo(skb)->nr_frags;
count += compute_large_page_tx_descs(skb); count += compute_large_page_tx_descs(skb);
q->in_use += count; q->in_use += count;
genbit = q->genbit; genbit = q->genbit;
pidx = q->pidx; pidx = q->pidx;
@@ -1375,27 +1359,25 @@ static void restart_sched(unsigned long arg)
* *
* Process an ingress ethernet packet and deliver it to the stack. * Process an ingress ethernet packet and deliver it to the stack.
*/ */
static int sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len) static void sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)
{ {
struct sk_buff *skb; struct sk_buff *skb;
struct cpl_rx_pkt *p; const struct cpl_rx_pkt *p;
struct adapter *adapter = sge->adapter; struct adapter *adapter = sge->adapter;
struct sge_port_stats *st; struct sge_port_stats *st;
skb = get_packet(adapter->pdev, fl, len - sge->rx_pkt_pad, skb = get_packet(adapter->pdev, fl, len - sge->rx_pkt_pad);
sge->rx_pkt_pad, 2, SGE_RX_COPY_THRES,
SGE_RX_DROP_THRES);
if (unlikely(!skb)) { if (unlikely(!skb)) {
sge->stats.rx_drops++; sge->stats.rx_drops++;
return 0; return;
} }
p = (struct cpl_rx_pkt *)skb->data; p = (const struct cpl_rx_pkt *) skb->data;
skb_pull(skb, sizeof(*p));
if (p->iff >= adapter->params.nports) { if (p->iff >= adapter->params.nports) {
kfree_skb(skb); kfree_skb(skb);
return 0; return;
} }
__skb_pull(skb, sizeof(*p));
skb->dev = adapter->port[p->iff].dev; skb->dev = adapter->port[p->iff].dev;
skb->dev->last_rx = jiffies; skb->dev->last_rx = jiffies;
@@ -1427,7 +1409,6 @@ static int sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)
netif_rx(skb); netif_rx(skb);
#endif #endif
} }
return 0;
} }
/* /*
@@ -1448,29 +1429,28 @@ static inline int enough_free_Tx_descs(const struct cmdQ *q)
static void restart_tx_queues(struct sge *sge) static void restart_tx_queues(struct sge *sge)
{ {
struct adapter *adap = sge->adapter; struct adapter *adap = sge->adapter;
int i;
if (enough_free_Tx_descs(&sge->cmdQ[0])) { if (!enough_free_Tx_descs(&sge->cmdQ[0]))
int i; return;
for_each_port(adap, i) { for_each_port(adap, i) {
struct net_device *nd = adap->port[i].dev; struct net_device *nd = adap->port[i].dev;
if (test_and_clear_bit(nd->if_port, if (test_and_clear_bit(nd->if_port, &sge->stopped_tx_queues) &&
&sge->stopped_tx_queues) && netif_running(nd)) {
netif_running(nd)) { sge->stats.cmdQ_restarted[2]++;
sge->stats.cmdQ_restarted[2]++; netif_wake_queue(nd);
netif_wake_queue(nd);
}
} }
} }
} }
/* /*
* update_tx_info is called from the interrupt handler/NAPI to return cmdQ0 * update_tx_info is called from the interrupt handler/NAPI to return cmdQ0
* information. * information.
*/ */
static unsigned int update_tx_info(struct adapter *adapter, static unsigned int update_tx_info(struct adapter *adapter,
unsigned int flags, unsigned int flags,
unsigned int pr0) unsigned int pr0)
{ {
struct sge *sge = adapter->sge; struct sge *sge = adapter->sge;
@@ -1510,29 +1490,30 @@ static int process_responses(struct adapter *adapter, int budget)
struct sge *sge = adapter->sge; struct sge *sge = adapter->sge;
struct respQ *q = &sge->respQ; struct respQ *q = &sge->respQ;
struct respQ_e *e = &q->entries[q->cidx]; struct respQ_e *e = &q->entries[q->cidx];
int budget_left = budget; int done = 0;
unsigned int flags = 0; unsigned int flags = 0;
unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0}; unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};
while (likely(budget_left && e->GenerationBit == q->genbit)) { while (done < budget && e->GenerationBit == q->genbit) {
flags |= e->Qsleeping; flags |= e->Qsleeping;
cmdq_processed[0] += e->Cmdq0CreditReturn; cmdq_processed[0] += e->Cmdq0CreditReturn;
cmdq_processed[1] += e->Cmdq1CreditReturn; cmdq_processed[1] += e->Cmdq1CreditReturn;
/* We batch updates to the TX side to avoid cacheline /* We batch updates to the TX side to avoid cacheline
* ping-pong of TX state information on MP where the sender * ping-pong of TX state information on MP where the sender
* might run on a different CPU than this function... * might run on a different CPU than this function...
*/ */
if (unlikely(flags & F_CMDQ0_ENABLE || cmdq_processed[0] > 64)) { if (unlikely((flags & F_CMDQ0_ENABLE) || cmdq_processed[0] > 64)) {
flags = update_tx_info(adapter, flags, cmdq_processed[0]); flags = update_tx_info(adapter, flags, cmdq_processed[0]);
cmdq_processed[0] = 0; cmdq_processed[0] = 0;
} }
if (unlikely(cmdq_processed[1] > 16)) { if (unlikely(cmdq_processed[1] > 16)) {
sge->cmdQ[1].processed += cmdq_processed[1]; sge->cmdQ[1].processed += cmdq_processed[1];
cmdq_processed[1] = 0; cmdq_processed[1] = 0;
} }
if (likely(e->DataValid)) { if (likely(e->DataValid)) {
struct freelQ *fl = &sge->freelQ[e->FreelistQid]; struct freelQ *fl = &sge->freelQ[e->FreelistQid];
@@ -1542,12 +1523,16 @@ static int process_responses(struct adapter *adapter, int budget)
else else
sge_rx(sge, fl, e->BufferLength); sge_rx(sge, fl, e->BufferLength);
++done;
/* /*
* Note: this depends on each packet consuming a * Note: this depends on each packet consuming a
* single free-list buffer; cf. the BUG above. * single free-list buffer; cf. the BUG above.
*/ */
if (++fl->cidx == fl->size) if (++fl->cidx == fl->size)
fl->cidx = 0; fl->cidx = 0;
prefetch(fl->centries[fl->cidx].skb);
if (unlikely(--fl->credits < if (unlikely(--fl->credits <
fl->size - SGE_FREEL_REFILL_THRESH)) fl->size - SGE_FREEL_REFILL_THRESH))
refill_free_list(sge, fl); refill_free_list(sge, fl);
@@ -1566,14 +1551,20 @@ static int process_responses(struct adapter *adapter, int budget)
writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT); writel(q->credits, adapter->regs + A_SG_RSPQUEUECREDIT);
q->credits = 0; q->credits = 0;
} }
--budget_left;
} }
flags = update_tx_info(adapter, flags, cmdq_processed[0]); flags = update_tx_info(adapter, flags, cmdq_processed[0]);
sge->cmdQ[1].processed += cmdq_processed[1]; sge->cmdQ[1].processed += cmdq_processed[1];
budget -= budget_left; return done;
return budget; }
static inline int responses_pending(const struct adapter *adapter)
{
const struct respQ *Q = &adapter->sge->respQ;
const struct respQ_e *e = &Q->entries[Q->cidx];
return (e->GenerationBit == Q->genbit);
} }
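The new `responses_pending()` helper relies on the ring's generation-bit scheme: the consumer never reads a producer index; instead, each entry carries the generation bit the producer stored when writing it, and the consumer flips its expected bit every time it wraps. An entry is valid exactly when the bits match. A toy model of the idea (struct and field names invented for the sketch, not the driver's):

```c
#include <assert.h>

#define RING_SIZE 4

struct entry { int gen; int data; };

struct ring {
    struct entry e[RING_SIZE];
    unsigned cidx;   /* consumer index */
    int genbit;      /* generation the consumer expects next */
};

/* Producer publishes an entry by writing its current generation bit
 * (real hardware would order this store after the data store). */
static void produce(struct ring *r, unsigned pidx, int gen, int data)
{
    r->e[pidx].data = data;
    r->e[pidx].gen = gen;
}

static int responses_pending(const struct ring *r)
{
    return r->e[r->cidx].gen == r->genbit;
}

static int consume(struct ring *r)
{
    int data = r->e[r->cidx].data;
    if (++r->cidx == RING_SIZE) {  /* wrapped: expect the flipped bit */
        r->cidx = 0;
        r->genbit ^= 1;
    }
    return data;
}
```

Because validity is encoded in the entries themselves, the consumer needs no MMIO read of a producer pointer to know whether work is pending, which is exactly what the interrupt handler below exploits.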
#ifdef CONFIG_CHELSIO_T1_NAPI #ifdef CONFIG_CHELSIO_T1_NAPI
@@ -1585,19 +1576,25 @@ static int process_responses(struct adapter *adapter, int budget)
* which the caller must ensure is a valid pure response. Returns 1 if it * which the caller must ensure is a valid pure response. Returns 1 if it
* encounters a valid data-carrying response, 0 otherwise. * encounters a valid data-carrying response, 0 otherwise.
*/ */
static int process_pure_responses(struct adapter *adapter, struct respQ_e *e) static int process_pure_responses(struct adapter *adapter)
{ {
struct sge *sge = adapter->sge; struct sge *sge = adapter->sge;
struct respQ *q = &sge->respQ; struct respQ *q = &sge->respQ;
struct respQ_e *e = &q->entries[q->cidx];
const struct freelQ *fl = &sge->freelQ[e->FreelistQid];
unsigned int flags = 0; unsigned int flags = 0;
unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0}; unsigned int cmdq_processed[SGE_CMDQ_N] = {0, 0};
prefetch(fl->centries[fl->cidx].skb);
if (e->DataValid)
return 1;
do { do {
flags |= e->Qsleeping; flags |= e->Qsleeping;
cmdq_processed[0] += e->Cmdq0CreditReturn; cmdq_processed[0] += e->Cmdq0CreditReturn;
cmdq_processed[1] += e->Cmdq1CreditReturn; cmdq_processed[1] += e->Cmdq1CreditReturn;
e++; e++;
if (unlikely(++q->cidx == q->size)) { if (unlikely(++q->cidx == q->size)) {
q->cidx = 0; q->cidx = 0;
@@ -1613,7 +1610,7 @@ static int process_pure_responses(struct adapter *adapter, struct respQ_e *e)
sge->stats.pure_rsps++; sge->stats.pure_rsps++;
} while (e->GenerationBit == q->genbit && !e->DataValid); } while (e->GenerationBit == q->genbit && !e->DataValid);
flags = update_tx_info(adapter, flags, cmdq_processed[0]); flags = update_tx_info(adapter, flags, cmdq_processed[0]);
sge->cmdQ[1].processed += cmdq_processed[1]; sge->cmdQ[1].processed += cmdq_processed[1];
return e->GenerationBit == q->genbit; return e->GenerationBit == q->genbit;
@@ -1627,23 +1624,20 @@ static int process_pure_responses(struct adapter *adapter, struct respQ_e *e)
int t1_poll(struct net_device *dev, int *budget) int t1_poll(struct net_device *dev, int *budget)
{ {
struct adapter *adapter = dev->priv; struct adapter *adapter = dev->priv;
int effective_budget = min(*budget, dev->quota); int work_done;
int work_done = process_responses(adapter, effective_budget);
work_done = process_responses(adapter, min(*budget, dev->quota));
*budget -= work_done; *budget -= work_done;
dev->quota -= work_done; dev->quota -= work_done;
if (work_done >= effective_budget) if (unlikely(responses_pending(adapter)))
return 1; return 1;
spin_lock_irq(&adapter->async_lock); netif_rx_complete(dev);
__netif_rx_complete(dev);
writel(adapter->sge->respQ.cidx, adapter->regs + A_SG_SLEEPING); writel(adapter->sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);
writel(adapter->slow_intr_mask | F_PL_INTR_SGE_DATA,
adapter->regs + A_PL_ENABLE);
spin_unlock_irq(&adapter->async_lock);
return 0; return 0;
} }
/* /*
@@ -1652,44 +1646,33 @@ int t1_poll(struct net_device *dev, int *budget)
irqreturn_t t1_interrupt(int irq, void *data) irqreturn_t t1_interrupt(int irq, void *data)
{ {
struct adapter *adapter = data; struct adapter *adapter = data;
struct net_device *dev = adapter->sge->netdev;
struct sge *sge = adapter->sge; struct sge *sge = adapter->sge;
u32 cause; int handled;
int handled = 0;
cause = readl(adapter->regs + A_PL_CAUSE); if (likely(responses_pending(adapter))) {
if (cause == 0 || cause == ~0) struct net_device *dev = sge->netdev;
return IRQ_NONE;
writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);
if (__netif_rx_schedule_prep(dev)) {
if (process_pure_responses(adapter))
__netif_rx_schedule(dev);
else {
/* no data, no NAPI needed */
writel(sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);
netif_poll_enable(dev); /* undo schedule_prep */
}
}
return IRQ_HANDLED;
}
spin_lock(&adapter->async_lock); spin_lock(&adapter->async_lock);
if (cause & F_PL_INTR_SGE_DATA) { handled = t1_slow_intr_handler(adapter);
struct respQ *q = &adapter->sge->respQ; spin_unlock(&adapter->async_lock);
struct respQ_e *e = &q->entries[q->cidx];
handled = 1;
writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);
if (e->GenerationBit == q->genbit &&
__netif_rx_schedule_prep(dev)) {
if (e->DataValid || process_pure_responses(adapter, e)) {
/* mask off data IRQ */
writel(adapter->slow_intr_mask,
adapter->regs + A_PL_ENABLE);
__netif_rx_schedule(sge->netdev);
goto unlock;
}
/* no data, no NAPI needed */
netif_poll_enable(dev);
}
writel(q->cidx, adapter->regs + A_SG_SLEEPING);
} else
handled = t1_slow_intr_handler(adapter);
if (!handled) if (!handled)
sge->stats.unhandled_irqs++; sge->stats.unhandled_irqs++;
unlock:
spin_unlock(&adapter->async_lock);
return IRQ_RETVAL(handled != 0); return IRQ_RETVAL(handled != 0);
} }
@@ -1712,17 +1695,13 @@ unlock:
irqreturn_t t1_interrupt(int irq, void *cookie) irqreturn_t t1_interrupt(int irq, void *cookie)
{ {
int work_done; int work_done;
struct respQ_e *e;
struct adapter *adapter = cookie; struct adapter *adapter = cookie;
struct respQ *Q = &adapter->sge->respQ;
spin_lock(&adapter->async_lock); spin_lock(&adapter->async_lock);
e = &Q->entries[Q->cidx];
prefetch(e);
writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE); writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);
if (likely(e->GenerationBit == Q->genbit)) if (likely(responses_pending(adapter)))
work_done = process_responses(adapter, -1); work_done = process_responses(adapter, -1);
else else
work_done = t1_slow_intr_handler(adapter); work_done = t1_slow_intr_handler(adapter);
@@ -1796,7 +1775,7 @@ static int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
* through the scheduler. * through the scheduler.
*/ */
if (sge->tx_sched && !qid && skb->dev) { if (sge->tx_sched && !qid && skb->dev) {
use_sched: use_sched:
use_sched_skb = 1; use_sched_skb = 1;
/* Note that the scheduler might return a different skb than /* Note that the scheduler might return a different skb than
* the one passed in. * the one passed in.
@@ -1900,7 +1879,7 @@ int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
cpl = (struct cpl_tx_pkt *)hdr; cpl = (struct cpl_tx_pkt *)hdr;
} else { } else {
/* /*
* Packets shorter than ETH_HLEN can break the MAC, drop them * Packets shorter than ETH_HLEN can break the MAC, drop them
* early. Also, we may get oversized packets because some * early. Also, we may get oversized packets because some
* parts of the kernel don't handle our unusual hard_header_len * parts of the kernel don't handle our unusual hard_header_len
* right, drop those too. * right, drop those too.
@@ -1984,9 +1963,9 @@ send:
* then silently discard to avoid leak. * then silently discard to avoid leak.
*/ */
if (unlikely(ret != NETDEV_TX_OK && skb != orig_skb)) { if (unlikely(ret != NETDEV_TX_OK && skb != orig_skb)) {
dev_kfree_skb_any(skb); dev_kfree_skb_any(skb);
ret = NETDEV_TX_OK; ret = NETDEV_TX_OK;
} }
return ret; return ret;
} }
@@ -2099,31 +2078,35 @@ static void espibug_workaround_t204(unsigned long data)
if (adapter->open_device_map & PORT_MASK) { if (adapter->open_device_map & PORT_MASK) {
int i; int i;
if (t1_espi_get_mon_t204(adapter, &(seop[0]), 0) < 0) {
return;
}
for (i = 0; i < nports; i++) {
struct sk_buff *skb = sge->espibug_skb[i];
if ( (netif_running(adapter->port[i].dev)) &&
!(netif_queue_stopped(adapter->port[i].dev)) &&
(seop[i] && ((seop[i] & 0xfff) == 0)) &&
skb ) {
if (!skb->cb[0]) {
u8 ch_mac_addr[ETH_ALEN] =
{0x0, 0x7, 0x43, 0x0, 0x0, 0x0};
memcpy(skb->data + sizeof(struct cpl_tx_pkt),
ch_mac_addr, ETH_ALEN);
memcpy(skb->data + skb->len - 10,
ch_mac_addr, ETH_ALEN);
skb->cb[0] = 0xff;
}
/* bump the reference count to avoid freeing of if (t1_espi_get_mon_t204(adapter, &(seop[0]), 0) < 0)
* the skb once the DMA has completed. return;
*/
skb = skb_get(skb); for (i = 0; i < nports; i++) {
t1_sge_tx(skb, adapter, 0, adapter->port[i].dev); struct sk_buff *skb = sge->espibug_skb[i];
if (!netif_running(adapter->port[i].dev) ||
netif_queue_stopped(adapter->port[i].dev) ||
!seop[i] || ((seop[i] & 0xfff) != 0) || !skb)
continue;
if (!skb->cb[0]) {
u8 ch_mac_addr[ETH_ALEN] = {
0x0, 0x7, 0x43, 0x0, 0x0, 0x0
};
memcpy(skb->data + sizeof(struct cpl_tx_pkt),
ch_mac_addr, ETH_ALEN);
memcpy(skb->data + skb->len - 10,
ch_mac_addr, ETH_ALEN);
skb->cb[0] = 0xff;
} }
/* bump the reference count to avoid freeing of
* the skb once the DMA has completed.
*/
skb = skb_get(skb);
t1_sge_tx(skb, adapter, 0, adapter->port[i].dev);
} }
} }
mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout); mod_timer(&sge->espibug_timer, jiffies + sge->espibug_timeout);
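The `skb_get()` call in the workaround above bumps the skb's reference count before requeueing it for transmit, so the free that runs at DMA completion only drops one reference and the driver's stored copy survives for the next retransmit. A user-space sketch of that refcount idiom (the `struct buf` type and helpers imitate, but are not, the kernel's skb API):

```c
#include <assert.h>

struct buf {
    int users;     /* reference count, like skb->users */
    int freed;     /* stand-in for the memory actually being freed */
};

/* Like skb_get(): take an extra reference. */
static struct buf *buf_get(struct buf *b)
{
    b->users++;
    return b;
}

/* Like kfree_skb(): drop a reference, free on the last one. */
static void buf_put(struct buf *b)
{
    if (--b->users == 0)
        b->freed = 1;
}
```

The owner holds one reference across the whole lifetime; each hand-off to the transmit path takes a second reference that the completion path releases, so the buffer can be resubmitted indefinitely.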
@@ -2192,9 +2175,8 @@ struct sge * __devinit t1_sge_create(struct adapter *adapter,
if (adapter->params.nports > 1) { if (adapter->params.nports > 1) {
tx_sched_init(sge); tx_sched_init(sge);
sge->espibug_timer.function = espibug_workaround_t204; sge->espibug_timer.function = espibug_workaround_t204;
} else { } else
sge->espibug_timer.function = espibug_workaround; sge->espibug_timer.function = espibug_workaround;
}
sge->espibug_timer.data = (unsigned long)sge->adapter; sge->espibug_timer.data = (unsigned long)sge->adapter;
sge->espibug_timeout = 1; sge->espibug_timeout = 1;
@@ -2202,7 +2184,7 @@ struct sge * __devinit t1_sge_create(struct adapter *adapter,
if (adapter->params.nports > 1) if (adapter->params.nports > 1)
sge->espibug_timeout = HZ/100; sge->espibug_timeout = HZ/100;
} }
p->cmdQ_size[0] = SGE_CMDQ0_E_N; p->cmdQ_size[0] = SGE_CMDQ0_E_N;
p->cmdQ_size[1] = SGE_CMDQ1_E_N; p->cmdQ_size[1] = SGE_CMDQ1_E_N;


@@ -223,13 +223,13 @@ static int fpga_slow_intr(adapter_t *adapter)
t1_sge_intr_error_handler(adapter->sge); t1_sge_intr_error_handler(adapter->sge);
if (cause & FPGA_PCIX_INTERRUPT_GMAC) if (cause & FPGA_PCIX_INTERRUPT_GMAC)
fpga_phy_intr_handler(adapter); fpga_phy_intr_handler(adapter);
if (cause & FPGA_PCIX_INTERRUPT_TP) { if (cause & FPGA_PCIX_INTERRUPT_TP) {
/* /*
* FPGA doesn't support MC4 interrupts and it requires * FPGA doesn't support MC4 interrupts and it requires
* this odd layer of indirection for MC5. * this odd layer of indirection for MC5.
*/ */
u32 tp_cause = readl(adapter->regs + FPGA_TP_ADDR_INTERRUPT_CAUSE); u32 tp_cause = readl(adapter->regs + FPGA_TP_ADDR_INTERRUPT_CAUSE);
/* Clear TP interrupt */ /* Clear TP interrupt */
@@ -262,8 +262,7 @@ static int mi1_wait_until_ready(adapter_t *adapter, int mi1_reg)
udelay(10); udelay(10);
} while (busy && --attempts); } while (busy && --attempts);
if (busy) if (busy)
CH_ALERT("%s: MDIO operation timed out\n", CH_ALERT("%s: MDIO operation timed out\n", adapter->name);
adapter->name);
return busy; return busy;
} }
@@ -605,22 +604,23 @@ int t1_elmer0_ext_intr_handler(adapter_t *adapter)
switch (board_info(adapter)->board) { switch (board_info(adapter)->board) {
#ifdef CONFIG_CHELSIO_T1_1G #ifdef CONFIG_CHELSIO_T1_1G
case CHBT_BOARD_CHT204: case CHBT_BOARD_CHT204:
case CHBT_BOARD_CHT204E: case CHBT_BOARD_CHT204E:
case CHBT_BOARD_CHN204: case CHBT_BOARD_CHN204:
case CHBT_BOARD_CHT204V: { case CHBT_BOARD_CHT204V: {
int i, port_bit; int i, port_bit;
for_each_port(adapter, i) { for_each_port(adapter, i) {
port_bit = i + 1; port_bit = i + 1;
if (!(cause & (1 << port_bit))) continue; if (!(cause & (1 << port_bit)))
continue;
phy = adapter->port[i].phy; phy = adapter->port[i].phy;
phy_cause = phy->ops->interrupt_handler(phy); phy_cause = phy->ops->interrupt_handler(phy);
if (phy_cause & cphy_cause_link_change) if (phy_cause & cphy_cause_link_change)
t1_link_changed(adapter, i); t1_link_changed(adapter, i);
} }
break; break;
} }
case CHBT_BOARD_CHT101: case CHBT_BOARD_CHT101:
if (cause & ELMER0_GP_BIT1) { /* Marvell 88E1111 interrupt */ if (cause & ELMER0_GP_BIT1) { /* Marvell 88E1111 interrupt */
phy = adapter->port[0].phy; phy = adapter->port[0].phy;
@@ -631,13 +631,13 @@ int t1_elmer0_ext_intr_handler(adapter_t *adapter)
break; break;
case CHBT_BOARD_7500: { case CHBT_BOARD_7500: {
int p; int p;
/* /*
* Elmer0's interrupt cause isn't useful here because there is * Elmer0's interrupt cause isn't useful here because there is
* only one bit that can be set for all 4 ports. This means * only one bit that can be set for all 4 ports. This means
* we are forced to check every PHY's interrupt status * we are forced to check every PHY's interrupt status
* register to see who initiated the interrupt. * register to see who initiated the interrupt.
*/ */
for_each_port(adapter, p) { for_each_port(adapter, p) {
phy = adapter->port[p].phy; phy = adapter->port[p].phy;
phy_cause = phy->ops->interrupt_handler(phy); phy_cause = phy->ops->interrupt_handler(phy);
if (phy_cause & cphy_cause_link_change) if (phy_cause & cphy_cause_link_change)
@@ -658,7 +658,7 @@ int t1_elmer0_ext_intr_handler(adapter_t *adapter)
break; break;
case CHBT_BOARD_8000: case CHBT_BOARD_8000:
case CHBT_BOARD_CHT110: case CHBT_BOARD_CHT110:
CH_DBG(adapter, INTR, "External interrupt cause 0x%x\n", CH_DBG(adapter, INTR, "External interrupt cause 0x%x\n",
cause); cause);
if (cause & ELMER0_GP_BIT1) { /* PMC3393 INTB */ if (cause & ELMER0_GP_BIT1) { /* PMC3393 INTB */
struct cmac *mac = adapter->port[0].mac; struct cmac *mac = adapter->port[0].mac;
@@ -670,9 +670,9 @@ int t1_elmer0_ext_intr_handler(adapter_t *adapter)
t1_tpi_read(adapter, t1_tpi_read(adapter,
A_ELMER0_GPI_STAT, &mod_detect); A_ELMER0_GPI_STAT, &mod_detect);
CH_MSG(adapter, INFO, LINK, "XPAK %s\n", CH_MSG(adapter, INFO, LINK, "XPAK %s\n",
mod_detect ? "removed" : "inserted"); mod_detect ? "removed" : "inserted");
} }
break; break;
#ifdef CONFIG_CHELSIO_T1_COUGAR #ifdef CONFIG_CHELSIO_T1_COUGAR
case CHBT_BOARD_COUGAR: case CHBT_BOARD_COUGAR:
@@ -688,7 +688,8 @@ int t1_elmer0_ext_intr_handler(adapter_t *adapter)
for_each_port(adapter, i) { for_each_port(adapter, i) {
port_bit = i ? i + 1 : 0; port_bit = i ? i + 1 : 0;
if (!(cause & (1 << port_bit))) continue; if (!(cause & (1 << port_bit)))
continue;
phy = adapter->port[i].phy; phy = adapter->port[i].phy;
phy_cause = phy->ops->interrupt_handler(phy); phy_cause = phy->ops->interrupt_handler(phy);
@@ -755,7 +756,7 @@ void t1_interrupts_disable(adapter_t* adapter)
/* Disable PCIX & external chip interrupts. */ /* Disable PCIX & external chip interrupts. */
if (t1_is_asic(adapter)) if (t1_is_asic(adapter))
writel(0, adapter->regs + A_PL_ENABLE); writel(0, adapter->regs + A_PL_ENABLE);
/* PCI-X interrupts */ /* PCI-X interrupts */
pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE, 0); pci_write_config_dword(adapter->pdev, A_PCICFG_INTR_ENABLE, 0);
@@ -830,11 +831,11 @@ int t1_slow_intr_handler(adapter_t *adapter)
/* Power sequencing is a work-around for Intel's XPAKs. */ /* Power sequencing is a work-around for Intel's XPAKs. */
static void power_sequence_xpak(adapter_t* adapter) static void power_sequence_xpak(adapter_t* adapter)
{ {
u32 mod_detect; u32 mod_detect;
u32 gpo; u32 gpo;
/* Check for XPAK */ /* Check for XPAK */
t1_tpi_read(adapter, A_ELMER0_GPI_STAT, &mod_detect); t1_tpi_read(adapter, A_ELMER0_GPI_STAT, &mod_detect);
if (!(ELMER0_GP_BIT5 & mod_detect)) { if (!(ELMER0_GP_BIT5 & mod_detect)) {
/* XPAK is present */ /* XPAK is present */
t1_tpi_read(adapter, A_ELMER0_GPO, &gpo); t1_tpi_read(adapter, A_ELMER0_GPO, &gpo);
@@ -877,31 +878,31 @@ static int board_init(adapter_t *adapter, const struct board_info *bi)
case CHBT_BOARD_N210: case CHBT_BOARD_N210:
case CHBT_BOARD_CHT210: case CHBT_BOARD_CHT210:
case CHBT_BOARD_COUGAR: case CHBT_BOARD_COUGAR:
t1_tpi_par(adapter, 0xf); t1_tpi_par(adapter, 0xf);
t1_tpi_write(adapter, A_ELMER0_GPO, 0x800); t1_tpi_write(adapter, A_ELMER0_GPO, 0x800);
break; break;
case CHBT_BOARD_CHT110: case CHBT_BOARD_CHT110:
t1_tpi_par(adapter, 0xf); t1_tpi_par(adapter, 0xf);
t1_tpi_write(adapter, A_ELMER0_GPO, 0x1800); t1_tpi_write(adapter, A_ELMER0_GPO, 0x1800);
/* TBD XXX Might not need. This fixes a problem /* TBD XXX Might not need. This fixes a problem
* described in the Intel SR XPAK errata. * described in the Intel SR XPAK errata.
*/ */
power_sequence_xpak(adapter); power_sequence_xpak(adapter);
break; break;
#ifdef CONFIG_CHELSIO_T1_1G #ifdef CONFIG_CHELSIO_T1_1G
case CHBT_BOARD_CHT204E: case CHBT_BOARD_CHT204E:
/* add config space write here */ /* add config space write here */
case CHBT_BOARD_CHT204: case CHBT_BOARD_CHT204:
case CHBT_BOARD_CHT204V: case CHBT_BOARD_CHT204V:
case CHBT_BOARD_CHN204: case CHBT_BOARD_CHN204:
t1_tpi_par(adapter, 0xf); t1_tpi_par(adapter, 0xf);
t1_tpi_write(adapter, A_ELMER0_GPO, 0x804); t1_tpi_write(adapter, A_ELMER0_GPO, 0x804);
break; break;
case CHBT_BOARD_CHT101: case CHBT_BOARD_CHT101:
case CHBT_BOARD_7500: case CHBT_BOARD_7500:
t1_tpi_par(adapter, 0xf); t1_tpi_par(adapter, 0xf);
t1_tpi_write(adapter, A_ELMER0_GPO, 0x1804); t1_tpi_write(adapter, A_ELMER0_GPO, 0x1804);
break; break;
#endif #endif
} }
@@ -941,7 +942,7 @@ int t1_init_hw_modules(adapter_t *adapter)
goto out_err; goto out_err;
err = 0; err = 0;
out_err: out_err:
return err; return err;
} }
@@ -983,7 +984,7 @@ void t1_free_sw_modules(adapter_t *adapter)
if (adapter->espi) if (adapter->espi)
t1_espi_destroy(adapter->espi); t1_espi_destroy(adapter->espi);
#ifdef CONFIG_CHELSIO_T1_COUGAR #ifdef CONFIG_CHELSIO_T1_COUGAR
if (adapter->cspi) if (adapter->cspi)
t1_cspi_destroy(adapter->cspi); t1_cspi_destroy(adapter->cspi);
#endif #endif
} }
@@ -1010,7 +1011,7 @@ static void __devinit init_link_config(struct link_config *lc,
CH_ERR("%s: CSPI initialization failed\n", CH_ERR("%s: CSPI initialization failed\n",
adapter->name); adapter->name);
goto error; goto error;
} }
#endif #endif
/* /*


@@ -17,39 +17,36 @@ struct petp {
static void tp_init(adapter_t * ap, const struct tp_params *p, static void tp_init(adapter_t * ap, const struct tp_params *p,
unsigned int tp_clk) unsigned int tp_clk)
{ {
if (t1_is_asic(ap)) { u32 val;
u32 val;
val = F_TP_IN_CSPI_CPL | F_TP_IN_CSPI_CHECK_IP_CSUM | if (!t1_is_asic(ap))
F_TP_IN_CSPI_CHECK_TCP_CSUM | F_TP_IN_ESPI_ETHERNET; return;
if (!p->pm_size)
val |= F_OFFLOAD_DISABLE;
else
val |= F_TP_IN_ESPI_CHECK_IP_CSUM |
F_TP_IN_ESPI_CHECK_TCP_CSUM;
writel(val, ap->regs + A_TP_IN_CONFIG);
writel(F_TP_OUT_CSPI_CPL |
F_TP_OUT_ESPI_ETHERNET |
F_TP_OUT_ESPI_GENERATE_IP_CSUM |
F_TP_OUT_ESPI_GENERATE_TCP_CSUM,
ap->regs + A_TP_OUT_CONFIG);
writel(V_IP_TTL(64) |
F_PATH_MTU /* IP DF bit */ |
V_5TUPLE_LOOKUP(p->use_5tuple_mode) |
V_SYN_COOKIE_PARAMETER(29),
ap->regs + A_TP_GLOBAL_CONFIG);
/*
* Enable pause frame deadlock prevention.
*/
if (is_T2(ap) && ap->params.nports > 1) {
u32 drop_ticks = DROP_MSEC * (tp_clk / 1000);
writel(F_ENABLE_TX_DROP | F_ENABLE_TX_ERROR | val = F_TP_IN_CSPI_CPL | F_TP_IN_CSPI_CHECK_IP_CSUM |
V_DROP_TICKS_CNT(drop_ticks) | F_TP_IN_CSPI_CHECK_TCP_CSUM | F_TP_IN_ESPI_ETHERNET;
V_NUM_PKTS_DROPPED(DROP_PKTS_CNT), if (!p->pm_size)
ap->regs + A_TP_TX_DROP_CONFIG); val |= F_OFFLOAD_DISABLE;
} else
val |= F_TP_IN_ESPI_CHECK_IP_CSUM | F_TP_IN_ESPI_CHECK_TCP_CSUM;
writel(val, ap->regs + A_TP_IN_CONFIG);
writel(F_TP_OUT_CSPI_CPL |
F_TP_OUT_ESPI_ETHERNET |
F_TP_OUT_ESPI_GENERATE_IP_CSUM |
F_TP_OUT_ESPI_GENERATE_TCP_CSUM, ap->regs + A_TP_OUT_CONFIG);
writel(V_IP_TTL(64) |
F_PATH_MTU /* IP DF bit */ |
V_5TUPLE_LOOKUP(p->use_5tuple_mode) |
V_SYN_COOKIE_PARAMETER(29), ap->regs + A_TP_GLOBAL_CONFIG);
/*
* Enable pause frame deadlock prevention.
*/
if (is_T2(ap) && ap->params.nports > 1) {
u32 drop_ticks = DROP_MSEC * (tp_clk / 1000);
writel(F_ENABLE_TX_DROP | F_ENABLE_TX_ERROR |
V_DROP_TICKS_CNT(drop_ticks) |
V_NUM_PKTS_DROPPED(DROP_PKTS_CNT),
ap->regs + A_TP_TX_DROP_CONFIG);
} }
} }
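The pause-frame deadlock prevention configured in `tp_init` converts a millisecond drop timeout into core-clock ticks with `drop_ticks = DROP_MSEC * (tp_clk / 1000)`. A tiny sketch of that conversion (constant values below are examples, not the driver's actual clock rate):

```c
#include <assert.h>
#include <stdint.h>

/* Milliseconds to core-clock ticks, dividing the clock first so the
 * multiply stays within 32 bits for realistic clock rates. */
static uint32_t ms_to_ticks(uint32_t ms, uint32_t clk_hz)
{
    return ms * (clk_hz / 1000);
}
```

Dividing `clk_hz` by 1000 before multiplying trades a little precision (sub-kHz remainder is dropped) for overflow safety, the same ordering the driver uses.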
@@ -61,6 +58,7 @@ void t1_tp_destroy(struct petp *tp)
struct petp *__devinit t1_tp_create(adapter_t * adapter, struct tp_params *p) struct petp *__devinit t1_tp_create(adapter_t * adapter, struct tp_params *p)
{ {
struct petp *tp = kzalloc(sizeof(*tp), GFP_KERNEL); struct petp *tp = kzalloc(sizeof(*tp), GFP_KERNEL);
if (!tp) if (!tp)
return NULL; return NULL;


@@ -226,22 +226,21 @@ static void run_table(adapter_t *adapter, struct init_table *ib, int len)
if (ib[i].addr == INITBLOCK_SLEEP) { if (ib[i].addr == INITBLOCK_SLEEP) {
udelay( ib[i].data ); udelay( ib[i].data );
CH_ERR("sleep %d us\n",ib[i].data); CH_ERR("sleep %d us\n",ib[i].data);
} else { } else
vsc_write( adapter, ib[i].addr, ib[i].data ); vsc_write( adapter, ib[i].addr, ib[i].data );
}
} }
} }
static int bist_rd(adapter_t *adapter, int moduleid, int address)
{
	int data = 0;
	u32 result = 0;

	if ((address != 0x0) &&
	    (address != 0x1) &&
	    (address != 0x2) &&
	    (address != 0xd) &&
	    (address != 0xe))
		CH_ERR("No bist address: 0x%x\n", address);

	data = ((0x00 << 24) | ((address & 0xff) << 16) | (0x00 << 8) |
@@ -251,27 +250,27 @@ static int bist_rd(adapter_t *adapter, int moduleid, int address)
	udelay(10);
	vsc_read(adapter, REG_RAM_BIST_RESULT, &result);
	if ((result & (1 << 9)) != 0x0)
		CH_ERR("Still in bist read: 0x%x\n", result);
	else if ((result & (1 << 8)) != 0x0)
		CH_ERR("bist read error: 0x%x\n", result);

	return (result & 0xff);
}
static int bist_wr(adapter_t *adapter, int moduleid, int address, int value)
{
	int data = 0;
	u32 result = 0;

	if ((address != 0x0) &&
	    (address != 0x1) &&
	    (address != 0x2) &&
	    (address != 0xd) &&
	    (address != 0xe))
		CH_ERR("No bist address: 0x%x\n", address);

	if (value > 255)
		CH_ERR("Suspicious write out of range value: 0x%x\n", value);

	data = ((0x01 << 24) | ((address & 0xff) << 16) | (value << 8) |
@@ -281,12 +280,12 @@ static int bist_wr(adapter_t *adapter, int moduleid, int address, int value)
	udelay(5);
	vsc_read(adapter, REG_RAM_BIST_CMD, &result);
	if ((result & (1 << 27)) != 0x0)
		CH_ERR("Still in bist write: 0x%x\n", result);
	else if ((result & (1 << 26)) != 0x0)
		CH_ERR("bist write error: 0x%x\n", result);

	return 0;
}

static int run_bist(adapter_t *adapter, int moduleid)
@@ -295,7 +294,7 @@ static int run_bist(adapter_t *adapter, int moduleid)
	(void) bist_wr(adapter,moduleid, 0x00, 0x02);
	(void) bist_wr(adapter,moduleid, 0x01, 0x01);

	return 0;
}
static int check_bist(adapter_t *adapter, int moduleid)
@@ -309,27 +308,26 @@ static int check_bist(adapter_t *adapter, int moduleid)
	if ((result & 3) != 0x3)
		CH_ERR("Result: 0x%x  BIST error in ram %d, column: 0x%04x\n",
			result, moduleid, column);
	return 0;
}

static int enable_mem(adapter_t *adapter, int moduleid)
{
	/*enable mem*/
	(void) bist_wr(adapter,moduleid, 0x00, 0x00);
	return 0;
}

static int run_bist_all(adapter_t *adapter)
{
	int port = 0;
	u32 val = 0;

	vsc_write(adapter, REG_MEM_BIST, 0x5);
	vsc_read(adapter, REG_MEM_BIST, &val);

	for (port = 0; port < 12; port++)
		vsc_write(adapter, REG_DEV_SETUP(port), 0x0);

	udelay(300);
	vsc_write(adapter, REG_SPI4_MISC, 0x00040409);
@@ -352,13 +350,13 @@ static int run_bist_all(adapter_t *adapter)
	udelay(300);
	vsc_write(adapter, REG_SPI4_MISC, 0x60040400);
	udelay(300);
	for (port = 0; port < 12; port++)
		vsc_write(adapter, REG_DEV_SETUP(port), 0x1);
	udelay(300);
	vsc_write(adapter, REG_MEM_BIST, 0x0);
	mdelay(10);
	return 0;
}

static int mac_intr_handler(struct cmac *mac)
@@ -591,40 +589,46 @@ static void rmon_update(struct cmac *mac, unsigned int addr, u64 *stat)
static void port_stats_update(struct cmac *mac)
{
	struct {
		unsigned int reg;
		unsigned int offset;
	} hw_stats[] = {

#define HW_STAT(reg, stat_name) \
	{ reg, (&((struct cmac_statistics *)NULL)->stat_name) - (u64 *)NULL }

		/* Rx stats */
		HW_STAT(RxUnicast, RxUnicastFramesOK),
		HW_STAT(RxMulticast, RxMulticastFramesOK),
		HW_STAT(RxBroadcast, RxBroadcastFramesOK),
		HW_STAT(Crc, RxFCSErrors),
		HW_STAT(RxAlignment, RxAlignErrors),
		HW_STAT(RxOversize, RxFrameTooLongErrors),
		HW_STAT(RxPause, RxPauseFrames),
		HW_STAT(RxJabbers, RxJabberErrors),
		HW_STAT(RxFragments, RxRuntErrors),
		HW_STAT(RxUndersize, RxRuntErrors),
		HW_STAT(RxSymbolCarrier, RxSymbolErrors),
		HW_STAT(RxSize1519ToMax, RxJumboFramesOK),

		/* Tx stats (skip collision stats as we are full-duplex only) */
		HW_STAT(TxUnicast, TxUnicastFramesOK),
		HW_STAT(TxMulticast, TxMulticastFramesOK),
		HW_STAT(TxBroadcast, TxBroadcastFramesOK),
		HW_STAT(TxPause, TxPauseFrames),
		HW_STAT(TxUnderrun, TxUnderrun),
		HW_STAT(TxSize1519ToMax, TxJumboFramesOK),
	}, *p = hw_stats;
	unsigned int port = mac->instance->index;
	u64 *stats = (u64 *)&mac->stats;
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(hw_stats); i++)
		rmon_update(mac, CRA(0x4, port, p->reg), stats + p->offset);

	rmon_update(mac, REG_TX_OK_BYTES(port), &mac->stats.TxOctetsOK);
	rmon_update(mac, REG_RX_OK_BYTES(port), &mac->stats.RxOctetsOK);
	rmon_update(mac, REG_RX_BAD_BYTES(port), &mac->stats.RxOctetsBad);
}
/*
@@ -686,7 +690,8 @@ static struct cmac *vsc7326_mac_create(adapter_t *adapter, int index)
	int i;

	mac = kzalloc(sizeof(*mac) + sizeof(cmac_instance), GFP_KERNEL);
	if (!mac)
		return NULL;

	mac->ops = &vsc7326_ops;
	mac->instance = (cmac_instance *)(mac + 1);
@@ -192,73 +192,84 @@
#define REG_HDX(pn)		CRA(0x1,pn,0x19)	/* Half-duplex config */

/* Statistics */
/* CRA(0x4,pn,reg) */
/* reg below */
/* pn = port number, 0-a, a = 10GbE */

enum {
	RxInBytes		= 0x00,	// # Rx in octets
	RxSymbolCarrier		= 0x01,	// Frames w/ symbol errors
	RxPause			= 0x02,	// # pause frames received
	RxUnsupOpcode		= 0x03,	// # control frames with unsupported opcode
	RxOkBytes		= 0x04,	// # octets in good frames
	RxBadBytes		= 0x05,	// # octets in bad frames
	RxUnicast		= 0x06,	// # good unicast frames
	RxMulticast		= 0x07,	// # good multicast frames
	RxBroadcast		= 0x08,	// # good broadcast frames
	Crc			= 0x09,	// # frames w/ bad CRC only
	RxAlignment		= 0x0a,	// # frames w/ alignment err
	RxUndersize		= 0x0b,	// # frames undersize
	RxFragments		= 0x0c,	// # frames undersize w/ crc err
	RxInRangeLengthError	= 0x0d,	// # frames with length error
	RxOutOfRangeError	= 0x0e,	// # frames with illegal length field
	RxOversize		= 0x0f,	// # frames oversize
	RxJabbers		= 0x10,	// # frames oversize w/ crc err
	RxSize64		= 0x11,	// # frames 64 octets long
	RxSize65To127		= 0x12,	// # frames 65-127 octets
	RxSize128To255		= 0x13,	// # frames 128-255
	RxSize256To511		= 0x14,	// # frames 256-511
	RxSize512To1023		= 0x15,	// # frames 512-1023
	RxSize1024To1518	= 0x16,	// # frames 1024-1518
	RxSize1519ToMax		= 0x17,	// # frames 1519-max

	TxOutBytes		= 0x18,	// # octets tx
	TxPause			= 0x19,	// # pause frames sent
	TxOkBytes		= 0x1a,	// # octets tx OK
	TxUnicast		= 0x1b,	// # frames unicast
	TxMulticast		= 0x1c,	// # frames multicast
	TxBroadcast		= 0x1d,	// # frames broadcast
	TxMultipleColl		= 0x1e,	// # frames tx after multiple collisions
	TxLateColl		= 0x1f,	// # late collisions detected
	TxXcoll			= 0x20,	// # frames lost, excessive collisions
	TxDefer			= 0x21,	// # frames deferred on first tx attempt
	TxXdefer		= 0x22,	// # frames excessively deferred
	TxCsense		= 0x23,	// carrier sense errors at frame end
	TxSize64		= 0x24,	// # frames 64 octets long
	TxSize65To127		= 0x25,	// # frames 65-127 octets
	TxSize128To255		= 0x26,	// # frames 128-255
	TxSize256To511		= 0x27,	// # frames 256-511
	TxSize512To1023		= 0x28,	// # frames 512-1023
	TxSize1024To1518	= 0x29,	// # frames 1024-1518
	TxSize1519ToMax		= 0x2a,	// # frames 1519-max
	TxSingleColl		= 0x2b,	// # frames tx after single collision
	TxBackoff2		= 0x2c,	// # frames tx ok after 2 backoffs/collisions
	TxBackoff3		= 0x2d,	//   after 3 backoffs/collisions
	TxBackoff4		= 0x2e,	//   after 4
	TxBackoff5		= 0x2f,	//   after 5
	TxBackoff6		= 0x30,	//   after 6
	TxBackoff7		= 0x31,	//   after 7
	TxBackoff8		= 0x32,	//   after 8
	TxBackoff9		= 0x33,	//   after 9
	TxBackoff10		= 0x34,	//   after 10
	TxBackoff11		= 0x35,	//   after 11
	TxBackoff12		= 0x36,	//   after 12
	TxBackoff13		= 0x37,	//   after 13
	TxBackoff14		= 0x38,	//   after 14
	TxBackoff15		= 0x39,	//   after 15
	TxUnderrun		= 0x3a,	// # frames dropped from underrun
	// Hole. See REG_RX_XGMII_PROT_ERR below.
	RxIpgShrink		= 0x3c,	// # of IPG shrinks detected
	// Duplicate. See REG_STAT_STICKY10G below.
	StatSticky1G		= 0x3e,	// tri-speed sticky bits
	StatInit		= 0x3f	// Clear all statistics
};

#define REG_RX_XGMII_PROT_ERR	CRA(0x4,0xa,0x3b)	/* # protocol errors detected on XGMII interface */
#define REG_STAT_STICKY10G	CRA(0x4,0xa,StatSticky1G)	/* 10GbE sticky bits */

#define REG_RX_OK_BYTES(pn)	CRA(0x4,pn,RxOkBytes)
#define REG_RX_BAD_BYTES(pn)	CRA(0x4,pn,RxBadBytes)
#define REG_TX_OK_BYTES(pn)	CRA(0x4,pn,TxOkBytes)

/* MII-Management Block registers */
/* These are for MII-M interface 0, which is the bidirectional LVTTL one.  If
@@ -54,7 +54,7 @@ enum {
};

#define CFG_CHG_INTR_MASK (VSC_INTR_LINK_CHG | VSC_INTR_NEG_ERR | \
			   VSC_INTR_NEG_DONE)

#define INTR_MASK (CFG_CHG_INTR_MASK | VSC_INTR_TX_FIFO | VSC_INTR_RX_FIFO | \
		   VSC_INTR_ENABLE)
@@ -94,19 +94,18 @@ static int vsc8244_intr_enable(struct cphy *cphy)
{
	simple_mdio_write(cphy, VSC8244_INTR_ENABLE, INTR_MASK);

	/* Enable interrupts through Elmer */
	if (t1_is_asic(cphy->adapter)) {
		u32 elmer;

		t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
		elmer |= ELMER0_GP_BIT1;
		if (is_T2(cphy->adapter))
			elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
		t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
	}
	return 0;
}
static int vsc8244_intr_disable(struct cphy *cphy)
@@ -118,19 +117,18 @@ static int vsc8244_intr_disable(struct cphy *cphy)
		t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
		elmer &= ~ELMER0_GP_BIT1;
		if (is_T2(cphy->adapter))
			elmer &= ~(ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4);
		t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
	}
	return 0;
}
static int vsc8244_intr_clear(struct cphy *cphy)
{
	u32 val;
	u32 elmer;

	/* Clear PHY interrupts by reading the register. */
	simple_mdio_read(cphy, VSC8244_INTR_ENABLE, &val);
@@ -138,13 +136,12 @@ static int vsc8244_intr_clear(struct cphy *cphy)
	if (t1_is_asic(cphy->adapter)) {
		t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);
		elmer |= ELMER0_GP_BIT1;
		if (is_T2(cphy->adapter))
			elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
		t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);
	}
	return 0;
}
/*
@@ -179,13 +176,13 @@ static int vsc8244_set_speed_duplex(struct cphy *phy, int speed, int duplex)
int t1_mdio_set_bits(struct cphy *phy, int mmd, int reg, unsigned int bits)
{
	int ret;
	unsigned int val;

	ret = mdio_read(phy, mmd, reg, &val);
	if (!ret)
		ret = mdio_write(phy, mmd, reg, val | bits);

	return ret;
}

static int vsc8244_autoneg_enable(struct cphy *cphy)
@@ -235,7 +232,7 @@ static int vsc8244_advertise(struct cphy *phy, unsigned int advertise_map)
}

static int vsc8244_get_link_status(struct cphy *cphy, int *link_ok,
				   int *speed, int *duplex, int *fc)
{
	unsigned int bmcr, status, lpa, adv;
	int err, sp = -1, dplx = -1, pause = 0;
@@ -343,11 +340,13 @@ static struct cphy_ops vsc8244_ops = {
	.get_link_status = vsc8244_get_link_status
};

static struct cphy* vsc8244_phy_create(adapter_t *adapter, int phy_addr,
				       struct mdio_ops *mdio_ops)
{
	struct cphy *cphy = kzalloc(sizeof(*cphy), GFP_KERNEL);

	if (!cphy)
		return NULL;

	cphy_init(cphy, adapter, phy_addr, &vsc8244_ops, mdio_ops);
@@ -0,0 +1,8 @@
#
# Chelsio T3 driver
#
obj-$(CONFIG_CHELSIO_T3) += cxgb3.o
cxgb3-objs := cxgb3_main.o ael1002.o vsc8211.o t3_hw.o mc5.o \
xgmac.o sge.o l2t.o cxgb3_offload.o
drivers/net/cxgb3/adapter.h (new file, 279 lines)
@@ -0,0 +1,279 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
/* This file should not be included directly. Include common.h instead. */
#ifndef __T3_ADAPTER_H__
#define __T3_ADAPTER_H__
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/cache.h>
#include <linux/mutex.h>
#include "t3cdev.h"
#include <asm/semaphore.h>
#include <asm/bitops.h>
#include <asm/io.h>
typedef irqreturn_t(*intr_handler_t) (int, void *);
struct vlan_group;
struct port_info {
struct vlan_group *vlan_grp;
const struct port_type_info *port_type;
u8 port_id;
u8 rx_csum_offload;
u8 nqsets;
u8 first_qset;
struct cphy phy;
struct cmac mac;
struct link_config link_config;
struct net_device_stats netstats;
int activity;
};
enum { /* adapter flags */
FULL_INIT_DONE = (1 << 0),
USING_MSI = (1 << 1),
USING_MSIX = (1 << 2),
QUEUES_BOUND = (1 << 3),
};
struct rx_desc;
struct rx_sw_desc;
struct sge_fl { /* SGE per free-buffer list state */
unsigned int buf_size; /* size of each Rx buffer */
unsigned int credits; /* # of available Rx buffers */
unsigned int size; /* capacity of free list */
unsigned int cidx; /* consumer index */
unsigned int pidx; /* producer index */
unsigned int gen; /* free list generation */
struct rx_desc *desc; /* address of HW Rx descriptor ring */
struct rx_sw_desc *sdesc; /* address of SW Rx descriptor ring */
dma_addr_t phys_addr; /* physical address of HW ring start */
unsigned int cntxt_id; /* SGE context id for the free list */
unsigned long empty; /* # of times queue ran out of buffers */
};
/*
* Bundle size for grouping offload RX packets for delivery to the stack.
* Don't make this too big as we do prefetch on each packet in a bundle.
*/
# define RX_BUNDLE_SIZE 8
struct rsp_desc;
struct sge_rspq { /* state for an SGE response queue */
unsigned int credits; /* # of pending response credits */
unsigned int size; /* capacity of response queue */
unsigned int cidx; /* consumer index */
unsigned int gen; /* current generation bit */
unsigned int polling; /* is the queue serviced through NAPI? */
unsigned int holdoff_tmr; /* interrupt holdoff timer in 100ns */
unsigned int next_holdoff; /* holdoff time for next interrupt */
struct rsp_desc *desc; /* address of HW response ring */
dma_addr_t phys_addr; /* physical address of the ring */
unsigned int cntxt_id; /* SGE context id for the response q */
spinlock_t lock; /* guards response processing */
struct sk_buff *rx_head; /* offload packet receive queue head */
struct sk_buff *rx_tail; /* offload packet receive queue tail */
unsigned long offload_pkts;
unsigned long offload_bundles;
unsigned long eth_pkts; /* # of ethernet packets */
unsigned long pure_rsps; /* # of pure (non-data) responses */
unsigned long imm_data; /* responses with immediate data */
unsigned long rx_drops; /* # of packets dropped due to no mem */
unsigned long async_notif; /* # of asynchronous notification events */
unsigned long empty; /* # of times queue ran out of credits */
unsigned long nomem; /* # of responses deferred due to no mem */
unsigned long unhandled_irqs; /* # of spurious intrs */
};
struct tx_desc;
struct tx_sw_desc;
struct sge_txq { /* state for an SGE Tx queue */
unsigned long flags; /* HW DMA fetch status */
unsigned int in_use; /* # of in-use Tx descriptors */
unsigned int size; /* # of descriptors */
unsigned int processed; /* total # of descs HW has processed */
unsigned int cleaned; /* total # of descs SW has reclaimed */
unsigned int stop_thres; /* SW TX queue suspend threshold */
unsigned int cidx; /* consumer index */
unsigned int pidx; /* producer index */
unsigned int gen; /* current value of generation bit */
unsigned int unacked; /* Tx descriptors used since last COMPL */
struct tx_desc *desc; /* address of HW Tx descriptor ring */
struct tx_sw_desc *sdesc; /* address of SW Tx descriptor ring */
spinlock_t lock; /* guards enqueueing of new packets */
unsigned int token; /* WR token */
dma_addr_t phys_addr; /* physical address of the ring */
struct sk_buff_head sendq; /* List of backpressured offload packets */
struct tasklet_struct qresume_tsk; /* restarts the queue */
unsigned int cntxt_id; /* SGE context id for the Tx q */
unsigned long stops; /* # of times q has been stopped */
unsigned long restarts; /* # of queue restarts */
};
enum { /* per port SGE statistics */
SGE_PSTAT_TSO, /* # of TSO requests */
SGE_PSTAT_RX_CSUM_GOOD, /* # of successful RX csum offloads */
SGE_PSTAT_TX_CSUM, /* # of TX checksum offloads */
SGE_PSTAT_VLANEX, /* # of VLAN tag extractions */
SGE_PSTAT_VLANINS, /* # of VLAN tag insertions */
SGE_PSTAT_MAX /* must be last */
};
struct sge_qset { /* an SGE queue set */
struct sge_rspq rspq;
struct sge_fl fl[SGE_RXQ_PER_SET];
struct sge_txq txq[SGE_TXQ_PER_SET];
struct net_device *netdev; /* associated net device */
unsigned long txq_stopped; /* which Tx queues are stopped */
struct timer_list tx_reclaim_timer; /* reclaims TX buffers */
unsigned long port_stats[SGE_PSTAT_MAX];
} ____cacheline_aligned;
struct sge {
struct sge_qset qs[SGE_QSETS];
spinlock_t reg_lock; /* guards non-atomic SGE registers (eg context) */
};
struct adapter {
struct t3cdev tdev;
struct list_head adapter_list;
void __iomem *regs;
struct pci_dev *pdev;
unsigned long registered_device_map;
unsigned long open_device_map;
unsigned long flags;
const char *name;
int msg_enable;
unsigned int mmio_len;
struct adapter_params params;
unsigned int slow_intr_mask;
unsigned long irq_stats[IRQ_NUM_STATS];
struct {
unsigned short vec;
char desc[22];
} msix_info[SGE_QSETS + 1];
/* T3 modules */
struct sge sge;
struct mc7 pmrx;
struct mc7 pmtx;
struct mc7 cm;
struct mc5 mc5;
struct net_device *port[MAX_NPORTS];
unsigned int check_task_cnt;
struct delayed_work adap_check_task;
struct work_struct ext_intr_handler_task;
/*
* Dummy netdevices are needed when using multiple receive queues with
* NAPI as each netdevice can service only one queue.
*/
struct net_device *dummy_netdev[SGE_QSETS - 1];
struct dentry *debugfs_root;
struct mutex mdio_lock;
spinlock_t stats_lock;
spinlock_t work_lock;
};
static inline u32 t3_read_reg(struct adapter *adapter, u32 reg_addr)
{
u32 val = readl(adapter->regs + reg_addr);
CH_DBG(adapter, MMIO, "read register 0x%x value 0x%x\n", reg_addr, val);
return val;
}
static inline void t3_write_reg(struct adapter *adapter, u32 reg_addr, u32 val)
{
CH_DBG(adapter, MMIO, "setting register 0x%x to 0x%x\n", reg_addr, val);
writel(val, adapter->regs + reg_addr);
}
static inline struct port_info *adap2pinfo(struct adapter *adap, int idx)
{
return netdev_priv(adap->port[idx]);
}
/*
* We use the spare atalk_ptr to map a net device to its SGE queue set.
* This is a macro so it can be used as l-value.
*/
#define dev2qset(netdev) ((netdev)->atalk_ptr)
#define OFFLOAD_DEVMAP_BIT 15
#define tdev2adap(d) container_of(d, struct adapter, tdev)
static inline int offload_running(struct adapter *adapter)
{
return test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map);
}
int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb);
void t3_os_ext_intr_handler(struct adapter *adapter);
void t3_os_link_changed(struct adapter *adapter, int port_id, int link_status,
int speed, int duplex, int fc);
void t3_sge_start(struct adapter *adap);
void t3_sge_stop(struct adapter *adap);
void t3_free_sge_resources(struct adapter *adap);
void t3_sge_err_intr_handler(struct adapter *adapter);
intr_handler_t t3_intr_handler(struct adapter *adap, int polling);
int t3_eth_xmit(struct sk_buff *skb, struct net_device *dev);
int t3_mgmt_tx(struct adapter *adap, struct sk_buff *skb);
void t3_update_qset_coalesce(struct sge_qset *qs, const struct qset_params *p);
int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,
int irq_vec_idx, const struct qset_params *p,
int ntxq, struct net_device *netdev);
int t3_get_desc(const struct sge_qset *qs, unsigned int qnum, unsigned int idx,
unsigned char *data);
irqreturn_t t3_sge_intr_msix(int irq, void *cookie);
#endif /* __T3_ADAPTER_H__ */
drivers/net/cxgb3/ael1002.c (new file, 251 lines)
@@ -0,0 +1,251 @@
/*
* Copyright (c) 2005-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "common.h"
#include "regs.h"
enum {
AEL100X_TX_DISABLE = 9,
AEL100X_TX_CONFIG1 = 0xc002,
AEL1002_PWR_DOWN_HI = 0xc011,
AEL1002_PWR_DOWN_LO = 0xc012,
AEL1002_XFI_EQL = 0xc015,
AEL1002_LB_EN = 0xc017,
LASI_CTRL = 0x9002,
LASI_STAT = 0x9005
};
static void ael100x_txon(struct cphy *phy)
{
int tx_on_gpio = phy->addr == 0 ? F_GPIO7_OUT_VAL : F_GPIO2_OUT_VAL;
msleep(100);
t3_set_reg_field(phy->adapter, A_T3DBG_GPIO_EN, 0, tx_on_gpio);
msleep(30);
}
static int ael1002_power_down(struct cphy *phy, int enable)
{
int err;
err = mdio_write(phy, MDIO_DEV_PMA_PMD, AEL100X_TX_DISABLE, !!enable);
if (!err)
err = t3_mdio_change_bits(phy, MDIO_DEV_PMA_PMD, MII_BMCR,
BMCR_PDOWN, enable ? BMCR_PDOWN : 0);
return err;
}
static int ael1002_reset(struct cphy *phy, int wait)
{
int err;
if ((err = ael1002_power_down(phy, 0)) ||
(err = mdio_write(phy, MDIO_DEV_PMA_PMD, AEL100X_TX_CONFIG1, 1)) ||
(err = mdio_write(phy, MDIO_DEV_PMA_PMD, AEL1002_PWR_DOWN_HI, 0)) ||
(err = mdio_write(phy, MDIO_DEV_PMA_PMD, AEL1002_PWR_DOWN_LO, 0)) ||
(err = mdio_write(phy, MDIO_DEV_PMA_PMD, AEL1002_XFI_EQL, 0x18)) ||
(err = t3_mdio_change_bits(phy, MDIO_DEV_PMA_PMD, AEL1002_LB_EN,
0, 1 << 5)))
return err;
return 0;
}
static int ael1002_intr_noop(struct cphy *phy)
{
return 0;
}
static int ael100x_get_link_status(struct cphy *phy, int *link_ok,
int *speed, int *duplex, int *fc)
{
if (link_ok) {
unsigned int status;
int err = mdio_read(phy, MDIO_DEV_PMA_PMD, MII_BMSR, &status);
/*
* BMSR_LSTATUS is latch-low, so if it is 0 we need to read it
* once more to get the current link state.
*/
if (!err && !(status & BMSR_LSTATUS))
err = mdio_read(phy, MDIO_DEV_PMA_PMD, MII_BMSR,
&status);
if (err)
return err;
*link_ok = !!(status & BMSR_LSTATUS);
}
if (speed)
*speed = SPEED_10000;
if (duplex)
*duplex = DUPLEX_FULL;
return 0;
}
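The re-read above is the standard MDIO idiom for latch-low bits: BMSR_LSTATUS sticks at 0 after a link bounce until it is read once, so a single read can report a stale link-down. A self-contained user-space sketch of the same double-read pattern (the fake register array below is an illustrative stand-in, not the driver's MDIO path):

```c
#include <assert.h>

#define BMSR_LSTATUS 0x0004

/* Fake latch-low status register: the first read after a link bounce
 * reports 0, the next read reports the current (up) state. */
static unsigned int fake_bmsr_reads[] = { 0x0000, BMSR_LSTATUS };
static int fake_idx;

static int fake_mdio_read(unsigned int *status)
{
	*status = fake_bmsr_reads[fake_idx++];
	return 0;
}

/* Mirrors ael100x_get_link_status(): re-read BMSR when LSTATUS is 0. */
static int get_link_ok(int *link_ok)
{
	unsigned int status;
	int err = fake_mdio_read(&status);

	if (!err && !(status & BMSR_LSTATUS))
		err = fake_mdio_read(&status);
	if (err)
		return err;
	*link_ok = !!(status & BMSR_LSTATUS);
	return 0;
}
```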
static struct cphy_ops ael1002_ops = {
.reset = ael1002_reset,
.intr_enable = ael1002_intr_noop,
.intr_disable = ael1002_intr_noop,
.intr_clear = ael1002_intr_noop,
.intr_handler = ael1002_intr_noop,
.get_link_status = ael100x_get_link_status,
.power_down = ael1002_power_down,
};
void t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
cphy_init(phy, adapter, phy_addr, &ael1002_ops, mdio_ops);
ael100x_txon(phy);
}
static int ael1006_reset(struct cphy *phy, int wait)
{
return t3_phy_reset(phy, MDIO_DEV_PMA_PMD, wait);
}
static int ael1006_intr_enable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 1);
}
static int ael1006_intr_disable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 0);
}
static int ael1006_intr_clear(struct cphy *phy)
{
u32 val;
return mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &val);
}
static int ael1006_intr_handler(struct cphy *phy)
{
unsigned int status;
int err = mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &status);
if (err)
return err;
return (status & 1) ? cphy_cause_link_change : 0;
}
static int ael1006_power_down(struct cphy *phy, int enable)
{
return t3_mdio_change_bits(phy, MDIO_DEV_PMA_PMD, MII_BMCR,
BMCR_PDOWN, enable ? BMCR_PDOWN : 0);
}
static struct cphy_ops ael1006_ops = {
.reset = ael1006_reset,
.intr_enable = ael1006_intr_enable,
.intr_disable = ael1006_intr_disable,
.intr_clear = ael1006_intr_clear,
.intr_handler = ael1006_intr_handler,
.get_link_status = ael100x_get_link_status,
.power_down = ael1006_power_down,
};
void t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
cphy_init(phy, adapter, phy_addr, &ael1006_ops, mdio_ops);
ael100x_txon(phy);
}
static struct cphy_ops qt2045_ops = {
.reset = ael1006_reset,
.intr_enable = ael1006_intr_enable,
.intr_disable = ael1006_intr_disable,
.intr_clear = ael1006_intr_clear,
.intr_handler = ael1006_intr_handler,
.get_link_status = ael100x_get_link_status,
.power_down = ael1006_power_down,
};
void t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
unsigned int stat;
cphy_init(phy, adapter, phy_addr, &qt2045_ops, mdio_ops);
/*
* Some cards where the PHY is supposed to be at address 0 actually
* have it at 1.
*/
if (!phy_addr && !mdio_read(phy, MDIO_DEV_PMA_PMD, MII_BMSR, &stat) &&
stat == 0xffff)
phy->addr = 1;
}
static int xaui_direct_reset(struct cphy *phy, int wait)
{
return 0;
}
static int xaui_direct_get_link_status(struct cphy *phy, int *link_ok,
int *speed, int *duplex, int *fc)
{
if (link_ok) {
unsigned int status;
status = t3_read_reg(phy->adapter,
XGM_REG(A_XGM_SERDES_STAT0, phy->addr));
*link_ok = !(status & F_LOWSIG0);
}
if (speed)
*speed = SPEED_10000;
if (duplex)
*duplex = DUPLEX_FULL;
return 0;
}
static int xaui_direct_power_down(struct cphy *phy, int enable)
{
return 0;
}
static struct cphy_ops xaui_direct_ops = {
.reset = xaui_direct_reset,
.intr_enable = ael1002_intr_noop,
.intr_disable = ael1002_intr_noop,
.intr_clear = ael1002_intr_noop,
.intr_handler = ael1002_intr_noop,
.get_link_status = xaui_direct_get_link_status,
.power_down = xaui_direct_power_down,
};
void t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
cphy_init(phy, adapter, 1, &xaui_direct_ops, mdio_ops);
}

drivers/net/cxgb3/common.h
@ -0,0 +1,729 @@
/*
* Copyright (c) 2005-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CHELSIO_COMMON_H
#define __CHELSIO_COMMON_H
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/ctype.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
#include "version.h"
#define CH_ERR(adap, fmt, ...) dev_err(&adap->pdev->dev, fmt, ## __VA_ARGS__)
#define CH_WARN(adap, fmt, ...) dev_warn(&adap->pdev->dev, fmt, ## __VA_ARGS__)
#define CH_ALERT(adap, fmt, ...) \
dev_printk(KERN_ALERT, &adap->pdev->dev, fmt, ## __VA_ARGS__)
/*
* More powerful macro that selectively prints messages based on msg_enable.
* For info and debugging messages.
*/
#define CH_MSG(adapter, level, category, fmt, ...) do { \
if ((adapter)->msg_enable & NETIF_MSG_##category) \
dev_printk(KERN_##level, &adapter->pdev->dev, fmt, \
## __VA_ARGS__); \
} while (0)
#ifdef DEBUG
# define CH_DBG(adapter, category, fmt, ...) \
CH_MSG(adapter, DEBUG, category, fmt, ## __VA_ARGS__)
#else
# define CH_DBG(adapter, category, fmt, ...)
#endif
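CH_MSG gates each message on a per-category bit in adapter->msg_enable, the usual netif_msg_* scheme. A hedged stand-alone sketch of the same bit test (the struct and function names below are illustrative, not the driver's; the NETIF_MSG_* values match the kernel's standard definitions):

```c
#include <stdio.h>

#define NETIF_MSG_LINK 0x0004	/* standard netif_msg_* category bits */
#define NETIF_MSG_INTR 0x0200

struct fake_adapter {
	unsigned int msg_enable;
};

/* Emit the message only when its category bit is enabled;
 * return whether anything was printed. */
static int log_if_enabled(const struct fake_adapter *a,
			  unsigned int category, const char *msg)
{
	if (!(a->msg_enable & category))
		return 0;
	puts(msg);
	return 1;
}
```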
/* Additional NETIF_MSG_* categories */
#define NETIF_MSG_MMIO 0x8000000
struct t3_rx_mode {
struct net_device *dev;
struct dev_mc_list *mclist;
unsigned int idx;
};
static inline void init_rx_mode(struct t3_rx_mode *p, struct net_device *dev,
struct dev_mc_list *mclist)
{
p->dev = dev;
p->mclist = mclist;
p->idx = 0;
}
static inline u8 *t3_get_next_mcaddr(struct t3_rx_mode *rm)
{
u8 *addr = NULL;
if (rm->mclist && rm->idx < rm->dev->mc_count) {
addr = rm->mclist->dmi_addr;
rm->mclist = rm->mclist->next;
rm->idx++;
}
return addr;
}
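t3_get_next_mcaddr() is a cursor over the dev_mc_list chain of its era, bounded by dev->mc_count. A self-contained sketch of the same bounded walk with stubbed types (struct names are simplified stand-ins for the kernel's):

```c
#include <stddef.h>

struct mc_node {		/* stand-in for struct dev_mc_list */
	unsigned char addr[6];
	struct mc_node *next;
};

struct mc_cursor {		/* stand-in for struct t3_rx_mode */
	struct mc_node *pos;
	unsigned int idx;
	unsigned int count;	/* stand-in for dev->mc_count */
};

/* Return the next address, or NULL once the bounded list is exhausted. */
static unsigned char *next_mcaddr(struct mc_cursor *c)
{
	unsigned char *addr = NULL;

	if (c->pos && c->idx < c->count) {
		addr = c->pos->addr;
		c->pos = c->pos->next;
		c->idx++;
	}
	return addr;
}
```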
enum {
MAX_NPORTS = 2, /* max # of ports */
MAX_FRAME_SIZE = 10240, /* max MAC frame size, including header + FCS */
EEPROMSIZE = 8192, /* Serial EEPROM size */
RSS_TABLE_SIZE = 64, /* size of RSS lookup and mapping tables */
TCB_SIZE = 128, /* TCB size */
NMTUS = 16, /* size of MTU table */
NCCTRL_WIN = 32, /* # of congestion control windows */
};
#define MAX_RX_COALESCING_LEN 16224U
enum {
PAUSE_RX = 1 << 0,
PAUSE_TX = 1 << 1,
PAUSE_AUTONEG = 1 << 2
};
enum {
SUPPORTED_OFFLOAD = 1 << 24,
SUPPORTED_IRQ = 1 << 25
};
enum { /* adapter interrupt-maintained statistics */
STAT_ULP_CH0_PBL_OOB,
STAT_ULP_CH1_PBL_OOB,
STAT_PCI_CORR_ECC,
IRQ_NUM_STATS /* keep last */
};
enum {
SGE_QSETS = 8, /* # of SGE Tx/Rx/RspQ sets */
SGE_RXQ_PER_SET = 2, /* # of Rx queues per set */
SGE_TXQ_PER_SET = 3 /* # of Tx queues per set */
};
enum sge_context_type { /* SGE egress context types */
SGE_CNTXT_RDMA = 0,
SGE_CNTXT_ETH = 2,
SGE_CNTXT_OFLD = 4,
SGE_CNTXT_CTRL = 5
};
enum {
AN_PKT_SIZE = 32, /* async notification packet size */
IMMED_PKT_SIZE = 48 /* packet size for immediate data */
};
struct sg_ent { /* SGE scatter/gather entry */
u32 len[2];
u64 addr[2];
};
#ifndef SGE_NUM_GENBITS
/* Must be 1 or 2 */
# define SGE_NUM_GENBITS 2
#endif
#define TX_DESC_FLITS 16U
#define WR_FLITS (TX_DESC_FLITS + 1 - SGE_NUM_GENBITS)
struct cphy;
struct adapter;
struct mdio_ops {
int (*read)(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *val);
int (*write)(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int val);
};
struct adapter_info {
unsigned char nports; /* # of ports */
unsigned char phy_base_addr; /* MDIO PHY base address */
unsigned char mdien;
unsigned char mdiinv;
unsigned int gpio_out; /* GPIO output settings */
unsigned int gpio_intr; /* GPIO IRQ enable mask */
unsigned long caps; /* adapter capabilities */
const struct mdio_ops *mdio_ops; /* MDIO operations */
const char *desc; /* product description */
};
struct port_type_info {
void (*phy_prep)(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *ops);
unsigned int caps;
const char *desc;
};
struct mc5_stats {
unsigned long parity_err;
unsigned long active_rgn_full;
unsigned long nfa_srch_err;
unsigned long unknown_cmd;
unsigned long reqq_parity_err;
unsigned long dispq_parity_err;
unsigned long del_act_empty;
};
struct mc7_stats {
unsigned long corr_err;
unsigned long uncorr_err;
unsigned long parity_err;
unsigned long addr_err;
};
struct mac_stats {
u64 tx_octets; /* total # of octets in good frames */
u64 tx_octets_bad; /* total # of octets in error frames */
u64 tx_frames; /* all good frames */
u64 tx_mcast_frames; /* good multicast frames */
u64 tx_bcast_frames; /* good broadcast frames */
u64 tx_pause; /* # of transmitted pause frames */
u64 tx_deferred; /* frames with deferred transmissions */
u64 tx_late_collisions; /* # of late collisions */
u64 tx_total_collisions; /* # of total collisions */
u64 tx_excess_collisions; /* frame errors from excessive collisions */
u64 tx_underrun; /* # of Tx FIFO underruns */
u64 tx_len_errs; /* # of Tx length errors */
u64 tx_mac_internal_errs; /* # of internal MAC errors on Tx */
u64 tx_excess_deferral; /* # of frames with excessive deferral */
u64 tx_fcs_errs; /* # of frames with bad FCS */
u64 tx_frames_64; /* # of Tx frames in a particular range */
u64 tx_frames_65_127;
u64 tx_frames_128_255;
u64 tx_frames_256_511;
u64 tx_frames_512_1023;
u64 tx_frames_1024_1518;
u64 tx_frames_1519_max;
u64 rx_octets; /* total # of octets in good frames */
u64 rx_octets_bad; /* total # of octets in error frames */
u64 rx_frames; /* all good frames */
u64 rx_mcast_frames; /* good multicast frames */
u64 rx_bcast_frames; /* good broadcast frames */
u64 rx_pause; /* # of received pause frames */
u64 rx_fcs_errs; /* # of received frames with bad FCS */
u64 rx_align_errs; /* alignment errors */
u64 rx_symbol_errs; /* symbol errors */
u64 rx_data_errs; /* data errors */
u64 rx_sequence_errs; /* sequence errors */
u64 rx_runt; /* # of runt frames */
u64 rx_jabber; /* # of jabber frames */
u64 rx_short; /* # of short frames */
u64 rx_too_long; /* # of oversized frames */
u64 rx_mac_internal_errs; /* # of internal MAC errors on Rx */
u64 rx_frames_64; /* # of Rx frames in a particular range */
u64 rx_frames_65_127;
u64 rx_frames_128_255;
u64 rx_frames_256_511;
u64 rx_frames_512_1023;
u64 rx_frames_1024_1518;
u64 rx_frames_1519_max;
u64 rx_cong_drops; /* # of Rx drops due to SGE congestion */
unsigned long tx_fifo_parity_err;
unsigned long rx_fifo_parity_err;
unsigned long tx_fifo_urun;
unsigned long rx_fifo_ovfl;
unsigned long serdes_signal_loss;
unsigned long xaui_pcs_ctc_err;
unsigned long xaui_pcs_align_change;
};
struct tp_mib_stats {
u32 ipInReceive_hi;
u32 ipInReceive_lo;
u32 ipInHdrErrors_hi;
u32 ipInHdrErrors_lo;
u32 ipInAddrErrors_hi;
u32 ipInAddrErrors_lo;
u32 ipInUnknownProtos_hi;
u32 ipInUnknownProtos_lo;
u32 ipInDiscards_hi;
u32 ipInDiscards_lo;
u32 ipInDelivers_hi;
u32 ipInDelivers_lo;
u32 ipOutRequests_hi;
u32 ipOutRequests_lo;
u32 ipOutDiscards_hi;
u32 ipOutDiscards_lo;
u32 ipOutNoRoutes_hi;
u32 ipOutNoRoutes_lo;
u32 ipReasmTimeout;
u32 ipReasmReqds;
u32 ipReasmOKs;
u32 ipReasmFails;
u32 reserved[8];
u32 tcpActiveOpens;
u32 tcpPassiveOpens;
u32 tcpAttemptFails;
u32 tcpEstabResets;
u32 tcpOutRsts;
u32 tcpCurrEstab;
u32 tcpInSegs_hi;
u32 tcpInSegs_lo;
u32 tcpOutSegs_hi;
u32 tcpOutSegs_lo;
u32 tcpRetransSeg_hi;
u32 tcpRetransSeg_lo;
u32 tcpInErrs_hi;
u32 tcpInErrs_lo;
u32 tcpRtoMin;
u32 tcpRtoMax;
};
struct tp_params {
unsigned int nchan; /* # of channels */
unsigned int pmrx_size; /* total PMRX capacity */
unsigned int pmtx_size; /* total PMTX capacity */
unsigned int cm_size; /* total CM capacity */
unsigned int chan_rx_size; /* per channel Rx size */
unsigned int chan_tx_size; /* per channel Tx size */
unsigned int rx_pg_size; /* Rx page size */
unsigned int tx_pg_size; /* Tx page size */
unsigned int rx_num_pgs; /* # of Rx pages */
unsigned int tx_num_pgs; /* # of Tx pages */
unsigned int ntimer_qs; /* # of timer queues */
};
struct qset_params { /* SGE queue set parameters */
unsigned int polling; /* polling/interrupt service for rspq */
unsigned int coalesce_usecs; /* irq coalescing timer */
unsigned int rspq_size; /* # of entries in response queue */
unsigned int fl_size; /* # of entries in regular free list */
unsigned int jumbo_size; /* # of entries in jumbo free list */
unsigned int txq_size[SGE_TXQ_PER_SET]; /* Tx queue sizes */
unsigned int cong_thres; /* FL congestion threshold */
};
struct sge_params {
unsigned int max_pkt_size; /* max offload pkt size */
struct qset_params qset[SGE_QSETS];
};
struct mc5_params {
unsigned int mode; /* selects MC5 width */
unsigned int nservers; /* size of server region */
unsigned int nfilters; /* size of filter region */
unsigned int nroutes; /* size of routing region */
};
/* Default MC5 region sizes */
enum {
DEFAULT_NSERVERS = 512,
DEFAULT_NFILTERS = 128
};
/* MC5 modes, these must be non-0 */
enum {
MC5_MODE_144_BIT = 1,
MC5_MODE_72_BIT = 2
};
struct vpd_params {
unsigned int cclk;
unsigned int mclk;
unsigned int uclk;
unsigned int mdc;
unsigned int mem_timing;
u8 eth_base[6];
u8 port_type[MAX_NPORTS];
unsigned short xauicfg[2];
};
struct pci_params {
unsigned int vpd_cap_addr;
unsigned int pcie_cap_addr;
unsigned short speed;
unsigned char width;
unsigned char variant;
};
enum {
PCI_VARIANT_PCI,
PCI_VARIANT_PCIX_MODE1_PARITY,
PCI_VARIANT_PCIX_MODE1_ECC,
PCI_VARIANT_PCIX_266_MODE2,
PCI_VARIANT_PCIE
};
struct adapter_params {
struct sge_params sge;
struct mc5_params mc5;
struct tp_params tp;
struct vpd_params vpd;
struct pci_params pci;
const struct adapter_info *info;
unsigned short mtus[NMTUS];
unsigned short a_wnd[NCCTRL_WIN];
unsigned short b_wnd[NCCTRL_WIN];
unsigned int nports; /* # of ethernet ports */
unsigned int stats_update_period; /* MAC stats accumulation period */
unsigned int linkpoll_period; /* link poll period in 0.1s */
unsigned int rev; /* chip revision */
};
struct trace_params {
u32 sip;
u32 sip_mask;
u32 dip;
u32 dip_mask;
u16 sport;
u16 sport_mask;
u16 dport;
u16 dport_mask;
u32 vlan:12;
u32 vlan_mask:12;
u32 intf:4;
u32 intf_mask:4;
u8 proto;
u8 proto_mask;
};
struct link_config {
unsigned int supported; /* link capabilities */
unsigned int advertising; /* advertised capabilities */
unsigned short requested_speed; /* speed user has requested */
unsigned short speed; /* actual link speed */
unsigned char requested_duplex; /* duplex user has requested */
unsigned char duplex; /* actual link duplex */
unsigned char requested_fc; /* flow control user has requested */
unsigned char fc; /* actual link flow control */
unsigned char autoneg; /* autonegotiating? */
unsigned int link_ok; /* link up? */
};
#define SPEED_INVALID 0xffff
#define DUPLEX_INVALID 0xff
struct mc5 {
struct adapter *adapter;
unsigned int tcam_size;
unsigned char part_type;
unsigned char parity_enabled;
unsigned char mode;
struct mc5_stats stats;
};
static inline unsigned int t3_mc5_size(const struct mc5 *p)
{
return p->tcam_size;
}
struct mc7 {
struct adapter *adapter; /* backpointer to adapter */
unsigned int size; /* memory size in bytes */
unsigned int width; /* MC7 interface width */
unsigned int offset; /* register address offset for MC7 instance */
const char *name; /* name of MC7 instance */
struct mc7_stats stats; /* MC7 statistics */
};
static inline unsigned int t3_mc7_size(const struct mc7 *p)
{
return p->size;
}
struct cmac {
struct adapter *adapter;
unsigned int offset;
unsigned int nucast; /* # of address filters for unicast MACs */
struct mac_stats stats;
};
enum {
MAC_DIRECTION_RX = 1,
MAC_DIRECTION_TX = 2,
MAC_RXFIFO_SIZE = 32768
};
/* IEEE 802.3ae specified MDIO devices */
enum {
MDIO_DEV_PMA_PMD = 1,
MDIO_DEV_WIS = 2,
MDIO_DEV_PCS = 3,
MDIO_DEV_XGXS = 4
};
/* PHY loopback direction */
enum {
PHY_LOOPBACK_TX = 1,
PHY_LOOPBACK_RX = 2
};
/* PHY interrupt types */
enum {
cphy_cause_link_change = 1,
cphy_cause_fifo_error = 2
};
/* PHY operations */
struct cphy_ops {
void (*destroy)(struct cphy *phy);
int (*reset)(struct cphy *phy, int wait);
int (*intr_enable)(struct cphy *phy);
int (*intr_disable)(struct cphy *phy);
int (*intr_clear)(struct cphy *phy);
int (*intr_handler)(struct cphy *phy);
int (*autoneg_enable)(struct cphy *phy);
int (*autoneg_restart)(struct cphy *phy);
int (*advertise)(struct cphy *phy, unsigned int advertise_map);
int (*set_loopback)(struct cphy *phy, int mmd, int dir, int enable);
int (*set_speed_duplex)(struct cphy *phy, int speed, int duplex);
int (*get_link_status)(struct cphy *phy, int *link_ok, int *speed,
int *duplex, int *fc);
int (*power_down)(struct cphy *phy, int enable);
};
/* A PHY instance */
struct cphy {
int addr; /* PHY address */
struct adapter *adapter; /* associated adapter */
unsigned long fifo_errors; /* FIFO over/under-flows */
const struct cphy_ops *ops; /* PHY operations */
int (*mdio_read)(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *val);
int (*mdio_write)(struct adapter *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int val);
};
/* Convenience MDIO read/write wrappers */
static inline int mdio_read(struct cphy *phy, int mmd, int reg,
unsigned int *valp)
{
return phy->mdio_read(phy->adapter, phy->addr, mmd, reg, valp);
}
static inline int mdio_write(struct cphy *phy, int mmd, int reg,
unsigned int val)
{
return phy->mdio_write(phy->adapter, phy->addr, mmd, reg, val);
}
/* Convenience initializer */
static inline void cphy_init(struct cphy *phy, struct adapter *adapter,
int phy_addr, struct cphy_ops *phy_ops,
const struct mdio_ops *mdio_ops)
{
phy->adapter = adapter;
phy->addr = phy_addr;
phy->ops = phy_ops;
if (mdio_ops) {
phy->mdio_read = mdio_ops->read;
phy->mdio_write = mdio_ops->write;
}
}
/* Accumulate MAC statistics every 180 seconds. For 1G we multiply by 10. */
#define MAC_STATS_ACCUM_SECS 180
#define XGM_REG(reg_addr, idx) \
((reg_addr) + (idx) * (XGMAC0_1_BASE_ADDR - XGMAC0_0_BASE_ADDR))
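XGM_REG() maps a per-MAC register to instance idx by adding idx times the spacing between the two XGMAC register blocks. A quick arithmetic sketch (the base addresses below are assumptions for illustration; the driver's real XGMAC0_0/XGMAC0_1 values come from regs.h):

```c
/* Hypothetical register-block bases standing in for regs.h values. */
#define XGMAC0_0_BASE_ADDR 0x600
#define XGMAC0_1_BASE_ADDR 0xa00

#define XGM_REG(reg_addr, idx) \
	((reg_addr) + (idx) * (XGMAC0_1_BASE_ADDR - XGMAC0_0_BASE_ADDR))

/* Address of a register (given for instance 0) in instance idx. */
static unsigned int xgm_addr(unsigned int reg, int idx)
{
	return XGM_REG(reg, idx);
}
```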
struct addr_val_pair {
unsigned int reg_addr;
unsigned int val;
};
#include "adapter.h"
#ifndef PCI_VENDOR_ID_CHELSIO
# define PCI_VENDOR_ID_CHELSIO 0x1425
#endif
#define for_each_port(adapter, iter) \
for (iter = 0; iter < (adapter)->params.nports; ++iter)
#define adapter_info(adap) ((adap)->params.info)
static inline int uses_xaui(const struct adapter *adap)
{
return adapter_info(adap)->caps & SUPPORTED_AUI;
}
static inline int is_10G(const struct adapter *adap)
{
return adapter_info(adap)->caps & SUPPORTED_10000baseT_Full;
}
static inline int is_offload(const struct adapter *adap)
{
return adapter_info(adap)->caps & SUPPORTED_OFFLOAD;
}
static inline unsigned int core_ticks_per_usec(const struct adapter *adap)
{
return adap->params.vpd.cclk / 1000;
}
static inline unsigned int is_pcie(const struct adapter *adap)
{
return adap->params.pci.variant == PCI_VARIANT_PCIE;
}
void t3_set_reg_field(struct adapter *adap, unsigned int addr, u32 mask,
u32 val);
void t3_write_regs(struct adapter *adapter, const struct addr_val_pair *p,
int n, unsigned int offset);
int t3_wait_op_done_val(struct adapter *adapter, int reg, u32 mask,
int polarity, int attempts, int delay, u32 *valp);
static inline int t3_wait_op_done(struct adapter *adapter, int reg, u32 mask,
int polarity, int attempts, int delay)
{
return t3_wait_op_done_val(adapter, reg, mask, polarity, attempts,
delay, NULL);
}
int t3_mdio_change_bits(struct cphy *phy, int mmd, int reg, unsigned int clear,
unsigned int set);
int t3_phy_reset(struct cphy *phy, int mmd, int wait);
int t3_phy_advertise(struct cphy *phy, unsigned int advert);
int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex);
void t3_intr_enable(struct adapter *adapter);
void t3_intr_disable(struct adapter *adapter);
void t3_intr_clear(struct adapter *adapter);
void t3_port_intr_enable(struct adapter *adapter, int idx);
void t3_port_intr_disable(struct adapter *adapter, int idx);
void t3_port_intr_clear(struct adapter *adapter, int idx);
int t3_slow_intr_handler(struct adapter *adapter);
int t3_phy_intr_handler(struct adapter *adapter);
void t3_link_changed(struct adapter *adapter, int port_id);
int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc);
const struct adapter_info *t3_get_adapter_info(unsigned int board_id);
int t3_seeprom_read(struct adapter *adapter, u32 addr, u32 *data);
int t3_seeprom_write(struct adapter *adapter, u32 addr, u32 data);
int t3_seeprom_wp(struct adapter *adapter, int enable);
int t3_read_flash(struct adapter *adapter, unsigned int addr,
unsigned int nwords, u32 *data, int byte_oriented);
int t3_load_fw(struct adapter *adapter, const u8 * fw_data, unsigned int size);
int t3_get_fw_version(struct adapter *adapter, u32 *vers);
int t3_check_fw_version(struct adapter *adapter);
int t3_init_hw(struct adapter *adapter, u32 fw_params);
void mac_prep(struct cmac *mac, struct adapter *adapter, int index);
void early_hw_init(struct adapter *adapter, const struct adapter_info *ai);
int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
int reset);
void t3_led_ready(struct adapter *adapter);
void t3_fatal_err(struct adapter *adapter);
void t3_set_vlan_accel(struct adapter *adapter, unsigned int ports, int on);
void t3_config_rss(struct adapter *adapter, unsigned int rss_config,
const u8 * cpus, const u16 *rspq);
int t3_read_rss(struct adapter *adapter, u8 * lkup, u16 *map);
int t3_mps_set_active_ports(struct adapter *adap, unsigned int port_mask);
int t3_cim_ctl_blk_read(struct adapter *adap, unsigned int addr,
unsigned int n, unsigned int *valp);
int t3_mc7_bd_read(struct mc7 *mc7, unsigned int start, unsigned int n,
u64 *buf);
int t3_mac_reset(struct cmac *mac);
void t3b_pcs_reset(struct cmac *mac);
int t3_mac_enable(struct cmac *mac, int which);
int t3_mac_disable(struct cmac *mac, int which);
int t3_mac_set_mtu(struct cmac *mac, unsigned int mtu);
int t3_mac_set_rx_mode(struct cmac *mac, struct t3_rx_mode *rm);
int t3_mac_set_address(struct cmac *mac, unsigned int idx, u8 addr[6]);
int t3_mac_set_num_ucast(struct cmac *mac, int n);
const struct mac_stats *t3_mac_update_stats(struct cmac *mac);
int t3_mac_set_speed_duplex_fc(struct cmac *mac, int speed, int duplex, int fc);
void t3_mc5_prep(struct adapter *adapter, struct mc5 *mc5, int mode);
int t3_mc5_init(struct mc5 *mc5, unsigned int nservers, unsigned int nfilters,
unsigned int nroutes);
void t3_mc5_intr_handler(struct mc5 *mc5);
int t3_read_mc5_range(const struct mc5 *mc5, unsigned int start, unsigned int n,
u32 *buf);
int t3_tp_set_coalescing_size(struct adapter *adap, unsigned int size, int psh);
void t3_tp_set_max_rxsize(struct adapter *adap, unsigned int size);
void t3_tp_set_offload_mode(struct adapter *adap, int enable);
void t3_tp_get_mib_stats(struct adapter *adap, struct tp_mib_stats *tps);
void t3_load_mtus(struct adapter *adap, unsigned short mtus[NMTUS],
unsigned short alpha[NCCTRL_WIN],
unsigned short beta[NCCTRL_WIN], unsigned short mtu_cap);
void t3_read_hw_mtus(struct adapter *adap, unsigned short mtus[NMTUS]);
void t3_get_cong_cntl_tab(struct adapter *adap,
unsigned short incr[NMTUS][NCCTRL_WIN]);
void t3_config_trace_filter(struct adapter *adapter,
const struct trace_params *tp, int filter_index,
int invert, int enable);
int t3_config_sched(struct adapter *adap, unsigned int kbps, int sched);
void t3_sge_prep(struct adapter *adap, struct sge_params *p);
void t3_sge_init(struct adapter *adap, struct sge_params *p);
int t3_sge_init_ecntxt(struct adapter *adapter, unsigned int id, int gts_enable,
enum sge_context_type type, int respq, u64 base_addr,
unsigned int size, unsigned int token, int gen,
unsigned int cidx);
int t3_sge_init_flcntxt(struct adapter *adapter, unsigned int id,
int gts_enable, u64 base_addr, unsigned int size,
unsigned int esize, unsigned int cong_thres, int gen,
unsigned int cidx);
int t3_sge_init_rspcntxt(struct adapter *adapter, unsigned int id,
int irq_vec_idx, u64 base_addr, unsigned int size,
unsigned int fl_thres, int gen, unsigned int cidx);
int t3_sge_init_cqcntxt(struct adapter *adapter, unsigned int id, u64 base_addr,
unsigned int size, int rspq, int ovfl_mode,
unsigned int credits, unsigned int credit_thres);
int t3_sge_enable_ecntxt(struct adapter *adapter, unsigned int id, int enable);
int t3_sge_disable_fl(struct adapter *adapter, unsigned int id);
int t3_sge_disable_rspcntxt(struct adapter *adapter, unsigned int id);
int t3_sge_disable_cqcntxt(struct adapter *adapter, unsigned int id);
int t3_sge_read_ecntxt(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_read_fl(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_read_cq(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_read_rspq(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_cqcntxt_op(struct adapter *adapter, unsigned int id, unsigned int op,
unsigned int credits);
void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
void t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter, int phy_addr,
const struct mdio_ops *mdio_ops);
void t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
#endif /* __CHELSIO_COMMON_H */

@ -0,0 +1,164 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _CXGB3_OFFLOAD_CTL_DEFS_H
#define _CXGB3_OFFLOAD_CTL_DEFS_H
enum {
GET_MAX_OUTSTANDING_WR,
GET_TX_MAX_CHUNK,
GET_TID_RANGE,
GET_STID_RANGE,
GET_RTBL_RANGE,
GET_L2T_CAPACITY,
GET_MTUS,
GET_WR_LEN,
GET_IFF_FROM_MAC,
GET_DDP_PARAMS,
GET_PORTS,
ULP_ISCSI_GET_PARAMS,
ULP_ISCSI_SET_PARAMS,
RDMA_GET_PARAMS,
RDMA_CQ_OP,
RDMA_CQ_SETUP,
RDMA_CQ_DISABLE,
RDMA_CTRL_QP_SETUP,
RDMA_GET_MEM,
};
/*
* Structure used to describe a TID range. Valid TIDs are [base, base+num).
*/
struct tid_range {
unsigned int base; /* first TID */
unsigned int num; /* number of TIDs in range */
};
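The comment pins down a half-open convention: valid TIDs are [base, base+num), so base+num itself is out of range. A one-function sketch of the containment check implied by that convention:

```c
struct tid_range {
	unsigned int base;	/* first TID */
	unsigned int num;	/* number of TIDs in range */
};

/* A TID is valid iff it falls in the half-open interval [base, base+num). */
static int tid_in_range(const struct tid_range *r, unsigned int tid)
{
	return tid >= r->base && tid - r->base < r->num;
}
```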
/*
* Structure used to request the size and contents of the MTU table.
*/
struct mtutab {
unsigned int size; /* # of entries in the MTU table */
const unsigned short *mtus; /* the MTU table values */
};
struct net_device;
/*
* Structure used to request the adapter net_device owning a given MAC address.
*/
struct iff_mac {
struct net_device *dev; /* the net_device */
const unsigned char *mac_addr; /* MAC address to lookup */
u16 vlan_tag;
};
struct pci_dev;
/*
* Structure used to request the TCP DDP parameters.
*/
struct ddp_params {
unsigned int llimit; /* TDDP region start address */
unsigned int ulimit; /* TDDP region end address */
unsigned int tag_mask; /* TDDP tag mask */
struct pci_dev *pdev;
};
struct adap_ports {
unsigned int nports; /* number of ports on this adapter */
struct net_device *lldevs[2];
};
/*
* Structure used to return information to the iscsi layer.
*/
struct ulp_iscsi_info {
unsigned int offset;
unsigned int llimit;
unsigned int ulimit;
unsigned int tagmask;
unsigned int pgsz3;
unsigned int pgsz2;
unsigned int pgsz1;
unsigned int pgsz0;
unsigned int max_rxsz;
unsigned int max_txsz;
struct pci_dev *pdev;
};
/*
* Structure used to return information to the RDMA layer.
*/
struct rdma_info {
unsigned int tpt_base; /* TPT base address */
unsigned int tpt_top; /* TPT last entry address */
unsigned int pbl_base; /* PBL base address */
unsigned int pbl_top; /* PBL last entry address */
unsigned int rqt_base; /* RQT base address */
unsigned int rqt_top; /* RQT last entry address */
unsigned int udbell_len; /* user doorbell region length */
unsigned long udbell_physbase; /* user doorbell physical start addr */
void __iomem *kdb_addr; /* kernel doorbell register address */
struct pci_dev *pdev; /* associated PCI device */
};
/*
* Structure used to request an operation on an RDMA completion queue.
*/
struct rdma_cq_op {
unsigned int id;
unsigned int op;
unsigned int credits;
};
/*
* Structure used to setup RDMA completion queues.
*/
struct rdma_cq_setup {
unsigned int id;
unsigned long long base_addr;
unsigned int size;
unsigned int credits;
unsigned int credit_thres;
unsigned int ovfl_mode;
};
/*
* Structure used to setup the RDMA control egress context.
*/
struct rdma_ctrlqp_setup {
unsigned long long base_addr;
unsigned int size;
};
#endif /* _CXGB3_OFFLOAD_CTL_DEFS_H */


@@ -0,0 +1,99 @@
/*
* Copyright (c) 2006-2007 Chelsio, Inc. All rights reserved.
* Copyright (c) 2006-2007 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _CHELSIO_DEFS_H
#define _CHELSIO_DEFS_H
#include <linux/skbuff.h>
#include <net/tcp.h>
#include "t3cdev.h"
#include "cxgb3_offload.h"
#define VALIDATE_TID 1
void *cxgb_alloc_mem(unsigned long size);
void cxgb_free_mem(void *addr);
void cxgb_neigh_update(struct neighbour *neigh);
void cxgb_redirect(struct dst_entry *old, struct dst_entry *new);
/*
* Map an ATID or STID to their entries in the corresponding TID tables.
*/
static inline union active_open_entry *atid2entry(const struct tid_info *t,
unsigned int atid)
{
return &t->atid_tab[atid - t->atid_base];
}
static inline union listen_entry *stid2entry(const struct tid_info *t,
unsigned int stid)
{
return &t->stid_tab[stid - t->stid_base];
}
/*
* Find the connection corresponding to a TID.
*/
static inline struct t3c_tid_entry *lookup_tid(const struct tid_info *t,
unsigned int tid)
{
return tid < t->ntids ? &(t->tid_tab[tid]) : NULL;
}
/*
* Find the connection corresponding to a server TID.
*/
static inline struct t3c_tid_entry *lookup_stid(const struct tid_info *t,
unsigned int tid)
{
if (tid < t->stid_base || tid >= t->stid_base + t->nstids)
return NULL;
return &(stid2entry(t, tid)->t3c_tid);
}
/*
* Find the connection corresponding to an active-open TID.
*/
static inline struct t3c_tid_entry *lookup_atid(const struct tid_info *t,
unsigned int tid)
{
if (tid < t->atid_base || tid >= t->atid_base + t->natids)
return NULL;
return &(atid2entry(t, tid)->t3c_tid);
}
int process_rx(struct t3cdev *dev, struct sk_buff **skbs, int n);
int attach_t3cdev(struct t3cdev *dev);
void detach_t3cdev(struct t3cdev *dev);
#endif


@@ -0,0 +1,185 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CHIOCTL_H__
#define __CHIOCTL_H__
/*
* Ioctl commands specific to this driver.
*/
enum {
CHELSIO_SETREG = 1024,
CHELSIO_GETREG,
CHELSIO_SETTPI,
CHELSIO_GETTPI,
CHELSIO_GETMTUTAB,
CHELSIO_SETMTUTAB,
CHELSIO_GETMTU,
CHELSIO_SET_PM,
CHELSIO_GET_PM,
CHELSIO_GET_TCAM,
CHELSIO_SET_TCAM,
CHELSIO_GET_TCB,
CHELSIO_GET_MEM,
CHELSIO_LOAD_FW,
CHELSIO_GET_PROTO,
CHELSIO_SET_PROTO,
CHELSIO_SET_TRACE_FILTER,
CHELSIO_SET_QSET_PARAMS,
CHELSIO_GET_QSET_PARAMS,
CHELSIO_SET_QSET_NUM,
CHELSIO_GET_QSET_NUM,
CHELSIO_SET_PKTSCHED,
};
struct ch_reg {
uint32_t cmd;
uint32_t addr;
uint32_t val;
};
struct ch_cntxt {
uint32_t cmd;
uint32_t cntxt_type;
uint32_t cntxt_id;
uint32_t data[4];
};
/* context types */
enum { CNTXT_TYPE_EGRESS, CNTXT_TYPE_FL, CNTXT_TYPE_RSP, CNTXT_TYPE_CQ };
struct ch_desc {
uint32_t cmd;
uint32_t queue_num;
uint32_t idx;
uint32_t size;
uint8_t data[128];
};
struct ch_mem_range {
uint32_t cmd;
uint32_t mem_id;
uint32_t addr;
uint32_t len;
uint32_t version;
uint8_t buf[0];
};
struct ch_qset_params {
uint32_t cmd;
uint32_t qset_idx;
int32_t txq_size[3];
int32_t rspq_size;
int32_t fl_size[2];
int32_t intr_lat;
int32_t polling;
int32_t cong_thres;
};
struct ch_pktsched_params {
uint32_t cmd;
uint8_t sched;
uint8_t idx;
uint8_t min;
uint8_t max;
uint8_t binding;
};
#ifndef TCB_SIZE
# define TCB_SIZE 128
#endif
/* TCB size in 32-bit words */
#define TCB_WORDS (TCB_SIZE / 4)
enum { MEM_CM, MEM_PMRX, MEM_PMTX }; /* ch_mem_range.mem_id values */
struct ch_mtus {
uint32_t cmd;
uint32_t nmtus;
uint16_t mtus[NMTUS];
};
struct ch_pm {
uint32_t cmd;
uint32_t tx_pg_sz;
uint32_t tx_num_pg;
uint32_t rx_pg_sz;
uint32_t rx_num_pg;
uint32_t pm_total;
};
struct ch_tcam {
uint32_t cmd;
uint32_t tcam_size;
uint32_t nservers;
uint32_t nroutes;
uint32_t nfilters;
};
struct ch_tcb {
uint32_t cmd;
uint32_t tcb_index;
uint32_t tcb_data[TCB_WORDS];
};
struct ch_tcam_word {
uint32_t cmd;
uint32_t addr;
uint32_t buf[3];
};
struct ch_trace {
uint32_t cmd;
uint32_t sip;
uint32_t sip_mask;
uint32_t dip;
uint32_t dip_mask;
uint16_t sport;
uint16_t sport_mask;
uint16_t dport;
uint16_t dport_mask;
uint32_t vlan:12;
uint32_t vlan_mask:12;
uint32_t intf:4;
uint32_t intf_mask:4;
uint8_t proto;
uint8_t proto_mask;
uint8_t invert_match:1;
uint8_t config_tx:1;
uint8_t config_rx:1;
uint8_t trace_tx:1;
uint8_t trace_rx:1;
};
#define SIOCCHIOCTL SIOCDEVPRIVATE
#endif

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -0,0 +1,193 @@
/*
* Copyright (c) 2006-2007 Chelsio, Inc. All rights reserved.
* Copyright (c) 2006-2007 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _CXGB3_OFFLOAD_H
#define _CXGB3_OFFLOAD_H
#include <linux/list.h>
#include <linux/skbuff.h>
#include "l2t.h"
#include "t3cdev.h"
#include "t3_cpl.h"
struct adapter;
void cxgb3_offload_init(void);
void cxgb3_adapter_ofld(struct adapter *adapter);
void cxgb3_adapter_unofld(struct adapter *adapter);
int cxgb3_offload_activate(struct adapter *adapter);
void cxgb3_offload_deactivate(struct adapter *adapter);
void cxgb3_set_dummy_ops(struct t3cdev *dev);
/*
 * Client registration. Users of the T3 driver must register themselves.
* The T3 driver will call the add function of every client for each T3
* adapter activated, passing up the t3cdev ptr. Each client fills out an
* array of callback functions to process CPL messages.
*/
void cxgb3_register_client(struct cxgb3_client *client);
void cxgb3_unregister_client(struct cxgb3_client *client);
void cxgb3_add_clients(struct t3cdev *tdev);
void cxgb3_remove_clients(struct t3cdev *tdev);
typedef int (*cxgb3_cpl_handler_func)(struct t3cdev *dev,
struct sk_buff *skb, void *ctx);
struct cxgb3_client {
char *name;
void (*add) (struct t3cdev *);
void (*remove) (struct t3cdev *);
cxgb3_cpl_handler_func *handlers;
int (*redirect)(void *ctx, struct dst_entry *old,
struct dst_entry *new, struct l2t_entry *l2t);
struct list_head client_list;
};
/*
* TID allocation services.
*/
int cxgb3_alloc_atid(struct t3cdev *dev, struct cxgb3_client *client,
void *ctx);
int cxgb3_alloc_stid(struct t3cdev *dev, struct cxgb3_client *client,
void *ctx);
void *cxgb3_free_atid(struct t3cdev *dev, int atid);
void cxgb3_free_stid(struct t3cdev *dev, int stid);
void cxgb3_insert_tid(struct t3cdev *dev, struct cxgb3_client *client,
void *ctx, unsigned int tid);
void cxgb3_queue_tid_release(struct t3cdev *dev, unsigned int tid);
void cxgb3_remove_tid(struct t3cdev *dev, void *ctx, unsigned int tid);
struct t3c_tid_entry {
struct cxgb3_client *client;
void *ctx;
};
/* CPL message priority levels */
enum {
CPL_PRIORITY_DATA = 0, /* data messages */
CPL_PRIORITY_SETUP = 1, /* connection setup messages */
CPL_PRIORITY_TEARDOWN = 0, /* connection teardown messages */
CPL_PRIORITY_LISTEN = 1, /* listen start/stop messages */
CPL_PRIORITY_ACK = 1, /* RX ACK messages */
CPL_PRIORITY_CONTROL = 1 /* offload control messages */
};
/* Flags for return value of CPL message handlers */
enum {
CPL_RET_BUF_DONE = 1, /* buffer processing done, buffer may be freed */
CPL_RET_BAD_MSG = 2, /* bad CPL message (e.g., unknown opcode) */
CPL_RET_UNKNOWN_TID = 4 /* unexpected unknown TID */
};
typedef int (*cpl_handler_func)(struct t3cdev *dev, struct sk_buff *skb);
/*
* Returns a pointer to the first byte of the CPL header in an sk_buff that
* contains a CPL message.
*/
static inline void *cplhdr(struct sk_buff *skb)
{
return skb->data;
}
void t3_register_cpl_handler(unsigned int opcode, cpl_handler_func h);
union listen_entry {
struct t3c_tid_entry t3c_tid;
union listen_entry *next;
};
union active_open_entry {
struct t3c_tid_entry t3c_tid;
union active_open_entry *next;
};
/*
* Holds the size, base address, free list start, etc of the TID, server TID,
 * and active-open TID tables for an offload device.
* The tables themselves are allocated dynamically.
*/
struct tid_info {
struct t3c_tid_entry *tid_tab;
unsigned int ntids;
atomic_t tids_in_use;
union listen_entry *stid_tab;
unsigned int nstids;
unsigned int stid_base;
union active_open_entry *atid_tab;
unsigned int natids;
unsigned int atid_base;
/*
* The following members are accessed R/W so we put them in their own
* cache lines.
*
* XXX We could combine the atid fields above with the lock here since
 * atids are used once (unlike other tids). OTOH the above fields are
* usually in cache due to tid_tab.
*/
spinlock_t atid_lock ____cacheline_aligned_in_smp;
union active_open_entry *afree;
unsigned int atids_in_use;
spinlock_t stid_lock ____cacheline_aligned;
union listen_entry *sfree;
unsigned int stids_in_use;
};
struct t3c_data {
struct list_head list_node;
struct t3cdev *dev;
unsigned int tx_max_chunk; /* max payload for TX_DATA */
unsigned int max_wrs; /* max in-flight WRs per connection */
unsigned int nmtus;
const unsigned short *mtus;
struct tid_info tid_maps;
struct t3c_tid_entry *tid_release_list;
spinlock_t tid_release_lock;
struct work_struct tid_release_task;
};
/*
* t3cdev -> t3c_data accessor
*/
#define T3C_DATA(dev) (*(struct t3c_data **)&(dev)->l4opt)
#endif


@@ -0,0 +1,177 @@
/*
* Copyright (c) 2004-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _FIRMWARE_EXPORTS_H_
#define _FIRMWARE_EXPORTS_H_
/* WR OPCODES supported by the firmware.
*/
#define FW_WROPCODE_FORWARD 0x01
#define FW_WROPCODE_BYPASS 0x05
#define FW_WROPCODE_TUNNEL_TX_PKT 0x03
#define FW_WROPOCDE_ULPTX_DATA_SGL 0x00
#define FW_WROPCODE_ULPTX_MEM_READ 0x02
#define FW_WROPCODE_ULPTX_PKT 0x04
#define FW_WROPCODE_ULPTX_INVALIDATE 0x06
#define FW_WROPCODE_TUNNEL_RX_PKT 0x07
#define FW_WROPCODE_OFLD_GETTCB_RPL 0x08
#define FW_WROPCODE_OFLD_CLOSE_CON 0x09
#define FW_WROPCODE_OFLD_TP_ABORT_CON_REQ 0x0A
#define FW_WROPCODE_OFLD_HOST_ABORT_CON_RPL 0x0F
#define FW_WROPCODE_OFLD_HOST_ABORT_CON_REQ 0x0B
#define FW_WROPCODE_OFLD_TP_ABORT_CON_RPL 0x0C
#define FW_WROPCODE_OFLD_TX_DATA 0x0D
#define FW_WROPCODE_OFLD_TX_DATA_ACK 0x0E
#define FW_WROPCODE_RI_RDMA_INIT 0x10
#define FW_WROPCODE_RI_RDMA_WRITE 0x11
#define FW_WROPCODE_RI_RDMA_READ_REQ 0x12
#define FW_WROPCODE_RI_RDMA_READ_RESP 0x13
#define FW_WROPCODE_RI_SEND 0x14
#define FW_WROPCODE_RI_TERMINATE 0x15
#define FW_WROPCODE_RI_RDMA_READ 0x16
#define FW_WROPCODE_RI_RECEIVE 0x17
#define FW_WROPCODE_RI_BIND_MW 0x18
#define FW_WROPCODE_RI_FASTREGISTER_MR 0x19
#define FW_WROPCODE_RI_LOCAL_INV 0x1A
#define FW_WROPCODE_RI_MODIFY_QP 0x1B
#define FW_WROPCODE_RI_BYPASS 0x1C
#define FW_WROPOCDE_RSVD 0x1E
#define FW_WROPCODE_SGE_EGRESSCONTEXT_RR 0x1F
#define FW_WROPCODE_MNGT 0x1D
#define FW_MNGTOPCODE_PKTSCHED_SET 0x00
/* Maximum size of a WR sent from the host, limited by the SGE.
*
* Note: WR coming from ULP or TP are only limited by CIM.
*/
#define FW_WR_SIZE 128
/* Maximum number of outstanding WRs sent from the host. Value must be
* programmed in the CTRL/TUNNEL/QP SGE Egress Context and used by
* offload modules to limit the number of WRs per connection.
*/
#define FW_T3_WR_NUM 16
#define FW_N3_WR_NUM 7
#ifndef N3
# define FW_WR_NUM FW_T3_WR_NUM
#else
# define FW_WR_NUM FW_N3_WR_NUM
#endif
/* FW_TUNNEL_NUM corresponds to the number of supported TUNNEL Queues. These
* queues must start at SGE Egress Context FW_TUNNEL_SGEEC_START and must
* start at 'TID' (or 'uP Token') FW_TUNNEL_TID_START.
*
* Ingress Traffic (e.g. DMA completion credit) for TUNNEL Queue[i] is sent
* to RESP Queue[i].
*/
#define FW_TUNNEL_NUM 8
#define FW_TUNNEL_SGEEC_START 8
#define FW_TUNNEL_TID_START 65544
/* FW_CTRL_NUM corresponds to the number of supported CTRL Queues. These queues
* must start at SGE Egress Context FW_CTRL_SGEEC_START and must start at 'TID'
* (or 'uP Token') FW_CTRL_TID_START.
*
* Ingress Traffic for CTRL Queue[i] is sent to RESP Queue[i].
*/
#define FW_CTRL_NUM 8
#define FW_CTRL_SGEEC_START 65528
#define FW_CTRL_TID_START 65536
/* FW_OFLD_NUM corresponds to the number of supported OFFLOAD Queues. These
* queues must start at SGE Egress Context FW_OFLD_SGEEC_START.
*
* Note: the 'uP Token' in the SGE Egress Context fields is irrelevant for
* OFFLOAD Queues, as the host is responsible for providing the correct TID in
* every WR.
*
 * Ingress Traffic for OFFLOAD Queue[i] is sent to RESP Queue[i].
*/
#define FW_OFLD_NUM 8
#define FW_OFLD_SGEEC_START 0
/*
 * FW_RI_NUM corresponds to the number of supported RI Queues.
*/
#define FW_RI_NUM 1
#define FW_RI_SGEEC_START 65527
#define FW_RI_TID_START 65552
/*
* The RX_PKT_TID
*/
#define FW_RX_PKT_NUM 1
#define FW_RX_PKT_TID_START 65553
/* FW_WRC_NUM corresponds to the number of Work Request Contexts supported
 * by the firmware.
*/
#define FW_WRC_NUM \
(65536 + FW_TUNNEL_NUM + FW_CTRL_NUM + FW_RI_NUM + FW_RX_PKT_NUM)
/*
* FW type and version.
*/
#define S_FW_VERSION_TYPE 28
#define M_FW_VERSION_TYPE 0xF
#define V_FW_VERSION_TYPE(x) ((x) << S_FW_VERSION_TYPE)
#define G_FW_VERSION_TYPE(x) \
(((x) >> S_FW_VERSION_TYPE) & M_FW_VERSION_TYPE)
#define S_FW_VERSION_MAJOR 16
#define M_FW_VERSION_MAJOR 0xFFF
#define V_FW_VERSION_MAJOR(x) ((x) << S_FW_VERSION_MAJOR)
#define G_FW_VERSION_MAJOR(x) \
(((x) >> S_FW_VERSION_MAJOR) & M_FW_VERSION_MAJOR)
#define S_FW_VERSION_MINOR 8
#define M_FW_VERSION_MINOR 0xFF
#define V_FW_VERSION_MINOR(x) ((x) << S_FW_VERSION_MINOR)
#define G_FW_VERSION_MINOR(x) \
(((x) >> S_FW_VERSION_MINOR) & M_FW_VERSION_MINOR)
#define S_FW_VERSION_MICRO 0
#define M_FW_VERSION_MICRO 0xFF
#define V_FW_VERSION_MICRO(x) ((x) << S_FW_VERSION_MICRO)
#define G_FW_VERSION_MICRO(x) \
(((x) >> S_FW_VERSION_MICRO) & M_FW_VERSION_MICRO)
#endif /* _FIRMWARE_EXPORTS_H_ */

drivers/net/cxgb3/l2t.c (new file, 450 lines)

@@ -0,0 +1,450 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
* Copyright (c) 2006-2007 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if.h>
#include <linux/if_vlan.h>
#include <linux/jhash.h>
#include <net/neighbour.h>
#include "common.h"
#include "t3cdev.h"
#include "cxgb3_defs.h"
#include "l2t.h"
#include "t3_cpl.h"
#include "firmware_exports.h"
#define VLAN_NONE 0xfff
/*
* Module locking notes: There is a RW lock protecting the L2 table as a
* whole plus a spinlock per L2T entry. Entry lookups and allocations happen
* under the protection of the table lock, individual entry changes happen
* while holding that entry's spinlock. The table lock nests outside the
* entry locks. Allocations of new entries take the table lock as writers so
* no other lookups can happen while allocating new entries. Entry updates
* take the table lock as readers so multiple entries can be updated in
* parallel. An L2T entry can be dropped by decrementing its reference count
* and therefore can happen in parallel with entry allocation but no entry
* can change state or increment its ref count during allocation as both of
* these perform lookups.
*/
static inline unsigned int vlan_prio(const struct l2t_entry *e)
{
return e->vlan >> 13;
}
static inline unsigned int arp_hash(u32 key, int ifindex,
const struct l2t_data *d)
{
return jhash_2words(key, ifindex, 0) & (d->nentries - 1);
}
static inline void neigh_replace(struct l2t_entry *e, struct neighbour *n)
{
neigh_hold(n);
if (e->neigh)
neigh_release(e->neigh);
e->neigh = n;
}
/*
* Set up an L2T entry and send any packets waiting in the arp queue. The
* supplied skb is used for the CPL_L2T_WRITE_REQ. Must be called with the
* entry locked.
*/
static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cpl_l2t_write_req *req;
if (!skb) {
skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
if (!skb)
return -ENOMEM;
}
req = (struct cpl_l2t_write_req *)__skb_put(skb, sizeof(*req));
req->wr.wr_hi = htonl(V_WR_OP(FW_WROPCODE_FORWARD));
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, e->idx));
req->params = htonl(V_L2T_W_IDX(e->idx) | V_L2T_W_IFF(e->smt_idx) |
V_L2T_W_VLAN(e->vlan & VLAN_VID_MASK) |
V_L2T_W_PRIO(vlan_prio(e)));
memcpy(e->dmac, e->neigh->ha, sizeof(e->dmac));
memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
skb->priority = CPL_PRIORITY_CONTROL;
cxgb3_ofld_send(dev, skb);
while (e->arpq_head) {
skb = e->arpq_head;
e->arpq_head = skb->next;
skb->next = NULL;
cxgb3_ofld_send(dev, skb);
}
e->arpq_tail = NULL;
e->state = L2T_STATE_VALID;
return 0;
}
/*
 * Add a packet to an L2T entry's queue of packets awaiting resolution.
* Must be called with the entry's lock held.
*/
static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)
{
skb->next = NULL;
if (e->arpq_head)
e->arpq_tail->next = skb;
else
e->arpq_head = skb;
e->arpq_tail = skb;
}
int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,
struct l2t_entry *e)
{
again:
switch (e->state) {
case L2T_STATE_STALE: /* entry is stale, kick off revalidation */
neigh_event_send(e->neigh, NULL);
spin_lock_bh(&e->lock);
if (e->state == L2T_STATE_STALE)
e->state = L2T_STATE_VALID;
spin_unlock_bh(&e->lock);
/* fall through */
case L2T_STATE_VALID: /* fast-path, send the packet on */
return cxgb3_ofld_send(dev, skb);
case L2T_STATE_RESOLVING:
spin_lock_bh(&e->lock);
if (e->state != L2T_STATE_RESOLVING) {
/* ARP already completed */
spin_unlock_bh(&e->lock);
goto again;
}
arpq_enqueue(e, skb);
spin_unlock_bh(&e->lock);
/*
* Only the first packet added to the arpq should kick off
* resolution. However, because the alloc_skb below can fail,
* we allow each packet added to the arpq to retry resolution
* as a way of recovering from transient memory exhaustion.
* A better way would be to use a work request to retry L2T
* entries when there's no memory.
*/
if (!neigh_event_send(e->neigh, NULL)) {
skb = alloc_skb(sizeof(struct cpl_l2t_write_req),
GFP_ATOMIC);
if (!skb)
break;
spin_lock_bh(&e->lock);
if (e->arpq_head)
setup_l2e_send_pending(dev, skb, e);
else /* we lost the race */
__kfree_skb(skb);
spin_unlock_bh(&e->lock);
}
}
return 0;
}
EXPORT_SYMBOL(t3_l2t_send_slow);
void t3_l2t_send_event(struct t3cdev *dev, struct l2t_entry *e)
{
again:
switch (e->state) {
case L2T_STATE_STALE: /* entry is stale, kick off revalidation */
neigh_event_send(e->neigh, NULL);
spin_lock_bh(&e->lock);
if (e->state == L2T_STATE_STALE) {
e->state = L2T_STATE_VALID;
}
spin_unlock_bh(&e->lock);
return;
case L2T_STATE_VALID: /* fast-path, send the packet on */
return;
case L2T_STATE_RESOLVING:
spin_lock_bh(&e->lock);
if (e->state != L2T_STATE_RESOLVING) {
/* ARP already completed */
spin_unlock_bh(&e->lock);
goto again;
}
spin_unlock_bh(&e->lock);
/*
* Only the first packet added to the arpq should kick off
* resolution. However, because the alloc_skb below can fail,
* we allow each packet added to the arpq to retry resolution
* as a way of recovering from transient memory exhaustion.
* A better way would be to use a work request to retry L2T
* entries when there's no memory.
*/
neigh_event_send(e->neigh, NULL);
}
return;
}
EXPORT_SYMBOL(t3_l2t_send_event);
/*
* Allocate a free L2T entry. Must be called with l2t_data.lock held.
*/
static struct l2t_entry *alloc_l2e(struct l2t_data *d)
{
struct l2t_entry *end, *e, **p;
if (!atomic_read(&d->nfree))
return NULL;
/* there's definitely a free entry */
for (e = d->rover, end = &d->l2tab[d->nentries]; e != end; ++e)
if (atomic_read(&e->refcnt) == 0)
goto found;
for (e = &d->l2tab[1]; atomic_read(&e->refcnt); ++e) ;
found:
d->rover = e + 1;
atomic_dec(&d->nfree);
/*
* The entry we found may be an inactive entry that is
* presently in the hash table. We need to remove it.
*/
if (e->state != L2T_STATE_UNUSED) {
int hash = arp_hash(e->addr, e->ifindex, d);
for (p = &d->l2tab[hash].first; *p; p = &(*p)->next)
if (*p == e) {
*p = e->next;
break;
}
e->state = L2T_STATE_UNUSED;
}
return e;
}
/*
* Called when an L2T entry has no more users. The entry is left in the hash
* table since it is likely to be reused but we also bump nfree to indicate
* that the entry can be reallocated for a different neighbor. We also drop
* the existing neighbor reference in case the neighbor is going away and is
* waiting on our reference.
*
* Because entries can be reallocated to other neighbors once their ref count
* drops to 0 we need to take the entry's lock to avoid races with a new
* incarnation.
*/
void t3_l2e_free(struct l2t_data *d, struct l2t_entry *e)
{
spin_lock_bh(&e->lock);
if (atomic_read(&e->refcnt) == 0) { /* hasn't been recycled */
if (e->neigh) {
neigh_release(e->neigh);
e->neigh = NULL;
}
}
spin_unlock_bh(&e->lock);
atomic_inc(&d->nfree);
}
EXPORT_SYMBOL(t3_l2e_free);
/*
* Update an L2T entry that was previously used for the same next hop as neigh.
* Must be called with softirqs disabled.
*/
static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
{
unsigned int nud_state;
spin_lock(&e->lock); /* avoid race with t3_l2t_free */
if (neigh != e->neigh)
neigh_replace(e, neigh);
nud_state = neigh->nud_state;
if (memcmp(e->dmac, neigh->ha, sizeof(e->dmac)) ||
!(nud_state & NUD_VALID))
e->state = L2T_STATE_RESOLVING;
else if (nud_state & NUD_CONNECTED)
e->state = L2T_STATE_VALID;
else
e->state = L2T_STATE_STALE;
spin_unlock(&e->lock);
}
struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
struct net_device *dev)
{
struct l2t_entry *e;
struct l2t_data *d = L2DATA(cdev);
u32 addr = *(u32 *) neigh->primary_key;
int ifidx = neigh->dev->ifindex;
int hash = arp_hash(addr, ifidx, d);
struct port_info *p = netdev_priv(dev);
int smt_idx = p->port_id;
write_lock_bh(&d->lock);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx &&
e->smt_idx == smt_idx) {
l2t_hold(d, e);
if (atomic_read(&e->refcnt) == 1)
reuse_entry(e, neigh);
goto done;
}
/* Need to allocate a new entry */
e = alloc_l2e(d);
if (e) {
spin_lock(&e->lock); /* avoid race with t3_l2t_free */
e->next = d->l2tab[hash].first;
d->l2tab[hash].first = e;
e->state = L2T_STATE_RESOLVING;
e->addr = addr;
e->ifindex = ifidx;
e->smt_idx = smt_idx;
atomic_set(&e->refcnt, 1);
neigh_replace(e, neigh);
if (neigh->dev->priv_flags & IFF_802_1Q_VLAN)
e->vlan = VLAN_DEV_INFO(neigh->dev)->vlan_id;
else
e->vlan = VLAN_NONE;
spin_unlock(&e->lock);
}
done:
write_unlock_bh(&d->lock);
return e;
}
EXPORT_SYMBOL(t3_l2t_get);
/*
* Called when address resolution fails for an L2T entry to handle packets
* on the arpq head. If a packet specifies a failure handler it is invoked,
 * otherwise the packet is sent to the offload device.
*
* XXX: maybe we should abandon the latter behavior and just require a failure
* handler.
*/
static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)
{
while (arpq) {
struct sk_buff *skb = arpq;
struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
arpq = skb->next;
skb->next = NULL;
if (cb->arp_failure_handler)
cb->arp_failure_handler(dev, skb);
else
cxgb3_ofld_send(dev, skb);
}
}
/*
* Called when the host's ARP layer makes a change to some entry that is
* loaded into the HW L2 table.
*/
void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
{
struct l2t_entry *e;
struct sk_buff *arpq = NULL;
struct l2t_data *d = L2DATA(dev);
u32 addr = *(u32 *) neigh->primary_key;
int ifidx = neigh->dev->ifindex;
int hash = arp_hash(addr, ifidx, d);
read_lock_bh(&d->lock);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx) {
spin_lock(&e->lock);
goto found;
}
read_unlock_bh(&d->lock);
return;
found:
read_unlock(&d->lock);
if (atomic_read(&e->refcnt)) {
if (neigh != e->neigh)
neigh_replace(e, neigh);
if (e->state == L2T_STATE_RESOLVING) {
if (neigh->nud_state & NUD_FAILED) {
arpq = e->arpq_head;
e->arpq_head = e->arpq_tail = NULL;
} else if (neigh_is_connected(neigh))
setup_l2e_send_pending(dev, NULL, e);
} else {
e->state = neigh_is_connected(neigh) ?
L2T_STATE_VALID : L2T_STATE_STALE;
if (memcmp(e->dmac, neigh->ha, 6))
setup_l2e_send_pending(dev, NULL, e);
}
}
spin_unlock_bh(&e->lock);
if (arpq)
handle_failed_resolution(dev, arpq);
}
struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)
{
struct l2t_data *d;
int i, size = sizeof(*d) + l2t_capacity * sizeof(struct l2t_entry);
d = cxgb_alloc_mem(size);
if (!d)
return NULL;
d->nentries = l2t_capacity;
d->rover = &d->l2tab[1]; /* entry 0 is not used */
atomic_set(&d->nfree, l2t_capacity - 1);
rwlock_init(&d->lock);
for (i = 0; i < l2t_capacity; ++i) {
d->l2tab[i].idx = i;
d->l2tab[i].state = L2T_STATE_UNUSED;
spin_lock_init(&d->l2tab[i].lock);
atomic_set(&d->l2tab[i].refcnt, 0);
}
return d;
}
void t3_free_l2t(struct l2t_data *d)
{
cxgb_free_mem(d);
}

drivers/net/cxgb3/l2t.h (new file, 143 lines)

@@ -0,0 +1,143 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
* Copyright (c) 2006-2007 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _CHELSIO_L2T_H
#define _CHELSIO_L2T_H
#include <linux/spinlock.h>
#include "t3cdev.h"
#include <asm/atomic.h>
enum {
L2T_STATE_VALID, /* entry is up to date */
L2T_STATE_STALE, /* entry may be used but needs revalidation */
L2T_STATE_RESOLVING, /* entry needs address resolution */
L2T_STATE_UNUSED /* entry not in use */
};
struct neighbour;
struct sk_buff;
/*
* Each L2T entry plays multiple roles. First of all, it keeps state for the
* corresponding entry of the HW L2 table and maintains a queue of offload
* packets awaiting address resolution. Second, it is a node of a hash table
* chain, where the nodes of the chain are linked together through their next
* pointer. Finally, each node is a bucket of a hash table, pointing to the
* first element in its chain through its first pointer.
*/
struct l2t_entry {
u16 state; /* entry state */
u16 idx; /* entry index */
u32 addr; /* dest IP address */
int ifindex; /* neighbor's net_device's ifindex */
u16 smt_idx; /* SMT index */
u16 vlan; /* VLAN TCI (id: bits 0-11, prio: bits 13-15) */
struct neighbour *neigh; /* associated neighbour */
struct l2t_entry *first; /* start of hash chain */
struct l2t_entry *next; /* next l2t_entry on chain */
struct sk_buff *arpq_head; /* queue of packets awaiting resolution */
struct sk_buff *arpq_tail;
spinlock_t lock;
atomic_t refcnt; /* entry reference count */
u8 dmac[6]; /* neighbour's MAC address */
};
struct l2t_data {
unsigned int nentries; /* number of entries */
struct l2t_entry *rover; /* starting point for next allocation */
atomic_t nfree; /* number of free entries */
rwlock_t lock;
struct l2t_entry l2tab[0];
};
typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
struct sk_buff * skb);
/*
* Callback stored in an skb to handle address resolution failure.
*/
struct l2t_skb_cb {
arp_failure_handler_func arp_failure_handler;
};
#define L2T_SKB_CB(skb) ((struct l2t_skb_cb *)(skb)->cb)
static inline void set_arp_failure_handler(struct sk_buff *skb,
arp_failure_handler_func hnd)
{
L2T_SKB_CB(skb)->arp_failure_handler = hnd;
}
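The `L2T_SKB_CB` macro above overlays a private struct on the generic `skb->cb` scratch area so an address-resolution failure callback can travel inside the packet itself. A minimal userspace sketch of the same pattern (the mock `sk_buff` and handler names are illustrative, not part of the driver):

```c
#include <assert.h>

/* Mock of the kernel sk_buff control buffer: 48 opaque bytes of
 * per-packet scratch space a subsystem may overlay with its own struct. */
struct mock_skb {
	char cb[48];
};

typedef void (*failure_handler_t)(struct mock_skb *skb);

struct cb_overlay {
	failure_handler_t arp_failure_handler;
};

#define SKB_CB(skb) ((struct cb_overlay *)(skb)->cb)

static int failures;

static void count_failure(struct mock_skb *skb)
{
	(void)skb;
	failures++;
}

/* Store the handler in the packet, as set_arp_failure_handler does. */
static void set_failure_handler(struct mock_skb *skb, failure_handler_t h)
{
	SKB_CB(skb)->arp_failure_handler = h;
}

/* On resolution failure, invoke whatever handler rides in the packet. */
static void handle_failure(struct mock_skb *skb)
{
	if (SKB_CB(skb)->arp_failure_handler)
		SKB_CB(skb)->arp_failure_handler(skb);
}
```

This keeps the failure policy with the skb rather than in the l2t entry, so different senders can queue packets with different recovery behavior.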
/*
* Getting to the L2 data from an offload device.
*/
#define L2DATA(dev) ((dev)->l2opt)
#define W_TCB_L2T_IX 0
#define S_TCB_L2T_IX 7
#define M_TCB_L2T_IX 0x7ffULL
#define V_TCB_L2T_IX(x) ((x) << S_TCB_L2T_IX)
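The `S_`/`M_`/`V_` prefixes used throughout these headers follow one convention: shift, mask, and value-insert helpers for a register field (with `G_` for extraction where defined). A standalone check of how the `TCB_L2T_IX` field round-trips; the `G_` macro here is added in the same style for symmetry, since the driver header defines only `S_`/`M_`/`V_` for this field:

```c
#include <assert.h>

/* Shift/mask/insert helpers, copied from l2t.h */
#define S_TCB_L2T_IX 7
#define M_TCB_L2T_IX 0x7ffULL
#define V_TCB_L2T_IX(x) ((x) << S_TCB_L2T_IX)
/* Matching extract macro in the same naming convention (illustrative). */
#define G_TCB_L2T_IX(x) (((x) >> S_TCB_L2T_IX) & M_TCB_L2T_IX)

/* Pack an L2T index into its TCB field position. */
unsigned long long pack_l2t_ix(unsigned long long idx)
{
	return V_TCB_L2T_IX(idx & M_TCB_L2T_IX);
}
```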
void t3_l2e_free(struct l2t_data *d, struct l2t_entry *e);
void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh);
struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
struct net_device *dev);
int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,
struct l2t_entry *e);
void t3_l2t_send_event(struct t3cdev *dev, struct l2t_entry *e);
struct l2t_data *t3_init_l2t(unsigned int l2t_capacity);
void t3_free_l2t(struct l2t_data *d);
int cxgb3_ofld_send(struct t3cdev *dev, struct sk_buff *skb);
static inline int l2t_send(struct t3cdev *dev, struct sk_buff *skb,
struct l2t_entry *e)
{
if (likely(e->state == L2T_STATE_VALID))
return cxgb3_ofld_send(dev, skb);
return t3_l2t_send_slow(dev, skb, e);
}
static inline void l2t_release(struct l2t_data *d, struct l2t_entry *e)
{
if (atomic_dec_and_test(&e->refcnt))
t3_l2e_free(d, e);
}
static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
{
if (atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
atomic_dec(&d->nfree);
}
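`l2t_hold` above charges the free-entry count only on the 0 -> 1 refcount transition, and `l2t_release` hands the entry back (via `t3_l2e_free`) when the count returns to zero. A userspace sketch of that accounting with plain integers standing in for the kernel atomics (names are illustrative):

```c
#include <assert.h>

struct entry { int refcnt; };
struct pool { int nfree; };

/* Take a reference; the entry leaves the free pool on 0 -> 1. */
static void hold(struct pool *p, struct entry *e)
{
	if (++e->refcnt == 1)
		p->nfree--;
}

/* Drop a reference; the entry rejoins the free pool at 0. */
static void release(struct pool *p, struct entry *e)
{
	if (--e->refcnt == 0)
		p->nfree++;
}
```

Charging `nfree` only on the first hold keeps it an accurate count of allocatable entries no matter how many additional references each live entry accumulates.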
#endif

drivers/net/cxgb3/mc5.c (new file, 473 lines)
@@ -0,0 +1,473 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "common.h"
#include "regs.h"
enum {
IDT75P52100 = 4,
IDT75N43102 = 5
};
/* DBGI command mode */
enum {
DBGI_MODE_MBUS = 0,
DBGI_MODE_IDT52100 = 5
};
/* IDT 75P52100 commands */
#define IDT_CMD_READ 0
#define IDT_CMD_WRITE 1
#define IDT_CMD_SEARCH 2
#define IDT_CMD_LEARN 3
/* IDT LAR register address and value for 144-bit mode (low 32 bits) */
#define IDT_LAR_ADR0 0x180006
#define IDT_LAR_MODE144 0xffff0000
/* IDT SCR and SSR addresses (low 32 bits) */
#define IDT_SCR_ADR0 0x180000
#define IDT_SSR0_ADR0 0x180002
#define IDT_SSR1_ADR0 0x180004
/* IDT GMR base address (low 32 bits) */
#define IDT_GMR_BASE_ADR0 0x180020
/* IDT data and mask array base addresses (low 32 bits) */
#define IDT_DATARY_BASE_ADR0 0
#define IDT_MSKARY_BASE_ADR0 0x80000
/* IDT 75N43102 commands */
#define IDT4_CMD_SEARCH144 3
#define IDT4_CMD_WRITE 4
#define IDT4_CMD_READ 5
/* IDT 75N43102 SCR address (low 32 bits) */
#define IDT4_SCR_ADR0 0x3
/* IDT 75N43102 GMR base addresses (low 32 bits) */
#define IDT4_GMR_BASE0 0x10
#define IDT4_GMR_BASE1 0x20
#define IDT4_GMR_BASE2 0x30
/* IDT 75N43102 data and mask array base addresses (low 32 bits) */
#define IDT4_DATARY_BASE_ADR0 0x1000000
#define IDT4_MSKARY_BASE_ADR0 0x2000000
#define MAX_WRITE_ATTEMPTS 5
#define MAX_ROUTES 2048
/*
* Issue a command to the TCAM and wait for its completion. The address and
* any data required by the command must have been setup by the caller.
*/
static int mc5_cmd_write(struct adapter *adapter, u32 cmd)
{
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_CMD, cmd);
return t3_wait_op_done(adapter, A_MC5_DB_DBGI_RSP_STATUS,
F_DBGIRSPVALID, 1, MAX_WRITE_ATTEMPTS, 1);
}
static inline void dbgi_wr_addr3(struct adapter *adapter, u32 v1, u32 v2,
u32 v3)
{
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_ADDR0, v1);
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_ADDR1, v2);
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_ADDR2, v3);
}
static inline void dbgi_wr_data3(struct adapter *adapter, u32 v1, u32 v2,
u32 v3)
{
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_DATA0, v1);
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_DATA1, v2);
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_DATA2, v3);
}
static inline void dbgi_rd_rsp3(struct adapter *adapter, u32 *v1, u32 *v2,
u32 *v3)
{
*v1 = t3_read_reg(adapter, A_MC5_DB_DBGI_RSP_DATA0);
*v2 = t3_read_reg(adapter, A_MC5_DB_DBGI_RSP_DATA1);
*v3 = t3_read_reg(adapter, A_MC5_DB_DBGI_RSP_DATA2);
}
/*
* Write data to the TCAM register at address (0, 0, addr_lo) using the TCAM
* command cmd. The data to be written must have been set up by the caller.
* Returns -1 on failure, 0 on success.
*/
static int mc5_write(struct adapter *adapter, u32 addr_lo, u32 cmd)
{
t3_write_reg(adapter, A_MC5_DB_DBGI_REQ_ADDR0, addr_lo);
if (mc5_cmd_write(adapter, cmd) == 0)
return 0;
CH_ERR(adapter, "MC5 timeout writing to TCAM address 0x%x\n",
addr_lo);
return -1;
}
static int init_mask_data_array(struct mc5 *mc5, u32 mask_array_base,
u32 data_array_base, u32 write_cmd,
int addr_shift)
{
unsigned int i;
struct adapter *adap = mc5->adapter;
/*
* We need the size of the TCAM data and mask arrays in terms of
* 72-bit entries.
*/
unsigned int size72 = mc5->tcam_size;
unsigned int server_base = t3_read_reg(adap, A_MC5_DB_SERVER_INDEX);
if (mc5->mode == MC5_MODE_144_BIT) {
size72 *= 2; /* 1 144-bit entry is 2 72-bit entries */
server_base *= 2;
}
/* Clear the data array */
dbgi_wr_data3(adap, 0, 0, 0);
for (i = 0; i < size72; i++)
if (mc5_write(adap, data_array_base + (i << addr_shift),
write_cmd))
return -1;
/* Initialize the mask array. */
dbgi_wr_data3(adap, 0xffffffff, 0xffffffff, 0xff);
for (i = 0; i < size72; i++) {
if (i == server_base) /* entering server or routing region */
t3_write_reg(adap, A_MC5_DB_DBGI_REQ_DATA0,
mc5->mode == MC5_MODE_144_BIT ?
0xfffffff9 : 0xfffffffd);
if (mc5_write(adap, mask_array_base + (i << addr_shift),
write_cmd))
return -1;
}
return 0;
}
static int init_idt52100(struct mc5 *mc5)
{
int i;
struct adapter *adap = mc5->adapter;
t3_write_reg(adap, A_MC5_DB_RSP_LATENCY,
V_RDLAT(0x15) | V_LRNLAT(0x15) | V_SRCHLAT(0x15));
t3_write_reg(adap, A_MC5_DB_PART_ID_INDEX, 2);
/*
* Use GMRs 14-15 for ELOOKUP, GMRs 12-13 for SYN lookups, and
* GMRs 8-9 for ACK- and AOPEN searches.
*/
t3_write_reg(adap, A_MC5_DB_POPEN_DATA_WR_CMD, IDT_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_POPEN_MASK_WR_CMD, IDT_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_AOPEN_SRCH_CMD, IDT_CMD_SEARCH);
t3_write_reg(adap, A_MC5_DB_AOPEN_LRN_CMD, IDT_CMD_LEARN);
t3_write_reg(adap, A_MC5_DB_SYN_SRCH_CMD, IDT_CMD_SEARCH | 0x6000);
t3_write_reg(adap, A_MC5_DB_SYN_LRN_CMD, IDT_CMD_LEARN);
t3_write_reg(adap, A_MC5_DB_ACK_SRCH_CMD, IDT_CMD_SEARCH);
t3_write_reg(adap, A_MC5_DB_ACK_LRN_CMD, IDT_CMD_LEARN);
t3_write_reg(adap, A_MC5_DB_ILOOKUP_CMD, IDT_CMD_SEARCH);
t3_write_reg(adap, A_MC5_DB_ELOOKUP_CMD, IDT_CMD_SEARCH | 0x7000);
t3_write_reg(adap, A_MC5_DB_DATA_WRITE_CMD, IDT_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_DATA_READ_CMD, IDT_CMD_READ);
/* Set DBGI command mode for IDT TCAM. */
t3_write_reg(adap, A_MC5_DB_DBGI_CONFIG, DBGI_MODE_IDT52100);
/* Set up LAR */
dbgi_wr_data3(adap, IDT_LAR_MODE144, 0, 0);
if (mc5_write(adap, IDT_LAR_ADR0, IDT_CMD_WRITE))
goto err;
/* Set up SSRs */
dbgi_wr_data3(adap, 0xffffffff, 0xffffffff, 0);
if (mc5_write(adap, IDT_SSR0_ADR0, IDT_CMD_WRITE) ||
mc5_write(adap, IDT_SSR1_ADR0, IDT_CMD_WRITE))
goto err;
/* Set up GMRs */
for (i = 0; i < 32; ++i) {
if (i >= 12 && i < 15)
dbgi_wr_data3(adap, 0xfffffff9, 0xffffffff, 0xff);
else if (i == 15)
dbgi_wr_data3(adap, 0xfffffff9, 0xffff8007, 0xff);
else
dbgi_wr_data3(adap, 0xffffffff, 0xffffffff, 0xff);
if (mc5_write(adap, IDT_GMR_BASE_ADR0 + i, IDT_CMD_WRITE))
goto err;
}
/* Set up SCR */
dbgi_wr_data3(adap, 1, 0, 0);
if (mc5_write(adap, IDT_SCR_ADR0, IDT_CMD_WRITE))
goto err;
return init_mask_data_array(mc5, IDT_MSKARY_BASE_ADR0,
IDT_DATARY_BASE_ADR0, IDT_CMD_WRITE, 0);
err:
return -EIO;
}
static int init_idt43102(struct mc5 *mc5)
{
int i;
struct adapter *adap = mc5->adapter;
t3_write_reg(adap, A_MC5_DB_RSP_LATENCY,
adap->params.rev == 0 ? V_RDLAT(0xd) | V_SRCHLAT(0x11) :
V_RDLAT(0xd) | V_SRCHLAT(0x12));
/*
* Use GMRs 24-25 for ELOOKUP, GMRs 20-21 for SYN lookups, and no mask
* for ACK- and AOPEN searches.
*/
t3_write_reg(adap, A_MC5_DB_POPEN_DATA_WR_CMD, IDT4_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_POPEN_MASK_WR_CMD, IDT4_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_AOPEN_SRCH_CMD,
IDT4_CMD_SEARCH144 | 0x3800);
t3_write_reg(adap, A_MC5_DB_SYN_SRCH_CMD, IDT4_CMD_SEARCH144);
t3_write_reg(adap, A_MC5_DB_ACK_SRCH_CMD, IDT4_CMD_SEARCH144 | 0x3800);
t3_write_reg(adap, A_MC5_DB_ILOOKUP_CMD, IDT4_CMD_SEARCH144 | 0x3800);
t3_write_reg(adap, A_MC5_DB_ELOOKUP_CMD, IDT4_CMD_SEARCH144 | 0x800);
t3_write_reg(adap, A_MC5_DB_DATA_WRITE_CMD, IDT4_CMD_WRITE);
t3_write_reg(adap, A_MC5_DB_DATA_READ_CMD, IDT4_CMD_READ);
t3_write_reg(adap, A_MC5_DB_PART_ID_INDEX, 3);
/* Set DBGI command mode for IDT TCAM. */
t3_write_reg(adap, A_MC5_DB_DBGI_CONFIG, DBGI_MODE_IDT52100);
/* Set up GMRs */
dbgi_wr_data3(adap, 0xffffffff, 0xffffffff, 0xff);
for (i = 0; i < 7; ++i)
if (mc5_write(adap, IDT4_GMR_BASE0 + i, IDT4_CMD_WRITE))
goto err;
for (i = 0; i < 4; ++i)
if (mc5_write(adap, IDT4_GMR_BASE2 + i, IDT4_CMD_WRITE))
goto err;
dbgi_wr_data3(adap, 0xfffffff9, 0xffffffff, 0xff);
if (mc5_write(adap, IDT4_GMR_BASE1, IDT4_CMD_WRITE) ||
mc5_write(adap, IDT4_GMR_BASE1 + 1, IDT4_CMD_WRITE) ||
mc5_write(adap, IDT4_GMR_BASE1 + 4, IDT4_CMD_WRITE))
goto err;
dbgi_wr_data3(adap, 0xfffffff9, 0xffff8007, 0xff);
if (mc5_write(adap, IDT4_GMR_BASE1 + 5, IDT4_CMD_WRITE))
goto err;
/* Set up SCR */
dbgi_wr_data3(adap, 0xf0000000, 0, 0);
if (mc5_write(adap, IDT4_SCR_ADR0, IDT4_CMD_WRITE))
goto err;
return init_mask_data_array(mc5, IDT4_MSKARY_BASE_ADR0,
IDT4_DATARY_BASE_ADR0, IDT4_CMD_WRITE, 1);
err:
return -EIO;
}
/* Put MC5 in DBGI mode. */
static inline void mc5_dbgi_mode_enable(const struct mc5 *mc5)
{
t3_write_reg(mc5->adapter, A_MC5_DB_CONFIG,
V_TMMODE(mc5->mode == MC5_MODE_72_BIT) | F_DBGIEN);
}
/* Put MC5 in M-Bus mode. */
static void mc5_dbgi_mode_disable(const struct mc5 *mc5)
{
t3_write_reg(mc5->adapter, A_MC5_DB_CONFIG,
V_TMMODE(mc5->mode == MC5_MODE_72_BIT) |
V_COMPEN(mc5->mode == MC5_MODE_72_BIT) |
V_PRTYEN(mc5->parity_enabled) | F_MBUSEN);
}
/*
 * Initialization that requires the OS and protocol layers to already
 * be initialized goes here.
 */
int t3_mc5_init(struct mc5 *mc5, unsigned int nservers, unsigned int nfilters,
unsigned int nroutes)
{
u32 cfg;
int err;
unsigned int tcam_size = mc5->tcam_size;
struct adapter *adap = mc5->adapter;
if (nroutes > MAX_ROUTES || nroutes + nservers + nfilters > tcam_size)
return -EINVAL;
/* Reset the TCAM */
cfg = t3_read_reg(adap, A_MC5_DB_CONFIG) & ~F_TMMODE;
cfg |= V_TMMODE(mc5->mode == MC5_MODE_72_BIT) | F_TMRST;
t3_write_reg(adap, A_MC5_DB_CONFIG, cfg);
if (t3_wait_op_done(adap, A_MC5_DB_CONFIG, F_TMRDY, 1, 500, 0)) {
CH_ERR(adap, "TCAM reset timed out\n");
return -1;
}
t3_write_reg(adap, A_MC5_DB_ROUTING_TABLE_INDEX, tcam_size - nroutes);
t3_write_reg(adap, A_MC5_DB_FILTER_TABLE,
tcam_size - nroutes - nfilters);
t3_write_reg(adap, A_MC5_DB_SERVER_INDEX,
tcam_size - nroutes - nfilters - nservers);
mc5->parity_enabled = 1;
/* All the TCAM addresses we access have only the low 32 bits non-zero */
t3_write_reg(adap, A_MC5_DB_DBGI_REQ_ADDR1, 0);
t3_write_reg(adap, A_MC5_DB_DBGI_REQ_ADDR2, 0);
mc5_dbgi_mode_enable(mc5);
switch (mc5->part_type) {
case IDT75P52100:
err = init_idt52100(mc5);
break;
case IDT75N43102:
err = init_idt43102(mc5);
break;
default:
CH_ERR(adap, "Unsupported TCAM type %d\n", mc5->part_type);
err = -EINVAL;
break;
}
mc5_dbgi_mode_disable(mc5);
return err;
}
/*
 * t3_read_mc5_range - dump a part of the memory managed by MC5
 * @mc5: the MC5 handle
 * @start: the start address for the dump
 * @n: number of 72-bit words to read
 * @buf: result buffer
 *
 * Reads @n 72-bit words from MC5 memory, starting at @start.
 */
int t3_read_mc5_range(const struct mc5 *mc5, unsigned int start,
unsigned int n, u32 *buf)
{
u32 read_cmd;
int err = 0;
struct adapter *adap = mc5->adapter;
if (mc5->part_type == IDT75P52100)
read_cmd = IDT_CMD_READ;
else if (mc5->part_type == IDT75N43102)
read_cmd = IDT4_CMD_READ;
else
return -EINVAL;
mc5_dbgi_mode_enable(mc5);
while (n--) {
t3_write_reg(adap, A_MC5_DB_DBGI_REQ_ADDR0, start++);
if (mc5_cmd_write(adap, read_cmd)) {
err = -EIO;
break;
}
dbgi_rd_rsp3(adap, buf + 2, buf + 1, buf);
buf += 3;
}
mc5_dbgi_mode_disable(mc5);
return err;
}
#define MC5_INT_FATAL (F_PARITYERR | F_REQQPARERR | F_DISPQPARERR)
/*
* MC5 interrupt handler
*/
void t3_mc5_intr_handler(struct mc5 *mc5)
{
struct adapter *adap = mc5->adapter;
u32 cause = t3_read_reg(adap, A_MC5_DB_INT_CAUSE);
if ((cause & F_PARITYERR) && mc5->parity_enabled) {
CH_ALERT(adap, "MC5 parity error\n");
mc5->stats.parity_err++;
}
if (cause & F_REQQPARERR) {
CH_ALERT(adap, "MC5 request queue parity error\n");
mc5->stats.reqq_parity_err++;
}
if (cause & F_DISPQPARERR) {
CH_ALERT(adap, "MC5 dispatch queue parity error\n");
mc5->stats.dispq_parity_err++;
}
if (cause & F_ACTRGNFULL)
mc5->stats.active_rgn_full++;
if (cause & F_NFASRCHFAIL)
mc5->stats.nfa_srch_err++;
if (cause & F_UNKNOWNCMD)
mc5->stats.unknown_cmd++;
if (cause & F_DELACTEMPTY)
mc5->stats.del_act_empty++;
if (cause & MC5_INT_FATAL)
t3_fatal_err(adap);
t3_write_reg(adap, A_MC5_DB_INT_CAUSE, cause);
}
void __devinit t3_mc5_prep(struct adapter *adapter, struct mc5 *mc5, int mode)
{
#define K * 1024
static unsigned int tcam_part_size[] = { /* in K 72-bit entries */
64 K, 128 K, 256 K, 32 K
};
#undef K
u32 cfg = t3_read_reg(adapter, A_MC5_DB_CONFIG);
mc5->adapter = adapter;
mc5->mode = (unsigned char)mode;
mc5->part_type = (unsigned char)G_TMTYPE(cfg);
if (cfg & F_TMTYPEHI)
mc5->part_type |= 4;
mc5->tcam_size = tcam_part_size[G_TMPARTSIZE(cfg)];
if (mode == MC5_MODE_144_BIT)
mc5->tcam_size /= 2;
}
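The temporary `#define K * 1024` in t3_mc5_prep lets the size table read `64 K, 128 K, ...`, which the preprocessor expands to `64 * 1024` and so on before `#undef K` removes the macro again. The same trick in isolation (userspace sketch):

```c
#include <assert.h>

/* Token trick: "64 K" expands to "64 * 1024" during preprocessing. */
#define K * 1024
static const unsigned int tcam_part_size[] = { /* in K 72-bit entries */
	64 K, 128 K, 256 K, 32 K
};
#undef K
```

It is purely cosmetic: after expansion the initializers are ordinary constant expressions, and `#undef` keeps the single-letter macro from leaking into the rest of the file.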

drivers/net/cxgb3/regs.h (new file, 2195 lines)
File diff suppressed because it is too large.

drivers/net/cxgb3/sge.c (new file, 2681 lines)
File diff suppressed because it is too large.

drivers/net/cxgb3/sge_defs.h (new file, 251 lines)
@@ -0,0 +1,251 @@
/*
* This file is automatically generated --- any changes will be lost.
*/
#ifndef _SGE_DEFS_H
#define _SGE_DEFS_H
#define S_EC_CREDITS 0
#define M_EC_CREDITS 0x7FFF
#define V_EC_CREDITS(x) ((x) << S_EC_CREDITS)
#define G_EC_CREDITS(x) (((x) >> S_EC_CREDITS) & M_EC_CREDITS)
#define S_EC_GTS 15
#define V_EC_GTS(x) ((x) << S_EC_GTS)
#define F_EC_GTS V_EC_GTS(1U)
#define S_EC_INDEX 16
#define M_EC_INDEX 0xFFFF
#define V_EC_INDEX(x) ((x) << S_EC_INDEX)
#define G_EC_INDEX(x) (((x) >> S_EC_INDEX) & M_EC_INDEX)
#define S_EC_SIZE 0
#define M_EC_SIZE 0xFFFF
#define V_EC_SIZE(x) ((x) << S_EC_SIZE)
#define G_EC_SIZE(x) (((x) >> S_EC_SIZE) & M_EC_SIZE)
#define S_EC_BASE_LO 16
#define M_EC_BASE_LO 0xFFFF
#define V_EC_BASE_LO(x) ((x) << S_EC_BASE_LO)
#define G_EC_BASE_LO(x) (((x) >> S_EC_BASE_LO) & M_EC_BASE_LO)
#define S_EC_BASE_HI 0
#define M_EC_BASE_HI 0xF
#define V_EC_BASE_HI(x) ((x) << S_EC_BASE_HI)
#define G_EC_BASE_HI(x) (((x) >> S_EC_BASE_HI) & M_EC_BASE_HI)
#define S_EC_RESPQ 4
#define M_EC_RESPQ 0x7
#define V_EC_RESPQ(x) ((x) << S_EC_RESPQ)
#define G_EC_RESPQ(x) (((x) >> S_EC_RESPQ) & M_EC_RESPQ)
#define S_EC_TYPE 7
#define M_EC_TYPE 0x7
#define V_EC_TYPE(x) ((x) << S_EC_TYPE)
#define G_EC_TYPE(x) (((x) >> S_EC_TYPE) & M_EC_TYPE)
#define S_EC_GEN 10
#define V_EC_GEN(x) ((x) << S_EC_GEN)
#define F_EC_GEN V_EC_GEN(1U)
#define S_EC_UP_TOKEN 11
#define M_EC_UP_TOKEN 0xFFFFF
#define V_EC_UP_TOKEN(x) ((x) << S_EC_UP_TOKEN)
#define G_EC_UP_TOKEN(x) (((x) >> S_EC_UP_TOKEN) & M_EC_UP_TOKEN)
#define S_EC_VALID 31
#define V_EC_VALID(x) ((x) << S_EC_VALID)
#define F_EC_VALID V_EC_VALID(1U)
#define S_RQ_MSI_VEC 20
#define M_RQ_MSI_VEC 0x3F
#define V_RQ_MSI_VEC(x) ((x) << S_RQ_MSI_VEC)
#define G_RQ_MSI_VEC(x) (((x) >> S_RQ_MSI_VEC) & M_RQ_MSI_VEC)
#define S_RQ_INTR_EN 26
#define V_RQ_INTR_EN(x) ((x) << S_RQ_INTR_EN)
#define F_RQ_INTR_EN V_RQ_INTR_EN(1U)
#define S_RQ_GEN 28
#define V_RQ_GEN(x) ((x) << S_RQ_GEN)
#define F_RQ_GEN V_RQ_GEN(1U)
#define S_CQ_INDEX 0
#define M_CQ_INDEX 0xFFFF
#define V_CQ_INDEX(x) ((x) << S_CQ_INDEX)
#define G_CQ_INDEX(x) (((x) >> S_CQ_INDEX) & M_CQ_INDEX)
#define S_CQ_SIZE 16
#define M_CQ_SIZE 0xFFFF
#define V_CQ_SIZE(x) ((x) << S_CQ_SIZE)
#define G_CQ_SIZE(x) (((x) >> S_CQ_SIZE) & M_CQ_SIZE)
#define S_CQ_BASE_HI 0
#define M_CQ_BASE_HI 0xFFFFF
#define V_CQ_BASE_HI(x) ((x) << S_CQ_BASE_HI)
#define G_CQ_BASE_HI(x) (((x) >> S_CQ_BASE_HI) & M_CQ_BASE_HI)
#define S_CQ_RSPQ 20
#define M_CQ_RSPQ 0x3F
#define V_CQ_RSPQ(x) ((x) << S_CQ_RSPQ)
#define G_CQ_RSPQ(x) (((x) >> S_CQ_RSPQ) & M_CQ_RSPQ)
#define S_CQ_ASYNC_NOTIF 26
#define V_CQ_ASYNC_NOTIF(x) ((x) << S_CQ_ASYNC_NOTIF)
#define F_CQ_ASYNC_NOTIF V_CQ_ASYNC_NOTIF(1U)
#define S_CQ_ARMED 27
#define V_CQ_ARMED(x) ((x) << S_CQ_ARMED)
#define F_CQ_ARMED V_CQ_ARMED(1U)
#define S_CQ_ASYNC_NOTIF_SOL 28
#define V_CQ_ASYNC_NOTIF_SOL(x) ((x) << S_CQ_ASYNC_NOTIF_SOL)
#define F_CQ_ASYNC_NOTIF_SOL V_CQ_ASYNC_NOTIF_SOL(1U)
#define S_CQ_GEN 29
#define V_CQ_GEN(x) ((x) << S_CQ_GEN)
#define F_CQ_GEN V_CQ_GEN(1U)
#define S_CQ_OVERFLOW_MODE 31
#define V_CQ_OVERFLOW_MODE(x) ((x) << S_CQ_OVERFLOW_MODE)
#define F_CQ_OVERFLOW_MODE V_CQ_OVERFLOW_MODE(1U)
#define S_CQ_CREDITS 0
#define M_CQ_CREDITS 0xFFFF
#define V_CQ_CREDITS(x) ((x) << S_CQ_CREDITS)
#define G_CQ_CREDITS(x) (((x) >> S_CQ_CREDITS) & M_CQ_CREDITS)
#define S_CQ_CREDIT_THRES 16
#define M_CQ_CREDIT_THRES 0x1FFF
#define V_CQ_CREDIT_THRES(x) ((x) << S_CQ_CREDIT_THRES)
#define G_CQ_CREDIT_THRES(x) (((x) >> S_CQ_CREDIT_THRES) & M_CQ_CREDIT_THRES)
#define S_FL_BASE_HI 0
#define M_FL_BASE_HI 0xFFFFF
#define V_FL_BASE_HI(x) ((x) << S_FL_BASE_HI)
#define G_FL_BASE_HI(x) (((x) >> S_FL_BASE_HI) & M_FL_BASE_HI)
#define S_FL_INDEX_LO 20
#define M_FL_INDEX_LO 0xFFF
#define V_FL_INDEX_LO(x) ((x) << S_FL_INDEX_LO)
#define G_FL_INDEX_LO(x) (((x) >> S_FL_INDEX_LO) & M_FL_INDEX_LO)
#define S_FL_INDEX_HI 0
#define M_FL_INDEX_HI 0xF
#define V_FL_INDEX_HI(x) ((x) << S_FL_INDEX_HI)
#define G_FL_INDEX_HI(x) (((x) >> S_FL_INDEX_HI) & M_FL_INDEX_HI)
#define S_FL_SIZE 4
#define M_FL_SIZE 0xFFFF
#define V_FL_SIZE(x) ((x) << S_FL_SIZE)
#define G_FL_SIZE(x) (((x) >> S_FL_SIZE) & M_FL_SIZE)
#define S_FL_GEN 20
#define V_FL_GEN(x) ((x) << S_FL_GEN)
#define F_FL_GEN V_FL_GEN(1U)
#define S_FL_ENTRY_SIZE_LO 21
#define M_FL_ENTRY_SIZE_LO 0x7FF
#define V_FL_ENTRY_SIZE_LO(x) ((x) << S_FL_ENTRY_SIZE_LO)
#define G_FL_ENTRY_SIZE_LO(x) (((x) >> S_FL_ENTRY_SIZE_LO) & M_FL_ENTRY_SIZE_LO)
#define S_FL_ENTRY_SIZE_HI 0
#define M_FL_ENTRY_SIZE_HI 0x1FFFFF
#define V_FL_ENTRY_SIZE_HI(x) ((x) << S_FL_ENTRY_SIZE_HI)
#define G_FL_ENTRY_SIZE_HI(x) (((x) >> S_FL_ENTRY_SIZE_HI) & M_FL_ENTRY_SIZE_HI)
#define S_FL_CONG_THRES 21
#define M_FL_CONG_THRES 0x3FF
#define V_FL_CONG_THRES(x) ((x) << S_FL_CONG_THRES)
#define G_FL_CONG_THRES(x) (((x) >> S_FL_CONG_THRES) & M_FL_CONG_THRES)
#define S_FL_GTS 31
#define V_FL_GTS(x) ((x) << S_FL_GTS)
#define F_FL_GTS V_FL_GTS(1U)
#define S_FLD_GEN1 31
#define V_FLD_GEN1(x) ((x) << S_FLD_GEN1)
#define F_FLD_GEN1 V_FLD_GEN1(1U)
#define S_FLD_GEN2 0
#define V_FLD_GEN2(x) ((x) << S_FLD_GEN2)
#define F_FLD_GEN2 V_FLD_GEN2(1U)
#define S_RSPD_TXQ1_CR 0
#define M_RSPD_TXQ1_CR 0x7F
#define V_RSPD_TXQ1_CR(x) ((x) << S_RSPD_TXQ1_CR)
#define G_RSPD_TXQ1_CR(x) (((x) >> S_RSPD_TXQ1_CR) & M_RSPD_TXQ1_CR)
#define S_RSPD_TXQ1_GTS 7
#define V_RSPD_TXQ1_GTS(x) ((x) << S_RSPD_TXQ1_GTS)
#define F_RSPD_TXQ1_GTS V_RSPD_TXQ1_GTS(1U)
#define S_RSPD_TXQ2_CR 8
#define M_RSPD_TXQ2_CR 0x7F
#define V_RSPD_TXQ2_CR(x) ((x) << S_RSPD_TXQ2_CR)
#define G_RSPD_TXQ2_CR(x) (((x) >> S_RSPD_TXQ2_CR) & M_RSPD_TXQ2_CR)
#define S_RSPD_TXQ2_GTS 15
#define V_RSPD_TXQ2_GTS(x) ((x) << S_RSPD_TXQ2_GTS)
#define F_RSPD_TXQ2_GTS V_RSPD_TXQ2_GTS(1U)
#define S_RSPD_TXQ0_CR 16
#define M_RSPD_TXQ0_CR 0x7F
#define V_RSPD_TXQ0_CR(x) ((x) << S_RSPD_TXQ0_CR)
#define G_RSPD_TXQ0_CR(x) (((x) >> S_RSPD_TXQ0_CR) & M_RSPD_TXQ0_CR)
#define S_RSPD_TXQ0_GTS 23
#define V_RSPD_TXQ0_GTS(x) ((x) << S_RSPD_TXQ0_GTS)
#define F_RSPD_TXQ0_GTS V_RSPD_TXQ0_GTS(1U)
#define S_RSPD_EOP 24
#define V_RSPD_EOP(x) ((x) << S_RSPD_EOP)
#define F_RSPD_EOP V_RSPD_EOP(1U)
#define S_RSPD_SOP 25
#define V_RSPD_SOP(x) ((x) << S_RSPD_SOP)
#define F_RSPD_SOP V_RSPD_SOP(1U)
#define S_RSPD_ASYNC_NOTIF 26
#define V_RSPD_ASYNC_NOTIF(x) ((x) << S_RSPD_ASYNC_NOTIF)
#define F_RSPD_ASYNC_NOTIF V_RSPD_ASYNC_NOTIF(1U)
#define S_RSPD_FL0_GTS 27
#define V_RSPD_FL0_GTS(x) ((x) << S_RSPD_FL0_GTS)
#define F_RSPD_FL0_GTS V_RSPD_FL0_GTS(1U)
#define S_RSPD_FL1_GTS 28
#define V_RSPD_FL1_GTS(x) ((x) << S_RSPD_FL1_GTS)
#define F_RSPD_FL1_GTS V_RSPD_FL1_GTS(1U)
#define S_RSPD_IMM_DATA_VALID 29
#define V_RSPD_IMM_DATA_VALID(x) ((x) << S_RSPD_IMM_DATA_VALID)
#define F_RSPD_IMM_DATA_VALID V_RSPD_IMM_DATA_VALID(1U)
#define S_RSPD_OFFLOAD 30
#define V_RSPD_OFFLOAD(x) ((x) << S_RSPD_OFFLOAD)
#define F_RSPD_OFFLOAD V_RSPD_OFFLOAD(1U)
#define S_RSPD_GEN1 31
#define V_RSPD_GEN1(x) ((x) << S_RSPD_GEN1)
#define F_RSPD_GEN1 V_RSPD_GEN1(1U)
#define S_RSPD_LEN 0
#define M_RSPD_LEN 0x7FFFFFFF
#define V_RSPD_LEN(x) ((x) << S_RSPD_LEN)
#define G_RSPD_LEN(x) (((x) >> S_RSPD_LEN) & M_RSPD_LEN)
#define S_RSPD_FLQ 31
#define V_RSPD_FLQ(x) ((x) << S_RSPD_FLQ)
#define F_RSPD_FLQ V_RSPD_FLQ(1U)
#define S_RSPD_GEN2 0
#define V_RSPD_GEN2(x) ((x) << S_RSPD_GEN2)
#define F_RSPD_GEN2 V_RSPD_GEN2(1U)
#define S_RSPD_INR_VEC 1
#define M_RSPD_INR_VEC 0x7F
#define V_RSPD_INR_VEC(x) ((x) << S_RSPD_INR_VEC)
#define G_RSPD_INR_VEC(x) (((x) >> S_RSPD_INR_VEC) & M_RSPD_INR_VEC)
#endif /* _SGE_DEFS_H */

drivers/net/cxgb3/t3_cpl.h (new file, 1444 lines)
File diff suppressed because it is too large.

drivers/net/cxgb3/t3_hw.c (new file, 3375 lines)
File diff suppressed because it is too large.

drivers/net/cxgb3/t3cdev.h (new file, 73 lines)
@@ -0,0 +1,73 @@
/*
* Copyright (C) 2006-2007 Chelsio Communications. All rights reserved.
* Copyright (C) 2006-2007 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _T3CDEV_H_
#define _T3CDEV_H_
#include <linux/list.h>
#include <asm/atomic.h>
#include <asm/semaphore.h>
#include <linux/netdevice.h>
#include <linux/proc_fs.h>
#include <linux/skbuff.h>
#include <net/neighbour.h>
#define T3CNAMSIZ 16
/* Get the t3cdev associated with a net_device */
#define T3CDEV(netdev) ((struct t3cdev *)(netdev)->priv)
struct cxgb3_client;
enum t3ctype {
T3A = 0,
T3B
};
struct t3cdev {
char name[T3CNAMSIZ]; /* T3C device name */
enum t3ctype type;
struct list_head ofld_dev_list; /* for list linking */
struct net_device *lldev; /* LL dev associated with T3C messages */
struct proc_dir_entry *proc_dir; /* root of proc dir for this T3C */
int (*send)(struct t3cdev *dev, struct sk_buff *skb);
int (*recv)(struct t3cdev *dev, struct sk_buff **skb, int n);
int (*ctl)(struct t3cdev *dev, unsigned int req, void *data);
void (*neigh_update)(struct t3cdev *dev, struct neighbour *neigh);
void *priv; /* driver private data */
void *l2opt; /* optional layer 2 data */
void *l3opt; /* optional layer 3 data */
void *l4opt; /* optional layer 4 data */
void *ulp; /* ulp stuff */
};
#endif /* _T3CDEV_H_ */

drivers/net/cxgb3/version.h (new file, 39 lines)
@@ -0,0 +1,39 @@
/*
* Copyright (c) 2003-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
/* $Date: 2006/10/31 18:57:51 $ $RCSfile: version.h,v $ $Revision: 1.3 $ */
#ifndef __CHELSIO_VERSION_H
#define __CHELSIO_VERSION_H
#define DRV_DESC "Chelsio T3 Network Driver"
#define DRV_NAME "cxgb3"
/* Driver version */
#define DRV_VERSION "1.0"
#endif /* __CHELSIO_VERSION_H */

drivers/net/cxgb3/vsc8211.c (new file, 228 lines)
@@ -0,0 +1,228 @@
/*
* Copyright (c) 2005-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "common.h"
/* VSC8211 PHY specific registers. */
enum {
VSC8211_INTR_ENABLE = 25,
VSC8211_INTR_STATUS = 26,
VSC8211_AUX_CTRL_STAT = 28,
};
enum {
VSC_INTR_RX_ERR = 1 << 0,
VSC_INTR_MS_ERR = 1 << 1, /* master/slave resolution error */
VSC_INTR_CABLE = 1 << 2, /* cable impairment */
VSC_INTR_FALSE_CARR = 1 << 3, /* false carrier */
VSC_INTR_MEDIA_CHG = 1 << 4, /* AMS media change */
VSC_INTR_RX_FIFO = 1 << 5, /* Rx FIFO over/underflow */
VSC_INTR_TX_FIFO = 1 << 6, /* Tx FIFO over/underflow */
VSC_INTR_DESCRAMBL = 1 << 7, /* descrambler lock-lost */
VSC_INTR_SYMBOL_ERR = 1 << 8, /* symbol error */
VSC_INTR_NEG_DONE = 1 << 10, /* autoneg done */
VSC_INTR_NEG_ERR = 1 << 11, /* autoneg error */
VSC_INTR_LINK_CHG = 1 << 13, /* link change */
VSC_INTR_ENABLE = 1 << 15, /* interrupt enable */
};
#define CFG_CHG_INTR_MASK (VSC_INTR_LINK_CHG | VSC_INTR_NEG_ERR | \
VSC_INTR_NEG_DONE)
#define INTR_MASK (CFG_CHG_INTR_MASK | VSC_INTR_TX_FIFO | VSC_INTR_RX_FIFO | \
VSC_INTR_ENABLE)
/* PHY specific auxiliary control & status register fields */
#define S_ACSR_ACTIPHY_TMR 0
#define M_ACSR_ACTIPHY_TMR 0x3
#define V_ACSR_ACTIPHY_TMR(x) ((x) << S_ACSR_ACTIPHY_TMR)
#define S_ACSR_SPEED 3
#define M_ACSR_SPEED 0x3
#define G_ACSR_SPEED(x) (((x) >> S_ACSR_SPEED) & M_ACSR_SPEED)
#define S_ACSR_DUPLEX 5
#define F_ACSR_DUPLEX (1 << S_ACSR_DUPLEX)
#define S_ACSR_ACTIPHY 6
#define F_ACSR_ACTIPHY (1 << S_ACSR_ACTIPHY)
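The aux control & status fields above are what vsc8211_get_link_status later decodes: the 2-bit speed code maps 0/1/anything-else to 10/100/1000 Mb/s, and bit 5 reports duplex. A userspace sketch of that decode using the macros from this file (the helper names are illustrative, and Mb/s integers stand in for the kernel's SPEED_* constants):

```c
#include <assert.h>

/* Field helpers copied from the driver. */
#define S_ACSR_SPEED 3
#define M_ACSR_SPEED 0x3
#define G_ACSR_SPEED(x) (((x) >> S_ACSR_SPEED) & M_ACSR_SPEED)
#define S_ACSR_DUPLEX 5
#define F_ACSR_DUPLEX (1 << S_ACSR_DUPLEX)

/* Map the 2-bit speed code to Mb/s, mirroring the if/else chain in
 * vsc8211_get_link_status (codes other than 0 and 1 read as 1000). */
static int acsr_speed_mbps(unsigned int status)
{
	switch (G_ACSR_SPEED(status)) {
	case 0:  return 10;
	case 1:  return 100;
	default: return 1000;
	}
}

static int acsr_full_duplex(unsigned int status)
{
	return (status & F_ACSR_DUPLEX) != 0;
}
```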
/*
* Reset the PHY. This PHY completes reset immediately so we never wait.
*/
static int vsc8211_reset(struct cphy *cphy, int wait)
{
return t3_phy_reset(cphy, 0, 0);
}
static int vsc8211_intr_enable(struct cphy *cphy)
{
return mdio_write(cphy, 0, VSC8211_INTR_ENABLE, INTR_MASK);
}
static int vsc8211_intr_disable(struct cphy *cphy)
{
return mdio_write(cphy, 0, VSC8211_INTR_ENABLE, 0);
}
static int vsc8211_intr_clear(struct cphy *cphy)
{
u32 val;
/* Clear PHY interrupts by reading the register. */
return mdio_read(cphy, 0, VSC8211_INTR_STATUS, &val);
}
static int vsc8211_autoneg_enable(struct cphy *cphy)
{
return t3_mdio_change_bits(cphy, 0, MII_BMCR, BMCR_PDOWN | BMCR_ISOLATE,
BMCR_ANENABLE | BMCR_ANRESTART);
}
static int vsc8211_autoneg_restart(struct cphy *cphy)
{
return t3_mdio_change_bits(cphy, 0, MII_BMCR, BMCR_PDOWN | BMCR_ISOLATE,
BMCR_ANRESTART);
}
static int vsc8211_get_link_status(struct cphy *cphy, int *link_ok,
int *speed, int *duplex, int *fc)
{
unsigned int bmcr, status, lpa, adv;
int err, sp = -1, dplx = -1, pause = 0;
err = mdio_read(cphy, 0, MII_BMCR, &bmcr);
if (!err)
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
if (link_ok) {
/*
* BMSR_LSTATUS is latch-low, so if it is 0 we need to read it
* once more to get the current link state.
*/
if (!(status & BMSR_LSTATUS))
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
*link_ok = (status & BMSR_LSTATUS) != 0;
}
if (!(bmcr & BMCR_ANENABLE)) {
dplx = (bmcr & BMCR_FULLDPLX) ? DUPLEX_FULL : DUPLEX_HALF;
if (bmcr & BMCR_SPEED1000)
sp = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
sp = SPEED_100;
else
sp = SPEED_10;
} else if (status & BMSR_ANEGCOMPLETE) {
err = mdio_read(cphy, 0, VSC8211_AUX_CTRL_STAT, &status);
if (err)
return err;
dplx = (status & F_ACSR_DUPLEX) ? DUPLEX_FULL : DUPLEX_HALF;
sp = G_ACSR_SPEED(status);
if (sp == 0)
sp = SPEED_10;
else if (sp == 1)
sp = SPEED_100;
else
sp = SPEED_1000;
if (fc && dplx == DUPLEX_FULL) {
err = mdio_read(cphy, 0, MII_LPA, &lpa);
if (!err)
err = mdio_read(cphy, 0, MII_ADVERTISE, &adv);
if (err)
return err;
if (lpa & adv & ADVERTISE_PAUSE_CAP)
pause = PAUSE_RX | PAUSE_TX;
else if ((lpa & ADVERTISE_PAUSE_CAP) &&
(lpa & ADVERTISE_PAUSE_ASYM) &&
(adv & ADVERTISE_PAUSE_ASYM))
pause = PAUSE_TX;
else if ((lpa & ADVERTISE_PAUSE_ASYM) &&
(adv & ADVERTISE_PAUSE_CAP))
pause = PAUSE_RX;
}
}
if (speed)
*speed = sp;
if (duplex)
*duplex = dplx;
if (fc)
*fc = pause;
return 0;
}
static int vsc8211_power_down(struct cphy *cphy, int enable)
{
return t3_mdio_change_bits(cphy, 0, MII_BMCR, BMCR_PDOWN,
enable ? BMCR_PDOWN : 0);
}
static int vsc8211_intr_handler(struct cphy *cphy)
{
unsigned int cause;
int err, cphy_cause = 0;
err = mdio_read(cphy, 0, VSC8211_INTR_STATUS, &cause);
if (err)
return err;
cause &= INTR_MASK;
if (cause & CFG_CHG_INTR_MASK)
cphy_cause |= cphy_cause_link_change;
if (cause & (VSC_INTR_RX_FIFO | VSC_INTR_TX_FIFO))
cphy_cause |= cphy_cause_fifo_error;
return cphy_cause;
}
static struct cphy_ops vsc8211_ops = {
.reset = vsc8211_reset,
.intr_enable = vsc8211_intr_enable,
.intr_disable = vsc8211_intr_disable,
.intr_clear = vsc8211_intr_clear,
.intr_handler = vsc8211_intr_handler,
.autoneg_enable = vsc8211_autoneg_enable,
.autoneg_restart = vsc8211_autoneg_restart,
.advertise = t3_phy_advertise,
.set_speed_duplex = t3_set_phy_speed_duplex,
.get_link_status = vsc8211_get_link_status,
.power_down = vsc8211_power_down,
};
void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
cphy_init(phy, adapter, phy_addr, &vsc8211_ops, mdio_ops);
}

drivers/net/cxgb3/xgmac.c Normal file

@@ -0,0 +1,409 @@
/*
* Copyright (c) 2005-2007 Chelsio, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "common.h"
#include "regs.h"
/*
* # of exact address filters. The first one is used for the station address,
* the rest are available for multicast addresses.
*/
#define EXACT_ADDR_FILTERS 8
static inline int macidx(const struct cmac *mac)
{
return mac->offset / (XGMAC0_1_BASE_ADDR - XGMAC0_0_BASE_ADDR);
}
static void xaui_serdes_reset(struct cmac *mac)
{
static const unsigned int clear[] = {
F_PWRDN0 | F_PWRDN1, F_RESETPLL01, F_RESET0 | F_RESET1,
F_PWRDN2 | F_PWRDN3, F_RESETPLL23, F_RESET2 | F_RESET3
};
int i;
struct adapter *adap = mac->adapter;
u32 ctrl = A_XGM_SERDES_CTRL0 + mac->offset;
t3_write_reg(adap, ctrl, adap->params.vpd.xauicfg[macidx(mac)] |
F_RESET3 | F_RESET2 | F_RESET1 | F_RESET0 |
F_PWRDN3 | F_PWRDN2 | F_PWRDN1 | F_PWRDN0 |
F_RESETPLL23 | F_RESETPLL01);
t3_read_reg(adap, ctrl);
udelay(15);
for (i = 0; i < ARRAY_SIZE(clear); i++) {
t3_set_reg_field(adap, ctrl, clear[i], 0);
udelay(15);
}
}
void t3b_pcs_reset(struct cmac *mac)
{
t3_set_reg_field(mac->adapter, A_XGM_RESET_CTRL + mac->offset,
F_PCS_RESET_, 0);
udelay(20);
t3_set_reg_field(mac->adapter, A_XGM_RESET_CTRL + mac->offset, 0,
F_PCS_RESET_);
}
int t3_mac_reset(struct cmac *mac)
{
static const struct addr_val_pair mac_reset_avp[] = {
{A_XGM_TX_CTRL, 0},
{A_XGM_RX_CTRL, 0},
{A_XGM_RX_CFG, F_DISPAUSEFRAMES | F_EN1536BFRAMES |
F_RMFCS | F_ENJUMBO | F_ENHASHMCAST},
{A_XGM_RX_HASH_LOW, 0},
{A_XGM_RX_HASH_HIGH, 0},
{A_XGM_RX_EXACT_MATCH_LOW_1, 0},
{A_XGM_RX_EXACT_MATCH_LOW_2, 0},
{A_XGM_RX_EXACT_MATCH_LOW_3, 0},
{A_XGM_RX_EXACT_MATCH_LOW_4, 0},
{A_XGM_RX_EXACT_MATCH_LOW_5, 0},
{A_XGM_RX_EXACT_MATCH_LOW_6, 0},
{A_XGM_RX_EXACT_MATCH_LOW_7, 0},
{A_XGM_RX_EXACT_MATCH_LOW_8, 0},
{A_XGM_STAT_CTRL, F_CLRSTATS}
};
u32 val;
struct adapter *adap = mac->adapter;
unsigned int oft = mac->offset;
t3_write_reg(adap, A_XGM_RESET_CTRL + oft, F_MAC_RESET_);
t3_read_reg(adap, A_XGM_RESET_CTRL + oft); /* flush */
t3_write_regs(adap, mac_reset_avp, ARRAY_SIZE(mac_reset_avp), oft);
t3_set_reg_field(adap, A_XGM_RXFIFO_CFG + oft,
F_RXSTRFRWRD | F_DISERRFRAMES,
uses_xaui(adap) ? 0 : F_RXSTRFRWRD);
if (uses_xaui(adap)) {
if (adap->params.rev == 0) {
t3_set_reg_field(adap, A_XGM_SERDES_CTRL + oft, 0,
F_RXENABLE | F_TXENABLE);
if (t3_wait_op_done(adap, A_XGM_SERDES_STATUS1 + oft,
F_CMULOCK, 1, 5, 2)) {
CH_ERR(adap,
"MAC %d XAUI SERDES CMU lock failed\n",
macidx(mac));
return -1;
}
t3_set_reg_field(adap, A_XGM_SERDES_CTRL + oft, 0,
F_SERDESRESET_);
} else
xaui_serdes_reset(mac);
}
if (adap->params.rev > 0)
t3_write_reg(adap, A_XGM_PAUSE_TIMER + oft, 0xf000);
val = F_MAC_RESET_;
if (is_10G(adap))
val |= F_PCS_RESET_;
else if (uses_xaui(adap))
val |= F_PCS_RESET_ | F_XG2G_RESET_;
else
val |= F_RGMII_RESET_ | F_XG2G_RESET_;
t3_write_reg(adap, A_XGM_RESET_CTRL + oft, val);
t3_read_reg(adap, A_XGM_RESET_CTRL + oft); /* flush */
if ((val & F_PCS_RESET_) && adap->params.rev) {
msleep(1);
t3b_pcs_reset(mac);
}
memset(&mac->stats, 0, sizeof(mac->stats));
return 0;
}
/*
* Set the exact match register 'idx' to recognize the given Ethernet address.
*/
static void set_addr_filter(struct cmac *mac, int idx, const u8 * addr)
{
u32 addr_lo, addr_hi;
unsigned int oft = mac->offset + idx * 8;
addr_lo = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0];
addr_hi = (addr[5] << 8) | addr[4];
t3_write_reg(mac->adapter, A_XGM_RX_EXACT_MATCH_LOW_1 + oft, addr_lo);
t3_write_reg(mac->adapter, A_XGM_RX_EXACT_MATCH_HIGH_1 + oft, addr_hi);
}
/* Set one of the station's unicast MAC addresses. */
int t3_mac_set_address(struct cmac *mac, unsigned int idx, u8 addr[6])
{
if (idx >= mac->nucast)
return -EINVAL;
set_addr_filter(mac, idx, addr);
return 0;
}
/*
* Specify the number of exact address filters that should be reserved for
* unicast addresses. Caller should reload the unicast and multicast addresses
* after calling this.
*/
int t3_mac_set_num_ucast(struct cmac *mac, int n)
{
if (n > EXACT_ADDR_FILTERS)
return -EINVAL;
mac->nucast = n;
return 0;
}
/* Calculate the RX hash filter index of an Ethernet address */
static int hash_hw_addr(const u8 * addr)
{
int hash = 0, octet, bit, i = 0, c;
for (octet = 0; octet < 6; ++octet)
for (c = addr[octet], bit = 0; bit < 8; c >>= 1, ++bit) {
hash ^= (c & 1) << i;
if (++i == 6)
i = 0;
}
return hash;
}
int t3_mac_set_rx_mode(struct cmac *mac, struct t3_rx_mode *rm)
{
u32 val, hash_lo, hash_hi;
struct adapter *adap = mac->adapter;
unsigned int oft = mac->offset;
val = t3_read_reg(adap, A_XGM_RX_CFG + oft) & ~F_COPYALLFRAMES;
if (rm->dev->flags & IFF_PROMISC)
val |= F_COPYALLFRAMES;
t3_write_reg(adap, A_XGM_RX_CFG + oft, val);
if (rm->dev->flags & IFF_ALLMULTI)
hash_lo = hash_hi = 0xffffffff;
else {
u8 *addr;
int exact_addr_idx = mac->nucast;
hash_lo = hash_hi = 0;
while ((addr = t3_get_next_mcaddr(rm)))
if (exact_addr_idx < EXACT_ADDR_FILTERS)
set_addr_filter(mac, exact_addr_idx++, addr);
else {
int hash = hash_hw_addr(addr);
if (hash < 32)
hash_lo |= (1 << hash);
else
hash_hi |= (1 << (hash - 32));
}
}
t3_write_reg(adap, A_XGM_RX_HASH_LOW + oft, hash_lo);
t3_write_reg(adap, A_XGM_RX_HASH_HIGH + oft, hash_hi);
return 0;
}
int t3_mac_set_mtu(struct cmac *mac, unsigned int mtu)
{
int hwm, lwm;
unsigned int thres, v;
struct adapter *adap = mac->adapter;
/*
 * MAX_FRAME_SIZE includes header + FCS, mtu doesn't.  The HW max
* packet size register includes header, but not FCS.
*/
mtu += 14;
if (mtu > MAX_FRAME_SIZE - 4)
return -EINVAL;
t3_write_reg(adap, A_XGM_RX_MAX_PKT_SIZE + mac->offset, mtu);
/*
* Adjust the PAUSE frame watermarks. We always set the LWM, and the
* HWM only if flow-control is enabled.
*/
hwm = max(MAC_RXFIFO_SIZE - 3 * mtu, MAC_RXFIFO_SIZE / 2U);
hwm = min(hwm, 3 * MAC_RXFIFO_SIZE / 4 + 1024);
lwm = hwm - 1024;
v = t3_read_reg(adap, A_XGM_RXFIFO_CFG + mac->offset);
v &= ~V_RXFIFOPAUSELWM(M_RXFIFOPAUSELWM);
v |= V_RXFIFOPAUSELWM(lwm / 8);
if (G_RXFIFOPAUSEHWM(v))
v = (v & ~V_RXFIFOPAUSEHWM(M_RXFIFOPAUSEHWM)) |
V_RXFIFOPAUSEHWM(hwm / 8);
t3_write_reg(adap, A_XGM_RXFIFO_CFG + mac->offset, v);
/* Adjust the TX FIFO threshold based on the MTU */
thres = (adap->params.vpd.cclk * 1000) / 15625;
thres = (thres * mtu) / 1000;
if (is_10G(adap))
thres /= 10;
thres = mtu > thres ? (mtu - thres + 7) / 8 : 0;
thres = max(thres, 8U); /* need at least 8 */
t3_set_reg_field(adap, A_XGM_TXFIFO_CFG + mac->offset,
V_TXFIFOTHRESH(M_TXFIFOTHRESH), V_TXFIFOTHRESH(thres));
return 0;
}
int t3_mac_set_speed_duplex_fc(struct cmac *mac, int speed, int duplex, int fc)
{
u32 val;
struct adapter *adap = mac->adapter;
unsigned int oft = mac->offset;
if (duplex >= 0 && duplex != DUPLEX_FULL)
return -EINVAL;
if (speed >= 0) {
if (speed == SPEED_10)
val = V_PORTSPEED(0);
else if (speed == SPEED_100)
val = V_PORTSPEED(1);
else if (speed == SPEED_1000)
val = V_PORTSPEED(2);
else if (speed == SPEED_10000)
val = V_PORTSPEED(3);
else
return -EINVAL;
t3_set_reg_field(adap, A_XGM_PORT_CFG + oft,
V_PORTSPEED(M_PORTSPEED), val);
}
val = t3_read_reg(adap, A_XGM_RXFIFO_CFG + oft);
val &= ~V_RXFIFOPAUSEHWM(M_RXFIFOPAUSEHWM);
if (fc & PAUSE_TX)
val |= V_RXFIFOPAUSEHWM(G_RXFIFOPAUSELWM(val) + 128); /* +1KB */
t3_write_reg(adap, A_XGM_RXFIFO_CFG + oft, val);
t3_set_reg_field(adap, A_XGM_TX_CFG + oft, F_TXPAUSEEN,
(fc & PAUSE_RX) ? F_TXPAUSEEN : 0);
return 0;
}
int t3_mac_enable(struct cmac *mac, int which)
{
int idx = macidx(mac);
struct adapter *adap = mac->adapter;
unsigned int oft = mac->offset;
if (which & MAC_DIRECTION_TX) {
t3_write_reg(adap, A_XGM_TX_CTRL + oft, F_TXEN);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_CFG_CH0 + idx);
t3_write_reg(adap, A_TP_PIO_DATA, 0xbf000001);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_MODE);
t3_set_reg_field(adap, A_TP_PIO_DATA, 1 << idx, 1 << idx);
}
if (which & MAC_DIRECTION_RX)
t3_write_reg(adap, A_XGM_RX_CTRL + oft, F_RXEN);
return 0;
}
int t3_mac_disable(struct cmac *mac, int which)
{
int idx = macidx(mac);
struct adapter *adap = mac->adapter;
if (which & MAC_DIRECTION_TX) {
t3_write_reg(adap, A_XGM_TX_CTRL + mac->offset, 0);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_CFG_CH0 + idx);
t3_write_reg(adap, A_TP_PIO_DATA, 0xc000001f);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_MODE);
t3_set_reg_field(adap, A_TP_PIO_DATA, 1 << idx, 0);
}
if (which & MAC_DIRECTION_RX)
t3_write_reg(adap, A_XGM_RX_CTRL + mac->offset, 0);
return 0;
}
/*
* This function is called periodically to accumulate the current values of the
* RMON counters into the port statistics. Since the packet counters are only
* 32 bits they can overflow in ~286 secs at 10G, so the function should be
* called more frequently than that. The byte counters are 45-bit wide, they
* would overflow in ~7.8 hours.
*/
const struct mac_stats *t3_mac_update_stats(struct cmac *mac)
{
#define RMON_READ(mac, addr) t3_read_reg(mac->adapter, addr + mac->offset)
#define RMON_UPDATE(mac, name, reg) \
(mac)->stats.name += (u64)RMON_READ(mac, A_XGM_STAT_##reg)
#define RMON_UPDATE64(mac, name, reg_lo, reg_hi) \
(mac)->stats.name += RMON_READ(mac, A_XGM_STAT_##reg_lo) + \
((u64)RMON_READ(mac, A_XGM_STAT_##reg_hi) << 32)
u32 v, lo;
RMON_UPDATE64(mac, rx_octets, RX_BYTES_LOW, RX_BYTES_HIGH);
RMON_UPDATE64(mac, rx_frames, RX_FRAMES_LOW, RX_FRAMES_HIGH);
RMON_UPDATE(mac, rx_mcast_frames, RX_MCAST_FRAMES);
RMON_UPDATE(mac, rx_bcast_frames, RX_BCAST_FRAMES);
RMON_UPDATE(mac, rx_fcs_errs, RX_CRC_ERR_FRAMES);
RMON_UPDATE(mac, rx_pause, RX_PAUSE_FRAMES);
RMON_UPDATE(mac, rx_jabber, RX_JABBER_FRAMES);
RMON_UPDATE(mac, rx_short, RX_SHORT_FRAMES);
RMON_UPDATE(mac, rx_symbol_errs, RX_SYM_CODE_ERR_FRAMES);
RMON_UPDATE(mac, rx_too_long, RX_OVERSIZE_FRAMES);
mac->stats.rx_too_long += RMON_READ(mac, A_XGM_RX_MAX_PKT_SIZE_ERR_CNT);
RMON_UPDATE(mac, rx_frames_64, RX_64B_FRAMES);
RMON_UPDATE(mac, rx_frames_65_127, RX_65_127B_FRAMES);
RMON_UPDATE(mac, rx_frames_128_255, RX_128_255B_FRAMES);
RMON_UPDATE(mac, rx_frames_256_511, RX_256_511B_FRAMES);
RMON_UPDATE(mac, rx_frames_512_1023, RX_512_1023B_FRAMES);
RMON_UPDATE(mac, rx_frames_1024_1518, RX_1024_1518B_FRAMES);
RMON_UPDATE(mac, rx_frames_1519_max, RX_1519_MAXB_FRAMES);
RMON_UPDATE64(mac, tx_octets, TX_BYTE_LOW, TX_BYTE_HIGH);
RMON_UPDATE64(mac, tx_frames, TX_FRAME_LOW, TX_FRAME_HIGH);
RMON_UPDATE(mac, tx_mcast_frames, TX_MCAST);
RMON_UPDATE(mac, tx_bcast_frames, TX_BCAST);
RMON_UPDATE(mac, tx_pause, TX_PAUSE);
/* This counts error frames in general (bad FCS, underrun, etc). */
RMON_UPDATE(mac, tx_underrun, TX_ERR_FRAMES);
RMON_UPDATE(mac, tx_frames_64, TX_64B_FRAMES);
RMON_UPDATE(mac, tx_frames_65_127, TX_65_127B_FRAMES);
RMON_UPDATE(mac, tx_frames_128_255, TX_128_255B_FRAMES);
RMON_UPDATE(mac, tx_frames_256_511, TX_256_511B_FRAMES);
RMON_UPDATE(mac, tx_frames_512_1023, TX_512_1023B_FRAMES);
RMON_UPDATE(mac, tx_frames_1024_1518, TX_1024_1518B_FRAMES);
RMON_UPDATE(mac, tx_frames_1519_max, TX_1519_MAXB_FRAMES);
/* The next stat isn't clear-on-read. */
t3_write_reg(mac->adapter, A_TP_MIB_INDEX, mac->offset ? 51 : 50);
v = t3_read_reg(mac->adapter, A_TP_MIB_RDATA);
lo = (u32) mac->stats.rx_cong_drops;
mac->stats.rx_cong_drops += (u64) (v - lo);
return &mac->stats;
}

drivers/net/declance.c

@@ -5,7 +5,7 @@
  *
  *	adopted from sunlance.c by Richard van den Berg
  *
- *	Copyright (C) 2002, 2003, 2005  Maciej W. Rozycki
+ *	Copyright (C) 2002, 2003, 2005, 2006  Maciej W. Rozycki
  *
  *	additional sources:
  *	- PMAD-AA TURBOchannel Ethernet Module Functional Specification,
@@ -44,6 +44,8 @@
  *	v0.010: Fixes for the PMAD mapping of the LANCE buffer and for the
  *		PMAX requirement to only use halfword accesses to the
  *		buffer. macro
+ *
+ *	v0.011: Converted the PMAD to the driver model. macro
  */

 #include <linux/crc32.h>
@@ -58,6 +60,7 @@
 #include <linux/spinlock.h>
 #include <linux/stddef.h>
 #include <linux/string.h>
+#include <linux/tc.h>
 #include <linux/types.h>

 #include <asm/addrspace.h>
@@ -69,15 +72,16 @@
 #include <asm/dec/kn01.h>
 #include <asm/dec/machtype.h>
 #include <asm/dec/system.h>
-#include <asm/dec/tc.h>

 static char version[] __devinitdata =
-"declance.c: v0.010 by Linux MIPS DECstation task force\n";
+"declance.c: v0.011 by Linux MIPS DECstation task force\n";

 MODULE_AUTHOR("Linux MIPS DECstation task force");
 MODULE_DESCRIPTION("DEC LANCE (DECstation onboard, PMAD-xx) driver");
 MODULE_LICENSE("GPL");

+#define __unused __attribute__ ((unused))
+
 /*
  * card types
  */
@@ -246,7 +250,6 @@ struct lance_init_block {
 struct lance_private {
 	struct net_device *next;
 	int type;
-	int slot;
 	int dma_irq;
 	volatile struct lance_regs *ll;
@@ -288,6 +291,7 @@ struct lance_regs {

 int dec_lance_debug = 2;

+static struct tc_driver dec_lance_tc_driver;
 static struct net_device *root_lance_dev;

 static inline void writereg(volatile unsigned short *regptr, short value)
@@ -1023,7 +1027,7 @@ static void lance_set_multicast_retry(unsigned long _opaque)
 	lance_set_multicast(dev);
 }

-static int __init dec_lance_init(const int type, const int slot)
+static int __init dec_lance_probe(struct device *bdev, const int type)
 {
 	static unsigned version_printed;
 	static const char fmt[] = "declance%d";
@@ -1031,6 +1035,7 @@ static int __init dec_lance_init(const int type, const int slot)
 	struct net_device *dev;
 	struct lance_private *lp;
 	volatile struct lance_regs *ll;
+	resource_size_t start = 0, len = 0;
 	int i, ret;
 	unsigned long esar_base;
 	unsigned char *esar;
@@ -1038,14 +1043,18 @@ static int __init dec_lance_init(const int type, const int slot)
 	if (dec_lance_debug && version_printed++ == 0)
 		printk(version);

-	i = 0;
-	dev = root_lance_dev;
-	while (dev) {
-		i++;
-		lp = (struct lance_private *)dev->priv;
-		dev = lp->next;
+	if (bdev)
+		snprintf(name, sizeof(name), "%s", bdev->bus_id);
+	else {
+		i = 0;
+		dev = root_lance_dev;
+		while (dev) {
+			i++;
+			lp = (struct lance_private *)dev->priv;
+			dev = lp->next;
+		}
+		snprintf(name, sizeof(name), fmt, i);
 	}
-	snprintf(name, sizeof(name), fmt, i);

 	dev = alloc_etherdev(sizeof(struct lance_private));
 	if (!dev) {
@@ -1063,7 +1072,6 @@ static int __init dec_lance_init(const int type, const int slot)
 	spin_lock_init(&lp->lock);

 	lp->type = type;
-	lp->slot = slot;
 	switch (type) {
 	case ASIC_LANCE:
 		dev->base_addr = CKSEG1ADDR(dec_kn_slot_base + IOASIC_LANCE);
@@ -1110,12 +1118,22 @@ static int __init dec_lance_init(const int type, const int slot)
 		break;
 #ifdef CONFIG_TC
 	case PMAD_LANCE:
-		claim_tc_card(slot);
+		dev_set_drvdata(bdev, dev);
+
+		start = to_tc_dev(bdev)->resource.start;
+		len = to_tc_dev(bdev)->resource.end - start + 1;
+		if (!request_mem_region(start, len, bdev->bus_id)) {
+			printk(KERN_ERR
+			       "%s: Unable to reserve MMIO resource\n",
+			       bdev->bus_id);
+			ret = -EBUSY;
+			goto err_out_dev;
+		}

-		dev->mem_start = CKSEG1ADDR(get_tc_base_addr(slot));
+		dev->mem_start = CKSEG1ADDR(start);
 		dev->mem_end = dev->mem_start + 0x100000;
 		dev->base_addr = dev->mem_start + 0x100000;
-		dev->irq = get_tc_irq_nr(slot);
+		dev->irq = to_tc_dev(bdev)->interrupt;
 		esar_base = dev->mem_start + 0x1c0002;
 		lp->dma_irq = -1;
@@ -1174,7 +1192,7 @@ static int __init dec_lance_init(const int type, const int slot)
 		printk(KERN_ERR "%s: declance_init called with unknown type\n",
 		       name);
 		ret = -ENODEV;
-		goto err_out_free_dev;
+		goto err_out_dev;
 	}

 	ll = (struct lance_regs *) dev->base_addr;
@@ -1188,7 +1206,7 @@ static int __init dec_lance_init(const int type, const int slot)
 		       "%s: Ethernet station address prom not found!\n",
 		       name);
 		ret = -ENODEV;
-		goto err_out_free_dev;
+		goto err_out_resource;
 	}
 	/* Check the prom contents */
 	for (i = 0; i < 8; i++) {
@@ -1198,7 +1216,7 @@ static int __init dec_lance_init(const int type, const int slot)
 			printk(KERN_ERR "%s: Something is wrong with the "
 			       "ethernet station address prom!\n", name);
 			ret = -ENODEV;
-			goto err_out_free_dev;
+			goto err_out_resource;
 		}
 	}
@@ -1255,48 +1273,51 @@ static int __init dec_lance_init(const int type, const int slot)
 	if (ret) {
 		printk(KERN_ERR
 		       "%s: Unable to register netdev, aborting.\n", name);
-		goto err_out_free_dev;
+		goto err_out_resource;
 	}

-	lp->next = root_lance_dev;
-	root_lance_dev = dev;
+	if (!bdev) {
+		lp->next = root_lance_dev;
+		root_lance_dev = dev;
+	}

 	printk("%s: registered as %s.\n", name, dev->name);
 	return 0;

-err_out_free_dev:
+err_out_resource:
+	if (bdev)
+		release_mem_region(start, len);
+
+err_out_dev:
 	free_netdev(dev);

 err_out:
 	return ret;
 }

+static void __exit dec_lance_remove(struct device *bdev)
+{
+	struct net_device *dev = dev_get_drvdata(bdev);
+	resource_size_t start, len;
+
+	unregister_netdev(dev);
+	start = to_tc_dev(bdev)->resource.start;
+	len = to_tc_dev(bdev)->resource.end - start + 1;
+	release_mem_region(start, len);
+	free_netdev(dev);
+}
+
 /* Find all the lance cards on the system and initialize them */
-static int __init dec_lance_probe(void)
+static int __init dec_lance_platform_probe(void)
 {
 	int count = 0;

-	/* Scan slots for PMAD-AA cards first. */
-#ifdef CONFIG_TC
-	if (TURBOCHANNEL) {
-		int slot;
-
-		while ((slot = search_tc_card("PMAD-AA")) >= 0) {
-			if (dec_lance_init(PMAD_LANCE, slot) < 0)
-				break;
-			count++;
-		}
-	}
-#endif
-
-	/* Then handle onboard devices. */
 	if (dec_interrupt[DEC_IRQ_LANCE] >= 0) {
 		if (dec_interrupt[DEC_IRQ_LANCE_MERR] >= 0) {
-			if (dec_lance_init(ASIC_LANCE, -1) >= 0)
+			if (dec_lance_probe(NULL, ASIC_LANCE) >= 0)
 				count++;
 		} else if (!TURBOCHANNEL) {
-			if (dec_lance_init(PMAX_LANCE, -1) >= 0)
+			if (dec_lance_probe(NULL, PMAX_LANCE) >= 0)
 				count++;
 		}
 	}
@@ -1304,21 +1325,70 @@ static int __init dec_lance_probe(void)
 	return (count > 0) ? 0 : -ENODEV;
 }

-static void __exit dec_lance_cleanup(void)
+static void __exit dec_lance_platform_remove(void)
 {
 	while (root_lance_dev) {
 		struct net_device *dev = root_lance_dev;
 		struct lance_private *lp = netdev_priv(dev);

 		unregister_netdev(dev);
-#ifdef CONFIG_TC
-		if (lp->slot >= 0)
-			release_tc_card(lp->slot);
-#endif
 		root_lance_dev = lp->next;
 		free_netdev(dev);
 	}
 }

-module_init(dec_lance_probe);
-module_exit(dec_lance_cleanup);
+#ifdef CONFIG_TC
+static int __init dec_lance_tc_probe(struct device *dev);
+static int __exit dec_lance_tc_remove(struct device *dev);
+
+static const struct tc_device_id dec_lance_tc_table[] = {
+	{ "DEC     ", "PMAD-AA " },
+	{ }
+};
+MODULE_DEVICE_TABLE(tc, dec_lance_tc_table);
+
+static struct tc_driver dec_lance_tc_driver = {
+	.id_table	= dec_lance_tc_table,
+	.driver		= {
+		.name	= "declance",
+		.bus	= &tc_bus_type,
+		.probe	= dec_lance_tc_probe,
+		.remove	= __exit_p(dec_lance_tc_remove),
+	},
+};
+
+static int __init dec_lance_tc_probe(struct device *dev)
+{
+	int status = dec_lance_probe(dev, PMAD_LANCE);
+	if (!status)
+		get_device(dev);
+	return status;
+}
+
+static int __exit dec_lance_tc_remove(struct device *dev)
+{
+	put_device(dev);
+	dec_lance_remove(dev);
+	return 0;
+}
+#endif
+
+static int __init dec_lance_init(void)
+{
+	int status;
+
+	status = tc_register_driver(&dec_lance_tc_driver);
+	if (!status)
+		dec_lance_platform_probe();
+	return status;
+}
+
+static void __exit dec_lance_exit(void)
+{
+	dec_lance_platform_remove();
+	tc_unregister_driver(&dec_lance_tc_driver);
+}
+
+module_init(dec_lance_init);
+module_exit(dec_lance_exit);

drivers/net/e1000/e1000.h

@@ -59,17 +59,13 @@
 #include <linux/capability.h>
 #include <linux/in.h>
 #include <linux/ip.h>
-#ifdef NETIF_F_TSO6
 #include <linux/ipv6.h>
-#endif
 #include <linux/tcp.h>
 #include <linux/udp.h>
 #include <net/pkt_sched.h>
 #include <linux/list.h>
 #include <linux/reboot.h>
-#ifdef NETIF_F_TSO
 #include <net/checksum.h>
-#endif
 #include <linux/mii.h>
 #include <linux/ethtool.h>
 #include <linux/if_vlan.h>
@@ -257,7 +253,6 @@ struct e1000_adapter {
 	spinlock_t tx_queue_lock;
 #endif
 	atomic_t irq_sem;
-	unsigned int detect_link;
 	unsigned int total_tx_bytes;
 	unsigned int total_tx_packets;
 	unsigned int total_rx_bytes;
@@ -348,9 +343,7 @@ struct e1000_adapter {
 	boolean_t have_msi;
 #endif
 	/* to not mess up cache alignment, always add to the bottom */
-#ifdef NETIF_F_TSO
 	boolean_t tso_force;
-#endif
 	boolean_t smart_power_down;	/* phy smart power down */
 	boolean_t quad_port_a;
 	unsigned long flags;

drivers/net/e1000/e1000_ethtool.c

@@ -338,7 +338,6 @@ e1000_set_tx_csum(struct net_device *netdev, uint32_t data)
 	return 0;
 }

-#ifdef NETIF_F_TSO
 static int
 e1000_set_tso(struct net_device *netdev, uint32_t data)
 {
@@ -352,18 +351,15 @@ e1000_set_tso(struct net_device *netdev, uint32_t data)
 	else
 		netdev->features &= ~NETIF_F_TSO;

-#ifdef NETIF_F_TSO6
 	if (data)
 		netdev->features |= NETIF_F_TSO6;
 	else
 		netdev->features &= ~NETIF_F_TSO6;
-#endif

 	DPRINTK(PROBE, INFO, "TSO is %s\n", data ? "Enabled" : "Disabled");
 	adapter->tso_force = TRUE;
 	return 0;
 }
-#endif /* NETIF_F_TSO */

 static uint32_t
 e1000_get_msglevel(struct net_device *netdev)
@@ -1971,10 +1967,8 @@ static const struct ethtool_ops e1000_ethtool_ops = {
 	.set_tx_csum		= e1000_set_tx_csum,
 	.get_sg			= ethtool_op_get_sg,
 	.set_sg			= ethtool_op_set_sg,
-#ifdef NETIF_F_TSO
 	.get_tso		= ethtool_op_get_tso,
 	.set_tso		= e1000_set_tso,
-#endif
 	.self_test_count	= e1000_diag_test_count,
 	.self_test		= e1000_diag_test,
 	.get_strings		= e1000_get_strings,

drivers/net/e1000/e1000_main.c

@@ -36,7 +36,7 @@ static char e1000_driver_string[] = "Intel(R) PRO/1000 Network Driver";
 #else
 #define DRIVERNAPI "-NAPI"
 #endif
-#define DRV_VERSION "7.3.15-k2"DRIVERNAPI
+#define DRV_VERSION "7.3.20-k2"DRIVERNAPI
 char e1000_driver_version[] = DRV_VERSION;
 static char e1000_copyright[] = "Copyright (c) 1999-2006 Intel Corporation.";
@@ -990,16 +990,12 @@ e1000_probe(struct pci_dev *pdev,
 		netdev->features &= ~NETIF_F_HW_VLAN_FILTER;
 	}

-#ifdef NETIF_F_TSO
 	if ((adapter->hw.mac_type >= e1000_82544) &&
 	   (adapter->hw.mac_type != e1000_82547))
 		netdev->features |= NETIF_F_TSO;

-#ifdef NETIF_F_TSO6
 	if (adapter->hw.mac_type > e1000_82547_rev_2)
 		netdev->features |= NETIF_F_TSO6;
-#endif
-#endif

 	if (pci_using_dac)
 		netdev->features |= NETIF_F_HIGHDMA;
@@ -2583,15 +2579,22 @@ e1000_watchdog(unsigned long data)
 	if (link) {
 		if (!netif_carrier_ok(netdev)) {
+			uint32_t ctrl;
 			boolean_t txb2b = 1;
 			e1000_get_speed_and_duplex(&adapter->hw,
 			                           &adapter->link_speed,
 			                           &adapter->link_duplex);
-			DPRINTK(LINK, INFO, "NIC Link is Up %d Mbps %s\n",
-			        adapter->link_speed,
-			        adapter->link_duplex == FULL_DUPLEX ?
-			        "Full Duplex" : "Half Duplex");
+			ctrl = E1000_READ_REG(&adapter->hw, CTRL);
+			DPRINTK(LINK, INFO, "NIC Link is Up %d Mbps %s, "
+			        "Flow Control: %s\n",
+			        adapter->link_speed,
+			        adapter->link_duplex == FULL_DUPLEX ?
+			        "Full Duplex" : "Half Duplex",
+			        ((ctrl & E1000_CTRL_TFCE) && (ctrl &
+			        E1000_CTRL_RFCE)) ? "RX/TX" : ((ctrl &
+			        E1000_CTRL_RFCE) ? "RX" : ((ctrl &
+			        E1000_CTRL_TFCE) ? "TX" : "None" )));

 			/* tweak tx_queue_len according to speed/duplex
 			 * and adjust the timeout factor */
@@ -2619,7 +2622,6 @@ e1000_watchdog(unsigned long data)
 				E1000_WRITE_REG(&adapter->hw, TARC0, tarc0);
 			}

-#ifdef NETIF_F_TSO
 			/* disable TSO for pcie and 10/100 speeds, to avoid
 			 * some hardware issues */
 			if (!adapter->tso_force &&
@@ -2630,22 +2632,17 @@ e1000_watchdog(unsigned long data)
 					DPRINTK(PROBE,INFO,
 					        "10/100 speed: disabling TSO\n");
 					netdev->features &= ~NETIF_F_TSO;
-#ifdef NETIF_F_TSO6
 					netdev->features &= ~NETIF_F_TSO6;
-#endif
 					break;
 				case SPEED_1000:
 					netdev->features |= NETIF_F_TSO;
-#ifdef NETIF_F_TSO6
 					netdev->features |= NETIF_F_TSO6;
-#endif
 					break;
 				default:
 					/* oops */
 					break;
 				}
 			}
-#endif

 			/* enable transmits in the hardware, need to do this
 			 * after setting TARC0 */
@@ -2875,7 +2872,6 @@ static int
 e1000_tso(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
           struct sk_buff *skb)
 {
-#ifdef NETIF_F_TSO
 	struct e1000_context_desc *context_desc;
 	struct e1000_buffer *buffer_info;
 	unsigned int i;
@@ -2904,7 +2900,6 @@ e1000_tso(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
 			                         0);
 			cmd_length = E1000_TXD_CMD_IP;
 			ipcse = skb->h.raw - skb->data - 1;
-#ifdef NETIF_F_TSO6
 		} else if (skb->protocol == htons(ETH_P_IPV6)) {
 			skb->nh.ipv6h->payload_len = 0;
 			skb->h.th->check =
@@ -2914,7 +2909,6 @@ e1000_tso(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
 			                                 IPPROTO_TCP,
 			                                 0);
 			ipcse = 0;
-#endif
 		}
 		ipcss = skb->nh.raw - skb->data;
 		ipcso = (void *)&(skb->nh.iph->check) - (void *)skb->data;
@@ -2947,8 +2941,6 @@ e1000_tso(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
return TRUE; return TRUE;
} }
#endif
return FALSE; return FALSE;
} }
@ -2968,8 +2960,9 @@ e1000_tx_csum(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
buffer_info = &tx_ring->buffer_info[i]; buffer_info = &tx_ring->buffer_info[i];
context_desc = E1000_CONTEXT_DESC(*tx_ring, i); context_desc = E1000_CONTEXT_DESC(*tx_ring, i);
context_desc->lower_setup.ip_config = 0;
context_desc->upper_setup.tcp_fields.tucss = css; context_desc->upper_setup.tcp_fields.tucss = css;
context_desc->upper_setup.tcp_fields.tucso = css + skb->csum_offset; context_desc->upper_setup.tcp_fields.tucso = css + skb->csum;
context_desc->upper_setup.tcp_fields.tucse = 0; context_desc->upper_setup.tcp_fields.tucse = 0;
context_desc->tcp_seg_setup.data = 0; context_desc->tcp_seg_setup.data = 0;
context_desc->cmd_and_length = cpu_to_le32(E1000_TXD_CMD_DEXT); context_desc->cmd_and_length = cpu_to_le32(E1000_TXD_CMD_DEXT);
@ -3005,7 +2998,6 @@ e1000_tx_map(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
while (len) { while (len) {
buffer_info = &tx_ring->buffer_info[i]; buffer_info = &tx_ring->buffer_info[i];
size = min(len, max_per_txd); size = min(len, max_per_txd);
#ifdef NETIF_F_TSO
/* Workaround for Controller erratum -- /* Workaround for Controller erratum --
* descriptor for non-tso packet in a linear SKB that follows a * descriptor for non-tso packet in a linear SKB that follows a
* tso gets written back prematurely before the data is fully * tso gets written back prematurely before the data is fully
@ -3020,7 +3012,6 @@ e1000_tx_map(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
* in TSO mode. Append 4-byte sentinel desc */ * in TSO mode. Append 4-byte sentinel desc */
if (unlikely(mss && !nr_frags && size == len && size > 8)) if (unlikely(mss && !nr_frags && size == len && size > 8))
size -= 4; size -= 4;
#endif
/* work-around for errata 10 and it applies /* work-around for errata 10 and it applies
* to all controllers in PCI-X mode * to all controllers in PCI-X mode
* The fix is to make sure that the first descriptor of a * The fix is to make sure that the first descriptor of a
@ -3062,12 +3053,10 @@ e1000_tx_map(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
while (len) { while (len) {
buffer_info = &tx_ring->buffer_info[i]; buffer_info = &tx_ring->buffer_info[i];
size = min(len, max_per_txd); size = min(len, max_per_txd);
#ifdef NETIF_F_TSO
/* Workaround for premature desc write-backs /* Workaround for premature desc write-backs
* in TSO mode. Append 4-byte sentinel desc */ * in TSO mode. Append 4-byte sentinel desc */
if (unlikely(mss && f == (nr_frags-1) && size == len && size > 8)) if (unlikely(mss && f == (nr_frags-1) && size == len && size > 8))
size -= 4; size -= 4;
#endif
/* Workaround for potential 82544 hang in PCI-X. /* Workaround for potential 82544 hang in PCI-X.
* Avoid terminating buffers within evenly-aligned * Avoid terminating buffers within evenly-aligned
* dwords. */ * dwords. */
@ -3292,7 +3281,6 @@ e1000_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
if (adapter->hw.mac_type >= e1000_82571) if (adapter->hw.mac_type >= e1000_82571)
max_per_txd = 8192; max_per_txd = 8192;
#ifdef NETIF_F_TSO
mss = skb_shinfo(skb)->gso_size; mss = skb_shinfo(skb)->gso_size;
/* The controller does a simple calculation to /* The controller does a simple calculation to
* make sure there is enough room in the FIFO before * make sure there is enough room in the FIFO before
@ -3346,16 +3334,10 @@ e1000_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
if ((mss) || (skb->ip_summed == CHECKSUM_PARTIAL)) if ((mss) || (skb->ip_summed == CHECKSUM_PARTIAL))
count++; count++;
count++; count++;
#else
if (skb->ip_summed == CHECKSUM_PARTIAL)
count++;
#endif
#ifdef NETIF_F_TSO
/* Controller Erratum workaround */ /* Controller Erratum workaround */
if (!skb->data_len && tx_ring->last_tx_tso && !skb_is_gso(skb)) if (!skb->data_len && tx_ring->last_tx_tso && !skb_is_gso(skb))
count++; count++;
#endif
count += TXD_USE_COUNT(len, max_txd_pwr); count += TXD_USE_COUNT(len, max_txd_pwr);
@ -3765,8 +3747,8 @@ e1000_update_stats(struct e1000_adapter *adapter)
* @data: pointer to a network interface device structure * @data: pointer to a network interface device structure
**/ **/
static static irqreturn_t
irqreturn_t e1000_intr_msi(int irq, void *data) e1000_intr_msi(int irq, void *data)
{ {
struct net_device *netdev = data; struct net_device *netdev = data;
struct e1000_adapter *adapter = netdev_priv(netdev); struct e1000_adapter *adapter = netdev_priv(netdev);
@ -3774,49 +3756,27 @@ irqreturn_t e1000_intr_msi(int irq, void *data)
#ifndef CONFIG_E1000_NAPI #ifndef CONFIG_E1000_NAPI
int i; int i;
#endif #endif
uint32_t icr = E1000_READ_REG(hw, ICR);
/* this code avoids the read of ICR but has to get 1000 interrupts
* at every link change event before it will notice the change */
if (++adapter->detect_link >= 1000) {
uint32_t icr = E1000_READ_REG(hw, ICR);
#ifdef CONFIG_E1000_NAPI #ifdef CONFIG_E1000_NAPI
/* read ICR disables interrupts using IAM, so keep up with our /* read ICR disables interrupts using IAM, so keep up with our
* enable/disable accounting */ * enable/disable accounting */
atomic_inc(&adapter->irq_sem); atomic_inc(&adapter->irq_sem);
#endif #endif
adapter->detect_link = 0; if (icr & (E1000_ICR_RXSEQ | E1000_ICR_LSC)) {
if ((icr & (E1000_ICR_RXSEQ | E1000_ICR_LSC)) && hw->get_link_status = 1;
(icr & E1000_ICR_INT_ASSERTED)) { /* 80003ES2LAN workaround-- For packet buffer work-around on
hw->get_link_status = 1; * link down event; disable receives here in the ISR and reset
/* 80003ES2LAN workaround-- * adapter in watchdog */
* For packet buffer work-around on link down event; if (netif_carrier_ok(netdev) &&
* disable receives here in the ISR and (adapter->hw.mac_type == e1000_80003es2lan)) {
* reset adapter in watchdog /* disable receives */
*/ uint32_t rctl = E1000_READ_REG(hw, RCTL);
if (netif_carrier_ok(netdev) && E1000_WRITE_REG(hw, RCTL, rctl & ~E1000_RCTL_EN);
(adapter->hw.mac_type == e1000_80003es2lan)) {
/* disable receives */
uint32_t rctl = E1000_READ_REG(hw, RCTL);
E1000_WRITE_REG(hw, RCTL, rctl & ~E1000_RCTL_EN);
}
/* guard against interrupt when we're going down */
if (!test_bit(__E1000_DOWN, &adapter->flags))
mod_timer(&adapter->watchdog_timer,
jiffies + 1);
} }
} else { /* guard against interrupt when we're going down */
E1000_WRITE_REG(hw, ICR, (0xffffffff & ~(E1000_ICR_RXSEQ | if (!test_bit(__E1000_DOWN, &adapter->flags))
E1000_ICR_LSC))); mod_timer(&adapter->watchdog_timer, jiffies + 1);
/* bummer we have to flush here, but things break otherwise as
* some event appears to be lost or delayed and throughput
* drops. In almost all tests this flush is un-necessary */
E1000_WRITE_FLUSH(hw);
#ifdef CONFIG_E1000_NAPI
/* Interrupt Auto-Mask (IAM)...upon writing ICR, interrupts are
* masked. No need for the IMC write, but it does mean we
* should account for it ASAP. */
atomic_inc(&adapter->irq_sem);
#endif
} }
#ifdef CONFIG_E1000_NAPI #ifdef CONFIG_E1000_NAPI
@ -3836,7 +3796,7 @@ irqreturn_t e1000_intr_msi(int irq, void *data)
for (i = 0; i < E1000_MAX_INTR; i++) for (i = 0; i < E1000_MAX_INTR; i++)
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) & if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) &
!e1000_clean_tx_irq(adapter, adapter->tx_ring))) e1000_clean_tx_irq(adapter, adapter->tx_ring)))
break; break;
if (likely(adapter->itr_setting & 3)) if (likely(adapter->itr_setting & 3))
@ -3939,7 +3899,7 @@ e1000_intr(int irq, void *data)
for (i = 0; i < E1000_MAX_INTR; i++) for (i = 0; i < E1000_MAX_INTR; i++)
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) & if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) &
!e1000_clean_tx_irq(adapter, adapter->tx_ring))) e1000_clean_tx_irq(adapter, adapter->tx_ring)))
break; break;
if (likely(adapter->itr_setting & 3)) if (likely(adapter->itr_setting & 3))
@ -3989,7 +3949,7 @@ e1000_clean(struct net_device *poll_dev, int *budget)
poll_dev->quota -= work_done; poll_dev->quota -= work_done;
/* If no Tx and not enough Rx work done, exit the polling mode */ /* If no Tx and not enough Rx work done, exit the polling mode */
if ((!tx_cleaned && (work_done == 0)) || if ((tx_cleaned && (work_done < work_to_do)) ||
!netif_running(poll_dev)) { !netif_running(poll_dev)) {
quit_polling: quit_polling:
if (likely(adapter->itr_setting & 3)) if (likely(adapter->itr_setting & 3))
@ -4019,7 +3979,7 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter,
#ifdef CONFIG_E1000_NAPI #ifdef CONFIG_E1000_NAPI
unsigned int count = 0; unsigned int count = 0;
#endif #endif
boolean_t cleaned = FALSE; boolean_t cleaned = TRUE;
unsigned int total_tx_bytes=0, total_tx_packets=0; unsigned int total_tx_bytes=0, total_tx_packets=0;
i = tx_ring->next_to_clean; i = tx_ring->next_to_clean;
@ -4034,10 +3994,13 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter,
if (cleaned) { if (cleaned) {
struct sk_buff *skb = buffer_info->skb; struct sk_buff *skb = buffer_info->skb;
unsigned int segs = skb_shinfo(skb)->gso_segs; unsigned int segs, bytecount;
segs = skb_shinfo(skb)->gso_segs ?: 1;
/* multiply data chunks by size of headers */
bytecount = ((segs - 1) * skb_headlen(skb)) +
skb->len;
total_tx_packets += segs; total_tx_packets += segs;
total_tx_packets++; total_tx_bytes += bytecount;
total_tx_bytes += skb->len;
} }
e1000_unmap_and_free_tx_resource(adapter, buffer_info); e1000_unmap_and_free_tx_resource(adapter, buffer_info);
tx_desc->upper.data = 0; tx_desc->upper.data = 0;
@ -4050,7 +4013,10 @@ e1000_clean_tx_irq(struct e1000_adapter *adapter,
#ifdef CONFIG_E1000_NAPI #ifdef CONFIG_E1000_NAPI
#define E1000_TX_WEIGHT 64 #define E1000_TX_WEIGHT 64
/* weight of a sort for tx, to avoid endless transmit cleanup */ /* weight of a sort for tx, to avoid endless transmit cleanup */
if (count++ == E1000_TX_WEIGHT) break; if (count++ == E1000_TX_WEIGHT) {
cleaned = FALSE;
break;
}
#endif #endif
} }


@@ -48,8 +48,6 @@ typedef enum {
 	TRUE = 1
 } boolean_t;

-#define MSGOUT(S, A, B)	printk(KERN_DEBUG S "\n", A, B)
-
 #ifdef DBG
 #define DEBUGOUT(S)		printk(KERN_DEBUG S "\n")
 #define DEBUGOUT1(S, A...)	printk(KERN_DEBUG S "\n", A)
@@ -58,7 +56,7 @@ typedef enum {
 #define DEBUGOUT1(S, A...)
 #endif

-#define DEBUGFUNC(F) DEBUGOUT(F)
+#define DEBUGFUNC(F) DEBUGOUT(F "\n")
 #define DEBUGOUT2 DEBUGOUT1
 #define DEBUGOUT3 DEBUGOUT2
 #define DEBUGOUT7 DEBUGOUT3


@@ -760,22 +760,13 @@ e1000_check_copper_options(struct e1000_adapter *adapter)
 	case SPEED_1000:
 		DPRINTK(PROBE, INFO, "1000 Mbps Speed specified without "
 			"Duplex\n");
-		DPRINTK(PROBE, INFO,
-			"Using Autonegotiation at 1000 Mbps "
-			"Full Duplex only\n");
-		adapter->hw.autoneg = adapter->fc_autoneg = 1;
-		adapter->hw.autoneg_advertised = ADVERTISE_1000_FULL;
-		break;
+		goto full_duplex_only;
 	case SPEED_1000 + HALF_DUPLEX:
 		DPRINTK(PROBE, INFO,
 			"Half Duplex is not supported at 1000 Mbps\n");
-		DPRINTK(PROBE, INFO,
-			"Using Autonegotiation at 1000 Mbps "
-			"Full Duplex only\n");
-		adapter->hw.autoneg = adapter->fc_autoneg = 1;
-		adapter->hw.autoneg_advertised = ADVERTISE_1000_FULL;
-		break;
+		/* fall through */
 	case SPEED_1000 + FULL_DUPLEX:
+full_duplex_only:
 		DPRINTK(PROBE, INFO,
 		       "Using Autonegotiation at 1000 Mbps Full Duplex only\n");
 		adapter->hw.autoneg = adapter->fc_autoneg = 1;

File diff suppressed because it is too large


@@ -3034,7 +3034,7 @@ static int __init hp100_module_init(void)
 		goto out2;
 #endif
 #ifdef CONFIG_PCI
-	err = pci_module_init(&hp100_pci_driver);
+	err = pci_register_driver(&hp100_pci_driver);
 	if (err && err != -ENODEV)
 		goto out3;
 #endif


@@ -61,9 +61,7 @@
 #include <net/pkt_sched.h>
 #include <linux/list.h>
 #include <linux/reboot.h>
-#ifdef NETIF_F_TSO
 #include <net/checksum.h>
-#endif

 #include <linux/ethtool.h>
 #include <linux/if_vlan.h>


@@ -82,10 +82,8 @@ static struct ixgb_stats ixgb_gstrings_stats[] = {
 	{"tx_restart_queue", IXGB_STAT(restart_queue) },
 	{"rx_long_length_errors", IXGB_STAT(stats.roc)},
 	{"rx_short_length_errors", IXGB_STAT(stats.ruc)},
-#ifdef NETIF_F_TSO
 	{"tx_tcp_seg_good", IXGB_STAT(stats.tsctc)},
 	{"tx_tcp_seg_failed", IXGB_STAT(stats.tsctfc)},
-#endif
 	{"rx_flow_control_xon", IXGB_STAT(stats.xonrxc)},
 	{"rx_flow_control_xoff", IXGB_STAT(stats.xoffrxc)},
 	{"tx_flow_control_xon", IXGB_STAT(stats.xontxc)},
@@ -240,7 +238,6 @@ ixgb_set_tx_csum(struct net_device *netdev, uint32_t data)
 	return 0;
 }

-#ifdef NETIF_F_TSO
 static int
 ixgb_set_tso(struct net_device *netdev, uint32_t data)
 {
@@ -250,7 +247,6 @@ ixgb_set_tso(struct net_device *netdev, uint32_t data)
 		netdev->features &= ~NETIF_F_TSO;
 	return 0;
 }
-#endif /* NETIF_F_TSO */

 static uint32_t
 ixgb_get_msglevel(struct net_device *netdev)
@@ -722,10 +718,8 @@ static const struct ethtool_ops ixgb_ethtool_ops = {
 	.set_sg = ethtool_op_set_sg,
 	.get_msglevel = ixgb_get_msglevel,
 	.set_msglevel = ixgb_set_msglevel,
-#ifdef NETIF_F_TSO
 	.get_tso = ethtool_op_get_tso,
 	.set_tso = ixgb_set_tso,
-#endif
 	.get_strings = ixgb_get_strings,
 	.phys_id = ixgb_phys_id,
 	.get_stats_count = ixgb_get_stats_count,


@@ -456,9 +456,7 @@ ixgb_probe(struct pci_dev *pdev,
 			   NETIF_F_HW_VLAN_TX |
 			   NETIF_F_HW_VLAN_RX |
 			   NETIF_F_HW_VLAN_FILTER;
-#ifdef NETIF_F_TSO
 	netdev->features |= NETIF_F_TSO;
-#endif
 #ifdef NETIF_F_LLTX
 	netdev->features |= NETIF_F_LLTX;
 #endif
@@ -1176,7 +1174,6 @@ ixgb_watchdog(unsigned long data)
 static int
 ixgb_tso(struct ixgb_adapter *adapter, struct sk_buff *skb)
 {
-#ifdef NETIF_F_TSO
 	struct ixgb_context_desc *context_desc;
 	unsigned int i;
 	uint8_t ipcss, ipcso, tucss, tucso, hdr_len;
@@ -1233,7 +1230,6 @@ ixgb_tso(struct ixgb_adapter *adapter, struct sk_buff *skb)
 		return 1;
 	}
-#endif

 	return 0;
 }


@@ -1046,6 +1046,14 @@ static int __devinit macb_probe(struct platform_device *pdev)

 	spin_lock_init(&bp->lock);

+#if defined(CONFIG_ARCH_AT91)
+	bp->pclk = clk_get(&pdev->dev, "macb_clk");
+	if (IS_ERR(bp->pclk)) {
+		dev_err(&pdev->dev, "failed to get macb_clk\n");
+		goto err_out_free_dev;
+	}
+	clk_enable(bp->pclk);
+#else
 	bp->pclk = clk_get(&pdev->dev, "pclk");
 	if (IS_ERR(bp->pclk)) {
 		dev_err(&pdev->dev, "failed to get pclk\n");
@@ -1059,6 +1067,7 @@ static int __devinit macb_probe(struct platform_device *pdev)

 	clk_enable(bp->pclk);
 	clk_enable(bp->hclk);
+#endif

 	bp->regs = ioremap(regs->start, regs->end - regs->start + 1);
 	if (!bp->regs) {
@@ -1119,9 +1128,17 @@ static int __devinit macb_probe(struct platform_device *pdev)

 	pdata = pdev->dev.platform_data;
 	if (pdata && pdata->is_rmii)
+#if defined(CONFIG_ARCH_AT91)
+		macb_writel(bp, USRIO, (MACB_BIT(RMII) | MACB_BIT(CLKEN)) );
+#else
 		macb_writel(bp, USRIO, 0);
+#endif
 	else
+#if defined(CONFIG_ARCH_AT91)
+		macb_writel(bp, USRIO, MACB_BIT(CLKEN));
+#else
 		macb_writel(bp, USRIO, MACB_BIT(MII));
+#endif

 	bp->tx_pending = DEF_TX_RING_PENDING;
@@ -1148,9 +1165,11 @@ err_out_free_irq:
 err_out_iounmap:
 	iounmap(bp->regs);
 err_out_disable_clocks:
+#ifndef CONFIG_ARCH_AT91
 	clk_disable(bp->hclk);
-	clk_disable(bp->pclk);
 	clk_put(bp->hclk);
+#endif
+	clk_disable(bp->pclk);
 err_out_put_pclk:
 	clk_put(bp->pclk);
 err_out_free_dev:
@@ -1173,9 +1192,11 @@ static int __devexit macb_remove(struct platform_device *pdev)
 	unregister_netdev(dev);
 	free_irq(dev->irq, dev);
 	iounmap(bp->regs);
+#ifndef CONFIG_ARCH_AT91
 	clk_disable(bp->hclk);
-	clk_disable(bp->pclk);
 	clk_put(bp->hclk);
+#endif
+	clk_disable(bp->pclk);
 	clk_put(bp->pclk);
 	free_netdev(dev);
 	platform_set_drvdata(pdev, NULL);


@@ -200,7 +200,7 @@
 #define MACB_SOF_OFFSET				30
 #define MACB_SOF_SIZE				2

-/* Bitfields in USRIO */
+/* Bitfields in USRIO (AVR32) */
 #define MACB_MII_OFFSET				0
 #define MACB_MII_SIZE				1
 #define MACB_EAM_OFFSET				1
@@ -210,6 +210,12 @@
 #define MACB_TX_PAUSE_ZERO_OFFSET		3
 #define MACB_TX_PAUSE_ZERO_SIZE			1

+/* Bitfields in USRIO (AT91) */
+#define MACB_RMII_OFFSET			0
+#define MACB_RMII_SIZE				1
+#define MACB_CLKEN_OFFSET			1
+#define MACB_CLKEN_SIZE				1
+
 /* Bitfields in WOL */
 #define MACB_IP_OFFSET				0
 #define MACB_IP_SIZE				16


@@ -15,6 +15,7 @@
 #include <linux/init.h>
 #include <linux/crc32.h>
 #include <linux/spinlock.h>
+#include <linux/bitrev.h>
 #include <asm/prom.h>
 #include <asm/dbdma.h>
 #include <asm/io.h>
@@ -74,7 +75,6 @@ struct mace_data {
 #define PRIV_BYTES	(sizeof(struct mace_data) \
 	+ (N_RX_RING + NCMDS_TX * N_TX_RING + 3) * sizeof(struct dbdma_cmd))

-static int bitrev(int);
 static int mace_open(struct net_device *dev);
 static int mace_close(struct net_device *dev);
 static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev);
@@ -96,18 +96,6 @@ static void __mace_set_address(struct net_device *dev, void *addr);
  */
 static unsigned char *dummy_buf;

-/* Bit-reverse one byte of an ethernet hardware address. */
-static inline int
-bitrev(int b)
-{
-	int d = 0, i;
-
-	for (i = 0; i < 8; ++i, b >>= 1)
-		d = (d << 1) | (b & 1);
-	return d;
-}
-
 static int __devinit mace_probe(struct macio_dev *mdev, const struct of_device_id *match)
 {
 	struct device_node *mace = macio_get_of_node(mdev);
@@ -173,7 +161,7 @@ static int __devinit mace_probe(struct macio_dev *mdev, const struct of_device_i
 	rev = addr[0] == 0 && addr[1] == 0xA0;
 	for (j = 0; j < 6; ++j) {
-		dev->dev_addr[j] = rev? bitrev(addr[j]): addr[j];
+		dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j];
 	}
 	mp->chipid = (in_8(&mp->mace->chipid_hi) << 8) |
 		     in_8(&mp->mace->chipid_lo);


@@ -22,6 +22,7 @@
 #include <linux/delay.h>
 #include <linux/string.h>
 #include <linux/crc32.h>
+#include <linux/bitrev.h>
 #include <asm/io.h>
 #include <asm/pgtable.h>
 #include <asm/irq.h>
@@ -81,19 +82,6 @@ static irqreturn_t mace_interrupt(int irq, void *dev_id);
 static irqreturn_t mace_dma_intr(int irq, void *dev_id);
 static void mace_tx_timeout(struct net_device *dev);

-/* Bit-reverse one byte of an ethernet hardware address. */
-
-static int bitrev(int b)
-{
-	int d = 0, i;
-
-	for (i = 0; i < 8; ++i, b >>= 1) {
-		d = (d << 1) | (b & 1);
-	}
-	return d;
-}
-
 /*
  * Load a receive DMA channel with a base address and ring length
  */
@@ -219,12 +207,12 @@ struct net_device *mace_probe(int unit)
 	addr = (void *)MACE_PROM;

 	for (j = 0; j < 6; ++j) {
-		u8 v=bitrev(addr[j<<4]);
+		u8 v = bitrev8(addr[j<<4]);
 		checksum ^= v;
 		dev->dev_addr[j] = v;
 	}
 	for (; j < 8; ++j) {
-		checksum ^= bitrev(addr[j<<4]);
+		checksum ^= bitrev8(addr[j<<4]);
 	}

 	if (checksum != 0xFF) {


@@ -121,16 +121,12 @@ enum macsonic_type {
  * For reversing the PROM address
  */

-static unsigned char nibbletab[] = {0, 8, 4, 12, 2, 10, 6, 14,
-				    1, 9, 5, 13, 3, 11, 7, 15};
-
 static inline void bit_reverse_addr(unsigned char addr[6])
 {
 	int i;

 	for(i = 0; i < 6; i++)
-		addr[i] = ((nibbletab[addr[i] & 0xf] << 4) |
-			   nibbletab[(addr[i] >> 4) &0xf]);
+		addr[i] = bitrev8(addr[i]);
 }

 int __init macsonic_init(struct net_device* dev)


@@ -1412,10 +1412,8 @@ static const struct ethtool_ops myri10ge_ethtool_ops = {
 	.set_tx_csum = ethtool_op_set_tx_hw_csum,
 	.get_sg = ethtool_op_get_sg,
 	.set_sg = ethtool_op_set_sg,
-#ifdef NETIF_F_TSO
 	.get_tso = ethtool_op_get_tso,
 	.set_tso = ethtool_op_set_tso,
-#endif
 	.get_strings = myri10ge_get_strings,
 	.get_stats_count = myri10ge_get_stats_count,
 	.get_ethtool_stats = myri10ge_get_ethtool_stats,
@@ -1975,13 +1973,11 @@ again:
 	mss = 0;
 	max_segments = MXGEFW_MAX_SEND_DESC;

-#ifdef NETIF_F_TSO
 	if (skb->len > (dev->mtu + ETH_HLEN)) {
 		mss = skb_shinfo(skb)->gso_size;
 		if (mss != 0)
 			max_segments = MYRI10GE_MAX_SEND_DESC_TSO;
 	}
-#endif				/*NETIF_F_TSO */

 	if ((unlikely(avail < max_segments))) {
 		/* we are out of transmit resources */
@@ -2013,7 +2009,6 @@ again:

 	cum_len = 0;

-#ifdef NETIF_F_TSO
 	if (mss) {		/* TSO */
 		/* this removes any CKSUM flag from before */
 		flags = (MXGEFW_FLAGS_TSO_HDR | MXGEFW_FLAGS_FIRST);
@@ -2029,7 +2024,6 @@ again:
 		 * the checksum by parsing the header. */
 		pseudo_hdr_offset = mss;
 	} else
-#endif				/*NETIF_F_TSO */
 		/* Mark small packets, and pad out tiny packets */
 	if (skb->len <= MXGEFW_SEND_SMALL_SIZE) {
 		flags |= MXGEFW_FLAGS_SMALL;
@@ -2097,7 +2091,6 @@ again:
 			seglen = len;
 		flags_next = flags & ~MXGEFW_FLAGS_FIRST;
 		cum_len_next = cum_len + seglen;
-#ifdef NETIF_F_TSO
 		if (mss) {	/* TSO */
 			(req - rdma_count)->rdma_count = rdma_count + 1;
@@ -2124,7 +2117,6 @@ again:
 				    (small * MXGEFW_FLAGS_SMALL);
 			}
 		}
-#endif				/* NETIF_F_TSO */
 		req->addr_high = high_swapped;
 		req->addr_low = htonl(low);
 		req->pseudo_hdr_offset = htons(pseudo_hdr_offset);
@@ -2161,14 +2153,12 @@ again:
 	}
 	(req - rdma_count)->rdma_count = rdma_count;
-#ifdef NETIF_F_TSO
 	if (mss)
 		do {
 			req--;
 			req->flags |= MXGEFW_FLAGS_TSO_LAST;
 		} while (!(req->flags & (MXGEFW_FLAGS_TSO_CHOP |
 					 MXGEFW_FLAGS_FIRST)));
-#endif
 	idx = ((count - 1) + tx->req) & tx->mask;
 	tx->info[idx].last = 1;
 	if (tx->wc_fifo == NULL)


@@ -63,11 +63,14 @@
 #include "netxen_nic_hw.h"

-#define NETXEN_NIC_BUILD_NO     "2"
 #define _NETXEN_NIC_LINUX_MAJOR 3
 #define _NETXEN_NIC_LINUX_MINOR 3
 #define _NETXEN_NIC_LINUX_SUBVERSION 3
-#define NETXEN_NIC_LINUX_VERSIONID  "3.3.3" "-" NETXEN_NIC_BUILD_NO
+#define NETXEN_NIC_LINUX_VERSIONID  "3.3.3"
+
+#define NUM_FLASH_SECTORS (64)
+#define FLASH_SECTOR_SIZE (64 * 1024)
+#define FLASH_TOTAL_SIZE  (NUM_FLASH_SECTORS * FLASH_SECTOR_SIZE)

 #define RCV_DESC_RINGSIZE	\
 	(sizeof(struct rcv_desc) * adapter->max_rx_desc_count)
@@ -85,6 +88,7 @@
 #define NETXEN_RCV_PRODUCER_OFFSET	0
 #define NETXEN_RCV_PEG_DB_ID	2
 #define NETXEN_HOST_DUMMY_DMA_SIZE 1024
+#define FLASH_SUCCESS 0

 #define ADDR_IN_WINDOW1(off)	\
 	((off > NETXEN_CRB_PCIX_HOST2) && (off < NETXEN_CRB_MAX)) ? 1 : 0
@@ -1028,6 +1032,15 @@ void netxen_phantom_init(struct netxen_adapter *adapter, int pegtune_val);
 void netxen_load_firmware(struct netxen_adapter *adapter);
 int netxen_pinit_from_rom(struct netxen_adapter *adapter, int verbose);
 int netxen_rom_fast_read(struct netxen_adapter *adapter, int addr, int *valp);
+int netxen_rom_fast_read_words(struct netxen_adapter *adapter, int addr,
+			       u8 *bytes, size_t size);
+int netxen_rom_fast_write_words(struct netxen_adapter *adapter, int addr,
+				u8 *bytes, size_t size);
+int netxen_flash_unlock(struct netxen_adapter *adapter);
+int netxen_backup_crbinit(struct netxen_adapter *adapter);
+int netxen_flash_erase_secondary(struct netxen_adapter *adapter);
+int netxen_flash_erase_primary(struct netxen_adapter *adapter);
 int netxen_rom_fast_write(struct netxen_adapter *adapter, int addr, int data);
 int netxen_rom_se(struct netxen_adapter *adapter, int addr);
 int netxen_do_rom_se(struct netxen_adapter *adapter, int addr);


@ -32,6 +32,7 @@
*/ */
#include <linux/types.h> #include <linux/types.h>
#include <linux/delay.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <asm/io.h> #include <asm/io.h>
@@ -94,17 +95,7 @@ static const char netxen_nic_gstrings_test[][ETH_GSTRING_LEN] = {
static int netxen_nic_get_eeprom_len(struct net_device *dev)
{
-	struct netxen_port *port = netdev_priv(dev);
-	struct netxen_adapter *adapter = port->adapter;
-	int n;
-
-	if ((netxen_rom_fast_read(adapter, 0, &n) == 0)
-	    && (n & NETXEN_ROM_ROUNDUP)) {
-		n &= ~NETXEN_ROM_ROUNDUP;
-		if (n < NETXEN_MAX_EEPROM_LEN)
-			return n;
-	}
-	return 0;
+	return FLASH_TOTAL_SIZE;
}

static void
@@ -440,18 +431,92 @@ netxen_nic_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
	struct netxen_port *port = netdev_priv(dev);
	struct netxen_adapter *adapter = port->adapter;
	int offset;
+	int ret;

	if (eeprom->len == 0)
		return -EINVAL;

	eeprom->magic = (port->pdev)->vendor | ((port->pdev)->device << 16);
-	for (offset = 0; offset < eeprom->len; offset++)
-		if (netxen_rom_fast_read
-		    (adapter, (8 * offset) + 8, (int *)eeprom->data) == -1)
-			return -EIO;
+	offset = eeprom->offset;
+
+	ret = netxen_rom_fast_read_words(adapter, offset, bytes,
+					eeprom->len);
+	if (ret < 0)
+		return ret;
+
	return 0;
}
static int
netxen_nic_set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
u8 * bytes)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
int offset = eeprom->offset;
static int flash_start;
static int ready_to_flash;
int ret;
if (flash_start == 0) {
ret = netxen_flash_unlock(adapter);
if (ret < 0) {
printk(KERN_ERR "%s: Flash unlock failed.\n",
netxen_nic_driver_name);
return ret;
}
printk(KERN_INFO "%s: flash unlocked. \n",
netxen_nic_driver_name);
ret = netxen_flash_erase_secondary(adapter);
if (ret != FLASH_SUCCESS) {
printk(KERN_ERR "%s: Flash erase failed.\n",
netxen_nic_driver_name);
return ret;
}
printk(KERN_INFO "%s: secondary flash erased successfully.\n",
netxen_nic_driver_name);
flash_start = 1;
return 0;
}
if (offset == BOOTLD_START) {
ret = netxen_flash_erase_primary(adapter);
if (ret != FLASH_SUCCESS) {
printk(KERN_ERR "%s: Flash erase failed.\n",
netxen_nic_driver_name);
return ret;
}
ret = netxen_rom_se(adapter, USER_START);
if (ret != FLASH_SUCCESS)
return ret;
ret = netxen_rom_se(adapter, FIXED_START);
if (ret != FLASH_SUCCESS)
return ret;
printk(KERN_INFO "%s: primary flash erased successfully\n",
netxen_nic_driver_name);
ret = netxen_backup_crbinit(adapter);
if (ret != FLASH_SUCCESS) {
printk(KERN_ERR "%s: CRBinit backup failed.\n",
netxen_nic_driver_name);
return ret;
}
printk(KERN_INFO "%s: CRBinit backup done.\n",
netxen_nic_driver_name);
ready_to_flash = 1;
}
if (!ready_to_flash) {
printk(KERN_ERR "%s: Invalid write sequence, returning...\n",
netxen_nic_driver_name);
return -EINVAL;
}
return netxen_rom_fast_write_words(adapter, offset, bytes, eeprom->len);
}
static void
netxen_nic_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
{
@@ -721,6 +786,7 @@ struct ethtool_ops netxen_nic_ethtool_ops = {
	.get_link = netxen_nic_get_link,
	.get_eeprom_len = netxen_nic_get_eeprom_len,
	.get_eeprom = netxen_nic_get_eeprom,
+	.set_eeprom = netxen_nic_set_eeprom,
	.get_ringparam = netxen_nic_get_ringparam,
	.get_pauseparam = netxen_nic_get_pauseparam,
	.set_pauseparam = netxen_nic_set_pauseparam,


@@ -110,6 +110,7 @@ static void crb_addr_transform_setup(void)
	crb_addr_transform(CAM);
	crb_addr_transform(C2C1);
	crb_addr_transform(C2C0);
+	crb_addr_transform(SMB);
}

int netxen_init_firmware(struct netxen_adapter *adapter)
@@ -276,6 +277,7 @@ unsigned long netxen_decode_crb_addr(unsigned long addr)

static long rom_max_timeout = 10000;
static long rom_lock_timeout = 1000000;
+static long rom_write_timeout = 700;

static inline int rom_lock(struct netxen_adapter *adapter)
{
@@ -404,7 +406,7 @@ do_rom_fast_read(struct netxen_adapter *adapter, int addr, int *valp)
{
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ADDRESS, addr);
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ABYTE_CNT, 3);
-	udelay(100);	/* prevent bursting on CRB */
+	udelay(70);	/* prevent bursting on CRB */
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_INSTR_OPCODE, 0xb);
	if (netxen_wait_rom_done(adapter)) {
@@ -413,13 +415,46 @@ do_rom_fast_read(struct netxen_adapter *adapter, int addr, int *valp)
	}
	/* reset abyte_cnt and dummy_byte_cnt */
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ABYTE_CNT, 0);
-	udelay(100);	/* prevent bursting on CRB */
+	udelay(70);	/* prevent bursting on CRB */
	netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);

	*valp = netxen_nic_reg_read(adapter, NETXEN_ROMUSB_ROM_RDATA);
	return 0;
}
static inline int
do_rom_fast_read_words(struct netxen_adapter *adapter, int addr,
u8 *bytes, size_t size)
{
int addridx;
int ret = 0;
for (addridx = addr; addridx < (addr + size); addridx += 4) {
ret = do_rom_fast_read(adapter, addridx, (int *)bytes);
if (ret != 0)
break;
bytes += 4;
}
return ret;
}
int
netxen_rom_fast_read_words(struct netxen_adapter *adapter, int addr,
u8 *bytes, size_t size)
{
int ret;
ret = rom_lock(adapter);
if (ret < 0)
return ret;
ret = do_rom_fast_read_words(adapter, addr, bytes, size);
netxen_rom_unlock(adapter);
return ret;
}
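The `netxen_rom_fast_read_words()` wrapper above follows a lock/operate/unlock discipline: take the hardware semaphore, delegate to a `do_*` helper, and release the lock on every exit path. As a standalone illustration (a user-space sketch, not driver code; the mock lock and all names here are invented for the example):

```c
#include <assert.h>

/* Mock "hardware semaphore": a plain flag standing in for rom_lock(). */
static int rom_locked;

static int mock_rom_lock(void)
{
	if (rom_locked)
		return -1;	/* semaphore already held */
	rom_locked = 1;
	return 0;
}

static void mock_rom_unlock(void)
{
	rom_locked = 0;
}

/* Inner worker: assumes the lock is already held. */
static int do_read_words(const unsigned *rom, int addr, unsigned *out, int nwords)
{
	int i;

	for (i = 0; i < nwords; i++)
		out[i] = rom[addr / 4 + i];
	return 0;
}

/* Public entry point: lock, delegate, then unlock unconditionally,
 * so the semaphore is never leaked on an error return. */
static int rom_fast_read_words(const unsigned *rom, int addr, unsigned *out, int nwords)
{
	int ret = mock_rom_lock();

	if (ret < 0)
		return ret;
	ret = do_read_words(rom, addr, out, nwords);
	mock_rom_unlock();
	return ret;
}
```

The same split (locked public wrapper, lock-free `do_*` worker) recurs throughout this file, which lets compound operations hold the lock once across several primitive accesses.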
int netxen_rom_fast_read(struct netxen_adapter *adapter, int addr, int *valp)
{
	int ret;
@@ -443,6 +478,152 @@ int netxen_rom_fast_write(struct netxen_adapter *adapter, int addr, int data)
	netxen_rom_unlock(adapter);
	return ret;
}
static inline int do_rom_fast_write_words(struct netxen_adapter *adapter,
int addr, u8 *bytes, size_t size)
{
int addridx = addr;
int ret = 0;
while (addridx < (addr + size)) {
int last_attempt = 0;
int timeout = 0;
int data;
data = *(u32*)bytes;
ret = do_rom_fast_write(adapter, addridx, data);
if (ret < 0)
return ret;
while(1) {
int data1;
do_rom_fast_read(adapter, addridx, &data1);
if (data1 == data)
break;
if (timeout++ >= rom_write_timeout) {
if (last_attempt++ < 4) {
ret = do_rom_fast_write(adapter,
addridx, data);
if (ret < 0)
return ret;
}
else {
printk(KERN_INFO "Data write did not "
"succeed at address 0x%x\n", addridx);
break;
}
}
}
bytes += 4;
addridx += 4;
}
return ret;
}
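`do_rom_fast_write_words()` does not trust a single write: it polls the readback until it matches and reissues the write a few times before giving up. The pattern can be sketched in isolation like this (a user-space mock, not the driver; the fault-injecting flash below is invented for the example):

```c
#include <assert.h>

/* Mock flash that can be told to silently drop the next write,
 * simulating a program operation that did not stick. */
static unsigned flash[16];
static int drop_next_write;

static void write_word(int addr, unsigned data)
{
	if (drop_next_write) {
		drop_next_write = 0;
		return;		/* write lost: readback will not match */
	}
	flash[addr / 4] = data;
}

static unsigned read_word(int addr)
{
	return flash[addr / 4];
}

/* Write, verify by readback, and retry a bounded number of times.
 * Returns 0 once the readback matches, -1 if every attempt failed. */
static int write_verified(int addr, unsigned data, int max_attempts)
{
	int attempt;

	for (attempt = 0; attempt < max_attempts; attempt++) {
		write_word(addr, data);
		if (read_word(addr) == data)
			return 0;
	}
	return -1;
}
```

The driver's version additionally spins up to `rom_write_timeout` polls between retries, because on real flash the readback lags the write rather than failing instantly.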
int netxen_rom_fast_write_words(struct netxen_adapter *adapter, int addr,
u8 *bytes, size_t size)
{
int ret = 0;
ret = rom_lock(adapter);
if (ret < 0)
return ret;
ret = do_rom_fast_write_words(adapter, addr, bytes, size);
netxen_rom_unlock(adapter);
return ret;
}
int netxen_rom_wrsr(struct netxen_adapter *adapter, int data)
{
int ret;
ret = netxen_rom_wren(adapter);
if (ret < 0)
return ret;
netxen_crb_writelit_adapter(adapter, NETXEN_ROMUSB_ROM_WDATA, data);
netxen_crb_writelit_adapter(adapter,
NETXEN_ROMUSB_ROM_INSTR_OPCODE, 0x1);
ret = netxen_wait_rom_done(adapter);
if (ret < 0)
return ret;
return netxen_rom_wip_poll(adapter);
}
int netxen_rom_rdsr(struct netxen_adapter *adapter)
{
int ret;
ret = rom_lock(adapter);
if (ret < 0)
return ret;
ret = netxen_do_rom_rdsr(adapter);
netxen_rom_unlock(adapter);
return ret;
}
int netxen_backup_crbinit(struct netxen_adapter *adapter)
{
int ret = FLASH_SUCCESS;
int val;
char *buffer = kmalloc(FLASH_SECTOR_SIZE, GFP_KERNEL);
if (!buffer)
return -ENOMEM;
/* unlock sector 63 */
val = netxen_rom_rdsr(adapter);
val = val & 0xe3;
ret = netxen_rom_wrsr(adapter, val);
if (ret != FLASH_SUCCESS)
goto out_kfree;
ret = netxen_rom_wip_poll(adapter);
if (ret != FLASH_SUCCESS)
goto out_kfree;
/* copy sector 0 to sector 63 */
ret = netxen_rom_fast_read_words(adapter, CRBINIT_START,
buffer, FLASH_SECTOR_SIZE);
if (ret != FLASH_SUCCESS)
goto out_kfree;
ret = netxen_rom_fast_write_words(adapter, FIXED_START,
buffer, FLASH_SECTOR_SIZE);
if (ret != FLASH_SUCCESS)
goto out_kfree;
/* lock sector 63 */
val = netxen_rom_rdsr(adapter);
if (!(val & 0x8)) {
val |= (0x1 << 2);
/* lock sector 63 */
if (netxen_rom_wrsr(adapter, val) == 0) {
ret = netxen_rom_wip_poll(adapter);
if (ret != FLASH_SUCCESS)
goto out_kfree;
/* lock SR writes */
ret = netxen_rom_wip_poll(adapter);
if (ret != FLASH_SUCCESS)
goto out_kfree;
}
}
out_kfree:
kfree(buffer);
return ret;
}
int netxen_do_rom_se(struct netxen_adapter *adapter, int addr)
{
	netxen_rom_wren(adapter);
@@ -457,6 +638,27 @@ int netxen_do_rom_se(struct netxen_adapter *adapter, int addr)
	return netxen_rom_wip_poll(adapter);
}
void check_erased_flash(struct netxen_adapter *adapter, int addr)
{
int i;
int val;
int count = 0, erased_errors = 0;
int range;
range = (addr == USER_START) ? FIXED_START : addr + FLASH_SECTOR_SIZE;
for (i = addr; i < range; i += 4) {
netxen_rom_fast_read(adapter, i, &val);
if (val != 0xffffffff)
erased_errors++;
count++;
}
if (erased_errors)
printk(KERN_INFO "0x%x out of 0x%x words fail to be erased "
"for sector address: %x\n", erased_errors, count, addr);
}
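`check_erased_flash()` relies on the property that an erased NOR flash word reads back as all ones, so any word other than `0xffffffff` counts as an erase failure. The core check, extracted as a self-contained sketch (names invented; not the driver function itself):

```c
#include <assert.h>

/* Count words in a sector image that failed to erase, i.e. anything
 * that does not read back as the all-ones erased state. */
static int count_erase_errors(const unsigned *words, int nwords)
{
	int i, errors = 0;

	for (i = 0; i < nwords; i++)
		if (words[i] != 0xffffffffu)
			errors++;
	return errors;
}
```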
int netxen_rom_se(struct netxen_adapter *adapter, int addr)
{
	int ret = 0;
@@ -465,6 +667,68 @@ int netxen_rom_se(struct netxen_adapter *adapter, int addr)
	}
	ret = netxen_do_rom_se(adapter, addr);
	netxen_rom_unlock(adapter);
msleep(30);
check_erased_flash(adapter, addr);
return ret;
}
int
netxen_flash_erase_sections(struct netxen_adapter *adapter, int start, int end)
{
int ret = FLASH_SUCCESS;
int i;
for (i = start; i < end; i++) {
ret = netxen_rom_se(adapter, i * FLASH_SECTOR_SIZE);
if (ret)
break;
ret = netxen_rom_wip_poll(adapter);
if (ret < 0)
return ret;
}
return ret;
}
int
netxen_flash_erase_secondary(struct netxen_adapter *adapter)
{
int ret = FLASH_SUCCESS;
int start, end;
start = SECONDARY_START / FLASH_SECTOR_SIZE;
end = USER_START / FLASH_SECTOR_SIZE;
ret = netxen_flash_erase_sections(adapter, start, end);
return ret;
}
int
netxen_flash_erase_primary(struct netxen_adapter *adapter)
{
int ret = FLASH_SUCCESS;
int start, end;
start = PRIMARY_START / FLASH_SECTOR_SIZE;
end = SECONDARY_START / FLASH_SECTOR_SIZE;
ret = netxen_flash_erase_sections(adapter, start, end);
return ret;
}
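The two erase helpers above convert a byte range into sector indices by dividing the region boundaries by `FLASH_SECTOR_SIZE`, then erase each sector at `i * FLASH_SECTOR_SIZE`. The arithmetic in isolation (a user-space sketch with a mock erase and an assumed 64 KB sector size, not the driver's real constants):

```c
#include <assert.h>

#define MOCK_SECTOR_SIZE 0x10000	/* assumed 64 KB sectors */

/* Track which sectors have been "erased" by index. */
static int erased[64];

static int erase_sector(int addr)
{
	erased[addr / MOCK_SECTOR_SIZE] = 1;
	return 0;
}

/* Erase every whole sector in [start_addr, end_addr): divide the byte
 * addresses down to sector indices, then erase sector by sector,
 * bailing out on the first failure as netxen_flash_erase_sections() does. */
static int erase_region(int start_addr, int end_addr)
{
	int i;
	int start = start_addr / MOCK_SECTOR_SIZE;
	int end = end_addr / MOCK_SECTOR_SIZE;

	for (i = start; i < end; i++)
		if (erase_sector(i * MOCK_SECTOR_SIZE))
			return -1;
	return 0;
}
```

Note the half-open range: the sector containing `end_addr` itself is left untouched, which is why the driver can erase the secondary region up to `USER_START` without clobbering the user sector.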
int netxen_flash_unlock(struct netxen_adapter *adapter)
{
int ret = 0;
ret = netxen_rom_wrsr(adapter, 0);
if (ret < 0)
return ret;
ret = netxen_rom_wren(adapter);
if (ret < 0)
return ret;
	return ret;
}
@@ -543,9 +807,13 @@ int netxen_pinit_from_rom(struct netxen_adapter *adapter, int verbose)
	}
	for (i = 0; i < n; i++) {

-		off =
-		    netxen_decode_crb_addr((unsigned long)buf[i].addr) +
-		    NETXEN_PCI_CRBSPACE;
+		off = netxen_decode_crb_addr((unsigned long)buf[i].addr);
+		if (off == NETXEN_ADDR_ERROR) {
+			printk(KERN_ERR"CRB init value out of range %lx\n",
+				buf[i].addr);
+			continue;
+		}
+		off += NETXEN_PCI_CRBSPACE;
		/* skipping cold reboot MAGIC */
		if (off == NETXEN_CAM_RAM(0x1fc))
			continue;
@@ -662,6 +930,7 @@ void netxen_phantom_init(struct netxen_adapter *adapter, int pegtune_val)
	int loops = 0;

	if (!pegtune_val) {
+		val = readl(NETXEN_CRB_NORMALIZE(adapter, CRB_CMDPEG_STATE));
		while (val != PHAN_INITIALIZE_COMPLETE && loops < 200000) {
			udelay(100);
			schedule();


@@ -1,666 +0,0 @@
/*
*
* Copyright (c) 1999-2000 Grant Erickson <grant@lcse.umn.edu>
*
* Module name: oaknet.c
*
* Description:
* Driver for the National Semiconductor DP83902AV Ethernet controller
* on-board the IBM PowerPC "Oak" evaluation board. Adapted from the
* various other 8390 drivers written by Donald Becker and Paul Gortmaker.
*
* Additional inspiration from the "tcd8390.c" driver from TiVo, Inc.
* and "enetLib.c" from IBM.
*
*/
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <asm/board.h>
#include <asm/io.h>
#include "8390.h"
/* Preprocessor Defines */
#if !defined(TRUE) || TRUE != 1
#define TRUE 1
#endif
#if !defined(FALSE) || FALSE != 0
#define FALSE 0
#endif
#define OAKNET_START_PG 0x20 /* First page of TX buffer */
#define OAKNET_STOP_PG 0x40 /* Last page +1 of RX ring */
#define OAKNET_WAIT (2 * HZ / 100) /* 20 ms */
/* Experimenting with some fixes for a broken driver... */
#define OAKNET_DISINT
#define OAKNET_HEADCHECK
#define OAKNET_RWFIX
/* Global Variables */
static const char *name = "National DP83902AV";
static struct net_device *oaknet_devs;
/* Function Prototypes */
static int oaknet_open(struct net_device *dev);
static int oaknet_close(struct net_device *dev);
static void oaknet_reset_8390(struct net_device *dev);
static void oaknet_get_8390_hdr(struct net_device *dev,
struct e8390_pkt_hdr *hdr, int ring_page);
static void oaknet_block_input(struct net_device *dev, int count,
struct sk_buff *skb, int ring_offset);
static void oaknet_block_output(struct net_device *dev, int count,
const unsigned char *buf, int start_page);
static void oaknet_dma_error(struct net_device *dev, const char *name);
/*
* int oaknet_init()
*
* Description:
* This routine performs all the necessary platform-specific initiali-
* zation and set-up for the IBM "Oak" evaluation board's National
* Semiconductor DP83902AV "ST-NIC" Ethernet controller.
*
* Input(s):
* N/A
*
* Output(s):
* N/A
*
* Returns:
* 0 if OK, otherwise system error number on error.
*
*/
static int __init oaknet_init(void)
{
register int i;
int reg0, regd;
int ret = -ENOMEM;
struct net_device *dev;
#if 0
unsigned long ioaddr = OAKNET_IO_BASE;
#else
unsigned long ioaddr = ioremap(OAKNET_IO_BASE, OAKNET_IO_SIZE);
#endif
bd_t *bip = (bd_t *)__res;
if (!ioaddr)
return -ENOMEM;
dev = alloc_ei_netdev();
if (!dev)
goto out_unmap;
ret = -EBUSY;
if (!request_region(OAKNET_IO_BASE, OAKNET_IO_SIZE, name))
goto out_dev;
/* Quick register check to see if the device is really there. */
ret = -ENODEV;
if ((reg0 = ei_ibp(ioaddr)) == 0xFF)
goto out_region;
/*
* That worked. Now a more thorough check, using the multicast
* address registers, that the device is definitely out there
* and semi-functional.
*/
ei_obp(E8390_NODMA + E8390_PAGE1 + E8390_STOP, ioaddr + E8390_CMD);
regd = ei_ibp(ioaddr + 0x0D);
ei_obp(0xFF, ioaddr + 0x0D);
ei_obp(E8390_NODMA + E8390_PAGE0, ioaddr + E8390_CMD);
ei_ibp(ioaddr + EN0_COUNTER0);
/* It's no good. Fix things back up and leave. */
ret = -ENODEV;
if (ei_ibp(ioaddr + EN0_COUNTER0) != 0) {
ei_obp(reg0, ioaddr);
ei_obp(regd, ioaddr + 0x0D);
goto out_region;
}
SET_MODULE_OWNER(dev);
/*
* This controller is on an embedded board, so the base address
* and interrupt assignments are pre-assigned and unchangeable.
*/
dev->base_addr = ioaddr;
dev->irq = OAKNET_INT;
/*
* Disable all chip interrupts for now and ACK all pending
* interrupts.
*/
ei_obp(0x0, ioaddr + EN0_IMR);
ei_obp(0xFF, ioaddr + EN0_ISR);
/* Attempt to get the interrupt line */
ret = -EAGAIN;
if (request_irq(dev->irq, ei_interrupt, 0, name, dev)) {
printk("%s: unable to request interrupt %d.\n",
name, dev->irq);
goto out_region;
}
/* Tell the world about what and where we've found. */
printk("%s: %s at", dev->name, name);
for (i = 0; i < ETHER_ADDR_LEN; ++i) {
dev->dev_addr[i] = bip->bi_enetaddr[i];
printk("%c%.2x", (i ? ':' : ' '), dev->dev_addr[i]);
}
printk(", found at %#lx, using IRQ %d.\n", dev->base_addr, dev->irq);
/* Set up some required driver fields and then we're done. */
ei_status.name = name;
ei_status.word16 = FALSE;
ei_status.tx_start_page = OAKNET_START_PG;
ei_status.rx_start_page = OAKNET_START_PG + TX_PAGES;
ei_status.stop_page = OAKNET_STOP_PG;
ei_status.reset_8390 = &oaknet_reset_8390;
ei_status.block_input = &oaknet_block_input;
ei_status.block_output = &oaknet_block_output;
ei_status.get_8390_hdr = &oaknet_get_8390_hdr;
dev->open = oaknet_open;
dev->stop = oaknet_close;
#ifdef CONFIG_NET_POLL_CONTROLLER
dev->poll_controller = ei_poll;
#endif
NS8390_init(dev, FALSE);
ret = register_netdev(dev);
if (ret)
goto out_irq;
oaknet_devs = dev;
return 0;
out_irq:
free_irq(dev->irq, dev);
out_region:
release_region(OAKNET_IO_BASE, OAKNET_IO_SIZE);
out_dev:
free_netdev(dev);
out_unmap:
iounmap(ioaddr);
return ret;
}
/*
* static int oaknet_open()
*
* Description:
* This routine is a modest wrapper around ei_open, the 8390-generic,
* driver open routine. This just increments the module usage count
* and passes along the status from ei_open.
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
*
* Output(s):
* *dev - Pointer to the device structure for this driver, potentially
* modified by ei_open.
*
* Returns:
* 0 if OK, otherwise < 0 on error.
*
*/
static int
oaknet_open(struct net_device *dev)
{
int status = ei_open(dev);
return (status);
}
/*
* static int oaknet_close()
*
* Description:
* This routine is a modest wrapper around ei_close, the 8390-generic,
* driver close routine. This just decrements the module usage count
* and passes along the status from ei_close.
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
*
* Output(s):
* *dev - Pointer to the device structure for this driver, potentially
* modified by ei_close.
*
* Returns:
* 0 if OK, otherwise < 0 on error.
*
*/
static int
oaknet_close(struct net_device *dev)
{
int status = ei_close(dev);
return (status);
}
/*
* static void oaknet_reset_8390()
*
* Description:
* This routine resets the DP83902 chip.
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
*
* Output(s):
* N/A
*
* Returns:
* N/A
*
*/
static void
oaknet_reset_8390(struct net_device *dev)
{
int base = E8390_BASE;
/*
* We have no provision of resetting the controller as is done
* in other drivers, such as "ne.c". However, the following
* seems to work well enough in the TiVo driver.
*/
printk("Resetting %s...\n", dev->name);
ei_obp(E8390_STOP | E8390_NODMA | E8390_PAGE0, base + E8390_CMD);
ei_status.txing = 0;
ei_status.dmaing = 0;
}
/*
* static void oaknet_get_8390_hdr()
*
* Description:
* This routine grabs the 8390-specific header. It's similar to the
* block input routine, but we don't need to be concerned with ring wrap
* as the header will be at the start of a page, so we optimize accordingly.
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
* *hdr - Pointer to storage for the 8390-specific packet header.
* ring_page - ?
*
* Output(s):
* *hdr - Pointer to the 8390-specific packet header for the just-
* received frame.
*
* Returns:
* N/A
*
*/
static void
oaknet_get_8390_hdr(struct net_device *dev, struct e8390_pkt_hdr *hdr,
int ring_page)
{
int base = dev->base_addr;
/*
* This should NOT happen. If it does, it is the LAST thing you'll
* see.
*/
if (ei_status.dmaing) {
oaknet_dma_error(dev, "oaknet_get_8390_hdr");
return;
}
ei_status.dmaing |= 0x01;
outb_p(E8390_NODMA + E8390_PAGE0 + E8390_START, base + OAKNET_CMD);
outb_p(sizeof(struct e8390_pkt_hdr), base + EN0_RCNTLO);
outb_p(0, base + EN0_RCNTHI);
outb_p(0, base + EN0_RSARLO); /* On page boundary */
outb_p(ring_page, base + EN0_RSARHI);
outb_p(E8390_RREAD + E8390_START, base + OAKNET_CMD);
if (ei_status.word16)
insw(base + OAKNET_DATA, hdr,
sizeof(struct e8390_pkt_hdr) >> 1);
else
insb(base + OAKNET_DATA, hdr,
sizeof(struct e8390_pkt_hdr));
/* Byte-swap the packet byte count */
hdr->count = le16_to_cpu(hdr->count);
outb_p(ENISR_RDC, base + EN0_ISR); /* ACK Remote DMA interrupt */
ei_status.dmaing &= ~0x01;
}
/*
* XXX - Document me.
*/
static void
oaknet_block_input(struct net_device *dev, int count, struct sk_buff *skb,
int ring_offset)
{
int base = OAKNET_BASE;
char *buf = skb->data;
/*
* This should NOT happen. If it does, it is the LAST thing you'll
* see.
*/
if (ei_status.dmaing) {
oaknet_dma_error(dev, "oaknet_block_input");
return;
}
#ifdef OAKNET_DISINT
save_flags(flags);
cli();
#endif
ei_status.dmaing |= 0x01;
ei_obp(E8390_NODMA + E8390_PAGE0 + E8390_START, base + E8390_CMD);
ei_obp(count & 0xff, base + EN0_RCNTLO);
ei_obp(count >> 8, base + EN0_RCNTHI);
ei_obp(ring_offset & 0xff, base + EN0_RSARLO);
ei_obp(ring_offset >> 8, base + EN0_RSARHI);
ei_obp(E8390_RREAD + E8390_START, base + E8390_CMD);
if (ei_status.word16) {
ei_isw(base + E8390_DATA, buf, count >> 1);
if (count & 0x01) {
buf[count - 1] = ei_ib(base + E8390_DATA);
#ifdef OAKNET_HEADCHECK
bytes++;
#endif
}
} else {
ei_isb(base + E8390_DATA, buf, count);
}
#ifdef OAKNET_HEADCHECK
/*
* This was for the ALPHA version only, but enough people have
* been encountering problems so it is still here. If you see
* this message you either 1) have a slightly incompatible clone
* or 2) have noise/speed problems with your bus.
*/
/* DMA termination address check... */
{
int addr, tries = 20;
do {
/* DON'T check for 'ei_ibp(EN0_ISR) & ENISR_RDC' here
-- it's broken for Rx on some cards! */
int high = ei_ibp(base + EN0_RSARHI);
int low = ei_ibp(base + EN0_RSARLO);
addr = (high << 8) + low;
if (((ring_offset + bytes) & 0xff) == low)
break;
} while (--tries > 0);
if (tries <= 0)
printk("%s: RX transfer address mismatch,"
"%#4.4x (expected) vs. %#4.4x (actual).\n",
dev->name, ring_offset + bytes, addr);
}
#endif
ei_obp(ENISR_RDC, base + EN0_ISR); /* ACK Remote DMA interrupt */
ei_status.dmaing &= ~0x01;
#ifdef OAKNET_DISINT
restore_flags(flags);
#endif
}
/*
* static void oaknet_block_output()
*
* Description:
* This routine...
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
* count - Number of bytes to be transferred.
* *buf -
* start_page -
*
* Output(s):
* N/A
*
* Returns:
* N/A
*
*/
static void
oaknet_block_output(struct net_device *dev, int count,
const unsigned char *buf, int start_page)
{
int base = E8390_BASE;
#if 0
int bug;
#endif
unsigned long start;
#ifdef OAKNET_DISINT
unsigned long flags;
#endif
#ifdef OAKNET_HEADCHECK
int retries = 0;
#endif
/* Round the count up for word writes. */
if (ei_status.word16 && (count & 0x1))
count++;
/*
* This should NOT happen. If it does, it is the LAST thing you'll
* see.
*/
if (ei_status.dmaing) {
oaknet_dma_error(dev, "oaknet_block_output");
return;
}
#ifdef OAKNET_DISINT
save_flags(flags);
cli();
#endif
ei_status.dmaing |= 0x01;
/* Make sure we are in page 0. */
ei_obp(E8390_PAGE0 + E8390_START + E8390_NODMA, base + E8390_CMD);
#ifdef OAKNET_HEADCHECK
retry:
#endif
#if 0
/*
* The 83902 documentation states that the processor needs to
* do a "dummy read" before doing the remote write to work
* around a chip bug they don't feel like fixing.
*/
bug = 0;
while (1) {
unsigned int rdhi;
unsigned int rdlo;
/* Now the normal output. */
ei_obp(ENISR_RDC, base + EN0_ISR);
ei_obp(count & 0xff, base + EN0_RCNTLO);
ei_obp(count >> 8, base + EN0_RCNTHI);
ei_obp(0x00, base + EN0_RSARLO);
ei_obp(start_page, base + EN0_RSARHI);
if (bug++)
break;
/* Perform the dummy read */
rdhi = ei_ibp(base + EN0_CRDAHI);
rdlo = ei_ibp(base + EN0_CRDALO);
ei_obp(E8390_RREAD + E8390_START, base + E8390_CMD);
while (1) {
unsigned int nrdhi;
unsigned int nrdlo;
nrdhi = ei_ibp(base + EN0_CRDAHI);
nrdlo = ei_ibp(base + EN0_CRDALO);
if ((rdhi != nrdhi) || (rdlo != nrdlo))
break;
}
}
#else
#ifdef OAKNET_RWFIX
/*
* Handle the read-before-write bug the same way as the
* Crynwr packet driver -- the Nat'l Semi. method doesn't work.
* Actually this doesn't always work either, but if you have
* problems with your 83902 this is better than nothing!
*/
ei_obp(0x42, base + EN0_RCNTLO);
ei_obp(0x00, base + EN0_RCNTHI);
ei_obp(0x42, base + EN0_RSARLO);
ei_obp(0x00, base + EN0_RSARHI);
ei_obp(E8390_RREAD + E8390_START, base + E8390_CMD);
/* Make certain that the dummy read has occurred. */
udelay(6);
#endif
ei_obp(ENISR_RDC, base + EN0_ISR);
/* Now the normal output. */
ei_obp(count & 0xff, base + EN0_RCNTLO);
ei_obp(count >> 8, base + EN0_RCNTHI);
ei_obp(0x00, base + EN0_RSARLO);
ei_obp(start_page, base + EN0_RSARHI);
#endif /* 0/1 */
ei_obp(E8390_RWRITE + E8390_START, base + E8390_CMD);
if (ei_status.word16) {
ei_osw(E8390_BASE + E8390_DATA, buf, count >> 1);
} else {
ei_osb(E8390_BASE + E8390_DATA, buf, count);
}
#ifdef OAKNET_DISINT
restore_flags(flags);
#endif
start = jiffies;
#ifdef OAKNET_HEADCHECK
/*
* This was for the ALPHA version only, but enough people have
* been encountering problems so it is still here.
*/
{
/* DMA termination address check... */
int addr, tries = 20;
do {
int high = ei_ibp(base + EN0_RSARHI);
int low = ei_ibp(base + EN0_RSARLO);
addr = (high << 8) + low;
if ((start_page << 8) + count == addr)
break;
} while (--tries > 0);
if (tries <= 0) {
printk("%s: Tx packet transfer address mismatch,"
"%#4.4x (expected) vs. %#4.4x (actual).\n",
dev->name, (start_page << 8) + count, addr);
if (retries++ == 0)
goto retry;
}
}
#endif
while ((ei_ibp(base + EN0_ISR) & ENISR_RDC) == 0) {
if (time_after(jiffies, start + OAKNET_WAIT)) {
printk("%s: timeout waiting for Tx RDC.\n", dev->name);
oaknet_reset_8390(dev);
NS8390_init(dev, TRUE);
break;
}
}
ei_obp(ENISR_RDC, base + EN0_ISR); /* Ack intr. */
ei_status.dmaing &= ~0x01;
}
/*
* static void oaknet_dma_error()
*
* Description:
* This routine prints out a last-ditch informative message to the console
* indicating that a DMA error occurred. If you see this, it's the last
* thing you'll see.
*
* Input(s):
* *dev - Pointer to the device structure for this driver.
* *name - Informative text (e.g. function name) indicating where the
* DMA error occurred.
*
* Output(s):
* N/A
*
* Returns:
* N/A
*
*/
static void
oaknet_dma_error(struct net_device *dev, const char *name)
{
printk(KERN_EMERG "%s: DMAing conflict in %s."
"[DMAstat:%d][irqlock:%d][intr:%ld]\n",
dev->name, name, ei_status.dmaing, ei_status.irqlock,
dev->interrupt);
}
/*
* Oak Ethernet module unload interface.
*/
static void __exit oaknet_cleanup_module (void)
{
/* Convert to loop once driver supports multiple devices. */
unregister_netdev(oaknet_devs);
free_irq(oaknet_devs->irq, oaknet_devs);
release_region(oaknet_devs->base_addr, OAKNET_IO_SIZE);
iounmap(ioaddr);
free_netdev(oaknet_devs);
}
module_init(oaknet_init);
module_exit(oaknet_cleanup_module);
MODULE_LICENSE("GPL");

1019	drivers/net/pasemi_mac.c (new file; diff suppressed because it is too large)

460	drivers/net/pasemi_mac.h (new file)
@@ -0,0 +1,460 @@
/*
* Copyright (C) 2006 PA Semi, Inc
*
* Driver for the PA6T-1682M onchip 1G/10G Ethernet MACs, soft state and
* hardware register layouts.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef PASEMI_MAC_H
#define PASEMI_MAC_H
#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
struct pasemi_mac_txring {
spinlock_t lock;
struct pas_dma_xct_descr *desc;
dma_addr_t dma;
unsigned int size;
unsigned int next_to_use;
unsigned int next_to_clean;
struct pasemi_mac_buffer *desc_info;
char irq_name[10]; /* "eth%d tx" */
};
struct pasemi_mac_rxring {
spinlock_t lock;
struct pas_dma_xct_descr *desc; /* RX channel descriptor ring */
dma_addr_t dma;
u64 *buffers; /* RX interface buffer ring */
dma_addr_t buf_dma;
unsigned int size;
unsigned int next_to_fill;
unsigned int next_to_clean;
struct pasemi_mac_buffer *desc_info;
char irq_name[10]; /* "eth%d rx" */
};
struct pasemi_mac {
struct net_device *netdev;
struct pci_dev *pdev;
struct pci_dev *dma_pdev;
struct pci_dev *iob_pdev;
struct net_device_stats stats;
/* Pointer to the cacheable per-channel status registers */
u64 *rx_status;
u64 *tx_status;
u8 type;
#define MAC_TYPE_GMAC 1
#define MAC_TYPE_XAUI 2
u32 dma_txch;
u32 dma_if;
u32 dma_rxch;
u8 mac_addr[6];
struct timer_list rxtimer;
struct pasemi_mac_txring *tx;
struct pasemi_mac_rxring *rx;
};
/* Software status descriptor (desc_info) */
struct pasemi_mac_buffer {
struct sk_buff *skb;
dma_addr_t dma;
};
/* status register layout in IOB region, at 0xfb800000 */
struct pasdma_status {
u64 rx_sta[64];
u64 tx_sta[20];
};
/* descriptor structure */
struct pas_dma_xct_descr {
union {
u64 mactx;
u64 macrx;
};
union {
u64 ptr;
u64 rxb;
};
};
/* MAC CFG register offsets */
enum {
PAS_MAC_CFG_PCFG = 0x80,
PAS_MAC_CFG_TXP = 0x98,
PAS_MAC_IPC_CHNL = 0x208,
};
/* MAC CFG register fields */
#define PAS_MAC_CFG_PCFG_PE 0x80000000
#define PAS_MAC_CFG_PCFG_CE 0x40000000
#define PAS_MAC_CFG_PCFG_BU 0x20000000
#define PAS_MAC_CFG_PCFG_TT 0x10000000
#define PAS_MAC_CFG_PCFG_TSR_M 0x0c000000
#define PAS_MAC_CFG_PCFG_TSR_10M 0x00000000
#define PAS_MAC_CFG_PCFG_TSR_100M 0x04000000
#define PAS_MAC_CFG_PCFG_TSR_1G 0x08000000
#define PAS_MAC_CFG_PCFG_TSR_10G 0x0c000000
#define PAS_MAC_CFG_PCFG_T24 0x02000000
#define PAS_MAC_CFG_PCFG_PR 0x01000000
#define PAS_MAC_CFG_PCFG_CRO_M 0x00ff0000
#define PAS_MAC_CFG_PCFG_CRO_S 16
#define PAS_MAC_CFG_PCFG_IPO_M 0x0000ff00
#define PAS_MAC_CFG_PCFG_IPO_S 8
#define PAS_MAC_CFG_PCFG_S1 0x00000080
#define PAS_MAC_CFG_PCFG_IO_M 0x00000060
#define PAS_MAC_CFG_PCFG_IO_MAC 0x00000000
#define PAS_MAC_CFG_PCFG_IO_OFF 0x00000020
#define PAS_MAC_CFG_PCFG_IO_IND_ETH 0x00000040
#define PAS_MAC_CFG_PCFG_IO_IND_IP 0x00000060
#define PAS_MAC_CFG_PCFG_LP 0x00000010
#define PAS_MAC_CFG_PCFG_TS 0x00000008
#define PAS_MAC_CFG_PCFG_HD 0x00000004
#define PAS_MAC_CFG_PCFG_SPD_M 0x00000003
#define PAS_MAC_CFG_PCFG_SPD_10M 0x00000000
#define PAS_MAC_CFG_PCFG_SPD_100M 0x00000001
#define PAS_MAC_CFG_PCFG_SPD_1G 0x00000002
#define PAS_MAC_CFG_PCFG_SPD_10G 0x00000003
#define PAS_MAC_CFG_TXP_FCF 0x01000000
#define PAS_MAC_CFG_TXP_FCE 0x00800000
#define PAS_MAC_CFG_TXP_FC 0x00400000
#define PAS_MAC_CFG_TXP_FPC_M 0x00300000
#define PAS_MAC_CFG_TXP_FPC_S 20
#define PAS_MAC_CFG_TXP_FPC(x) (((x) << PAS_MAC_CFG_TXP_FPC_S) & \
PAS_MAC_CFG_TXP_FPC_M)
#define PAS_MAC_CFG_TXP_RT 0x00080000
#define PAS_MAC_CFG_TXP_BL 0x00040000
#define PAS_MAC_CFG_TXP_SL_M 0x00030000
#define PAS_MAC_CFG_TXP_SL_S 16
#define PAS_MAC_CFG_TXP_SL(x) (((x) << PAS_MAC_CFG_TXP_SL_S) & \
PAS_MAC_CFG_TXP_SL_M)
#define PAS_MAC_CFG_TXP_COB_M 0x0000f000
#define PAS_MAC_CFG_TXP_COB_S 12
#define PAS_MAC_CFG_TXP_COB(x) (((x) << PAS_MAC_CFG_TXP_COB_S) & \
PAS_MAC_CFG_TXP_COB_M)
#define PAS_MAC_CFG_TXP_TIFT_M 0x00000f00
#define PAS_MAC_CFG_TXP_TIFT_S 8
#define PAS_MAC_CFG_TXP_TIFT(x) (((x) << PAS_MAC_CFG_TXP_TIFT_S) & \
PAS_MAC_CFG_TXP_TIFT_M)
#define PAS_MAC_CFG_TXP_TIFG_M 0x000000ff
#define PAS_MAC_CFG_TXP_TIFG_S 0
#define PAS_MAC_CFG_TXP_TIFG(x) (((x) << PAS_MAC_CFG_TXP_TIFG_S) & \
PAS_MAC_CFG_TXP_TIFG_M)
#define PAS_MAC_IPC_CHNL_DCHNO_M 0x003f0000
#define PAS_MAC_IPC_CHNL_DCHNO_S 16
#define PAS_MAC_IPC_CHNL_DCHNO(x) (((x) << PAS_MAC_IPC_CHNL_DCHNO_S) & \
PAS_MAC_IPC_CHNL_DCHNO_M)
#define PAS_MAC_IPC_CHNL_BCH_M 0x0000003f
#define PAS_MAC_IPC_CHNL_BCH_S 0
#define PAS_MAC_IPC_CHNL_BCH(x) (((x) << PAS_MAC_IPC_CHNL_BCH_S) & \
PAS_MAC_IPC_CHNL_BCH_M)
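The register field macros above all follow the same mask/shift idiom: `_M` is the field mask, `_S` the bit shift, and `FIELD(x)` shifts a value into position and masks it so out-of-range values cannot spill into neighbouring fields. A generic toy version of the idiom (the `FIELD*` names are invented; `FIELD_GET` is an extra inverse not present in this header):

```c
#include <assert.h>

/* A 6-bit field occupying bits [21:16], mirroring e.g. CHNL_DCHNO. */
#define FIELD_M 0x003f0000u
#define FIELD_S 16

/* Place a value into the field; masking clamps out-of-range inputs. */
#define FIELD(x) (((unsigned)(x) << FIELD_S) & FIELD_M)

/* The inverse: extract the field from a full register value. */
#define FIELD_GET(v) (((v) & FIELD_M) >> FIELD_S)
```

Defining the mask and shift separately, and deriving the setter from both, keeps every field self-describing and lets several fields be OR-ed together into one register write.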
/* All these registers live in the PCI configuration space for the DMA PCI
* device. Use the normal PCI config access functions for them.
*/
enum {
PAS_DMA_COM_TXCMD = 0x100, /* Transmit Command Register */
PAS_DMA_COM_TXSTA = 0x104, /* Transmit Status Register */
PAS_DMA_COM_RXCMD = 0x108, /* Receive Command Register */
PAS_DMA_COM_RXSTA = 0x10c, /* Receive Status Register */
};
#define PAS_DMA_COM_TXCMD_EN 0x00000001 /* enable */
#define PAS_DMA_COM_TXSTA_ACT 0x00000001 /* active */
#define PAS_DMA_COM_RXCMD_EN 0x00000001 /* enable */
#define PAS_DMA_COM_RXSTA_ACT 0x00000001 /* active */
/* Per-interface and per-channel registers */
#define _PAS_DMA_RXINT_STRIDE 0x20
#define PAS_DMA_RXINT_RCMDSTA(i) (0x200+(i)*_PAS_DMA_RXINT_STRIDE)
#define PAS_DMA_RXINT_RCMDSTA_EN 0x00000001
#define PAS_DMA_RXINT_RCMDSTA_ST 0x00000002
#define PAS_DMA_RXINT_RCMDSTA_OO 0x00000100
#define PAS_DMA_RXINT_RCMDSTA_BP 0x00000200
#define PAS_DMA_RXINT_RCMDSTA_DR 0x00000400
#define PAS_DMA_RXINT_RCMDSTA_BT 0x00000800
#define PAS_DMA_RXINT_RCMDSTA_TB 0x00001000
#define PAS_DMA_RXINT_RCMDSTA_ACT 0x00010000
#define PAS_DMA_RXINT_RCMDSTA_DROPS_M 0xfffe0000
#define PAS_DMA_RXINT_RCMDSTA_DROPS_S 17
#define PAS_DMA_RXINT_INCR(i) (0x210+(i)*_PAS_DMA_RXINT_STRIDE)
#define PAS_DMA_RXINT_INCR_INCR_M 0x0000ffff
#define PAS_DMA_RXINT_INCR_INCR_S 0
#define PAS_DMA_RXINT_INCR_INCR(x) ((x) & 0x0000ffff)
#define PAS_DMA_RXINT_BASEL(i) (0x218+(i)*_PAS_DMA_RXINT_STRIDE)
#define PAS_DMA_RXINT_BASEL_BRBL(x) ((x) & ~0x3f)
#define PAS_DMA_RXINT_BASEU(i) (0x21c+(i)*_PAS_DMA_RXINT_STRIDE)
#define PAS_DMA_RXINT_BASEU_BRBH(x) ((x) & 0xfff)
#define PAS_DMA_RXINT_BASEU_SIZ_M 0x3fff0000 /* # of cache lines worth of buffer ring */
#define PAS_DMA_RXINT_BASEU_SIZ_S 16 /* 0 = 16K */
#define PAS_DMA_RXINT_BASEU_SIZ(x) (((x) << PAS_DMA_RXINT_BASEU_SIZ_S) & \
PAS_DMA_RXINT_BASEU_SIZ_M)
#define _PAS_DMA_TXCHAN_STRIDE 0x20 /* Size per channel */
#define _PAS_DMA_TXCHAN_TCMDSTA 0x300 /* Command / Status */
#define _PAS_DMA_TXCHAN_CFG 0x304 /* Configuration */
#define _PAS_DMA_TXCHAN_DSCRBU 0x308 /* Descriptor BU Allocation */
#define _PAS_DMA_TXCHAN_INCR 0x310 /* Descriptor increment */
#define _PAS_DMA_TXCHAN_CNT 0x314 /* Descriptor count/offset */
#define _PAS_DMA_TXCHAN_BASEL 0x318 /* Descriptor ring base (low) */
#define _PAS_DMA_TXCHAN_BASEU 0x31c /* (high) */
#define PAS_DMA_TXCHAN_TCMDSTA(c) (0x300+(c)*_PAS_DMA_TXCHAN_STRIDE)
#define PAS_DMA_TXCHAN_TCMDSTA_EN 0x00000001 /* Enabled */
#define PAS_DMA_TXCHAN_TCMDSTA_ST 0x00000002 /* Stop interface */
#define PAS_DMA_TXCHAN_TCMDSTA_ACT 0x00010000 /* Active */
#define PAS_DMA_TXCHAN_CFG(c) (0x304+(c)*_PAS_DMA_TXCHAN_STRIDE)
#define PAS_DMA_TXCHAN_CFG_TY_IFACE 0x00000000 /* Type = interface */
#define PAS_DMA_TXCHAN_CFG_TATTR_M 0x0000003c
#define PAS_DMA_TXCHAN_CFG_TATTR_S 2
#define PAS_DMA_TXCHAN_CFG_TATTR(x) (((x) << PAS_DMA_TXCHAN_CFG_TATTR_S) & \
PAS_DMA_TXCHAN_CFG_TATTR_M)
#define PAS_DMA_TXCHAN_CFG_WT_M 0x000001c0
#define PAS_DMA_TXCHAN_CFG_WT_S 6
#define PAS_DMA_TXCHAN_CFG_WT(x) (((x) << PAS_DMA_TXCHAN_CFG_WT_S) & \
PAS_DMA_TXCHAN_CFG_WT_M)
#define PAS_DMA_TXCHAN_CFG_CF 0x00001000 /* Clean first line */
#define PAS_DMA_TXCHAN_CFG_CL 0x00002000 /* Clean last line */
#define PAS_DMA_TXCHAN_CFG_UP 0x00004000 /* update tx descr when sent */
#define PAS_DMA_TXCHAN_INCR(c) (0x310+(c)*_PAS_DMA_TXCHAN_STRIDE)
#define PAS_DMA_TXCHAN_BASEL(c) (0x318+(c)*_PAS_DMA_TXCHAN_STRIDE)
#define PAS_DMA_TXCHAN_BASEL_BRBL_M 0xffffffc0
#define PAS_DMA_TXCHAN_BASEL_BRBL_S 0
#define PAS_DMA_TXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_TXCHAN_BASEL_BRBL_S) & \
PAS_DMA_TXCHAN_BASEL_BRBL_M)
#define PAS_DMA_TXCHAN_BASEU(c) (0x31c+(c)*_PAS_DMA_TXCHAN_STRIDE)
#define PAS_DMA_TXCHAN_BASEU_BRBH_M 0x00000fff
#define PAS_DMA_TXCHAN_BASEU_BRBH_S 0
#define PAS_DMA_TXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_TXCHAN_BASEU_BRBH_S) & \
PAS_DMA_TXCHAN_BASEU_BRBH_M)
/* # of cache lines worth of buffer ring */
#define PAS_DMA_TXCHAN_BASEU_SIZ_M 0x3fff0000
#define PAS_DMA_TXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
#define PAS_DMA_TXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_TXCHAN_BASEU_SIZ_S) & \
PAS_DMA_TXCHAN_BASEU_SIZ_M)
#define _PAS_DMA_RXCHAN_STRIDE 0x20 /* Size per channel */
#define _PAS_DMA_RXCHAN_CCMDSTA 0x800 /* Command / Status */
#define _PAS_DMA_RXCHAN_CFG 0x804 /* Configuration */
#define _PAS_DMA_RXCHAN_INCR 0x810 /* Descriptor increment */
#define _PAS_DMA_RXCHAN_CNT 0x814 /* Descriptor count/offset */
#define _PAS_DMA_RXCHAN_BASEL 0x818 /* Descriptor ring base (low) */
#define _PAS_DMA_RXCHAN_BASEU 0x81c /* (high) */
#define PAS_DMA_RXCHAN_CCMDSTA(c) (0x800+(c)*_PAS_DMA_RXCHAN_STRIDE)
#define PAS_DMA_RXCHAN_CCMDSTA_EN 0x00000001 /* Enabled */
#define PAS_DMA_RXCHAN_CCMDSTA_ST 0x00000002 /* Stop interface */
#define PAS_DMA_RXCHAN_CCMDSTA_ACT 0x00010000 /* Active */
#define PAS_DMA_RXCHAN_CCMDSTA_DU 0x00020000
#define PAS_DMA_RXCHAN_CFG(c) (0x804+(c)*_PAS_DMA_RXCHAN_STRIDE)
#define PAS_DMA_RXCHAN_CFG_HBU_M 0x00000380
#define PAS_DMA_RXCHAN_CFG_HBU_S 7
#define PAS_DMA_RXCHAN_CFG_HBU(x) (((x) << PAS_DMA_RXCHAN_CFG_HBU_S) & \
PAS_DMA_RXCHAN_CFG_HBU_M)
#define PAS_DMA_RXCHAN_INCR(c) (0x810+(c)*_PAS_DMA_RXCHAN_STRIDE)
#define PAS_DMA_RXCHAN_BASEL(c) (0x818+(c)*_PAS_DMA_RXCHAN_STRIDE)
#define PAS_DMA_RXCHAN_BASEL_BRBL_M 0xffffffc0
#define PAS_DMA_RXCHAN_BASEL_BRBL_S 0
#define PAS_DMA_RXCHAN_BASEL_BRBL(x) (((x) << PAS_DMA_RXCHAN_BASEL_BRBL_S) & \
PAS_DMA_RXCHAN_BASEL_BRBL_M)
#define PAS_DMA_RXCHAN_BASEU(c) (0x81c+(c)*_PAS_DMA_RXCHAN_STRIDE)
#define PAS_DMA_RXCHAN_BASEU_BRBH_M 0x00000fff
#define PAS_DMA_RXCHAN_BASEU_BRBH_S 0
#define PAS_DMA_RXCHAN_BASEU_BRBH(x) (((x) << PAS_DMA_RXCHAN_BASEU_BRBH_S) & \
PAS_DMA_RXCHAN_BASEU_BRBH_M)
/* # of cache lines worth of buffer ring */
#define PAS_DMA_RXCHAN_BASEU_SIZ_M 0x3fff0000
#define PAS_DMA_RXCHAN_BASEU_SIZ_S 16 /* 0 = 16K */
#define PAS_DMA_RXCHAN_BASEU_SIZ(x) (((x) << PAS_DMA_RXCHAN_BASEU_SIZ_S) & \
PAS_DMA_RXCHAN_BASEU_SIZ_M)
#define PAS_STATUS_PCNT_M 0x000000000000ffffull
#define PAS_STATUS_PCNT_S 0
#define PAS_STATUS_DCNT_M 0x00000000ffff0000ull
#define PAS_STATUS_DCNT_S 16
#define PAS_STATUS_BPCNT_M 0x0000ffff00000000ull
#define PAS_STATUS_BPCNT_S 32
#define PAS_STATUS_TIMER 0x1000000000000000ull
#define PAS_STATUS_ERROR 0x2000000000000000ull
#define PAS_STATUS_SOFT 0x4000000000000000ull
#define PAS_STATUS_INT 0x8000000000000000ull
#define PAS_IOB_DMA_RXCH_CFG(i) (0x1100 + (i)*4)
#define PAS_IOB_DMA_RXCH_CFG_CNTTH_M 0x00000fff
#define PAS_IOB_DMA_RXCH_CFG_CNTTH_S 0
#define PAS_IOB_DMA_RXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_RXCH_CFG_CNTTH_S) & \
PAS_IOB_DMA_RXCH_CFG_CNTTH_M)
#define PAS_IOB_DMA_TXCH_CFG(i) (0x1200 + (i)*4)
#define PAS_IOB_DMA_TXCH_CFG_CNTTH_M 0x00000fff
#define PAS_IOB_DMA_TXCH_CFG_CNTTH_S 0
#define PAS_IOB_DMA_TXCH_CFG_CNTTH(x) (((x) << PAS_IOB_DMA_TXCH_CFG_CNTTH_S) & \
PAS_IOB_DMA_TXCH_CFG_CNTTH_M)
#define PAS_IOB_DMA_RXCH_STAT(i) (0x1300 + (i)*4)
#define PAS_IOB_DMA_RXCH_STAT_INTGEN 0x00001000
#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_M 0x00000fff
#define PAS_IOB_DMA_RXCH_STAT_CNTDEL_S 0
#define PAS_IOB_DMA_RXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_RXCH_STAT_CNTDEL_S) &\
PAS_IOB_DMA_RXCH_STAT_CNTDEL_M)
#define PAS_IOB_DMA_TXCH_STAT(i) (0x1400 + (i)*4)
#define PAS_IOB_DMA_TXCH_STAT_INTGEN 0x00001000
#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_M 0x00000fff
#define PAS_IOB_DMA_TXCH_STAT_CNTDEL_S 0
#define PAS_IOB_DMA_TXCH_STAT_CNTDEL(x) (((x) << PAS_IOB_DMA_TXCH_STAT_CNTDEL_S) &\
PAS_IOB_DMA_TXCH_STAT_CNTDEL_M)
#define PAS_IOB_DMA_RXCH_RESET(i) (0x1500 + (i)*4)
#define PAS_IOB_DMA_RXCH_RESET_PCNT_M 0xffff0000
#define PAS_IOB_DMA_RXCH_RESET_PCNT_S 16
#define PAS_IOB_DMA_RXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_RXCH_RESET_PCNT_S) & \
PAS_IOB_DMA_RXCH_RESET_PCNT_M)
#define PAS_IOB_DMA_RXCH_RESET_PCNTRST 0x00000020
#define PAS_IOB_DMA_RXCH_RESET_DCNTRST 0x00000010
#define PAS_IOB_DMA_RXCH_RESET_TINTC 0x00000008
#define PAS_IOB_DMA_RXCH_RESET_DINTC 0x00000004
#define PAS_IOB_DMA_RXCH_RESET_SINTC 0x00000002
#define PAS_IOB_DMA_RXCH_RESET_PINTC 0x00000001
#define PAS_IOB_DMA_TXCH_RESET(i) (0x1600 + (i)*4)
#define PAS_IOB_DMA_TXCH_RESET_PCNT_M 0xffff0000
#define PAS_IOB_DMA_TXCH_RESET_PCNT_S 16
#define PAS_IOB_DMA_TXCH_RESET_PCNT(x) (((x) << PAS_IOB_DMA_TXCH_RESET_PCNT_S) & \
PAS_IOB_DMA_TXCH_RESET_PCNT_M)
#define PAS_IOB_DMA_TXCH_RESET_PCNTRST 0x00000020
#define PAS_IOB_DMA_TXCH_RESET_DCNTRST 0x00000010
#define PAS_IOB_DMA_TXCH_RESET_TINTC 0x00000008
#define PAS_IOB_DMA_TXCH_RESET_DINTC 0x00000004
#define PAS_IOB_DMA_TXCH_RESET_SINTC 0x00000002
#define PAS_IOB_DMA_TXCH_RESET_PINTC 0x00000001
#define PAS_IOB_DMA_COM_TIMEOUTCFG 0x1700
#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M 0x00ffffff
#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S 0
#define PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT(x) (((x) << PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_S) & \
PAS_IOB_DMA_COM_TIMEOUTCFG_TCNT_M)
/* Transmit descriptor fields */
#define XCT_MACTX_T 0x8000000000000000ull
#define XCT_MACTX_ST 0x4000000000000000ull
#define XCT_MACTX_NORES 0x0000000000000000ull
#define XCT_MACTX_8BRES 0x1000000000000000ull
#define XCT_MACTX_24BRES 0x2000000000000000ull
#define XCT_MACTX_40BRES 0x3000000000000000ull
#define XCT_MACTX_I 0x0800000000000000ull
#define XCT_MACTX_O 0x0400000000000000ull
#define XCT_MACTX_E 0x0200000000000000ull
#define XCT_MACTX_VLAN_M 0x0180000000000000ull
#define XCT_MACTX_VLAN_NOP 0x0000000000000000ull
#define XCT_MACTX_VLAN_REMOVE 0x0080000000000000ull
#define XCT_MACTX_VLAN_INSERT 0x0100000000000000ull
#define XCT_MACTX_VLAN_REPLACE 0x0180000000000000ull
#define XCT_MACTX_CRC_M 0x0060000000000000ull
#define XCT_MACTX_CRC_NOP 0x0000000000000000ull
#define XCT_MACTX_CRC_INSERT 0x0020000000000000ull
#define XCT_MACTX_CRC_PAD 0x0040000000000000ull
#define XCT_MACTX_CRC_REPLACE 0x0060000000000000ull
#define XCT_MACTX_SS 0x0010000000000000ull
#define XCT_MACTX_LLEN_M 0x00007fff00000000ull
#define XCT_MACTX_LLEN_S 32ull
#define XCT_MACTX_LLEN(x) ((((long)(x)) << XCT_MACTX_LLEN_S) & \
XCT_MACTX_LLEN_M)
#define XCT_MACTX_IPH_M 0x00000000f8000000ull
#define XCT_MACTX_IPH_S 27ull
#define XCT_MACTX_IPH(x) ((((long)(x)) << XCT_MACTX_IPH_S) & \
XCT_MACTX_IPH_M)
#define XCT_MACTX_IPO_M 0x0000000007c00000ull
#define XCT_MACTX_IPO_S 22ull
#define XCT_MACTX_IPO(x) ((((long)(x)) << XCT_MACTX_IPO_S) & \
XCT_MACTX_IPO_M)
#define XCT_MACTX_CSUM_M 0x0000000000000060ull
#define XCT_MACTX_CSUM_NOP 0x0000000000000000ull
#define XCT_MACTX_CSUM_TCP 0x0000000000000040ull
#define XCT_MACTX_CSUM_UDP 0x0000000000000060ull
#define XCT_MACTX_V6 0x0000000000000010ull
#define XCT_MACTX_C 0x0000000000000004ull
#define XCT_MACTX_AL2 0x0000000000000002ull
/* Receive descriptor fields */
#define XCT_MACRX_T 0x8000000000000000ull
#define XCT_MACRX_ST 0x4000000000000000ull
#define XCT_MACRX_NORES 0x0000000000000000ull
#define XCT_MACRX_8BRES 0x1000000000000000ull
#define XCT_MACRX_24BRES 0x2000000000000000ull
#define XCT_MACRX_40BRES 0x3000000000000000ull
#define XCT_MACRX_O 0x0400000000000000ull
#define XCT_MACRX_E 0x0200000000000000ull
#define XCT_MACRX_FF 0x0100000000000000ull
#define XCT_MACRX_PF 0x0080000000000000ull
#define XCT_MACRX_OB 0x0040000000000000ull
#define XCT_MACRX_OD 0x0020000000000000ull
#define XCT_MACRX_FS 0x0010000000000000ull
#define XCT_MACRX_NB_M 0x000fc00000000000ull
#define XCT_MACRX_NB_S 46ULL
#define XCT_MACRX_NB(x) ((((long)(x)) << XCT_MACRX_NB_S) & \
XCT_MACRX_NB_M)
#define XCT_MACRX_LLEN_M 0x00003fff00000000ull
#define XCT_MACRX_LLEN_S 32ULL
#define XCT_MACRX_LLEN(x) ((((long)(x)) << XCT_MACRX_LLEN_S) & \
XCT_MACRX_LLEN_M)
#define XCT_MACRX_CRC 0x0000000080000000ull
#define XCT_MACRX_LEN_M 0x0000000060000000ull
#define XCT_MACRX_LEN_TOOSHORT 0x0000000020000000ull
#define XCT_MACRX_LEN_BELOWMIN 0x0000000040000000ull
#define XCT_MACRX_LEN_TRUNC 0x0000000060000000ull
#define XCT_MACRX_CAST_M 0x0000000018000000ull
#define XCT_MACRX_CAST_UNI 0x0000000000000000ull
#define XCT_MACRX_CAST_MULTI 0x0000000008000000ull
#define XCT_MACRX_CAST_BROAD 0x0000000010000000ull
#define XCT_MACRX_CAST_PAUSE 0x0000000018000000ull
#define XCT_MACRX_VLC_M 0x0000000006000000ull
#define XCT_MACRX_FM 0x0000000001000000ull
#define XCT_MACRX_HTY_M 0x0000000000c00000ull
#define XCT_MACRX_HTY_IPV4_OK 0x0000000000000000ull
#define XCT_MACRX_HTY_IPV6 0x0000000000400000ull
#define XCT_MACRX_HTY_IPV4_BAD 0x0000000000800000ull
#define XCT_MACRX_HTY_NONIP 0x0000000000c00000ull
#define XCT_MACRX_IPP_M 0x00000000003f0000ull
#define XCT_MACRX_IPP_S 16
#define XCT_MACRX_CSUM_M 0x000000000000ffffull
#define XCT_MACRX_CSUM_S 0
#define XCT_PTR_T 0x8000000000000000ull
#define XCT_PTR_LEN_M 0x7ffff00000000000ull
#define XCT_PTR_LEN_S 44
#define XCT_PTR_LEN(x) ((((long)(x)) << XCT_PTR_LEN_S) & \
XCT_PTR_LEN_M)
#define XCT_PTR_ADDR_M 0x00000fffffffffffull
#define XCT_PTR_ADDR_S 0
#define XCT_PTR_ADDR(x) ((((long)(x)) << XCT_PTR_ADDR_S) & \
XCT_PTR_ADDR_M)
/* Receive interface buffer fields */
#define XCT_RXB_LEN_M 0x0ffff00000000000ull
#define XCT_RXB_LEN_S 44
#define XCT_RXB_LEN(x) ((((long)(x)) << XCT_RXB_LEN_S) & XCT_RXB_LEN_M)
#define XCT_RXB_ADDR_M 0x00000fffffffffffull
#define XCT_RXB_ADDR_S 0
#define XCT_RXB_ADDR(x) ((((long)(x)) << XCT_RXB_ADDR_S) & XCT_RXB_ADDR_M)
#endif /* PASEMI_MAC_H */

drivers/net/qla3xxx.c (363 changes; mode: Normal file → Executable file)

@@ -22,6 +22,7 @@
 #include <linux/errno.h>
 #include <linux/ioport.h>
 #include <linux/ip.h>
+#include <linux/in.h>
 #include <linux/if_arp.h>
 #include <linux/if_ether.h>
 #include <linux/netdevice.h>
@@ -63,6 +64,7 @@ MODULE_PARM_DESC(msi, "Turn on Message Signaled Interrupts.");
 static struct pci_device_id ql3xxx_pci_tbl[] __devinitdata = {
 	{PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, QL3022_DEVICE_ID)},
+	{PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, QL3032_DEVICE_ID)},
 	/* required last entry */
 	{0,}
 };
@@ -1475,6 +1477,10 @@ static int ql_mii_setup(struct ql3_adapter *qdev)
 			 2) << 7))
 		return -1;

+	if (qdev->device_id == QL3032_DEVICE_ID)
+		ql_write_page0_reg(qdev,
+			&port_regs->macMIIMgmtControlReg, 0x0f00000);
+
 	/* Divide 125MHz clock by 28 to meet PHY timing requirements */
 	reg = MAC_MII_CONTROL_CLK_SEL_DIV28;
@@ -1706,18 +1712,42 @@ static void ql_process_mac_tx_intr(struct ql3_adapter *qdev,
 			       struct ob_mac_iocb_rsp *mac_rsp)
 {
 	struct ql_tx_buf_cb *tx_cb;
+	int i;

 	tx_cb = &qdev->tx_buf[mac_rsp->transaction_id];
 	pci_unmap_single(qdev->pdev,
-			 pci_unmap_addr(tx_cb, mapaddr),
-			 pci_unmap_len(tx_cb, maplen), PCI_DMA_TODEVICE);
-	dev_kfree_skb_irq(tx_cb->skb);
+			 pci_unmap_addr(&tx_cb->map[0], mapaddr),
+			 pci_unmap_len(&tx_cb->map[0], maplen),
+			 PCI_DMA_TODEVICE);
+	tx_cb->seg_count--;
+	if (tx_cb->seg_count) {
+		for (i = 1; i < tx_cb->seg_count; i++) {
+			pci_unmap_page(qdev->pdev,
+				       pci_unmap_addr(&tx_cb->map[i],
+						      mapaddr),
+				       pci_unmap_len(&tx_cb->map[i], maplen),
+				       PCI_DMA_TODEVICE);
+		}
+	}
 	qdev->stats.tx_packets++;
 	qdev->stats.tx_bytes += tx_cb->skb->len;
+	dev_kfree_skb_irq(tx_cb->skb);
 	tx_cb->skb = NULL;
 	atomic_inc(&qdev->tx_count);
 }

+/*
+ * The difference between 3022 and 3032 for inbound completions:
+ * 3022 uses two buffers per completion.  The first buffer contains
+ * (some) header info, the second the remainder of the headers plus
+ * the data.  For this chip we reserve some space at the top of the
+ * receive buffer so that the header info in buffer one can be
+ * prepended to buffer two.  Buffer two is then sent up while
+ * buffer one is returned to the hardware to be reused.
+ * 3032 receives all of its data and headers in one buffer for a
+ * simpler process.  3032 also supports checksum verification as
+ * can be seen in ql_process_macip_rx_intr().
+ */
 static void ql_process_mac_rx_intr(struct ql3_adapter *qdev,
 				   struct ib_mac_iocb_rsp *ib_mac_rsp_ptr)
 {
@@ -1740,14 +1770,17 @@ static void ql_process_mac_rx_intr(struct ql3_adapter *qdev,
 	qdev->last_rsp_offset = qdev->small_buf_phy_addr_low + offset;
 	qdev->small_buf_release_cnt++;

-	/* start of first buffer */
-	lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
-	lrg_buf_cb1 = &qdev->lrg_buf[qdev->lrg_buf_index];
-	qdev->lrg_buf_release_cnt++;
-	if (++qdev->lrg_buf_index == NUM_LARGE_BUFFERS)
-		qdev->lrg_buf_index = 0;
-	curr_ial_ptr++;		/* 64-bit pointers require two incs. */
-	curr_ial_ptr++;
+	if (qdev->device_id == QL3022_DEVICE_ID) {
+		/* start of first buffer (3022 only) */
+		lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
+		lrg_buf_cb1 = &qdev->lrg_buf[qdev->lrg_buf_index];
+		qdev->lrg_buf_release_cnt++;
+		if (++qdev->lrg_buf_index == NUM_LARGE_BUFFERS) {
+			qdev->lrg_buf_index = 0;
+		}
+		curr_ial_ptr++;	/* 64-bit pointers require two incs. */
+		curr_ial_ptr++;
+	}

 	/* start of second buffer */
 	lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
@@ -1778,7 +1811,8 @@ static void ql_process_mac_rx_intr(struct ql3_adapter *qdev,
 	qdev->ndev->last_rx = jiffies;
 	lrg_buf_cb2->skb = NULL;

-	ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb1);
+	if (qdev->device_id == QL3022_DEVICE_ID)
+		ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb1);
 	ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb2);
 }
@@ -1790,7 +1824,7 @@ static void ql_process_macip_rx_intr(struct ql3_adapter *qdev,
 	struct ql_rcv_buf_cb *lrg_buf_cb1 = NULL;
 	struct ql_rcv_buf_cb *lrg_buf_cb2 = NULL;
 	u32 *curr_ial_ptr;
-	struct sk_buff *skb1, *skb2;
+	struct sk_buff *skb1 = NULL, *skb2;
 	struct net_device *ndev = qdev->ndev;
 	u16 length = le16_to_cpu(ib_ip_rsp_ptr->length);
 	u16 size = 0;
@@ -1806,16 +1840,20 @@ static void ql_process_macip_rx_intr(struct ql3_adapter *qdev,
 	qdev->last_rsp_offset = qdev->small_buf_phy_addr_low + offset;
 	qdev->small_buf_release_cnt++;

-	/* start of first buffer */
-	lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
-	lrg_buf_cb1 = &qdev->lrg_buf[qdev->lrg_buf_index];
-	qdev->lrg_buf_release_cnt++;
-	if (++qdev->lrg_buf_index == NUM_LARGE_BUFFERS)
-		qdev->lrg_buf_index = 0;
-	skb1 = lrg_buf_cb1->skb;
-	curr_ial_ptr++;		/* 64-bit pointers require two incs. */
-	curr_ial_ptr++;
+	if (qdev->device_id == QL3022_DEVICE_ID) {
+		/* start of first buffer on 3022 */
+		lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
+		lrg_buf_cb1 = &qdev->lrg_buf[qdev->lrg_buf_index];
+		qdev->lrg_buf_release_cnt++;
+		if (++qdev->lrg_buf_index == NUM_LARGE_BUFFERS)
+			qdev->lrg_buf_index = 0;
+		skb1 = lrg_buf_cb1->skb;
+		curr_ial_ptr++;	/* 64-bit pointers require two incs. */
+		curr_ial_ptr++;
+		size = ETH_HLEN;
+		if (*((u16 *) skb1->data) != 0xFFFF)
+			size += VLAN_ETH_HLEN - ETH_HLEN;
+	}

 	/* start of second buffer */
 	lrg_buf_phy_addr_low = le32_to_cpu(*curr_ial_ptr);
@@ -1825,18 +1863,6 @@ static void ql_process_macip_rx_intr(struct ql3_adapter *qdev,
 	if (++qdev->lrg_buf_index == NUM_LARGE_BUFFERS)
 		qdev->lrg_buf_index = 0;

-	qdev->stats.rx_packets++;
-	qdev->stats.rx_bytes += length;
-
-	/*
-	 * Copy the ethhdr from first buffer to second. This
-	 * is necessary for IP completions.
-	 */
-	if (*((u16 *) skb1->data) != 0xFFFF)
-		size = VLAN_ETH_HLEN;
-	else
-		size = ETH_HLEN;
-
 	skb_put(skb2, length);	/* Just the second buffer length here. */
 	pci_unmap_single(qdev->pdev,
 			 pci_unmap_addr(lrg_buf_cb2, mapaddr),
@@ -1844,16 +1870,40 @@ static void ql_process_macip_rx_intr(struct ql3_adapter *qdev,
 			 PCI_DMA_FROMDEVICE);
 	prefetch(skb2->data);

-	memcpy(skb_push(skb2, size), skb1->data + VLAN_ID_LEN, size);
-	skb2->dev = qdev->ndev;
 	skb2->ip_summed = CHECKSUM_NONE;
+	if (qdev->device_id == QL3022_DEVICE_ID) {
+		/*
+		 * Copy the ethhdr from first buffer to second. This
+		 * is necessary for 3022 IP completions.
+		 */
+		memcpy(skb_push(skb2, size), skb1->data + VLAN_ID_LEN, size);
+	} else {
+		u16 checksum = le16_to_cpu(ib_ip_rsp_ptr->checksum);
+		if (checksum &
+			(IB_IP_IOCB_RSP_3032_ICE |
+			 IB_IP_IOCB_RSP_3032_CE |
+			 IB_IP_IOCB_RSP_3032_NUC)) {
+			printk(KERN_ERR
+			       "%s: Bad checksum for this %s packet, checksum = %x.\n",
+			       __func__,
+			       ((checksum &
+				IB_IP_IOCB_RSP_3032_TCP) ? "TCP" :
+				"UDP"), checksum);
+		} else if (checksum & IB_IP_IOCB_RSP_3032_TCP) {
+			skb2->ip_summed = CHECKSUM_UNNECESSARY;
+		}
+	}
+	skb2->dev = qdev->ndev;
 	skb2->protocol = eth_type_trans(skb2, qdev->ndev);

 	netif_receive_skb(skb2);
+	qdev->stats.rx_packets++;
+	qdev->stats.rx_bytes += length;
 	ndev->last_rx = jiffies;
 	lrg_buf_cb2->skb = NULL;

-	ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb1);
+	if (qdev->device_id == QL3022_DEVICE_ID)
+		ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb1);
 	ql_release_to_lrg_buf_free_list(qdev, lrg_buf_cb2);
 }
@@ -1880,12 +1930,14 @@ static int ql_tx_rx_clean(struct ql3_adapter *qdev,
 			break;

 		case OPCODE_IB_MAC_IOCB:
+		case OPCODE_IB_3032_MAC_IOCB:
 			ql_process_mac_rx_intr(qdev, (struct ib_mac_iocb_rsp *)
 					       net_rsp);
 			(*rx_cleaned)++;
 			break;

 		case OPCODE_IB_IP_IOCB:
+		case OPCODE_IB_3032_IP_IOCB:
 			ql_process_macip_rx_intr(qdev, (struct ib_ip_iocb_rsp *)
 						 net_rsp);
 			(*rx_cleaned)++;
@@ -2032,13 +2084,96 @@ static irqreturn_t ql3xxx_isr(int irq, void *dev_id)
 	return IRQ_RETVAL(handled);
 }

+/*
+ * Get the total number of segments needed for the
+ * given number of fragments.  This is necessary because
+ * outbound address lists (OAL) will be used when more than
+ * two frags are given.  Each address list has 5 addr/len
+ * pairs.  The 5th pair in each OAL is used to point to
+ * the next OAL if more frags are coming.
+ * That is why the frags:segment count ratio is not linear.
+ */
+static int ql_get_seg_count(unsigned short frags)
+{
+	switch (frags) {
+	case 0:	 return 1;	/* just the skb->data seg */
+	case 1:	 return 2;	/* skb->data + 1 frag */
+	case 2:	 return 3;	/* skb->data + 2 frags */
+	case 3:	 return 5;	/* skb->data + 1 frag + 1 OAL containing 2 frags */
+	case 4:	 return 6;
+	case 5:	 return 7;
+	case 6:	 return 8;
+	case 7:	 return 10;
+	case 8:	 return 11;
+	case 9:	 return 12;
+	case 10: return 13;
+	case 11: return 15;
+	case 12: return 16;
+	case 13: return 17;
+	case 14: return 18;
+	case 15: return 20;
+	case 16: return 21;
+	case 17: return 22;
+	case 18: return 23;
+	}
+	return -1;
+}
+
+static void ql_hw_csum_setup(struct sk_buff *skb,
+			     struct ob_mac_iocb_req *mac_iocb_ptr)
+{
+	struct ethhdr *eth;
+	struct iphdr *ip = NULL;
+	u8 offset = ETH_HLEN;
+
+	eth = (struct ethhdr *)(skb->data);
+
+	if (eth->h_proto == __constant_htons(ETH_P_IP)) {
+		ip = (struct iphdr *)&skb->data[ETH_HLEN];
+	} else if (eth->h_proto == htons(ETH_P_8021Q) &&
+		   ((struct vlan_ethhdr *)skb->data)->
+		   h_vlan_encapsulated_proto == __constant_htons(ETH_P_IP)) {
+		ip = (struct iphdr *)&skb->data[VLAN_ETH_HLEN];
+		offset = VLAN_ETH_HLEN;
+	}
+
+	if (ip) {
+		if (ip->protocol == IPPROTO_TCP) {
+			mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_TC;
+			mac_iocb_ptr->ip_hdr_off = offset;
+			mac_iocb_ptr->ip_hdr_len = ip->ihl;
+		} else if (ip->protocol == IPPROTO_UDP) {
+			mac_iocb_ptr->flags1 |= OB_3032MAC_IOCB_REQ_UC;
+			mac_iocb_ptr->ip_hdr_off = offset;
+			mac_iocb_ptr->ip_hdr_len = ip->ihl;
+		}
+	}
+}
+
+/*
+ * The difference between 3022 and 3032 sends:
+ * 3022 only supports a simple single segment transmission.
+ * 3032 supports checksumming and scatter/gather lists (fragments).
+ * The 3032 supports sglists by using the 3 addr/len pairs (ALP)
+ * in the IOCB plus a chain of outbound address lists (OAL) that
+ * each contain 5 ALPs.  The last ALP of the IOCB (3rd) or OAL (5th)
+ * will be used to point to an OAL when more ALP entries are required.
+ * The IOCB is always the top of the chain followed by one or more
+ * OALs (when necessary).
+ */
 static int ql3xxx_send(struct sk_buff *skb, struct net_device *ndev)
 {
 	struct ql3_adapter *qdev = (struct ql3_adapter *)netdev_priv(ndev);
 	struct ql3xxx_port_registers __iomem *port_regs = qdev->mem_map_registers;
 	struct ql_tx_buf_cb *tx_cb;
+	u32 tot_len = skb->len;
+	struct oal *oal;
+	struct oal_entry *oal_entry;
+	int len;
 	struct ob_mac_iocb_req *mac_iocb_ptr;
 	u64 map;
+	int seg_cnt, seg = 0;
+	int frag_cnt = (int)skb_shinfo(skb)->nr_frags;

 	if (unlikely(atomic_read(&qdev->tx_count) < 2)) {
 		if (!netif_queue_stopped(ndev))
@@ -2046,21 +2181,79 @@ static int ql3xxx_send(struct sk_buff *skb, struct net_device *ndev)
 		return NETDEV_TX_BUSY;
 	}
 	tx_cb = &qdev->tx_buf[qdev->req_producer_index];
+	seg_cnt = tx_cb->seg_count = ql_get_seg_count((skb_shinfo(skb)->nr_frags));
+	if (seg_cnt == -1) {
+		printk(KERN_ERR PFX "%s: invalid segment count!\n", __func__);
+		return NETDEV_TX_OK;
+	}
 	mac_iocb_ptr = tx_cb->queue_entry;
 	memset((void *)mac_iocb_ptr, 0, sizeof(struct ob_mac_iocb_req));
 	mac_iocb_ptr->opcode = qdev->mac_ob_opcode;
 	mac_iocb_ptr->flags |= qdev->mb_bit_mask;
 	mac_iocb_ptr->transaction_id = qdev->req_producer_index;
-	mac_iocb_ptr->data_len = cpu_to_le16((u16) skb->len);
+	mac_iocb_ptr->data_len = cpu_to_le16((u16) tot_len);
 	tx_cb->skb = skb;
-	map = pci_map_single(qdev->pdev, skb->data, skb->len, PCI_DMA_TODEVICE);
-	mac_iocb_ptr->buf_addr0_low = cpu_to_le32(LS_64BITS(map));
-	mac_iocb_ptr->buf_addr0_high = cpu_to_le32(MS_64BITS(map));
-	mac_iocb_ptr->buf_0_len = cpu_to_le32(skb->len | OB_MAC_IOCB_REQ_E);
-	pci_unmap_addr_set(tx_cb, mapaddr, map);
-	pci_unmap_len_set(tx_cb, maplen, skb->len);
-	atomic_dec(&qdev->tx_count);
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		ql_hw_csum_setup(skb, mac_iocb_ptr);
+	len = skb_headlen(skb);
+	map = pci_map_single(qdev->pdev, skb->data, len, PCI_DMA_TODEVICE);
+	oal_entry = (struct oal_entry *)&mac_iocb_ptr->buf_addr0_low;
+	oal_entry->dma_lo = cpu_to_le32(LS_64BITS(map));
+	oal_entry->dma_hi = cpu_to_le32(MS_64BITS(map));
+	oal_entry->len = cpu_to_le32(len);
+	pci_unmap_addr_set(&tx_cb->map[seg], mapaddr, map);
+	pci_unmap_len_set(&tx_cb->map[seg], maplen, len);
+	seg++;
+
+	if (!skb_shinfo(skb)->nr_frags) {
+		/* Terminate the last segment. */
+		oal_entry->len =
+		    cpu_to_le32(le32_to_cpu(oal_entry->len) | OAL_LAST_ENTRY);
+	} else {
+		int i;
+		oal = tx_cb->oal;
+		for (i = 0; i < frag_cnt; i++, seg++) {
+			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+			oal_entry++;
+			if ((seg == 2 && seg_cnt > 3) ||	/* Check for continuation */
+			    (seg == 7 && seg_cnt > 8) ||	/* requirements. It's strange */
+			    (seg == 12 && seg_cnt > 13) ||	/* but necessary. */
+			    (seg == 17 && seg_cnt > 18)) {
+				/* Continuation entry points to outbound address list. */
+				map = pci_map_single(qdev->pdev, oal,
+						     sizeof(struct oal),
+						     PCI_DMA_TODEVICE);
+				oal_entry->dma_lo = cpu_to_le32(LS_64BITS(map));
+				oal_entry->dma_hi = cpu_to_le32(MS_64BITS(map));
+				oal_entry->len =
+				    cpu_to_le32(sizeof(struct oal) |
+						OAL_CONT_ENTRY);
+				pci_unmap_addr_set(&tx_cb->map[seg], mapaddr,
+						   map);
+				pci_unmap_len_set(&tx_cb->map[seg], maplen,
+						  len);
+				oal_entry = (struct oal_entry *)oal;
+				oal++;
+				seg++;
+			}
+
+			map =
+			    pci_map_page(qdev->pdev, frag->page,
+					 frag->page_offset, frag->size,
+					 PCI_DMA_TODEVICE);
+			oal_entry->dma_lo = cpu_to_le32(LS_64BITS(map));
+			oal_entry->dma_hi = cpu_to_le32(MS_64BITS(map));
+			oal_entry->len = cpu_to_le32(frag->size);
+			pci_unmap_addr_set(&tx_cb->map[seg], mapaddr, map);
+			pci_unmap_len_set(&tx_cb->map[seg], maplen,
+					  frag->size);
+		}
+		/* Terminate the last segment. */
+		oal_entry->len =
+		    cpu_to_le32(le32_to_cpu(oal_entry->len) | OAL_LAST_ENTRY);
+	}
+	wmb();
 	qdev->req_producer_index++;
 	if (qdev->req_producer_index == NUM_REQ_Q_ENTRIES)
 		qdev->req_producer_index = 0;
@@ -2074,8 +2267,10 @@ static int ql3xxx_send(struct sk_buff *skb, struct net_device *ndev)
 	printk(KERN_DEBUG PFX "%s: tx queued, slot %d, len %d\n",
 	       ndev->name, qdev->req_producer_index, skb->len);

+	atomic_dec(&qdev->tx_count);
 	return NETDEV_TX_OK;
 }

 static int ql_alloc_net_req_rsp_queues(struct ql3_adapter *qdev)
 {
 	qdev->req_q_size =
@@ -2359,7 +2554,22 @@ static int ql_alloc_large_buffers(struct ql3_adapter *qdev)
 	return 0;
 }

-static void ql_create_send_free_list(struct ql3_adapter *qdev)
+static void ql_free_send_free_list(struct ql3_adapter *qdev)
+{
+	struct ql_tx_buf_cb *tx_cb;
+	int i;
+
+	tx_cb = &qdev->tx_buf[0];
+	for (i = 0; i < NUM_REQ_Q_ENTRIES; i++) {
+		if (tx_cb->oal) {
+			kfree(tx_cb->oal);
+			tx_cb->oal = NULL;
+		}
+		tx_cb++;
+	}
+}
+
+static int ql_create_send_free_list(struct ql3_adapter *qdev)
 {
 	struct ql_tx_buf_cb *tx_cb;
 	int i;
@@ -2368,11 +2578,16 @@ static int ql_create_send_free_list(struct ql3_adapter *qdev)
 	/* Create free list of transmit buffers */
 	for (i = 0; i < NUM_REQ_Q_ENTRIES; i++) {
 		tx_cb = &qdev->tx_buf[i];
 		tx_cb->skb = NULL;
 		tx_cb->queue_entry = req_q_curr;
 		req_q_curr++;
+		tx_cb->oal = kmalloc(512, GFP_KERNEL);
+		if (tx_cb->oal == NULL)
+			return -1;
 	}
+	return 0;
 }

 static int ql_alloc_mem_resources(struct ql3_adapter *qdev)
@@ -2447,12 +2662,14 @@ static int ql_alloc_mem_resources(struct ql3_adapter *qdev)

 	/* Initialize the large buffer queue. */
 	ql_init_large_buffers(qdev);
-	ql_create_send_free_list(qdev);
+	if (ql_create_send_free_list(qdev))
+		goto err_free_list;

 	qdev->rsp_current = qdev->rsp_q_virt_addr;

 	return 0;
-
+err_free_list:
+	ql_free_send_free_list(qdev);
 err_small_buffers:
 	ql_free_buffer_queues(qdev);
 err_buffer_queues:
@@ -2468,6 +2685,7 @@ err_req_rsp:
 static void ql_free_mem_resources(struct ql3_adapter *qdev)
 {
+	ql_free_send_free_list(qdev);
 	ql_free_large_buffers(qdev);
 	ql_free_small_buffers(qdev);
 	ql_free_buffer_queues(qdev);
@@ -2766,11 +2984,20 @@ static int ql_adapter_initialize(struct ql3_adapter *qdev)
 	}

 	/* Enable Ethernet Function */
-	value =
-	    (PORT_CONTROL_EF | PORT_CONTROL_ET | PORT_CONTROL_EI |
-	     PORT_CONTROL_HH);
-	ql_write_page0_reg(qdev, &port_regs->portControl,
-			   ((value << 16) | value));
+	if (qdev->device_id == QL3032_DEVICE_ID) {
+		value =
+		    (QL3032_PORT_CONTROL_EF | QL3032_PORT_CONTROL_KIE |
+		     QL3032_PORT_CONTROL_EIv6 | QL3032_PORT_CONTROL_EIv4);
+		ql_write_page0_reg(qdev, &port_regs->functionControl,
+				   ((value << 16) | value));
+	} else {
+		value =
+		    (PORT_CONTROL_EF | PORT_CONTROL_ET | PORT_CONTROL_EI |
+		     PORT_CONTROL_HH);
+		ql_write_page0_reg(qdev, &port_regs->portControl,
+				   ((value << 16) | value));
+	}

 out:
 	return status;
@@ -2917,8 +3144,10 @@ static void ql_display_dev_info(struct net_device *ndev)
 	struct pci_dev *pdev = qdev->pdev;

 	printk(KERN_INFO PFX
-	       "\n%s Adapter %d RevisionID %d found on PCI slot %d.\n",
-	       DRV_NAME, qdev->index, qdev->chip_rev_id, qdev->pci_slot);
+	       "\n%s Adapter %d RevisionID %d found %s on PCI slot %d.\n",
+	       DRV_NAME, qdev->index, qdev->chip_rev_id,
+	       (qdev->device_id == QL3032_DEVICE_ID) ? "QLA3032" : "QLA3022",
+	       qdev->pci_slot);
 	printk(KERN_INFO PFX
 	       "%s Interface.\n",
 	       test_bit(QL_LINK_OPTICAL,&qdev->flags) ? "OPTICAL" : "COPPER");
@@ -3212,15 +3441,22 @@ static void ql_reset_work(struct work_struct *work)
 		 * Loop through the active list and return the skb.
 		 */
 		for (i = 0; i < NUM_REQ_Q_ENTRIES; i++) {
+			int j;
 			tx_cb = &qdev->tx_buf[i];
 			if (tx_cb->skb) {
 				printk(KERN_DEBUG PFX
 				       "%s: Freeing lost SKB.\n",
 				       qdev->ndev->name);
 				pci_unmap_single(qdev->pdev,
-					 pci_unmap_addr(tx_cb, mapaddr),
-					 pci_unmap_len(tx_cb, maplen), PCI_DMA_TODEVICE);
+					 pci_unmap_addr(&tx_cb->map[0], mapaddr),
+					 pci_unmap_len(&tx_cb->map[0], maplen),
+					 PCI_DMA_TODEVICE);
+				for(j=1;j<tx_cb->seg_count;j++) {
+					pci_unmap_page(qdev->pdev,
+					       pci_unmap_addr(&tx_cb->map[j],mapaddr),
+					       pci_unmap_len(&tx_cb->map[j],maplen),
+					       PCI_DMA_TODEVICE);
+				}
 				dev_kfree_skb(tx_cb->skb);
 				tx_cb->skb = NULL;
 			}
@@ -3379,21 +3615,24 @@ static int __devinit ql3xxx_probe(struct pci_dev *pdev,
 	SET_MODULE_OWNER(ndev);
 	SET_NETDEV_DEV(ndev, &pdev->dev);

-	if (pci_using_dac)
-		ndev->features |= NETIF_F_HIGHDMA;
-
 	pci_set_drvdata(pdev, ndev);

 	qdev = netdev_priv(ndev);
 	qdev->index = cards_found;
 	qdev->ndev = ndev;
 	qdev->pdev = pdev;
+	qdev->device_id = pci_entry->device;
 	qdev->port_link_state = LS_DOWN;
 	if (msi)
 		qdev->msi = 1;

 	qdev->msg_enable = netif_msg_init(debug, default_msg);

+	if (pci_using_dac)
+		ndev->features |= NETIF_F_HIGHDMA;
+	if (qdev->device_id == QL3032_DEVICE_ID)
+		ndev->features |= (NETIF_F_HW_CSUM | NETIF_F_SG);
+
 	qdev->mem_map_registers =
 	    ioremap_nocache(pci_resource_start(pdev, 1),
 			    pci_resource_len(qdev->pdev, 1));

diff --git a/drivers/net/qla3xxx.h b/drivers/net/qla3xxx.h
old mode 100644
new mode 100755
--- a/drivers/net/qla3xxx.h
+++ b/drivers/net/qla3xxx.h
@@ -21,7 +21,9 @@
 #define OPCODE_UPDATE_NCB_IOCB	0xF0
 #define OPCODE_IB_MAC_IOCB	0xF9
+#define OPCODE_IB_3032_MAC_IOCB	0x09
 #define OPCODE_IB_IP_IOCB	0xFA
+#define OPCODE_IB_3032_IP_IOCB	0x0A
 #define OPCODE_IB_TCP_IOCB	0xFB
 #define OPCODE_DUMP_PROTO_IOCB	0xFE
 #define OPCODE_BUFFER_ALERT_IOCB	0xFB
@@ -37,18 +39,23 @@
 struct ob_mac_iocb_req {
 	u8 opcode;
 	u8 flags;
-#define OB_MAC_IOCB_REQ_MA	0xC0
-#define OB_MAC_IOCB_REQ_F	0x20
-#define OB_MAC_IOCB_REQ_X	0x10
+#define OB_MAC_IOCB_REQ_MA	0xe0
+#define OB_MAC_IOCB_REQ_F	0x10
+#define OB_MAC_IOCB_REQ_X	0x08
 #define OB_MAC_IOCB_REQ_D	0x02
 #define OB_MAC_IOCB_REQ_I	0x01
-	__le16 reserved0;
+	u8 flags1;
+#define OB_3032MAC_IOCB_REQ_IC	0x04
+#define OB_3032MAC_IOCB_REQ_TC	0x02
+#define OB_3032MAC_IOCB_REQ_UC	0x01
+	u8 reserved0;

 	__le32 transaction_id;
 	__le16 data_len;
-	__le16 reserved1;
+	u8 ip_hdr_off;
+	u8 ip_hdr_len;
+	__le32 reserved1;
 	__le32 reserved2;
+	__le32 reserved3;
 	__le32 buf_addr0_low;
 	__le32 buf_addr0_high;
 	__le32 buf_0_len;
@@ -58,8 +65,8 @@ struct ob_mac_iocb_req {
 	__le32 buf_addr2_low;
 	__le32 buf_addr2_high;
 	__le32 buf_2_len;
-	__le32 reserved3;
 	__le32 reserved4;
+	__le32 reserved5;
 };

 /*
  * The following constants define control bits for buffer
@@ -74,6 +81,7 @@ struct ob_mac_iocb_rsp {
 	u8 opcode;
 	u8 flags;
 #define OB_MAC_IOCB_RSP_P	0x08
+#define OB_MAC_IOCB_RSP_L	0x04
 #define OB_MAC_IOCB_RSP_S	0x02
 #define OB_MAC_IOCB_RSP_I	0x01
@@ -85,6 +93,7 @@ struct ob_mac_iocb_rsp {
 struct ib_mac_iocb_rsp {
 	u8 opcode;
+#define IB_MAC_IOCB_RSP_V	0x80
 	u8 flags;
 #define IB_MAC_IOCB_RSP_S	0x80
 #define IB_MAC_IOCB_RSP_H1	0x40
@@ -138,6 +147,7 @@ struct ob_ip_iocb_req {
 struct ob_ip_iocb_rsp {
 	u8 opcode;
 	u8 flags;
+#define OB_MAC_IOCB_RSP_H	0x10
 #define OB_MAC_IOCB_RSP_E	0x08
 #define OB_MAC_IOCB_RSP_L	0x04
 #define OB_MAC_IOCB_RSP_S	0x02
@@ -220,6 +230,10 @@ struct ob_tcp_iocb_rsp {
 struct ib_ip_iocb_rsp {
 	u8 opcode;
+#define IB_IP_IOCB_RSP_3032_V	0x80
+#define IB_IP_IOCB_RSP_3032_O	0x40
+#define IB_IP_IOCB_RSP_3032_I	0x20
+#define IB_IP_IOCB_RSP_3032_R	0x10
 	u8 flags;
 #define IB_IP_IOCB_RSP_S	0x80
 #define IB_IP_IOCB_RSP_H1	0x40
@@ -230,6 +244,12 @@ struct ib_ip_iocb_rsp {
 	__le16 length;
 	__le16 checksum;
+#define IB_IP_IOCB_RSP_3032_ICE		0x01
+#define IB_IP_IOCB_RSP_3032_CE		0x02
+#define IB_IP_IOCB_RSP_3032_NUC		0x04
+#define IB_IP_IOCB_RSP_3032_UDP		0x08
+#define IB_IP_IOCB_RSP_3032_TCP		0x10
+#define IB_IP_IOCB_RSP_3032_IPE		0x20
 	__le16 reserved;
 #define IB_IP_IOCB_RSP_R	0x01
 	__le32 ial_low;
@@ -524,6 +544,21 @@ enum {
 	IP_ADDR_INDEX_REG_FUNC_2_SEC = 0x0005,
 	IP_ADDR_INDEX_REG_FUNC_3_PRI = 0x0006,
 	IP_ADDR_INDEX_REG_FUNC_3_SEC = 0x0007,
+	IP_ADDR_INDEX_REG_6 = 0x0008,
+	IP_ADDR_INDEX_REG_OFFSET_MASK = 0x0030,
+	IP_ADDR_INDEX_REG_E = 0x0040,
+};
+enum {
+	QL3032_PORT_CONTROL_DS = 0x0001,
+	QL3032_PORT_CONTROL_HH = 0x0002,
+	QL3032_PORT_CONTROL_EIv6 = 0x0004,
+	QL3032_PORT_CONTROL_EIv4 = 0x0008,
+	QL3032_PORT_CONTROL_ET = 0x0010,
+	QL3032_PORT_CONTROL_EF = 0x0020,
+	QL3032_PORT_CONTROL_DRM = 0x0040,
+	QL3032_PORT_CONTROL_RLB = 0x0080,
+	QL3032_PORT_CONTROL_RCB = 0x0100,
+	QL3032_PORT_CONTROL_KIE = 0x0200,
 };

 enum {
@@ -657,7 +692,8 @@ struct ql3xxx_port_registers {
 	u32 internalRamWDataReg;
 	u32 reclaimedBufferAddrRegLow;
 	u32 reclaimedBufferAddrRegHigh;
-	u32 reserved[2];
+	u32 tcpConfiguration;
+	u32 functionControl;
 	u32 fpgaRevID;
 	u32 localRamAddr;
 	u32 localRamDataAutoIncr;
@@ -963,6 +999,7 @@ struct eeprom_data {
 #define QL3XXX_VENDOR_ID	0x1077
 #define QL3022_DEVICE_ID	0x3022
+#define QL3032_DEVICE_ID	0x3032

 /* MTU & Frame Size stuff */
 #define NORMAL_MTU_SIZE		ETH_DATA_LEN
@@ -1038,11 +1075,41 @@ struct ql_rcv_buf_cb {
 	int index;
 };

+/*
+ * Original IOCB has 3 sg entries:
+ * first points to skb-data area
+ * second points to first frag
+ * third points to next oal.
+ * OAL has 5 entries:
+ * 1 thru 4 point to frags
+ * fifth points to next oal.
+ */
+#define MAX_OAL_CNT ((MAX_SKB_FRAGS-1)/4 + 1)
+
+struct oal_entry {
+	u32 dma_lo;
+	u32 dma_hi;
+	u32 len;
+#define OAL_LAST_ENTRY	0x80000000	/* Last valid buffer in list. */
+#define OAL_CONT_ENTRY	0x40000000	/* points to an OAL. (continuation) */
+	u32 reserved;
+};
+
+struct oal {
+	struct oal_entry oal_entry[5];
+};
+
+struct map_list {
+	DECLARE_PCI_UNMAP_ADDR(mapaddr);
+	DECLARE_PCI_UNMAP_LEN(maplen);
+};
+
 struct ql_tx_buf_cb {
 	struct sk_buff *skb;
 	struct ob_mac_iocb_req *queue_entry ;
-	DECLARE_PCI_UNMAP_ADDR(mapaddr);
-	DECLARE_PCI_UNMAP_LEN(maplen);
+	int seg_count;
+	struct oal *oal;
+	struct map_list map[MAX_SKB_FRAGS+1];
 };

 /* definitions for type field */
@@ -1189,6 +1256,7 @@ struct ql3_adapter {
 	struct delayed_work reset_work;
 	struct delayed_work tx_timeout_work;
 	u32 max_frame_size;
+	u32 device_id;
 };

 #endif				/* _QLA3XXX_H_ */

--- a/drivers/net/s2io-regs.h
+++ b/drivers/net/s2io-regs.h
@@ -15,7 +15,7 @@

 #define TBD 0

-typedef struct _XENA_dev_config {
+struct XENA_dev_config {
 /* Convention: mHAL_XXX is mask, vHAL_XXX is value */

 /* General Control-Status Registers */

@@ -300,6 +300,7 @@ typedef struct _XENA_dev_config {
 	u64 gpio_control;
 #define GPIO_CTRL_GPIO_0	BIT(8)
 	u64 misc_control;
+#define FAULT_BEHAVIOUR		BIT(0)
 #define EXT_REQ_EN		BIT(1)
 #define MISC_LINK_STABILITY_PRD(val)	vBIT(val,29,3)

@@ -851,9 +852,9 @@ typedef struct _XENA_dev_config {
 #define SPI_CONTROL_DONE	BIT(6)
 	u64 spi_data;
 #define SPI_DATA_WRITE(data,len)	vBIT(data,0,len)
-} XENA_dev_config_t;
+};

-#define XENA_REG_SPACE	sizeof(XENA_dev_config_t)
+#define XENA_REG_SPACE	sizeof(struct XENA_dev_config)
 #define	XENA_EEPROM_SPACE (0x01 << 11)

 #endif				/* _REGS_H */

[File diff suppressed because it is too large]

--- a/drivers/net/s2io.h
+++ b/drivers/net/s2io.h
@@ -30,6 +30,8 @@
 #undef SUCCESS
 #define SUCCESS 0
 #define FAILURE -1
+#define S2IO_MINUS_ONE 0xFFFFFFFFFFFFFFFFULL
+#define S2IO_MAX_PCI_CONFIG_SPACE_REINIT 100

 #define CHECKBIT(value, nbit) (value & (1 << nbit))
@@ -37,7 +39,7 @@
 #define MAX_FLICKER_TIME	60000 /* 60 Secs */

 /* Maximum outstanding splits to be configured into xena. */
-typedef enum xena_max_outstanding_splits {
+enum {
 	XENA_ONE_SPLIT_TRANSACTION = 0,
 	XENA_TWO_SPLIT_TRANSACTION = 1,
 	XENA_THREE_SPLIT_TRANSACTION = 2,

@@ -46,7 +48,7 @@ typedef enum xena_max_outstanding_splits {
 	XENA_TWELVE_SPLIT_TRANSACTION = 5,
 	XENA_SIXTEEN_SPLIT_TRANSACTION = 6,
 	XENA_THIRTYTWO_SPLIT_TRANSACTION = 7
-} xena_max_outstanding_splits;
+};
 #define XENA_MAX_OUTSTANDING_SPLITS(n)	(n << 4)

 /* OS concerned variables and constants */
@@ -77,7 +79,7 @@ static int debug_level = ERR_DBG;
 #define S2IO_JUMBO_SIZE 9600

 /* Driver statistics maintained by driver */
-typedef struct {
+struct swStat {
 	unsigned long long single_ecc_errs;
 	unsigned long long double_ecc_errs;
 	unsigned long long parity_err_cnt;

@@ -92,10 +94,10 @@ typedef struct {
 	unsigned long long flush_max_pkts;
 	unsigned long long sum_avg_pkts_aggregated;
 	unsigned long long num_aggregations;
-} swStat_t;
+};

 /* Xpak releated alarm and warnings */
-typedef struct {
+struct xpakStat {
 	u64 alarm_transceiver_temp_high;
 	u64 alarm_transceiver_temp_low;
 	u64 alarm_laser_bias_current_high;

@@ -110,11 +112,11 @@ typedef struct {
 	u64 warn_laser_output_power_low;
 	u64 xpak_regs_stat;
 	u32 xpak_timer_count;
-} xpakStat_t;
+};

 /* The statistics block of Xena */
-typedef struct stat_block {
+struct stat_block {
 /* Tx MAC statistics counters. */
 	__le32 tmac_data_octets;
 	__le32 tmac_frms;

@@ -290,9 +292,9 @@ typedef struct stat_block {
 	__le32 reserved_14;
 	__le32 link_fault_cnt;
 	u8  buffer[20];
-	swStat_t sw_stat;
-	xpakStat_t xpak_stat;
-} StatInfo_t;
+	struct swStat sw_stat;
+	struct xpakStat xpak_stat;
+};
 /*
  * Structures representing different init time configuration

@@ -315,7 +317,7 @@ static int fifo_map[][MAX_TX_FIFOS] = {
 };

 /* Maintains Per FIFO related information. */
-typedef struct tx_fifo_config {
+struct tx_fifo_config {
 #define	MAX_AVAILABLE_TXDS	8192
 	u32 fifo_len;		/* specifies len of FIFO upto 8192, ie no of TxDLs */
 /* Priority definition */

@@ -332,11 +334,11 @@ typedef struct tx_fifo_config {
 	u8 f_no_snoop;
 #define NO_SNOOP_TXD		0x01
 #define NO_SNOOP_TXD_BUFFER	0x02
-} tx_fifo_config_t;
+};

 /* Maintains per Ring related information */
-typedef struct rx_ring_config {
+struct rx_ring_config {
 	u32 num_rxd;		/*No of RxDs per Rx Ring */
 #define RX_RING_PRI_0	0	/* highest */
 #define RX_RING_PRI_1	1

@@ -357,7 +359,7 @@ typedef struct rx_ring_config {
 	u8 f_no_snoop;
 #define NO_SNOOP_RXD		0x01
 #define NO_SNOOP_RXD_BUFFER	0x02
-} rx_ring_config_t;
+};

 /* This structure provides contains values of the tunable parameters
  * of the H/W
@@ -367,7 +369,7 @@ struct config_param {
 	u32 tx_fifo_num;	/*Number of Tx FIFOs */
 	u8 fifo_mapping[MAX_TX_FIFOS];
-	tx_fifo_config_t tx_cfg[MAX_TX_FIFOS];	/*Per-Tx FIFO config */
+	struct tx_fifo_config tx_cfg[MAX_TX_FIFOS];	/*Per-Tx FIFO config */
 	u32 max_txds;		/*Max no. of Tx buffer descriptor per TxDL */
 	u64 tx_intr_type;
 	/* Specifies if Tx Intr is UTILZ or PER_LIST type. */

@@ -376,7 +378,7 @@ struct config_param {
 	u32 rx_ring_num;	/*Number of receive rings */
 #define MAX_RX_BLOCKS_PER_RING	150
-	rx_ring_config_t rx_cfg[MAX_RX_RINGS];	/*Per-Rx Ring config */
+	struct rx_ring_config rx_cfg[MAX_RX_RINGS];	/*Per-Rx Ring config */
 	u8 bimodal;		/*Flag for setting bimodal interrupts*/

 #define HEADER_ETHERNET_II_802_3_SIZE 14

@@ -395,14 +397,14 @@ struct config_param {
 };

 /* Structure representing MAC Addrs */
-typedef struct mac_addr {
+struct mac_addr {
 	u8 mac_addr[ETH_ALEN];
-} macaddr_t;
+};

 /* Structure that represent every FIFO element in the BAR1
  * Address location.
  */
-typedef struct _TxFIFO_element {
+struct TxFIFO_element {
 	u64 TxDL_Pointer;
 	u64 List_Control;
@@ -413,10 +415,10 @@ typedef struct _TxFIFO_element {
 #define TX_FIFO_SPECIAL_FUNC	BIT(23)
 #define TX_FIFO_DS_NO_SNOOP	BIT(31)
 #define TX_FIFO_BUFF_NO_SNOOP	BIT(30)
-} TxFIFO_element_t;
+};

 /* Tx descriptor structure */
-typedef struct _TxD {
+struct TxD {
 	u64 Control_1;
 /* bit mask */
 #define TXD_LIST_OWN_XENA	BIT(7)

@@ -447,16 +449,16 @@ typedef struct _TxD {
 	u64 Buffer_Pointer;
 	u64 Host_Control;	/* reserved for host */
-} TxD_t;
+};

 /* Structure to hold the phy and virt addr of every TxDL. */
-typedef struct list_info_hold {
+struct list_info_hold {
 	dma_addr_t list_phy_addr;
 	void *list_virt_addr;
-} list_info_hold_t;
+};

 /* Rx descriptor structure for 1 buffer mode */
-typedef struct _RxD_t {
+struct RxD_t {
 	u64 Host_Control;	/* reserved for host */
 	u64 Control_1;
 #define RXD_OWN_XENA	BIT(7)
@@ -481,21 +483,21 @@ typedef struct _RxD_t {
 #define SET_NUM_TAG(val)	vBIT(val,16,32)

-} RxD_t;
+};

 /* Rx descriptor structure for 1 buffer mode */
-typedef struct _RxD1_t {
-	struct _RxD_t h;
+struct RxD1 {
+	struct RxD_t h;

 #define MASK_BUFFER0_SIZE_1	vBIT(0x3FFF,2,14)
 #define SET_BUFFER0_SIZE_1(val)	vBIT(val,2,14)
 #define RXD_GET_BUFFER0_SIZE_1(_Control_2) \
 	(u16)((_Control_2 & MASK_BUFFER0_SIZE_1) >> 48)
 	u64 Buffer0_ptr;
-} RxD1_t;
+};

 /* Rx descriptor structure for 3 or 2 buffer mode */
-typedef struct _RxD3_t {
-	struct _RxD_t h;
+struct RxD3 {
+	struct RxD_t h;
 #define MASK_BUFFER0_SIZE_3	vBIT(0xFF,2,14)
 #define MASK_BUFFER1_SIZE_3	vBIT(0xFFFF,16,16)
@@ -515,15 +517,15 @@ typedef struct _RxD3_t {
 	u64 Buffer0_ptr;
 	u64 Buffer1_ptr;
 	u64 Buffer2_ptr;
-} RxD3_t;
+};

 /* Structure that represents the Rx descriptor block which contains
  * 128 Rx descriptors.
  */
-typedef struct _RxD_block {
+struct RxD_block {
 #define MAX_RXDS_PER_BLOCK_1	127
-	RxD1_t rxd[MAX_RXDS_PER_BLOCK_1];
+	struct RxD1 rxd[MAX_RXDS_PER_BLOCK_1];

 	u64 reserved_0;
 #define END_OF_BLOCK	0xFEFFFFFFFFFFFFFFULL

@@ -533,22 +535,22 @@ typedef struct _RxD_block {
 	u64 pNext_RxD_Blk_physical;	/* Buff0_ptr.In a 32 bit arch
 					 * the upper 32 bits should
 					 * be 0 */
-} RxD_block_t;
+};

 #define SIZE_OF_BLOCK	4096

-#define RXD_MODE_1	0
-#define RXD_MODE_3A	1
-#define RXD_MODE_3B	2
+#define RXD_MODE_1	0 /* One Buffer mode */
+#define RXD_MODE_3A	1 /* Three Buffer mode */
+#define RXD_MODE_3B	2 /* Two Buffer mode */
 /* Structure to hold virtual addresses of Buf0 and Buf1 in
  * 2buf mode. */
-typedef struct bufAdd {
+struct buffAdd {
 	void *ba_0_org;
 	void *ba_1_org;
 	void *ba_0;
 	void *ba_1;
-} buffAdd_t;
+};

 /* Structure which stores all the MAC control parameters */
@@ -556,43 +558,46 @@ typedef struct bufAdd {
  * from which the Rx Interrupt processor can start picking
  * up the RxDs for processing.
  */
-typedef struct _rx_curr_get_info_t {
+struct rx_curr_get_info {
 	u32 block_index;
 	u32 offset;
 	u32 ring_len;
-} rx_curr_get_info_t;
+};

-typedef rx_curr_get_info_t rx_curr_put_info_t;
+struct rx_curr_put_info {
+	u32 block_index;
+	u32 offset;
+	u32 ring_len;
+};

 /* This structure stores the offset of the TxDl in the FIFO
  * from which the Tx Interrupt processor can start picking
  * up the TxDLs for send complete interrupt processing.
  */
-typedef struct {
+struct tx_curr_get_info {
 	u32 offset;
 	u32 fifo_len;
-} tx_curr_get_info_t;
+};

-typedef tx_curr_get_info_t tx_curr_put_info_t;
+struct tx_curr_put_info {
+	u32 offset;
+	u32 fifo_len;
+};

-typedef struct rxd_info {
+struct rxd_info {
 	void *virt_addr;
 	dma_addr_t dma_addr;
-}rxd_info_t;
+};

 /* Structure that holds the Phy and virt addresses of the Blocks */
-typedef struct rx_block_info {
+struct rx_block_info {
 	void *block_virt_addr;
 	dma_addr_t block_dma_addr;
-	rxd_info_t *rxds;
-} rx_block_info_t;
+	struct rxd_info *rxds;
+};

-/* pre declaration of the nic structure */
-typedef struct s2io_nic nic_t;

 /* Ring specific structure */
-typedef struct ring_info {
+struct ring_info {
 	/* The ring number */
 	int ring_no;
@@ -600,7 +605,7 @@ typedef struct ring_info {
 	 * Place holders for the virtual and physical addresses of
 	 * all the Rx Blocks
 	 */
-	rx_block_info_t rx_blocks[MAX_RX_BLOCKS_PER_RING];
+	struct rx_block_info rx_blocks[MAX_RX_BLOCKS_PER_RING];
 	int block_count;
 	int pkt_cnt;

@@ -608,26 +613,24 @@ typedef struct ring_info {
 	 * Put pointer info which indictes which RxD has to be replenished
 	 * with a new buffer.
 	 */
-	rx_curr_put_info_t rx_curr_put_info;
+	struct rx_curr_put_info rx_curr_put_info;

 	/*
 	 * Get pointer info which indictes which is the last RxD that was
 	 * processed by the driver.
 	 */
-	rx_curr_get_info_t rx_curr_get_info;
+	struct rx_curr_get_info rx_curr_get_info;

-#ifndef CONFIG_S2IO_NAPI
 	/* Index to the absolute position of the put pointer of Rx ring */
 	int put_pos;
-#endif

 	/* Buffer Address store. */
-	buffAdd_t **ba;
-	nic_t *nic;
-} ring_info_t;
+	struct buffAdd **ba;
+	struct s2io_nic *nic;
+};

 /* Fifo specific structure */
-typedef struct fifo_info {
+struct fifo_info {
 	/* FIFO number */
 	int fifo_no;
@@ -635,40 +638,40 @@ typedef struct fifo_info {
 	int max_txds;

 	/* Place holder of all the TX List's Phy and Virt addresses. */
-	list_info_hold_t *list_info;
+	struct list_info_hold *list_info;

 	/*
 	 * Current offset within the tx FIFO where driver would write
 	 * new Tx frame
 	 */
-	tx_curr_put_info_t tx_curr_put_info;
+	struct tx_curr_put_info tx_curr_put_info;

 	/*
 	 * Current offset within tx FIFO from where the driver would start freeing
 	 * the buffers
 	 */
-	tx_curr_get_info_t tx_curr_get_info;
+	struct tx_curr_get_info tx_curr_get_info;

-	nic_t *nic;
-}fifo_info_t;
+	struct s2io_nic *nic;
+};

 /* Information related to the Tx and Rx FIFOs and Rings of Xena
  * is maintained in this structure.
  */
-typedef struct mac_info {
+struct mac_info {
 /* tx side stuff */
 	/* logical pointer of start of each Tx FIFO */
-	TxFIFO_element_t __iomem *tx_FIFO_start[MAX_TX_FIFOS];
+	struct TxFIFO_element __iomem *tx_FIFO_start[MAX_TX_FIFOS];

 	/* Fifo specific structure */
-	fifo_info_t fifos[MAX_TX_FIFOS];
+	struct fifo_info fifos[MAX_TX_FIFOS];

 	/* Save virtual address of TxD page with zero DMA addr(if any) */
 	void *zerodma_virt_addr;

 /* rx side stuff */
 	/* Ring specific structure */
-	ring_info_t rings[MAX_RX_RINGS];
+	struct ring_info rings[MAX_RX_RINGS];

 	u16 rmac_pause_time;
 	u16 mc_pause_threshold_q0q3;
@@ -677,14 +680,14 @@ typedef struct mac_info {
 	void *stats_mem;	/* orignal pointer to allocated mem */
 	dma_addr_t stats_mem_phy;	/* Physical address of the stat block */
 	u32 stats_mem_sz;
-	StatInfo_t *stats_info;	/* Logical address of the stat block */
-} mac_info_t;
+	struct stat_block *stats_info;	/* Logical address of the stat block */
+};

 /* structure representing the user defined MAC addresses */
-typedef struct {
+struct usr_addr {
 	char addr[ETH_ALEN];
 	int usage_cnt;
-} usr_addr_t;
+};

 /* Default Tunable parameters of the NIC. */
 #define DEFAULT_FIFO_0_LEN 4096
@@ -717,7 +720,7 @@ struct msix_info_st {
 };

 /* Data structure to represent a LRO session */
-typedef struct lro {
+struct lro {
 	struct sk_buff *parent;
 	struct sk_buff *last_frag;
 	u8 *l2h;

@@ -733,20 +736,18 @@ typedef struct lro {
 	u32 cur_tsval;
 	u32 cur_tsecr;
 	u8 saw_ts;
-}lro_t;
+};

 /* Structure representing one instance of the NIC */
 struct s2io_nic {
 	int rxd_mode;
-#ifdef CONFIG_S2IO_NAPI
 	/*
 	 * Count of packets to be processed in a given iteration, it will be indicated
 	 * by the quota field of the device structure when NAPI is enabled.
 	 */
 	int pkts_to_process;
-#endif
 	struct net_device *dev;
-	mac_info_t mac_control;
+	struct mac_info mac_control;
 	struct config_param config;
 	struct pci_dev *pdev;
 	void __iomem *bar0;
@@ -754,8 +755,8 @@ struct s2io_nic {
 #define	MAX_MAC_SUPPORTED	16
 #define MAX_SUPPORTED_MULTICASTS	MAX_MAC_SUPPORTED

-	macaddr_t def_mac_addr[MAX_MAC_SUPPORTED];
-	macaddr_t pre_mac_addr[MAX_MAC_SUPPORTED];
+	struct mac_addr def_mac_addr[MAX_MAC_SUPPORTED];
+	struct mac_addr pre_mac_addr[MAX_MAC_SUPPORTED];

 	struct net_device_stats stats;
 	int high_dma_flag;

@@ -775,9 +776,7 @@ struct s2io_nic {
 	atomic_t rx_bufs_left[MAX_RX_RINGS];

 	spinlock_t tx_lock;
-#ifndef CONFIG_S2IO_NAPI
 	spinlock_t put_lock;
-#endif

 #define PROMISC	1
 #define ALL_MULTI	2

@@ -785,7 +784,7 @@ struct s2io_nic {
 #define MAX_ADDRS_SUPPORTED 64
 	u16 usr_addr_count;
 	u16 mc_addr_count;
-	usr_addr_t usr_addrs[MAX_ADDRS_SUPPORTED];
+	struct usr_addr usr_addrs[MAX_ADDRS_SUPPORTED];

 	u16 m_cast_flg;
 	u16 all_multi_pos;
@@ -841,7 +840,7 @@ struct s2io_nic {
 	u8 device_type;

 #define MAX_LRO_SESSIONS	32
-	lro_t lro0_n[MAX_LRO_SESSIONS];
+	struct lro lro0_n[MAX_LRO_SESSIONS];
 	unsigned long clubbed_frms_cnt;
 	unsigned long sending_both;
 	u8 lro;

@@ -855,8 +854,9 @@ struct s2io_nic {
 	spinlock_t rx_lock;
 	atomic_t isr_cnt;
 	u64 *ufo_in_band_v;
-#define VPD_PRODUCT_NAME_LEN 50
-	u8  product_name[VPD_PRODUCT_NAME_LEN];
+#define VPD_STRING_LEN 80
+	u8  product_name[VPD_STRING_LEN];
+	u8  serial_num[VPD_STRING_LEN];
 };

 #define RESET_ERROR 1;
@@ -975,43 +975,50 @@ static void __devexit s2io_rem_nic(struct pci_dev *pdev);
 static int init_shared_mem(struct s2io_nic *sp);
 static void free_shared_mem(struct s2io_nic *sp);
 static int init_nic(struct s2io_nic *nic);
-static void rx_intr_handler(ring_info_t *ring_data);
-static void tx_intr_handler(fifo_info_t *fifo_data);
+static void rx_intr_handler(struct ring_info *ring_data);
+static void tx_intr_handler(struct fifo_info *fifo_data);
 static void alarm_intr_handler(struct s2io_nic *sp);

 static int s2io_starter(void);
-static void s2io_closer(void);
 static void s2io_tx_watchdog(struct net_device *dev);
 static void s2io_tasklet(unsigned long dev_addr);
 static void s2io_set_multicast(struct net_device *dev);
-static int rx_osm_handler(ring_info_t *ring_data, RxD_t * rxdp);
-static void s2io_link(nic_t * sp, int link);
-#if defined(CONFIG_S2IO_NAPI)
+static int rx_osm_handler(struct ring_info *ring_data, struct RxD_t * rxdp);
+static void s2io_link(struct s2io_nic * sp, int link);
+static void s2io_reset(struct s2io_nic * sp);
 static int s2io_poll(struct net_device *dev, int *budget);
-#endif
-static void s2io_init_pci(nic_t * sp);
+static void s2io_init_pci(struct s2io_nic * sp);
 static int s2io_set_mac_addr(struct net_device *dev, u8 * addr);
 static void s2io_alarm_handle(unsigned long data);
-static int s2io_enable_msi(nic_t *nic);
+static int s2io_enable_msi(struct s2io_nic *nic);
 static irqreturn_t s2io_msi_handle(int irq, void *dev_id);
 static irqreturn_t
 s2io_msix_ring_handle(int irq, void *dev_id);
 static irqreturn_t
 s2io_msix_fifo_handle(int irq, void *dev_id);
 static irqreturn_t s2io_isr(int irq, void *dev_id);
-static int verify_xena_quiescence(nic_t *sp, u64 val64, int flag);
+static int verify_xena_quiescence(struct s2io_nic *sp);
 static const struct ethtool_ops netdev_ethtool_ops;
 static void s2io_set_link(struct work_struct *work);
-static int s2io_set_swapper(nic_t * sp);
-static void s2io_card_down(nic_t *nic);
-static int s2io_card_up(nic_t *nic);
+static int s2io_set_swapper(struct s2io_nic * sp);
+static void s2io_card_down(struct s2io_nic *nic);
+static int s2io_card_up(struct s2io_nic *nic);
 static int get_xena_rev_id(struct pci_dev *pdev);
-static void restore_xmsi_data(nic_t *nic);
+static int wait_for_cmd_complete(void *addr, u64 busy_bit);
+static int s2io_add_isr(struct s2io_nic * sp);
+static void s2io_rem_isr(struct s2io_nic * sp);
+static void restore_xmsi_data(struct s2io_nic *nic);

-static int s2io_club_tcp_session(u8 *buffer, u8 **tcp, u32 *tcp_len, lro_t **lro, RxD_t *rxdp, nic_t *sp);
-static void clear_lro_session(lro_t *lro);
+static int
+s2io_club_tcp_session(u8 *buffer, u8 **tcp, u32 *tcp_len, struct lro **lro,
+		      struct RxD_t *rxdp, struct s2io_nic *sp);
+static void clear_lro_session(struct lro *lro);
 static void queue_rx_frame(struct sk_buff *skb);
static void update_L3L4_header(nic_t *sp, lro_t *lro); static void update_L3L4_header(struct s2io_nic *sp, struct lro *lro);
static void lro_append_pkt(nic_t *sp, lro_t *lro, struct sk_buff *skb, u32 tcp_len); static void lro_append_pkt(struct s2io_nic *sp, struct lro *lro,
struct sk_buff *skb, u32 tcp_len);
#define s2io_tcp_mss(skb) skb_shinfo(skb)->gso_size #define s2io_tcp_mss(skb) skb_shinfo(skb)->gso_size
#define s2io_udp_mss(skb) skb_shinfo(skb)->gso_size #define s2io_udp_mss(skb) skb_shinfo(skb)->gso_size

drivers/net/sc92031.c (new file, 1620 lines)

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,170 +0,0 @@
#ifndef _SK_MCA_INCLUDE_
#define _SK_MCA_INCLUDE_
#ifdef _SK_MCA_DRIVER_
/* Adapter ID's */
#define SKNET_MCA_ID 0x6afd
#define SKNET_JUNIOR_MCA_ID 0x6be9
/* media enumeration - defined in a way that it fits onto the MC2+'s
POS registers... */
typedef enum { Media_10Base2, Media_10BaseT,
Media_10Base5, Media_Unknown, Media_Count
} skmca_medium;
/* private structure */
typedef struct {
unsigned int slot; /* MCA-Slot-# */
void __iomem *base;
void __iomem *macbase; /* base address of MAC address PROM */
void __iomem *ioregaddr;/* address of I/O-register (Lo) */
void __iomem *ctrladdr; /* address of control/stat register */
void __iomem *cmdaddr; /* address of I/O-command register */
int nextrx; /* index of next RX descriptor to
be read */
int nexttxput; /* index of next free TX descriptor */
int nexttxdone; /* index of next TX descriptor to
be finished */
int txbusy; /* # of busy TX descriptors */
struct net_device_stats stat; /* packet statistics */
int realirq; /* memorizes actual IRQ, even when
currently not allocated */
skmca_medium medium; /* physical connector */
spinlock_t lock;
} skmca_priv;
/* card registers: control/status register bits */
#define CTRL_ADR_DATA 0 /* Bit 0 = 0 ->access data register */
#define CTRL_ADR_RAP 1 /* Bit 0 = 1 ->access RAP register */
#define CTRL_RW_WRITE 0 /* Bit 1 = 0 ->write register */
#define CTRL_RW_READ 2 /* Bit 1 = 1 ->read register */
#define CTRL_RESET_ON 0 /* Bit 3 = 0 ->reset board */
#define CTRL_RESET_OFF 8 /* Bit 3 = 1 ->no reset of board */
#define STAT_ADR_DATA 0 /* Bit 0 of ctrl register read back */
#define STAT_ADR_RAP 1
#define STAT_RW_WRITE 0 /* Bit 1 of ctrl register read back */
#define STAT_RW_READ 2
#define STAT_RESET_ON 0 /* Bit 3 of ctrl register read back */
#define STAT_RESET_OFF 8
#define STAT_IRQ_ACT 0 /* interrupt pending */
#define STAT_IRQ_NOACT 16 /* no interrupt pending */
#define STAT_IO_NOBUSY 0 /* no transfer busy */
#define STAT_IO_BUSY 32 /* transfer busy */
/* I/O command register bits */
#define IOCMD_GO 128 /* Bit 7 = 1 -> start register xfer */
/* LANCE registers */
#define LANCE_CSR0 0 /* Status/Control */
#define CSR0_ERR 0x8000 /* general error flag */
#define CSR0_BABL 0x4000 /* transmitter timeout */
#define CSR0_CERR 0x2000 /* collision error */
#define CSR0_MISS 0x1000 /* lost Rx block */
#define CSR0_MERR 0x0800 /* memory access error */
#define CSR0_RINT 0x0400 /* receiver interrupt */
#define CSR0_TINT 0x0200 /* transmitter interrupt */
#define CSR0_IDON 0x0100 /* initialization done */
#define CSR0_INTR 0x0080 /* general interrupt flag */
#define CSR0_INEA 0x0040 /* interrupt enable */
#define CSR0_RXON 0x0020 /* receiver enabled */
#define CSR0_TXON 0x0010 /* transmitter enabled */
#define CSR0_TDMD 0x0008 /* force transmission now */
#define CSR0_STOP 0x0004 /* stop LANCE */
#define CSR0_STRT 0x0002 /* start LANCE */
#define CSR0_INIT 0x0001 /* read initialization block */
#define LANCE_CSR1 1 /* addr bit 0..15 of initialization */
#define LANCE_CSR2 2 /* 16..23 block */
#define LANCE_CSR3 3 /* Bus control */
#define CSR3_BCON_HOLD 0 /* Bit 0 = 0 -> BM1,BM0,HOLD */
#define CSR3_BCON_BUSRQ 1 /* Bit 0 = 1 -> BUSAK0,BYTE,BUSRQ */
#define CSR3_ALE_HIGH 0 /* Bit 1 = 0 -> ALE asserted high */
#define CSR3_ALE_LOW 2 /* Bit 1 = 1 -> ALE asserted low */
#define CSR3_BSWAP_OFF 0 /* Bit 2 = 0 -> no byte swap */
#define CSR3_BSWAP_ON 4 /* Bit 2 = 1 -> byte swap */
/* LANCE structures */
typedef struct { /* LANCE initialization block */
u16 Mode; /* mode flags */
u8 PAdr[6]; /* MAC address */
u8 LAdrF[8]; /* Multicast filter */
u32 RdrP; /* Receive descriptor */
u32 TdrP; /* Transmit descriptor */
} LANCE_InitBlock;
/* Mode flags init block */
#define LANCE_INIT_PROM 0x8000 /* enable promiscuous mode */
#define LANCE_INIT_INTL 0x0040 /* internal loopback */
#define LANCE_INIT_DRTY 0x0020 /* disable retry */
#define LANCE_INIT_COLL 0x0010 /* force collision */
#define LANCE_INIT_DTCR 0x0008 /* disable transmit CRC */
#define LANCE_INIT_LOOP 0x0004 /* loopback */
#define LANCE_INIT_DTX 0x0002 /* disable transmitter */
#define LANCE_INIT_DRX 0x0001 /* disable receiver */
typedef struct { /* LANCE Tx descriptor */
u16 LowAddr; /* bit 0..15 of address */
u16 Flags; /* bit 16..23 of address + Flags */
u16 Len; /* 2s complement of packet length */
u16 Status; /* Result of transmission */
} LANCE_TxDescr;
#define TXDSCR_FLAGS_OWN 0x8000 /* LANCE owns descriptor */
#define TXDSCR_FLAGS_ERR 0x4000 /* summary error flag */
#define TXDSCR_FLAGS_MORE 0x1000 /* more than one retry needed? */
#define TXDSCR_FLAGS_ONE 0x0800 /* one retry? */
#define TXDSCR_FLAGS_DEF 0x0400 /* transmission deferred? */
#define TXDSCR_FLAGS_STP 0x0200 /* first packet in chain? */
#define TXDSCR_FLAGS_ENP 0x0100 /* last packet in chain? */
#define TXDSCR_STATUS_BUFF 0x8000 /* buffer error? */
#define TXDSCR_STATUS_UFLO 0x4000 /* silo underflow during transmit? */
#define TXDSCR_STATUS_LCOL 0x1000 /* late collision? */
#define TXDSCR_STATUS_LCAR 0x0800 /* loss of carrier? */
#define TXDSCR_STATUS_RTRY 0x0400 /* retry error? */
typedef struct { /* LANCE Rx descriptor */
u16 LowAddr; /* bit 0..15 of address */
u16 Flags; /* bit 16..23 of address + Flags */
u16 MaxLen; /* 2s complement of buffer length */
u16 Len; /* packet length */
} LANCE_RxDescr;
#define RXDSCR_FLAGS_OWN 0x8000 /* LANCE owns descriptor */
#define RXDSCR_FLAGS_ERR 0x4000 /* summary error flag */
#define RXDSCR_FLAGS_FRAM 0x2000 /* framing error flag */
#define RXDSCR_FLAGS_OFLO 0x1000 /* FIFO overflow? */
#define RXDSCR_FLAGS_CRC 0x0800 /* CRC error? */
#define RXDSCR_FLAGS_BUFF 0x0400 /* buffer error? */
#define RXDSCR_FLAGS_STP 0x0200 /* first packet in chain? */
#define RXDCSR_FLAGS_ENP 0x0100 /* last packet in chain? */
/* RAM layout */
#define TXCOUNT 4 /* length of TX descriptor queue */
#define LTXCOUNT 2 /* log2 of it */
#define RXCOUNT 4 /* length of RX descriptor queue */
#define LRXCOUNT 2 /* log2 of it */
#define RAM_INITBASE 0 /* LANCE init block */
#define RAM_TXBASE 24 /* Start of TX descriptor queue */
#define RAM_RXBASE \
(RAM_TXBASE + (TXCOUNT * 8)) /* Start of RX descriptor queue */
#define RAM_DATABASE \
(RAM_RXBASE + (RXCOUNT * 8)) /* Start of data area for frames */
#define RAM_BUFSIZE 1580 /* max. frame size - should never be
reached */
#endif /* _SK_MCA_DRIVER_ */
#endif /* _SK_MCA_INCLUDE_ */


@@ -1,83 +0,0 @@
/******************************************************************************
*
* (C)Copyright 1998,1999 SysKonnect,
* a business unit of Schneider & Koch & Co. Datensysteme GmbH.
*
* See the file "skfddi.c" for further information.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* The information in this file is provided "AS IS" without warranty.
*
******************************************************************************/
#ifndef lint
static const char xID_sccs[] = "@(#)can.c 1.5 97/04/07 (C) SK " ;
#endif
/*
* canonical bit order
*/
const u_char canonical[256] = {
0x00,0x80,0x40,0xc0,0x20,0xa0,0x60,0xe0,
0x10,0x90,0x50,0xd0,0x30,0xb0,0x70,0xf0,
0x08,0x88,0x48,0xc8,0x28,0xa8,0x68,0xe8,
0x18,0x98,0x58,0xd8,0x38,0xb8,0x78,0xf8,
0x04,0x84,0x44,0xc4,0x24,0xa4,0x64,0xe4,
0x14,0x94,0x54,0xd4,0x34,0xb4,0x74,0xf4,
0x0c,0x8c,0x4c,0xcc,0x2c,0xac,0x6c,0xec,
0x1c,0x9c,0x5c,0xdc,0x3c,0xbc,0x7c,0xfc,
0x02,0x82,0x42,0xc2,0x22,0xa2,0x62,0xe2,
0x12,0x92,0x52,0xd2,0x32,0xb2,0x72,0xf2,
0x0a,0x8a,0x4a,0xca,0x2a,0xaa,0x6a,0xea,
0x1a,0x9a,0x5a,0xda,0x3a,0xba,0x7a,0xfa,
0x06,0x86,0x46,0xc6,0x26,0xa6,0x66,0xe6,
0x16,0x96,0x56,0xd6,0x36,0xb6,0x76,0xf6,
0x0e,0x8e,0x4e,0xce,0x2e,0xae,0x6e,0xee,
0x1e,0x9e,0x5e,0xde,0x3e,0xbe,0x7e,0xfe,
0x01,0x81,0x41,0xc1,0x21,0xa1,0x61,0xe1,
0x11,0x91,0x51,0xd1,0x31,0xb1,0x71,0xf1,
0x09,0x89,0x49,0xc9,0x29,0xa9,0x69,0xe9,
0x19,0x99,0x59,0xd9,0x39,0xb9,0x79,0xf9,
0x05,0x85,0x45,0xc5,0x25,0xa5,0x65,0xe5,
0x15,0x95,0x55,0xd5,0x35,0xb5,0x75,0xf5,
0x0d,0x8d,0x4d,0xcd,0x2d,0xad,0x6d,0xed,
0x1d,0x9d,0x5d,0xdd,0x3d,0xbd,0x7d,0xfd,
0x03,0x83,0x43,0xc3,0x23,0xa3,0x63,0xe3,
0x13,0x93,0x53,0xd3,0x33,0xb3,0x73,0xf3,
0x0b,0x8b,0x4b,0xcb,0x2b,0xab,0x6b,0xeb,
0x1b,0x9b,0x5b,0xdb,0x3b,0xbb,0x7b,0xfb,
0x07,0x87,0x47,0xc7,0x27,0xa7,0x67,0xe7,
0x17,0x97,0x57,0xd7,0x37,0xb7,0x77,0xf7,
0x0f,0x8f,0x4f,0xcf,0x2f,0xaf,0x6f,0xef,
0x1f,0x9f,0x5f,0xdf,0x3f,0xbf,0x7f,0xff
} ;
#ifdef MAKE_TABLE
int byte_reverse(x)
int x ;
{
int y = 0 ;
if (x & 0x01)
y |= 0x80 ;
if (x & 0x02)
y |= 0x40 ;
if (x & 0x04)
y |= 0x20 ;
if (x & 0x08)
y |= 0x10 ;
if (x & 0x10)
y |= 0x08 ;
if (x & 0x20)
y |= 0x04 ;
if (x & 0x40)
y |= 0x02 ;
if (x & 0x80)
y |= 0x01 ;
return(y) ;
}
#endif


@@ -23,6 +23,7 @@
#include "h/smc.h" #include "h/smc.h"
#include "h/supern_2.h" #include "h/supern_2.h"
#include "h/skfbiinc.h" #include "h/skfbiinc.h"
#include <linux/bitrev.h>
#ifndef lint #ifndef lint
static const char ID_sccs[] = "@(#)drvfbi.c 1.63 99/02/11 (C) SK " ; static const char ID_sccs[] = "@(#)drvfbi.c 1.63 99/02/11 (C) SK " ;
@@ -445,16 +446,14 @@ void read_address(struct s_smc *smc, u_char *mac_addr)
char PmdType ; char PmdType ;
int i ; int i ;
extern const u_char canonical[256] ;
#if (defined(ISA) || defined(MCA)) #if (defined(ISA) || defined(MCA))
for (i = 0; i < 4 ;i++) { /* read mac address from board */ for (i = 0; i < 4 ;i++) { /* read mac address from board */
smc->hw.fddi_phys_addr.a[i] = smc->hw.fddi_phys_addr.a[i] =
canonical[(inpw(PR_A(i+SA_MAC))&0xff)] ; bitrev8(inpw(PR_A(i+SA_MAC)));
} }
for (i = 4; i < 6; i++) { for (i = 4; i < 6; i++) {
smc->hw.fddi_phys_addr.a[i] = smc->hw.fddi_phys_addr.a[i] =
canonical[(inpw(PR_A(i+SA_MAC+PRA_OFF))&0xff)] ; bitrev8(inpw(PR_A(i+SA_MAC+PRA_OFF)));
} }
#endif #endif
#ifdef EISA #ifdef EISA
@@ -464,17 +463,17 @@ void read_address(struct s_smc *smc, u_char *mac_addr)
*/ */
for (i = 0; i < 4 ;i++) { /* read mac address from board */ for (i = 0; i < 4 ;i++) { /* read mac address from board */
smc->hw.fddi_phys_addr.a[i] = smc->hw.fddi_phys_addr.a[i] =
canonical[inp(PR_A(i+SA_MAC))] ; bitrev8(inp(PR_A(i+SA_MAC)));
} }
for (i = 4; i < 6; i++) { for (i = 4; i < 6; i++) {
smc->hw.fddi_phys_addr.a[i] = smc->hw.fddi_phys_addr.a[i] =
canonical[inp(PR_A(i+SA_MAC+PRA_OFF))] ; bitrev8(inp(PR_A(i+SA_MAC+PRA_OFF)));
} }
#endif #endif
#ifdef PCI #ifdef PCI
for (i = 0; i < 6; i++) { /* read mac address from board */ for (i = 0; i < 6; i++) { /* read mac address from board */
smc->hw.fddi_phys_addr.a[i] = smc->hw.fddi_phys_addr.a[i] =
canonical[inp(ADDR(B2_MAC_0+i))] ; bitrev8(inp(ADDR(B2_MAC_0+i)));
} }
#endif #endif
#ifndef PCI #ifndef PCI
@@ -493,7 +492,7 @@ void read_address(struct s_smc *smc, u_char *mac_addr)
if (mac_addr) { if (mac_addr) {
for (i = 0; i < 6 ;i++) { for (i = 0; i < 6 ;i++) {
smc->hw.fddi_canon_addr.a[i] = mac_addr[i] ; smc->hw.fddi_canon_addr.a[i] = mac_addr[i] ;
smc->hw.fddi_home_addr.a[i] = canonical[mac_addr[i]] ; smc->hw.fddi_home_addr.a[i] = bitrev8(mac_addr[i]);
} }
return ; return ;
} }
@@ -501,7 +500,7 @@ void read_address(struct s_smc *smc, u_char *mac_addr)
for (i = 0; i < 6 ;i++) { for (i = 0; i < 6 ;i++) {
smc->hw.fddi_canon_addr.a[i] = smc->hw.fddi_canon_addr.a[i] =
canonical[smc->hw.fddi_phys_addr.a[i]] ; bitrev8(smc->hw.fddi_phys_addr.a[i]);
} }
} }
@@ -1269,11 +1268,8 @@ void driver_get_bia(struct s_smc *smc, struct fddi_addr *bia_addr)
{ {
int i ; int i ;
extern const u_char canonical[256] ; for (i = 0 ; i < 6 ; i++)
bia_addr->a[i] = bitrev8(smc->hw.fddi_phys_addr.a[i]);
for (i = 0 ; i < 6 ; i++) {
bia_addr->a[i] = canonical[smc->hw.fddi_phys_addr.a[i]] ;
}
} }
void smt_start_watchdog(struct s_smc *smc) void smt_start_watchdog(struct s_smc *smc)


@@ -22,7 +22,7 @@
#include "h/fddi.h" #include "h/fddi.h"
#include "h/smc.h" #include "h/smc.h"
#include "h/supern_2.h" #include "h/supern_2.h"
#include "can.c" #include <linux/bitrev.h>
#ifndef lint #ifndef lint
static const char ID_sccs[] = "@(#)fplustm.c 1.32 99/02/23 (C) SK " ; static const char ID_sccs[] = "@(#)fplustm.c 1.32 99/02/23 (C) SK " ;
@@ -1073,7 +1073,7 @@ static struct s_fpmc* mac_get_mc_table(struct s_smc *smc,
if (can) { if (can) {
p = own->a ; p = own->a ;
for (i = 0 ; i < 6 ; i++, p++) for (i = 0 ; i < 6 ; i++, p++)
*p = canonical[*p] ; *p = bitrev8(*p);
} }
slot = NULL; slot = NULL;
for (i = 0, tb = smc->hw.fp.mc.table ; i < FPMAX_MULTICAST ; i++, tb++){ for (i = 0, tb = smc->hw.fp.mc.table ; i < FPMAX_MULTICAST ; i++, tb++){


@@ -18,6 +18,7 @@
#include "h/fddi.h" #include "h/fddi.h"
#include "h/smc.h" #include "h/smc.h"
#include "h/smt_p.h" #include "h/smt_p.h"
#include <linux/bitrev.h>
#define KERNEL #define KERNEL
#include "h/smtstate.h" #include "h/smtstate.h"
@@ -26,8 +27,6 @@
static const char ID_sccs[] = "@(#)smt.c 2.43 98/11/23 (C) SK " ; static const char ID_sccs[] = "@(#)smt.c 2.43 98/11/23 (C) SK " ;
#endif #endif
extern const u_char canonical[256] ;
/* /*
* FC in SMbuf * FC in SMbuf
*/ */
@@ -180,7 +179,7 @@ void smt_agent_init(struct s_smc *smc)
driver_get_bia(smc,&smc->mib.fddiSMTStationId.sid_node) ; driver_get_bia(smc,&smc->mib.fddiSMTStationId.sid_node) ;
for (i = 0 ; i < 6 ; i ++) { for (i = 0 ; i < 6 ; i ++) {
smc->mib.fddiSMTStationId.sid_node.a[i] = smc->mib.fddiSMTStationId.sid_node.a[i] =
canonical[smc->mib.fddiSMTStationId.sid_node.a[i]] ; bitrev8(smc->mib.fddiSMTStationId.sid_node.a[i]);
} }
smc->mib.fddiSMTManufacturerData[0] = smc->mib.fddiSMTManufacturerData[0] =
smc->mib.fddiSMTStationId.sid_node.a[0] ; smc->mib.fddiSMTStationId.sid_node.a[0] ;
@@ -2049,9 +2048,8 @@ static void hwm_conv_can(struct s_smc *smc, char *data, int len)
SK_UNUSED(smc) ; SK_UNUSED(smc) ;
for (i = len; i ; i--, data++) { for (i = len; i ; i--, data++)
*data = canonical[*(u_char *)data] ; *data = bitrev8(*data);
}
} }
#endif #endif


@@ -42,7 +42,7 @@
#include "skge.h" #include "skge.h"
#define DRV_NAME "skge" #define DRV_NAME "skge"
#define DRV_VERSION "1.9" #define DRV_VERSION "1.10"
#define PFX DRV_NAME " " #define PFX DRV_NAME " "
#define DEFAULT_TX_RING_SIZE 128 #define DEFAULT_TX_RING_SIZE 128
@@ -132,18 +132,93 @@ static void skge_get_regs(struct net_device *dev, struct ethtool_regs *regs,
} }
/* Wake on Lan only supported on Yukon chips with rev 1 or above */ /* Wake on Lan only supported on Yukon chips with rev 1 or above */
static int wol_supported(const struct skge_hw *hw) static u32 wol_supported(const struct skge_hw *hw)
{ {
return !((hw->chip_id == CHIP_ID_GENESIS || if (hw->chip_id == CHIP_ID_YUKON && hw->chip_rev != 0)
(hw->chip_id == CHIP_ID_YUKON && hw->chip_rev == 0))); return WAKE_MAGIC | WAKE_PHY;
else
return 0;
}
static u32 pci_wake_enabled(struct pci_dev *dev)
{
int pm = pci_find_capability(dev, PCI_CAP_ID_PM);
u16 value;
/* If device doesn't support PM Capabilities, but request is to disable
* wake events, it's a nop; otherwise fail */
if (!pm)
return 0;
pci_read_config_word(dev, pm + PCI_PM_PMC, &value);
value &= PCI_PM_CAP_PME_MASK;
value >>= ffs(PCI_PM_CAP_PME_MASK) - 1; /* First bit of mask */
return value != 0;
}
static void skge_wol_init(struct skge_port *skge)
{
struct skge_hw *hw = skge->hw;
int port = skge->port;
enum pause_control save_mode;
u32 ctrl;
/* Bring hardware out of reset */
skge_write16(hw, B0_CTST, CS_RST_CLR);
skge_write16(hw, SK_REG(port, GMAC_LINK_CTRL), GMLC_RST_CLR);
skge_write8(hw, SK_REG(port, GPHY_CTRL), GPC_RST_CLR);
skge_write8(hw, SK_REG(port, GMAC_CTRL), GMC_RST_CLR);
/* Force to 10/100 skge_reset will re-enable on resume */
save_mode = skge->flow_control;
skge->flow_control = FLOW_MODE_SYMMETRIC;
ctrl = skge->advertising;
skge->advertising &= ~(ADVERTISED_1000baseT_Half|ADVERTISED_1000baseT_Full);
skge_phy_reset(skge);
skge->flow_control = save_mode;
skge->advertising = ctrl;
/* Set GMAC to no flow control and auto update for speed/duplex */
gma_write16(hw, port, GM_GP_CTRL,
GM_GPCR_FC_TX_DIS|GM_GPCR_TX_ENA|GM_GPCR_RX_ENA|
GM_GPCR_DUP_FULL|GM_GPCR_FC_RX_DIS|GM_GPCR_AU_FCT_DIS);
/* Set WOL address */
memcpy_toio(hw->regs + WOL_REGS(port, WOL_MAC_ADDR),
skge->netdev->dev_addr, ETH_ALEN);
/* Turn on appropriate WOL control bits */
skge_write16(hw, WOL_REGS(port, WOL_CTRL_STAT), WOL_CTL_CLEAR_RESULT);
ctrl = 0;
if (skge->wol & WAKE_PHY)
ctrl |= WOL_CTL_ENA_PME_ON_LINK_CHG|WOL_CTL_ENA_LINK_CHG_UNIT;
else
ctrl |= WOL_CTL_DIS_PME_ON_LINK_CHG|WOL_CTL_DIS_LINK_CHG_UNIT;
if (skge->wol & WAKE_MAGIC)
ctrl |= WOL_CTL_ENA_PME_ON_MAGIC_PKT|WOL_CTL_ENA_MAGIC_PKT_UNIT;
else
ctrl |= WOL_CTL_DIS_PME_ON_MAGIC_PKT|WOL_CTL_DIS_MAGIC_PKT_UNIT;
ctrl |= WOL_CTL_DIS_PME_ON_PATTERN|WOL_CTL_DIS_PATTERN_UNIT;
skge_write16(hw, WOL_REGS(port, WOL_CTRL_STAT), ctrl);
/* block receiver */
skge_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_SET);
} }
static void skge_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol) static void skge_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{ {
struct skge_port *skge = netdev_priv(dev); struct skge_port *skge = netdev_priv(dev);
wol->supported = wol_supported(skge->hw) ? WAKE_MAGIC : 0; wol->supported = wol_supported(skge->hw);
wol->wolopts = skge->wol ? WAKE_MAGIC : 0; wol->wolopts = skge->wol;
} }
static int skge_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol) static int skge_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
@@ -151,23 +226,12 @@ static int skge_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
struct skge_port *skge = netdev_priv(dev); struct skge_port *skge = netdev_priv(dev);
struct skge_hw *hw = skge->hw; struct skge_hw *hw = skge->hw;
if (wol->wolopts != WAKE_MAGIC && wol->wolopts != 0) if (wol->wolopts & ~wol_supported(hw))
return -EOPNOTSUPP; return -EOPNOTSUPP;
if (wol->wolopts == WAKE_MAGIC && !wol_supported(hw)) skge->wol = wol->wolopts;
return -EOPNOTSUPP; if (!netif_running(dev))
skge_wol_init(skge);
skge->wol = wol->wolopts == WAKE_MAGIC;
if (skge->wol) {
memcpy_toio(hw->regs + WOL_MAC_ADDR, dev->dev_addr, ETH_ALEN);
skge_write16(hw, WOL_CTRL_STAT,
WOL_CTL_ENA_PME_ON_MAGIC_PKT |
WOL_CTL_ENA_MAGIC_PKT_UNIT);
} else
skge_write16(hw, WOL_CTRL_STAT, WOL_CTL_DEFAULT);
return 0; return 0;
} }
@@ -2373,6 +2437,9 @@ static int skge_up(struct net_device *dev)
size_t rx_size, tx_size; size_t rx_size, tx_size;
int err; int err;
if (!is_valid_ether_addr(dev->dev_addr))
return -EINVAL;
if (netif_msg_ifup(skge)) if (netif_msg_ifup(skge))
printk(KERN_INFO PFX "%s: enabling interface\n", dev->name); printk(KERN_INFO PFX "%s: enabling interface\n", dev->name);
@@ -2392,7 +2459,7 @@ static int skge_up(struct net_device *dev)
BUG_ON(skge->dma & 7); BUG_ON(skge->dma & 7);
if ((u64)skge->dma >> 32 != ((u64) skge->dma + skge->mem_size) >> 32) { if ((u64)skge->dma >> 32 != ((u64) skge->dma + skge->mem_size) >> 32) {
printk(KERN_ERR PFX "pci_alloc_consistent region crosses 4G boundary\n"); dev_err(&hw->pdev->dev, "pci_alloc_consistent region crosses 4G boundary\n");
err = -EINVAL; err = -EINVAL;
goto free_pci_mem; goto free_pci_mem;
} }
@@ -3001,6 +3068,7 @@ static void skge_mac_intr(struct skge_hw *hw, int port)
/* Handle device specific framing and timeout interrupts */ /* Handle device specific framing and timeout interrupts */
static void skge_error_irq(struct skge_hw *hw) static void skge_error_irq(struct skge_hw *hw)
{ {
struct pci_dev *pdev = hw->pdev;
u32 hwstatus = skge_read32(hw, B0_HWE_ISRC); u32 hwstatus = skge_read32(hw, B0_HWE_ISRC);
if (hw->chip_id == CHIP_ID_GENESIS) { if (hw->chip_id == CHIP_ID_GENESIS) {
@@ -3016,12 +3084,12 @@ static void skge_error_irq(struct skge_hw *hw)
} }
if (hwstatus & IS_RAM_RD_PAR) { if (hwstatus & IS_RAM_RD_PAR) {
printk(KERN_ERR PFX "Ram read data parity error\n"); dev_err(&pdev->dev, "Ram read data parity error\n");
skge_write16(hw, B3_RI_CTRL, RI_CLR_RD_PERR); skge_write16(hw, B3_RI_CTRL, RI_CLR_RD_PERR);
} }
if (hwstatus & IS_RAM_WR_PAR) { if (hwstatus & IS_RAM_WR_PAR) {
printk(KERN_ERR PFX "Ram write data parity error\n"); dev_err(&pdev->dev, "Ram write data parity error\n");
skge_write16(hw, B3_RI_CTRL, RI_CLR_WR_PERR); skge_write16(hw, B3_RI_CTRL, RI_CLR_WR_PERR);
} }
@@ -3032,38 +3100,38 @@
skge_mac_parity(hw, 1); skge_mac_parity(hw, 1);
if (hwstatus & IS_R1_PAR_ERR) { if (hwstatus & IS_R1_PAR_ERR) {
printk(KERN_ERR PFX "%s: receive queue parity error\n", dev_err(&pdev->dev, "%s: receive queue parity error\n",
hw->dev[0]->name); hw->dev[0]->name);
skge_write32(hw, B0_R1_CSR, CSR_IRQ_CL_P); skge_write32(hw, B0_R1_CSR, CSR_IRQ_CL_P);
} }
if (hwstatus & IS_R2_PAR_ERR) { if (hwstatus & IS_R2_PAR_ERR) {
printk(KERN_ERR PFX "%s: receive queue parity error\n", dev_err(&pdev->dev, "%s: receive queue parity error\n",
hw->dev[1]->name); hw->dev[1]->name);
skge_write32(hw, B0_R2_CSR, CSR_IRQ_CL_P); skge_write32(hw, B0_R2_CSR, CSR_IRQ_CL_P);
} }
if (hwstatus & (IS_IRQ_MST_ERR|IS_IRQ_STAT)) { if (hwstatus & (IS_IRQ_MST_ERR|IS_IRQ_STAT)) {
u16 pci_status, pci_cmd; u16 pci_status, pci_cmd;
pci_read_config_word(hw->pdev, PCI_COMMAND, &pci_cmd); pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
pci_read_config_word(hw->pdev, PCI_STATUS, &pci_status); pci_read_config_word(pdev, PCI_STATUS, &pci_status);
printk(KERN_ERR PFX "%s: PCI error cmd=%#x status=%#x\n", dev_err(&pdev->dev, "PCI error cmd=%#x status=%#x\n",
pci_name(hw->pdev), pci_cmd, pci_status); pci_cmd, pci_status);
/* Write the error bits back to clear them. */ /* Write the error bits back to clear them. */
pci_status &= PCI_STATUS_ERROR_BITS; pci_status &= PCI_STATUS_ERROR_BITS;
skge_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON); skge_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
pci_write_config_word(hw->pdev, PCI_COMMAND, pci_write_config_word(pdev, PCI_COMMAND,
pci_cmd | PCI_COMMAND_SERR | PCI_COMMAND_PARITY); pci_cmd | PCI_COMMAND_SERR | PCI_COMMAND_PARITY);
pci_write_config_word(hw->pdev, PCI_STATUS, pci_status); pci_write_config_word(pdev, PCI_STATUS, pci_status);
skge_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF); skge_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
/* if error still set then just ignore it */ /* if error still set then just ignore it */
hwstatus = skge_read32(hw, B0_HWE_ISRC); hwstatus = skge_read32(hw, B0_HWE_ISRC);
if (hwstatus & IS_IRQ_STAT) { if (hwstatus & IS_IRQ_STAT) {
printk(KERN_INFO PFX "unable to clear error (so ignoring them)\n"); dev_warn(&hw->pdev->dev, "unable to clear error (so ignoring them)\n");
hw->intr_mask &= ~IS_HW_ERR; hw->intr_mask &= ~IS_HW_ERR;
} }
} }
@@ -3277,8 +3345,8 @@ static int skge_reset(struct skge_hw *hw)
hw->phy_addr = PHY_ADDR_BCOM; hw->phy_addr = PHY_ADDR_BCOM;
break; break;
default: default:
printk(KERN_ERR PFX "%s: unsupported phy type 0x%x\n", dev_err(&hw->pdev->dev, "unsupported phy type 0x%x\n",
pci_name(hw->pdev), hw->phy_type); hw->phy_type);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
break; break;
@@ -3293,8 +3361,8 @@ static int skge_reset(struct skge_hw *hw)
break; break;
default: default:
printk(KERN_ERR PFX "%s: unsupported chip type 0x%x\n", dev_err(&hw->pdev->dev, "unsupported chip type 0x%x\n",
pci_name(hw->pdev), hw->chip_id); hw->chip_id);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
@@ -3334,7 +3402,7 @@
/* avoid boards with stuck Hardware error bits */ /* avoid boards with stuck Hardware error bits */
if ((skge_read32(hw, B0_ISRC) & IS_HW_ERR) && if ((skge_read32(hw, B0_ISRC) & IS_HW_ERR) &&
(skge_read32(hw, B0_HWE_ISRC) & IS_IRQ_SENSOR)) { (skge_read32(hw, B0_HWE_ISRC) & IS_IRQ_SENSOR)) {
printk(KERN_WARNING PFX "stuck hardware sensor bit\n"); dev_warn(&hw->pdev->dev, "stuck hardware sensor bit\n");
hw->intr_mask &= ~IS_HW_ERR; hw->intr_mask &= ~IS_HW_ERR;
} }
@@ -3408,7 +3476,7 @@ static struct net_device *skge_devinit(struct skge_hw *hw, int port,
struct net_device *dev = alloc_etherdev(sizeof(*skge)); struct net_device *dev = alloc_etherdev(sizeof(*skge));
if (!dev) { if (!dev) {
printk(KERN_ERR "skge etherdev alloc failed"); dev_err(&hw->pdev->dev, "etherdev alloc failed\n");
return NULL; return NULL;
} }
@@ -3452,6 +3520,7 @@ static struct net_device *skge_devinit(struct skge_hw *hw, int port,
skge->duplex = -1; skge->duplex = -1;
skge->speed = -1; skge->speed = -1;
skge->advertising = skge_supported_modes(hw); skge->advertising = skge_supported_modes(hw);
skge->wol = pci_wake_enabled(hw->pdev) ? wol_supported(hw) : 0;
hw->dev[port] = dev; hw->dev[port] = dev;
@@ -3496,15 +3565,13 @@ static int __devinit skge_probe(struct pci_dev *pdev,
err = pci_enable_device(pdev); err = pci_enable_device(pdev);
if (err) { if (err) {
printk(KERN_ERR PFX "%s cannot enable PCI device\n", dev_err(&pdev->dev, "cannot enable PCI device\n");
pci_name(pdev));
goto err_out; goto err_out;
} }
err = pci_request_regions(pdev, DRV_NAME); err = pci_request_regions(pdev, DRV_NAME);
if (err) { if (err) {
printk(KERN_ERR PFX "%s cannot obtain PCI resources\n", dev_err(&pdev->dev, "cannot obtain PCI resources\n");
pci_name(pdev));
goto err_out_disable_pdev; goto err_out_disable_pdev;
} }
@@ -3519,8 +3586,7 @@
} }
if (err) { if (err) {
printk(KERN_ERR PFX "%s no usable DMA configuration\n", dev_err(&pdev->dev, "no usable DMA configuration\n");
pci_name(pdev));
goto err_out_free_regions; goto err_out_free_regions;
} }
@@ -3538,8 +3604,7 @@
err = -ENOMEM; err = -ENOMEM;
hw = kzalloc(sizeof(*hw), GFP_KERNEL); hw = kzalloc(sizeof(*hw), GFP_KERNEL);
if (!hw) { if (!hw) {
printk(KERN_ERR PFX "%s: cannot allocate hardware struct\n", dev_err(&pdev->dev, "cannot allocate hardware struct\n");
pci_name(pdev));
goto err_out_free_regions; goto err_out_free_regions;
} }
@@ -3550,8 +3615,7 @@
hw->regs = ioremap_nocache(pci_resource_start(pdev, 0), 0x4000); hw->regs = ioremap_nocache(pci_resource_start(pdev, 0), 0x4000);
if (!hw->regs) { if (!hw->regs) {
printk(KERN_ERR PFX "%s: cannot map device registers\n", dev_err(&pdev->dev, "cannot map device registers\n");
pci_name(pdev));
goto err_out_free_hw; goto err_out_free_hw;
} }
@@ -3567,23 +3631,19 @@
if (!dev) if (!dev)
goto err_out_led_off; goto err_out_led_off;
if (!is_valid_ether_addr(dev->dev_addr)) { /* Some motherboards are broken and have zero in ROM. */
printk(KERN_ERR PFX "%s: bad (zero?) ethernet address in rom\n", if (!is_valid_ether_addr(dev->dev_addr))
pci_name(pdev)); dev_warn(&pdev->dev, "bad (zero?) ethernet address in rom\n");
err = -EIO;
goto err_out_free_netdev;
}
err = register_netdev(dev); err = register_netdev(dev);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: cannot register net device\n", dev_err(&pdev->dev, "cannot register net device\n");
pci_name(pdev));
goto err_out_free_netdev; goto err_out_free_netdev;
} }
err = request_irq(pdev->irq, skge_intr, IRQF_SHARED, dev->name, hw); err = request_irq(pdev->irq, skge_intr, IRQF_SHARED, dev->name, hw);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n", dev_err(&pdev->dev, "%s: cannot assign irq %d\n",
dev->name, pdev->irq); dev->name, pdev->irq);
goto err_out_unregister; goto err_out_unregister;
} }
@@ -3594,7 +3654,7 @@
skge_show_addr(dev1); skge_show_addr(dev1);
else { else {
/* Failure to register second port need not be fatal */ /* Failure to register second port need not be fatal */
printk(KERN_WARNING PFX "register of second port failed\n"); dev_warn(&pdev->dev, "register of second port failed\n");
hw->dev[1] = NULL; hw->dev[1] = NULL;
free_netdev(dev1); free_netdev(dev1);
} }
@@ -3659,28 +3719,46 @@ static void __devexit skge_remove(struct pci_dev *pdev)
} }
#ifdef CONFIG_PM #ifdef CONFIG_PM
static int vaux_avail(struct pci_dev *pdev)
{
int pm_cap;
pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
if (pm_cap) {
u16 ctl;
pci_read_config_word(pdev, pm_cap + PCI_PM_PMC, &ctl);
if (ctl & PCI_PM_CAP_AUX_POWER)
return 1;
}
return 0;
}
static int skge_suspend(struct pci_dev *pdev, pm_message_t state) static int skge_suspend(struct pci_dev *pdev, pm_message_t state)
{ {
struct skge_hw *hw = pci_get_drvdata(pdev); struct skge_hw *hw = pci_get_drvdata(pdev);
int i, wol = 0; int i, err, wol = 0;
err = pci_save_state(pdev);
if (err)
return err;
pci_save_state(pdev);
for (i = 0; i < hw->ports; i++) { for (i = 0; i < hw->ports; i++) {
struct net_device *dev = hw->dev[i]; struct net_device *dev = hw->dev[i];
struct skge_port *skge = netdev_priv(dev);
if (netif_running(dev)) { if (netif_running(dev))
struct skge_port *skge = netdev_priv(dev); skge_down(dev);
if (skge->wol)
skge_wol_init(skge);
netif_carrier_off(dev); wol |= skge->wol;
if (skge->wol)
netif_stop_queue(dev);
else
skge_down(dev);
wol |= skge->wol;
}
netif_device_detach(dev);
} }
if (wol && vaux_avail(pdev))
skge_write8(hw, B0_POWER_CTRL,
PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_ON | PC_VCC_OFF);
skge_write32(hw, B0_IMSK, 0); skge_write32(hw, B0_IMSK, 0);
pci_enable_wake(pdev, pci_choose_state(pdev, state), wol); pci_enable_wake(pdev, pci_choose_state(pdev, state), wol);
pci_set_power_state(pdev, pci_choose_state(pdev, state)); pci_set_power_state(pdev, pci_choose_state(pdev, state));
@@ -3693,8 +3771,14 @@ static int skge_resume(struct pci_dev *pdev)
struct skge_hw *hw = pci_get_drvdata(pdev); struct skge_hw *hw = pci_get_drvdata(pdev);
int i, err; int i, err;
pci_set_power_state(pdev, PCI_D0); err = pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev); if (err)
goto out;
err = pci_restore_state(pdev);
if (err)
goto out;
pci_enable_wake(pdev, PCI_D0, 0); pci_enable_wake(pdev, PCI_D0, 0);
err = skge_reset(hw); err = skge_reset(hw);
@@ -3704,7 +3788,6 @@ static int skge_resume(struct pci_dev *pdev)
for (i = 0; i < hw->ports; i++) { for (i = 0; i < hw->ports; i++) {
struct net_device *dev = hw->dev[i]; struct net_device *dev = hw->dev[i];
netif_device_attach(dev);
if (netif_running(dev)) { if (netif_running(dev)) {
err = skge_up(dev); err = skge_up(dev);


@@ -876,11 +876,13 @@ enum {
WOL_PATT_CNT_0 = 0x0f38,/* 32 bit WOL Pattern Counter 3..0 */ WOL_PATT_CNT_0 = 0x0f38,/* 32 bit WOL Pattern Counter 3..0 */
WOL_PATT_CNT_4 = 0x0f3c,/* 24 bit WOL Pattern Counter 6..4 */ WOL_PATT_CNT_4 = 0x0f3c,/* 24 bit WOL Pattern Counter 6..4 */
}; };
#define WOL_REGS(port, x) (x + (port)*0x80)
enum { enum {
WOL_PATT_RAM_1 = 0x1000,/* WOL Pattern RAM Link 1 */ WOL_PATT_RAM_1 = 0x1000,/* WOL Pattern RAM Link 1 */
WOL_PATT_RAM_2 = 0x1400,/* WOL Pattern RAM Link 2 */ WOL_PATT_RAM_2 = 0x1400,/* WOL Pattern RAM Link 2 */
}; };
#define WOL_PATT_RAM_BASE(port) (WOL_PATT_RAM_1 + (port)*0x400)
enum { enum {
BASE_XMAC_1 = 0x2000,/* XMAC 1 registers */ BASE_XMAC_1 = 0x2000,/* XMAC 1 registers */


@@ -49,7 +49,7 @@
#include "sky2.h" #include "sky2.h"
#define DRV_NAME "sky2" #define DRV_NAME "sky2"
#define DRV_VERSION "1.10" #define DRV_VERSION "1.12"
#define PFX DRV_NAME " " #define PFX DRV_NAME " "
/* /*
@@ -105,6 +105,7 @@ static const struct pci_device_id sky2_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b00) }, /* DGE-560T */ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b00) }, /* DGE-560T */
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4001) }, /* DGE-550SX */ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4001) }, /* DGE-550SX */
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4B02) }, /* DGE-560SX */ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4B02) }, /* DGE-560SX */
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4B03) }, /* DGE-550T */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4340) }, /* 88E8021 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4340) }, /* 88E8021 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4341) }, /* 88E8022 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4341) }, /* 88E8022 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4342) }, /* 88E8061 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4342) }, /* 88E8061 */
@@ -126,6 +127,9 @@ static const struct pci_device_id sky2_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4369) }, /* 88EC042 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436A) }, /* 88E8058 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436B) }, /* 88E8071 */
{ 0 } { 0 }
}; };
@@ -140,7 +144,7 @@ static const u32 portirq_msk[] = { Y2_IS_PORT_1, Y2_IS_PORT_2 };
static const char *yukon2_name[] = { static const char *yukon2_name[] = {
"XL", /* 0xb3 */ "XL", /* 0xb3 */
"EC Ultra", /* 0xb4 */ "EC Ultra", /* 0xb4 */
"UNKNOWN", /* 0xb5 */ "Extreme", /* 0xb5 */
"EC", /* 0xb6 */ "EC", /* 0xb6 */
"FE", /* 0xb7 */ "FE", /* 0xb7 */
}; };
@@ -192,76 +196,52 @@ static u16 gm_phy_read(struct sky2_hw *hw, unsigned port, u16 reg)
return v; return v;
} }
static void sky2_set_power_state(struct sky2_hw *hw, pci_power_t state)
static void sky2_power_on(struct sky2_hw *hw)
{ {
u16 power_control; /* switch power to VCC (WA for VAUX problem) */
int vaux; sky2_write8(hw, B0_POWER_CTRL,
PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_OFF | PC_VCC_ON);
pr_debug("sky2_set_power_state %d\n", state); /* disable Core Clock Division, */
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON); sky2_write32(hw, B2_Y2_CLK_CTRL, Y2_CLK_DIV_DIS);
power_control = sky2_pci_read16(hw, hw->pm_cap + PCI_PM_PMC); if (hw->chip_id == CHIP_ID_YUKON_XL && hw->chip_rev > 1)
vaux = (sky2_read16(hw, B0_CTST) & Y2_VAUX_AVAIL) && /* enable bits are inverted */
(power_control & PCI_PM_CAP_PME_D3cold); sky2_write8(hw, B2_Y2_CLK_GATE,
Y2_PCI_CLK_LNK1_DIS | Y2_COR_CLK_LNK1_DIS |
Y2_CLK_GAT_LNK1_DIS | Y2_PCI_CLK_LNK2_DIS |
Y2_COR_CLK_LNK2_DIS | Y2_CLK_GAT_LNK2_DIS);
else
sky2_write8(hw, B2_Y2_CLK_GATE, 0);
power_control = sky2_pci_read16(hw, hw->pm_cap + PCI_PM_CTRL); if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
u32 reg1;
power_control |= PCI_PM_CTRL_PME_STATUS; sky2_pci_write32(hw, PCI_DEV_REG3, 0);
power_control &= ~(PCI_PM_CTRL_STATE_MASK); reg1 = sky2_pci_read32(hw, PCI_DEV_REG4);
reg1 &= P_ASPM_CONTROL_MSK;
switch (state) { sky2_pci_write32(hw, PCI_DEV_REG4, reg1);
case PCI_D0: sky2_pci_write32(hw, PCI_DEV_REG5, 0);
/* switch power to VCC (WA for VAUX problem) */
sky2_write8(hw, B0_POWER_CTRL,
PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_OFF | PC_VCC_ON);
/* disable Core Clock Division, */
sky2_write32(hw, B2_Y2_CLK_CTRL, Y2_CLK_DIV_DIS);
if (hw->chip_id == CHIP_ID_YUKON_XL && hw->chip_rev > 1)
/* enable bits are inverted */
sky2_write8(hw, B2_Y2_CLK_GATE,
Y2_PCI_CLK_LNK1_DIS | Y2_COR_CLK_LNK1_DIS |
Y2_CLK_GAT_LNK1_DIS | Y2_PCI_CLK_LNK2_DIS |
Y2_COR_CLK_LNK2_DIS | Y2_CLK_GAT_LNK2_DIS);
else
sky2_write8(hw, B2_Y2_CLK_GATE, 0);
if (hw->chip_id == CHIP_ID_YUKON_EC_U) {
u32 reg1;
sky2_pci_write32(hw, PCI_DEV_REG3, 0);
reg1 = sky2_pci_read32(hw, PCI_DEV_REG4);
reg1 &= P_ASPM_CONTROL_MSK;
sky2_pci_write32(hw, PCI_DEV_REG4, reg1);
sky2_pci_write32(hw, PCI_DEV_REG5, 0);
}
break;
case PCI_D3hot:
case PCI_D3cold:
if (hw->chip_id == CHIP_ID_YUKON_XL && hw->chip_rev > 1)
sky2_write8(hw, B2_Y2_CLK_GATE, 0);
else
/* enable bits are inverted */
sky2_write8(hw, B2_Y2_CLK_GATE,
Y2_PCI_CLK_LNK1_DIS | Y2_COR_CLK_LNK1_DIS |
Y2_CLK_GAT_LNK1_DIS | Y2_PCI_CLK_LNK2_DIS |
Y2_COR_CLK_LNK2_DIS | Y2_CLK_GAT_LNK2_DIS);
/* switch power to VAUX */
if (vaux && state != PCI_D3cold)
sky2_write8(hw, B0_POWER_CTRL,
(PC_VAUX_ENA | PC_VCC_ENA |
PC_VAUX_ON | PC_VCC_OFF));
break;
default:
printk(KERN_ERR PFX "Unknown power state %d\n", state);
} }
}
sky2_pci_write16(hw, hw->pm_cap + PCI_PM_CTRL, power_control); static void sky2_power_aux(struct sky2_hw *hw)
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF); {
if (hw->chip_id == CHIP_ID_YUKON_XL && hw->chip_rev > 1)
sky2_write8(hw, B2_Y2_CLK_GATE, 0);
else
/* enable bits are inverted */
sky2_write8(hw, B2_Y2_CLK_GATE,
Y2_PCI_CLK_LNK1_DIS | Y2_COR_CLK_LNK1_DIS |
Y2_CLK_GAT_LNK1_DIS | Y2_PCI_CLK_LNK2_DIS |
Y2_COR_CLK_LNK2_DIS | Y2_CLK_GAT_LNK2_DIS);
/* switch power to VAUX */
if (sky2_read16(hw, B0_CTST) & Y2_VAUX_AVAIL)
sky2_write8(hw, B0_POWER_CTRL,
(PC_VAUX_ENA | PC_VCC_ENA |
PC_VAUX_ON | PC_VCC_OFF));
} }
static void sky2_gmac_reset(struct sky2_hw *hw, unsigned port) static void sky2_gmac_reset(struct sky2_hw *hw, unsigned port)
@@ -313,8 +293,10 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
struct sky2_port *sky2 = netdev_priv(hw->dev[port]); struct sky2_port *sky2 = netdev_priv(hw->dev[port]);
u16 ctrl, ct1000, adv, pg, ledctrl, ledover, reg; u16 ctrl, ct1000, adv, pg, ledctrl, ledover, reg;
if (sky2->autoneg == AUTONEG_ENABLE && if (sky2->autoneg == AUTONEG_ENABLE
!(hw->chip_id == CHIP_ID_YUKON_XL || hw->chip_id == CHIP_ID_YUKON_EC_U)) { && !(hw->chip_id == CHIP_ID_YUKON_XL
|| hw->chip_id == CHIP_ID_YUKON_EC_U
|| hw->chip_id == CHIP_ID_YUKON_EX)) {
u16 ectrl = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL); u16 ectrl = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL);
ectrl &= ~(PHY_M_EC_M_DSC_MSK | PHY_M_EC_S_DSC_MSK | ectrl &= ~(PHY_M_EC_M_DSC_MSK | PHY_M_EC_S_DSC_MSK |
@@ -341,8 +323,10 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
/* enable automatic crossover */ /* enable automatic crossover */
ctrl |= PHY_M_PC_MDI_XMODE(PHY_M_PC_ENA_AUTO); ctrl |= PHY_M_PC_MDI_XMODE(PHY_M_PC_ENA_AUTO);
if (sky2->autoneg == AUTONEG_ENABLE && if (sky2->autoneg == AUTONEG_ENABLE
(hw->chip_id == CHIP_ID_YUKON_XL || hw->chip_id == CHIP_ID_YUKON_EC_U)) { && (hw->chip_id == CHIP_ID_YUKON_XL
|| hw->chip_id == CHIP_ID_YUKON_EC_U
|| hw->chip_id == CHIP_ID_YUKON_EX)) {
ctrl &= ~PHY_M_PC_DSC_MSK; ctrl &= ~PHY_M_PC_DSC_MSK;
ctrl |= PHY_M_PC_DSC(2) | PHY_M_PC_DOWN_S_ENA; ctrl |= PHY_M_PC_DSC(2) | PHY_M_PC_DOWN_S_ENA;
} }
@@ -497,7 +481,9 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
/* restore page register */ /* restore page register */
gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg); gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
break; break;
case CHIP_ID_YUKON_EC_U: case CHIP_ID_YUKON_EC_U:
case CHIP_ID_YUKON_EX:
pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
/* select page 3 to access LED control register */ /* select page 3 to access LED control register */
@@ -539,7 +525,7 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
/* set page register to 0 */ /* set page register to 0 */
gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg); gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
} else { } else if (hw->chip_id != CHIP_ID_YUKON_EX) {
gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl); gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl);
if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) { if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) {
@@ -591,6 +577,73 @@ static void sky2_phy_reinit(struct sky2_port *sky2)
spin_unlock_bh(&sky2->phy_lock); spin_unlock_bh(&sky2->phy_lock);
} }
/* Put device in state to listen for Wake On Lan */
static void sky2_wol_init(struct sky2_port *sky2)
{
struct sky2_hw *hw = sky2->hw;
unsigned port = sky2->port;
enum flow_control save_mode;
u16 ctrl;
u32 reg1;
/* Bring hardware out of reset */
sky2_write16(hw, B0_CTST, CS_RST_CLR);
sky2_write16(hw, SK_REG(port, GMAC_LINK_CTRL), GMLC_RST_CLR);
sky2_write8(hw, SK_REG(port, GPHY_CTRL), GPC_RST_CLR);
sky2_write8(hw, SK_REG(port, GMAC_CTRL), GMC_RST_CLR);
/* Force to 10/100
* sky2_reset will re-enable on resume
*/
save_mode = sky2->flow_mode;
ctrl = sky2->advertising;
sky2->advertising &= ~(ADVERTISED_1000baseT_Half|ADVERTISED_1000baseT_Full);
sky2->flow_mode = FC_NONE;
sky2_phy_power(hw, port, 1);
sky2_phy_reinit(sky2);
sky2->flow_mode = save_mode;
sky2->advertising = ctrl;
/* Set GMAC to no flow control and auto update for speed/duplex */
gma_write16(hw, port, GM_GP_CTRL,
GM_GPCR_FC_TX_DIS|GM_GPCR_TX_ENA|GM_GPCR_RX_ENA|
GM_GPCR_DUP_FULL|GM_GPCR_FC_RX_DIS|GM_GPCR_AU_FCT_DIS);
/* Set WOL address */
memcpy_toio(hw->regs + WOL_REGS(port, WOL_MAC_ADDR),
sky2->netdev->dev_addr, ETH_ALEN);
/* Turn on appropriate WOL control bits */
sky2_write16(hw, WOL_REGS(port, WOL_CTRL_STAT), WOL_CTL_CLEAR_RESULT);
ctrl = 0;
if (sky2->wol & WAKE_PHY)
ctrl |= WOL_CTL_ENA_PME_ON_LINK_CHG|WOL_CTL_ENA_LINK_CHG_UNIT;
else
ctrl |= WOL_CTL_DIS_PME_ON_LINK_CHG|WOL_CTL_DIS_LINK_CHG_UNIT;
if (sky2->wol & WAKE_MAGIC)
ctrl |= WOL_CTL_ENA_PME_ON_MAGIC_PKT|WOL_CTL_ENA_MAGIC_PKT_UNIT;
else
ctrl |= WOL_CTL_DIS_PME_ON_MAGIC_PKT|WOL_CTL_DIS_MAGIC_PKT_UNIT;
ctrl |= WOL_CTL_DIS_PME_ON_PATTERN|WOL_CTL_DIS_PATTERN_UNIT;
sky2_write16(hw, WOL_REGS(port, WOL_CTRL_STAT), ctrl);
/* Turn on legacy PCI-Express PME mode */
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
reg1 = sky2_pci_read32(hw, PCI_DEV_REG1);
reg1 |= PCI_Y2_PME_LEGACY;
sky2_pci_write32(hw, PCI_DEV_REG1, reg1);
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
/* block receiver */
sky2_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_SET);
}
static void sky2_mac_init(struct sky2_hw *hw, unsigned port) static void sky2_mac_init(struct sky2_hw *hw, unsigned port)
{ {
struct sky2_port *sky2 = netdev_priv(hw->dev[port]); struct sky2_port *sky2 = netdev_priv(hw->dev[port]);
@@ -684,7 +737,7 @@ static void sky2_mac_init(struct sky2_hw *hw, unsigned port)
sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR); sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR);
sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON); sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON);
if (hw->chip_id == CHIP_ID_YUKON_EC_U) { if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8); sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8);
sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8); sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8);
if (hw->dev[port]->mtu > ETH_DATA_LEN) { if (hw->dev[port]->mtu > ETH_DATA_LEN) {
@@ -1467,6 +1520,9 @@ static void sky2_tx_complete(struct sky2_port *sky2, u16 done)
if (unlikely(netif_msg_tx_done(sky2))) if (unlikely(netif_msg_tx_done(sky2)))
printk(KERN_DEBUG "%s: tx done %u\n", printk(KERN_DEBUG "%s: tx done %u\n",
dev->name, idx); dev->name, idx);
sky2->net_stats.tx_packets++;
sky2->net_stats.tx_bytes += re->skb->len;
dev_kfree_skb_any(re->skb); dev_kfree_skb_any(re->skb);
} }
@@ -1641,7 +1697,9 @@ static void sky2_link_up(struct sky2_port *sky2)
sky2_write8(hw, SK_REG(port, LNK_LED_REG), sky2_write8(hw, SK_REG(port, LNK_LED_REG),
LINKLED_ON | LINKLED_BLINK_OFF | LINKLED_LINKSYNC_OFF); LINKLED_ON | LINKLED_BLINK_OFF | LINKLED_LINKSYNC_OFF);
if (hw->chip_id == CHIP_ID_YUKON_XL || hw->chip_id == CHIP_ID_YUKON_EC_U) { if (hw->chip_id == CHIP_ID_YUKON_XL
|| hw->chip_id == CHIP_ID_YUKON_EC_U
|| hw->chip_id == CHIP_ID_YUKON_EX) {
u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
u16 led = PHY_M_LEDC_LOS_CTRL(1); /* link active */ u16 led = PHY_M_LEDC_LOS_CTRL(1); /* link active */
@@ -1734,14 +1792,16 @@ static int sky2_autoneg_done(struct sky2_port *sky2, u16 aux)
sky2->duplex = (aux & PHY_M_PS_FULL_DUP) ? DUPLEX_FULL : DUPLEX_HALF; sky2->duplex = (aux & PHY_M_PS_FULL_DUP) ? DUPLEX_FULL : DUPLEX_HALF;
/* Pause bits are offset (9..8) */ /* Pause bits are offset (9..8) */
if (hw->chip_id == CHIP_ID_YUKON_XL || hw->chip_id == CHIP_ID_YUKON_EC_U) if (hw->chip_id == CHIP_ID_YUKON_XL
|| hw->chip_id == CHIP_ID_YUKON_EC_U
|| hw->chip_id == CHIP_ID_YUKON_EX)
aux >>= 6; aux >>= 6;
sky2->flow_status = sky2_flow(aux & PHY_M_PS_RX_P_EN, sky2->flow_status = sky2_flow(aux & PHY_M_PS_RX_P_EN,
aux & PHY_M_PS_TX_P_EN); aux & PHY_M_PS_TX_P_EN);
if (sky2->duplex == DUPLEX_HALF && sky2->speed < SPEED_1000 if (sky2->duplex == DUPLEX_HALF && sky2->speed < SPEED_1000
&& hw->chip_id != CHIP_ID_YUKON_EC_U) && !(hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX))
sky2->flow_status = FC_NONE; sky2->flow_status = FC_NONE;
if (aux & PHY_M_PS_RX_P_EN) if (aux & PHY_M_PS_RX_P_EN)
@@ -1794,48 +1854,37 @@ out:
} }
/* Transmit timeout is only called if we are running, carries is up /* Transmit timeout is only called if we are running, carrier is up
* and tx queue is full (stopped). * and tx queue is full (stopped).
* Called with netif_tx_lock held.
*/ */
static void sky2_tx_timeout(struct net_device *dev) static void sky2_tx_timeout(struct net_device *dev)
{ {
struct sky2_port *sky2 = netdev_priv(dev); struct sky2_port *sky2 = netdev_priv(dev);
struct sky2_hw *hw = sky2->hw; struct sky2_hw *hw = sky2->hw;
unsigned txq = txqaddr[sky2->port]; u32 imask;
u16 report, done;
if (netif_msg_timer(sky2)) if (netif_msg_timer(sky2))
printk(KERN_ERR PFX "%s: tx timeout\n", dev->name); printk(KERN_ERR PFX "%s: tx timeout\n", dev->name);
report = sky2_read16(hw, sky2->port == 0 ? STAT_TXA1_RIDX : STAT_TXA2_RIDX);
done = sky2_read16(hw, Q_ADDR(txq, Q_DONE));
printk(KERN_DEBUG PFX "%s: transmit ring %u .. %u report=%u done=%u\n", printk(KERN_DEBUG PFX "%s: transmit ring %u .. %u report=%u done=%u\n",
dev->name, dev->name, sky2->tx_cons, sky2->tx_prod,
sky2->tx_cons, sky2->tx_prod, report, done); sky2_read16(hw, sky2->port == 0 ? STAT_TXA1_RIDX : STAT_TXA2_RIDX),
sky2_read16(hw, Q_ADDR(txqaddr[sky2->port], Q_DONE)));
if (report != done) { imask = sky2_read32(hw, B0_IMSK); /* block IRQ in hw */
printk(KERN_INFO PFX "status burst pending (irq moderation?)\n"); sky2_write32(hw, B0_IMSK, 0);
sky2_read32(hw, B0_IMSK);
sky2_write8(hw, STAT_TX_TIMER_CTRL, TIM_STOP); netif_poll_disable(hw->dev[0]); /* stop NAPI poll */
sky2_write8(hw, STAT_TX_TIMER_CTRL, TIM_START); synchronize_irq(hw->pdev->irq);
} else if (report != sky2->tx_cons) {
printk(KERN_INFO PFX "status report lost?\n");
netif_tx_lock_bh(dev); netif_start_queue(dev); /* don't wakeup during flush */
sky2_tx_complete(sky2, report); sky2_tx_complete(sky2, sky2->tx_prod); /* Flush transmit queue */
netif_tx_unlock_bh(dev);
} else {
printk(KERN_INFO PFX "hardware hung? flushing\n");
sky2_write32(hw, Q_ADDR(txq, Q_CSR), BMU_STOP); sky2_write32(hw, B0_IMSK, imask);
sky2_write32(hw, Y2_QADDR(txq, PREF_UNIT_CTRL), PREF_UNIT_RST_SET);
sky2_tx_clean(dev); sky2_phy_reinit(sky2); /* this clears flow control etc */
sky2_qset(hw, txq);
sky2_prefetch_init(hw, txq, sky2->tx_le_map, TX_RING_SIZE - 1);
}
} }
static int sky2_change_mtu(struct net_device *dev, int new_mtu) static int sky2_change_mtu(struct net_device *dev, int new_mtu)
@@ -1849,8 +1898,9 @@ static int sky2_change_mtu(struct net_device *dev, int new_mtu)
if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU) if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU)
return -EINVAL; return -EINVAL;
/* TSO on Yukon Ultra and MTU > 1500 not supported */
if (hw->chip_id == CHIP_ID_YUKON_EC_U && new_mtu > ETH_DATA_LEN) if (hw->chip_id == CHIP_ID_YUKON_EC_U && new_mtu > ETH_DATA_LEN)
return -EINVAL; dev->features &= ~NETIF_F_TSO;
if (!netif_running(dev)) { if (!netif_running(dev)) {
dev->mtu = new_mtu; dev->mtu = new_mtu;
@@ -2089,6 +2139,8 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do)
goto force_update; goto force_update;
skb->protocol = eth_type_trans(skb, dev); skb->protocol = eth_type_trans(skb, dev);
sky2->net_stats.rx_packets++;
sky2->net_stats.rx_bytes += skb->len;
dev->last_rx = jiffies; dev->last_rx = jiffies;
#ifdef SKY2_VLAN_TAG_USED #ifdef SKY2_VLAN_TAG_USED
@@ -2218,8 +2270,8 @@ static void sky2_hw_intr(struct sky2_hw *hw)
pci_err = sky2_pci_read16(hw, PCI_STATUS); pci_err = sky2_pci_read16(hw, PCI_STATUS);
if (net_ratelimit()) if (net_ratelimit())
printk(KERN_ERR PFX "%s: pci hw error (0x%x)\n", dev_err(&hw->pdev->dev, "PCI hardware error (0x%x)\n",
pci_name(hw->pdev), pci_err); pci_err);
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON); sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
sky2_pci_write16(hw, PCI_STATUS, sky2_pci_write16(hw, PCI_STATUS,
@@ -2234,8 +2286,8 @@ static void sky2_hw_intr(struct sky2_hw *hw)
pex_err = sky2_pci_read32(hw, PEX_UNC_ERR_STAT); pex_err = sky2_pci_read32(hw, PEX_UNC_ERR_STAT);
if (net_ratelimit()) if (net_ratelimit())
printk(KERN_ERR PFX "%s: pci express error (0x%x)\n", dev_err(&hw->pdev->dev, "PCI Express error (0x%x)\n",
pci_name(hw->pdev), pex_err); pex_err);
/* clear the interrupt */ /* clear the interrupt */
sky2_write32(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON); sky2_write32(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
@@ -2404,6 +2456,7 @@ static inline u32 sky2_mhz(const struct sky2_hw *hw)
switch (hw->chip_id) { switch (hw->chip_id) {
case CHIP_ID_YUKON_EC: case CHIP_ID_YUKON_EC:
case CHIP_ID_YUKON_EC_U: case CHIP_ID_YUKON_EC_U:
case CHIP_ID_YUKON_EX:
return 125; /* 125 Mhz */ return 125; /* 125 Mhz */
case CHIP_ID_YUKON_FE: case CHIP_ID_YUKON_FE:
return 100; /* 100 Mhz */ return 100; /* 100 Mhz */
@@ -2423,34 +2476,62 @@ static inline u32 sky2_clk2us(const struct sky2_hw *hw, u32 clk)
} }
static int sky2_reset(struct sky2_hw *hw) static int __devinit sky2_init(struct sky2_hw *hw)
{ {
u16 status;
u8 t8; u8 t8;
int i;
sky2_write8(hw, B0_CTST, CS_RST_CLR); sky2_write8(hw, B0_CTST, CS_RST_CLR);
hw->chip_id = sky2_read8(hw, B2_CHIP_ID); hw->chip_id = sky2_read8(hw, B2_CHIP_ID);
if (hw->chip_id < CHIP_ID_YUKON_XL || hw->chip_id > CHIP_ID_YUKON_FE) { if (hw->chip_id < CHIP_ID_YUKON_XL || hw->chip_id > CHIP_ID_YUKON_FE) {
printk(KERN_ERR PFX "%s: unsupported chip type 0x%x\n", dev_err(&hw->pdev->dev, "unsupported chip type 0x%x\n",
pci_name(hw->pdev), hw->chip_id); hw->chip_id);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
if (hw->chip_id == CHIP_ID_YUKON_EX)
dev_warn(&hw->pdev->dev, "this driver not yet tested on this chip type\n"
"Please report success or failure to <netdev@vger.kernel.org>\n");
/* Make sure and enable all clocks */
if (hw->chip_id == CHIP_ID_YUKON_EX || hw->chip_id == CHIP_ID_YUKON_EC_U)
sky2_pci_write32(hw, PCI_DEV_REG3, 0);
hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4; hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4;
/* This rev is really old, and requires untested workarounds */ /* This rev is really old, and requires untested workarounds */
if (hw->chip_id == CHIP_ID_YUKON_EC && hw->chip_rev == CHIP_REV_YU_EC_A1) { if (hw->chip_id == CHIP_ID_YUKON_EC && hw->chip_rev == CHIP_REV_YU_EC_A1) {
printk(KERN_ERR PFX "%s: unsupported revision Yukon-%s (0x%x) rev %d\n", dev_err(&hw->pdev->dev, "unsupported revision Yukon-%s (0x%x) rev %d\n",
pci_name(hw->pdev), yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL], yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL],
hw->chip_id, hw->chip_rev); hw->chip_id, hw->chip_rev);
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
hw->pmd_type = sky2_read8(hw, B2_PMD_TYP);
hw->ports = 1;
t8 = sky2_read8(hw, B2_Y2_HW_RES);
if ((t8 & CFG_DUAL_MAC_MSK) == CFG_DUAL_MAC_MSK) {
if (!(sky2_read8(hw, B2_Y2_CLK_GATE) & Y2_STATUS_LNK2_INAC))
++hw->ports;
}
return 0;
}
static void sky2_reset(struct sky2_hw *hw)
{
u16 status;
int i;
/* disable ASF */ /* disable ASF */
if (hw->chip_id <= CHIP_ID_YUKON_EC) { if (hw->chip_id <= CHIP_ID_YUKON_EC) {
sky2_write8(hw, B28_Y2_ASF_STAT_CMD, Y2_ASF_RESET); if (hw->chip_id == CHIP_ID_YUKON_EX) {
status = sky2_read16(hw, HCU_CCSR);
status &= ~(HCU_CCSR_AHB_RST | HCU_CCSR_CPU_RST_MODE |
HCU_CCSR_UC_STATE_MSK);
sky2_write16(hw, HCU_CCSR, status);
} else
sky2_write8(hw, B28_Y2_ASF_STAT_CMD, Y2_ASF_RESET);
sky2_write16(hw, B0_CTST, Y2_ASF_DISABLE); sky2_write16(hw, B0_CTST, Y2_ASF_DISABLE);
} }
@@ -2472,15 +2553,7 @@ static int sky2_reset(struct sky2_hw *hw)
sky2_pci_write32(hw, PEX_UNC_ERR_STAT, 0xffffffffUL); sky2_pci_write32(hw, PEX_UNC_ERR_STAT, 0xffffffffUL);
hw->pmd_type = sky2_read8(hw, B2_PMD_TYP); sky2_power_on(hw);
hw->ports = 1;
t8 = sky2_read8(hw, B2_Y2_HW_RES);
if ((t8 & CFG_DUAL_MAC_MSK) == CFG_DUAL_MAC_MSK) {
if (!(sky2_read8(hw, B2_Y2_CLK_GATE) & Y2_STATUS_LNK2_INAC))
++hw->ports;
}
sky2_set_power_state(hw, PCI_D0);
for (i = 0; i < hw->ports; i++) { for (i = 0; i < hw->ports; i++) {
sky2_write8(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_SET); sky2_write8(hw, SK_REG(i, GMAC_LINK_CTRL), GMLC_RST_SET);
@@ -2563,7 +2636,37 @@ static int sky2_reset(struct sky2_hw *hw)
sky2_write8(hw, STAT_TX_TIMER_CTRL, TIM_START); sky2_write8(hw, STAT_TX_TIMER_CTRL, TIM_START);
sky2_write8(hw, STAT_LEV_TIMER_CTRL, TIM_START); sky2_write8(hw, STAT_LEV_TIMER_CTRL, TIM_START);
sky2_write8(hw, STAT_ISR_TIMER_CTRL, TIM_START); sky2_write8(hw, STAT_ISR_TIMER_CTRL, TIM_START);
}
static inline u8 sky2_wol_supported(const struct sky2_hw *hw)
{
return sky2_is_copper(hw) ? (WAKE_PHY | WAKE_MAGIC) : 0;
}
static void sky2_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
const struct sky2_port *sky2 = netdev_priv(dev);
wol->supported = sky2_wol_supported(sky2->hw);
wol->wolopts = sky2->wol;
}
static int sky2_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
struct sky2_port *sky2 = netdev_priv(dev);
struct sky2_hw *hw = sky2->hw;
if (wol->wolopts & ~sky2_wol_supported(sky2->hw))
return -EOPNOTSUPP;
sky2->wol = wol->wolopts;
if (hw->chip_id == CHIP_ID_YUKON_EC_U)
sky2_write32(hw, B0_CTST, sky2->wol
? Y2_HW_WOL_ON : Y2_HW_WOL_OFF);
if (!netif_running(dev))
sky2_wol_init(sky2);
return 0; return 0;
} }
@@ -2814,25 +2917,9 @@ static void sky2_get_strings(struct net_device *dev, u32 stringset, u8 * data)
} }
} }
/* Use hardware MIB variables for critical path statistics and
* transmit feedback not reported at interrupt.
* Other errors are accounted for in interrupt handler.
*/
static struct net_device_stats *sky2_get_stats(struct net_device *dev) static struct net_device_stats *sky2_get_stats(struct net_device *dev)
{ {
struct sky2_port *sky2 = netdev_priv(dev); struct sky2_port *sky2 = netdev_priv(dev);
u64 data[13];
sky2_phy_stats(sky2, data, ARRAY_SIZE(data));
sky2->net_stats.tx_bytes = data[0];
sky2->net_stats.rx_bytes = data[1];
sky2->net_stats.tx_packets = data[2] + data[4] + data[6];
sky2->net_stats.rx_packets = data[3] + data[5] + data[7];
sky2->net_stats.multicast = data[3] + data[5];
sky2->net_stats.collisions = data[10];
sky2->net_stats.tx_aborted_errors = data[12];
return &sky2->net_stats; return &sky2->net_stats;
} }
@@ -3191,7 +3278,9 @@ static void sky2_get_regs(struct net_device *dev, struct ethtool_regs *regs,
static const struct ethtool_ops sky2_ethtool_ops = { static const struct ethtool_ops sky2_ethtool_ops = {
.get_settings = sky2_get_settings, .get_settings = sky2_get_settings,
.set_settings = sky2_set_settings, .set_settings = sky2_set_settings,
.get_drvinfo = sky2_get_drvinfo, .get_drvinfo = sky2_get_drvinfo,
.get_wol = sky2_get_wol,
.set_wol = sky2_set_wol,
.get_msglevel = sky2_get_msglevel, .get_msglevel = sky2_get_msglevel,
.set_msglevel = sky2_set_msglevel, .set_msglevel = sky2_set_msglevel,
.nway_reset = sky2_nway_reset, .nway_reset = sky2_nway_reset,
@@ -3221,13 +3310,14 @@ static const struct ethtool_ops sky2_ethtool_ops = {
/* Initialize network device */ /* Initialize network device */
static __devinit struct net_device *sky2_init_netdev(struct sky2_hw *hw, static __devinit struct net_device *sky2_init_netdev(struct sky2_hw *hw,
unsigned port, int highmem) unsigned port,
int highmem, int wol)
{ {
struct sky2_port *sky2; struct sky2_port *sky2;
struct net_device *dev = alloc_etherdev(sizeof(*sky2)); struct net_device *dev = alloc_etherdev(sizeof(*sky2));
if (!dev) { if (!dev) {
printk(KERN_ERR "sky2 etherdev alloc failed"); dev_err(&hw->pdev->dev, "etherdev alloc failed");
return NULL; return NULL;
} }
@@ -3269,6 +3359,7 @@ static __devinit struct net_device *sky2_init_netdev(struct sky2_hw *hw,
sky2->speed = -1; sky2->speed = -1;
sky2->advertising = sky2_supported_modes(hw); sky2->advertising = sky2_supported_modes(hw);
sky2->rx_csum = 1; sky2->rx_csum = 1;
sky2->wol = wol;
spin_lock_init(&sky2->phy_lock); spin_lock_init(&sky2->phy_lock);
sky2->tx_pending = TX_DEF_PENDING; sky2->tx_pending = TX_DEF_PENDING;
@@ -3278,11 +3369,9 @@ static __devinit struct net_device *sky2_init_netdev(struct sky2_hw *hw,
sky2->port = port; sky2->port = port;
if (hw->chip_id != CHIP_ID_YUKON_EC_U) dev->features |= NETIF_F_TSO | NETIF_F_IP_CSUM | NETIF_F_SG;
dev->features |= NETIF_F_TSO;
if (highmem) if (highmem)
dev->features |= NETIF_F_HIGHDMA; dev->features |= NETIF_F_HIGHDMA;
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
#ifdef SKY2_VLAN_TAG_USED #ifdef SKY2_VLAN_TAG_USED
dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
@@ -3343,8 +3432,7 @@ static int __devinit sky2_test_msi(struct sky2_hw *hw)
err = request_irq(pdev->irq, sky2_test_intr, 0, DRV_NAME, hw); err = request_irq(pdev->irq, sky2_test_intr, 0, DRV_NAME, hw);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n", dev_err(&pdev->dev, "cannot assign irq %d\n", pdev->irq);
pci_name(pdev), pdev->irq);
return err; return err;
} }
@@ -3355,9 +3443,8 @@ static int __devinit sky2_test_msi(struct sky2_hw *hw)
if (!hw->msi) { if (!hw->msi) {
/* MSI test failed, go back to INTx mode */ /* MSI test failed, go back to INTx mode */
printk(KERN_INFO PFX "%s: No interrupt generated using MSI, " dev_info(&pdev->dev, "No interrupt generated using MSI, "
"switching to INTx mode.\n", "switching to INTx mode.\n");
pci_name(pdev));
err = -EOPNOTSUPP; err = -EOPNOTSUPP;
sky2_write8(hw, B0_CTST, CS_CL_SW_IRQ); sky2_write8(hw, B0_CTST, CS_CL_SW_IRQ);
@@ -3371,62 +3458,62 @@ static int __devinit sky2_test_msi(struct sky2_hw *hw)
return err; return err;
} }
static int __devinit pci_wake_enabled(struct pci_dev *dev)
{
int pm = pci_find_capability(dev, PCI_CAP_ID_PM);
u16 value;
if (!pm)
return 0;
if (pci_read_config_word(dev, pm + PCI_PM_CTRL, &value))
return 0;
return value & PCI_PM_CTRL_PME_ENABLE;
}
static int __devinit sky2_probe(struct pci_dev *pdev, static int __devinit sky2_probe(struct pci_dev *pdev,
const struct pci_device_id *ent) const struct pci_device_id *ent)
{ {
struct net_device *dev, *dev1 = NULL; struct net_device *dev;
struct sky2_hw *hw; struct sky2_hw *hw;
int err, pm_cap, using_dac = 0; int err, using_dac = 0, wol_default;
err = pci_enable_device(pdev); err = pci_enable_device(pdev);
if (err) { if (err) {
printk(KERN_ERR PFX "%s cannot enable PCI device\n", dev_err(&pdev->dev, "cannot enable PCI device\n");
pci_name(pdev));
goto err_out; goto err_out;
} }
err = pci_request_regions(pdev, DRV_NAME); err = pci_request_regions(pdev, DRV_NAME);
if (err) { if (err) {
printk(KERN_ERR PFX "%s cannot obtain PCI resources\n", dev_err(&pdev->dev, "cannot obtain PCI resources\n");
pci_name(pdev));
goto err_out; goto err_out;
} }
pci_set_master(pdev); pci_set_master(pdev);
/* Find power-management capability. */
pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
if (pm_cap == 0) {
printk(KERN_ERR PFX "Cannot find PowerManagement capability, "
"aborting.\n");
err = -EIO;
goto err_out_free_regions;
}
if (sizeof(dma_addr_t) > sizeof(u32) && if (sizeof(dma_addr_t) > sizeof(u32) &&
!(err = pci_set_dma_mask(pdev, DMA_64BIT_MASK))) { !(err = pci_set_dma_mask(pdev, DMA_64BIT_MASK))) {
using_dac = 1; using_dac = 1;
err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK); err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
if (err < 0) { if (err < 0) {
printk(KERN_ERR PFX "%s unable to obtain 64 bit DMA " dev_err(&pdev->dev, "unable to obtain 64 bit DMA "
"for consistent allocations\n", pci_name(pdev)); "for consistent allocations\n");
goto err_out_free_regions; goto err_out_free_regions;
} }
} else { } else {
err = pci_set_dma_mask(pdev, DMA_32BIT_MASK); err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
if (err) { if (err) {
printk(KERN_ERR PFX "%s no usable DMA configuration\n", dev_err(&pdev->dev, "no usable DMA configuration\n");
pci_name(pdev));
goto err_out_free_regions; goto err_out_free_regions;
} }
} }
wol_default = pci_wake_enabled(pdev) ? WAKE_MAGIC : 0;
err = -ENOMEM; err = -ENOMEM;
hw = kzalloc(sizeof(*hw), GFP_KERNEL); hw = kzalloc(sizeof(*hw), GFP_KERNEL);
if (!hw) { if (!hw) {
printk(KERN_ERR PFX "%s: cannot allocate hardware struct\n", dev_err(&pdev->dev, "cannot allocate hardware struct\n");
pci_name(pdev));
goto err_out_free_regions; goto err_out_free_regions;
} }
@@ -3434,11 +3521,9 @@ static int __devinit sky2_probe(struct pci_dev *pdev,
hw->regs = ioremap_nocache(pci_resource_start(pdev, 0), 0x4000); hw->regs = ioremap_nocache(pci_resource_start(pdev, 0), 0x4000);
if (!hw->regs) { if (!hw->regs) {
printk(KERN_ERR PFX "%s: cannot map device registers\n", dev_err(&pdev->dev, "cannot map device registers\n");
pci_name(pdev));
goto err_out_free_hw; goto err_out_free_hw;
} }
hw->pm_cap = pm_cap;
#ifdef __BIG_ENDIAN #ifdef __BIG_ENDIAN
/* The sk98lin vendor driver uses hardware byte swapping but /* The sk98lin vendor driver uses hardware byte swapping but
@@ -3458,18 +3543,22 @@ static int __devinit sky2_probe(struct pci_dev *pdev,
if (!hw->st_le) if (!hw->st_le)
goto err_out_iounmap; goto err_out_iounmap;
err = sky2_reset(hw); err = sky2_init(hw);
if (err) if (err)
goto err_out_iounmap; goto err_out_iounmap;
printk(KERN_INFO PFX "v%s addr 0x%llx irq %d Yukon-%s (0x%x) rev %d\n", dev_info(&pdev->dev, "v%s addr 0x%llx irq %d Yukon-%s (0x%x) rev %d\n",
DRV_VERSION, (unsigned long long)pci_resource_start(pdev, 0), DRV_VERSION, (unsigned long long)pci_resource_start(pdev, 0),
pdev->irq, yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL], pdev->irq, yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL],
hw->chip_id, hw->chip_rev); hw->chip_id, hw->chip_rev);
dev = sky2_init_netdev(hw, 0, using_dac); sky2_reset(hw);
if (!dev)
dev = sky2_init_netdev(hw, 0, using_dac, wol_default);
if (!dev) {
err = -ENOMEM;
goto err_out_free_pci; goto err_out_free_pci;
}
if (!disable_msi && pci_enable_msi(pdev) == 0) { if (!disable_msi && pci_enable_msi(pdev) == 0) {
err = sky2_test_msi(hw); err = sky2_test_msi(hw);
@@ -3481,32 +3570,33 @@ static int __devinit sky2_probe(struct pci_dev *pdev,
err = register_netdev(dev); err = register_netdev(dev);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: cannot register net device\n", dev_err(&pdev->dev, "cannot register net device\n");
pci_name(pdev));
goto err_out_free_netdev; goto err_out_free_netdev;
} }
err = request_irq(pdev->irq, sky2_intr, hw->msi ? 0 : IRQF_SHARED, err = request_irq(pdev->irq, sky2_intr, hw->msi ? 0 : IRQF_SHARED,
dev->name, hw); dev->name, hw);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: cannot assign irq %d\n", dev_err(&pdev->dev, "cannot assign irq %d\n", pdev->irq);
pci_name(pdev), pdev->irq);
goto err_out_unregister; goto err_out_unregister;
} }
sky2_write32(hw, B0_IMSK, Y2_IS_BASE); sky2_write32(hw, B0_IMSK, Y2_IS_BASE);
sky2_show_addr(dev); sky2_show_addr(dev);
if (hw->ports > 1 && (dev1 = sky2_init_netdev(hw, 1, using_dac))) { if (hw->ports > 1) {
if (register_netdev(dev1) == 0) struct net_device *dev1;
sky2_show_addr(dev1);
else { dev1 = sky2_init_netdev(hw, 1, using_dac, wol_default);
/* Failure to register second port need not be fatal */ if (!dev1)
printk(KERN_WARNING PFX dev_warn(&pdev->dev, "allocation for second device failed\n");
"register of second port failed\n"); else if ((err = register_netdev(dev1))) {
dev_warn(&pdev->dev,
"register of second port failed (%d)\n", err);
hw->dev[1] = NULL; hw->dev[1] = NULL;
free_netdev(dev1); free_netdev(dev1);
} } else
sky2_show_addr(dev1);
} }
setup_timer(&hw->idle_timer, sky2_idle, (unsigned long) hw); setup_timer(&hw->idle_timer, sky2_idle, (unsigned long) hw);
@@ -3555,7 +3645,8 @@ static void __devexit sky2_remove(struct pci_dev *pdev)
unregister_netdev(dev1); unregister_netdev(dev1);
unregister_netdev(dev0); unregister_netdev(dev0);
sky2_set_power_state(hw, PCI_D3hot); sky2_power_aux(hw);
sky2_write16(hw, B0_Y2LED, LED_STAT_OFF); sky2_write16(hw, B0_Y2LED, LED_STAT_OFF);
sky2_write8(hw, B0_CTST, CS_RST_SET); sky2_write8(hw, B0_CTST, CS_RST_SET);
sky2_read8(hw, B0_CTST); sky2_read8(hw, B0_CTST);
@@ -3580,27 +3671,31 @@ static void __devexit sky2_remove(struct pci_dev *pdev)
static int sky2_suspend(struct pci_dev *pdev, pm_message_t state) static int sky2_suspend(struct pci_dev *pdev, pm_message_t state)
{ {
struct sky2_hw *hw = pci_get_drvdata(pdev); struct sky2_hw *hw = pci_get_drvdata(pdev);
int i; int i, wol = 0;
pci_power_t pstate = pci_choose_state(pdev, state);
if (!(pstate == PCI_D3hot || pstate == PCI_D3cold))
return -EINVAL;
del_timer_sync(&hw->idle_timer); del_timer_sync(&hw->idle_timer);
netif_poll_disable(hw->dev[0]); netif_poll_disable(hw->dev[0]);
for (i = 0; i < hw->ports; i++) { for (i = 0; i < hw->ports; i++) {
struct net_device *dev = hw->dev[i]; struct net_device *dev = hw->dev[i];
struct sky2_port *sky2 = netdev_priv(dev);
if (netif_running(dev)) { if (netif_running(dev))
sky2_down(dev); sky2_down(dev);
netif_device_detach(dev);
} if (sky2->wol)
sky2_wol_init(sky2);
wol |= sky2->wol;
} }
sky2_write32(hw, B0_IMSK, 0); sky2_write32(hw, B0_IMSK, 0);
sky2_power_aux(hw);
pci_save_state(pdev); pci_save_state(pdev);
sky2_set_power_state(hw, pstate); pci_enable_wake(pdev, pci_choose_state(pdev, state), wol);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
return 0; return 0;
} }
@@ -3609,21 +3704,22 @@ static int sky2_resume(struct pci_dev *pdev)
struct sky2_hw *hw = pci_get_drvdata(pdev); struct sky2_hw *hw = pci_get_drvdata(pdev);
int i, err; int i, err;
pci_restore_state(pdev); err = pci_set_power_state(pdev, PCI_D0);
pci_enable_wake(pdev, PCI_D0, 0);
sky2_set_power_state(hw, PCI_D0);
err = sky2_reset(hw);
if (err) if (err)
goto out; goto out;
err = pci_restore_state(pdev);
if (err)
goto out;
pci_enable_wake(pdev, PCI_D0, 0);
sky2_reset(hw);
sky2_write32(hw, B0_IMSK, Y2_IS_BASE); sky2_write32(hw, B0_IMSK, Y2_IS_BASE);
for (i = 0; i < hw->ports; i++) { for (i = 0; i < hw->ports; i++) {
struct net_device *dev = hw->dev[i]; struct net_device *dev = hw->dev[i];
if (netif_running(dev)) { if (netif_running(dev)) {
netif_device_attach(dev);
err = sky2_up(dev); err = sky2_up(dev);
if (err) { if (err) {
printk(KERN_ERR PFX "%s: could not up: %d\n", printk(KERN_ERR PFX "%s: could not up: %d\n",
@@ -3636,11 +3732,43 @@ static int sky2_resume(struct pci_dev *pdev)
netif_poll_enable(hw->dev[0]); netif_poll_enable(hw->dev[0]);
sky2_idle_start(hw); sky2_idle_start(hw);
return 0;
out: out:
dev_err(&pdev->dev, "resume failed (%d)\n", err);
pci_disable_device(pdev);
return err; return err;
} }
#endif #endif
static void sky2_shutdown(struct pci_dev *pdev)
{
struct sky2_hw *hw = pci_get_drvdata(pdev);
int i, wol = 0;
del_timer_sync(&hw->idle_timer);
netif_poll_disable(hw->dev[0]);
for (i = 0; i < hw->ports; i++) {
struct net_device *dev = hw->dev[i];
struct sky2_port *sky2 = netdev_priv(dev);
if (sky2->wol) {
wol = 1;
sky2_wol_init(sky2);
}
}
if (wol)
sky2_power_aux(hw);
pci_enable_wake(pdev, PCI_D3hot, wol);
pci_enable_wake(pdev, PCI_D3cold, wol);
pci_disable_device(pdev);
pci_set_power_state(pdev, PCI_D3hot);
}
static struct pci_driver sky2_driver = { static struct pci_driver sky2_driver = {
.name = DRV_NAME, .name = DRV_NAME,
.id_table = sky2_id_table, .id_table = sky2_id_table,
@@ -3650,6 +3778,7 @@ static struct pci_driver sky2_driver = {
.suspend = sky2_suspend, .suspend = sky2_suspend,
.resume = sky2_resume, .resume = sky2_resume,
#endif #endif
.shutdown = sky2_shutdown,
}; };
static int __init sky2_init_module(void) static int __init sky2_init_module(void)
@@ -32,6 +32,7 @@ enum pci_dev_reg_1 {
PCI_Y2_PHY1_COMA = 1<<28, /* Set PHY 1 to Coma Mode (YUKON-2) */ PCI_Y2_PHY1_COMA = 1<<28, /* Set PHY 1 to Coma Mode (YUKON-2) */
PCI_Y2_PHY2_POWD = 1<<27, /* Set PHY 2 to Power Down (YUKON-2) */ PCI_Y2_PHY2_POWD = 1<<27, /* Set PHY 2 to Power Down (YUKON-2) */
PCI_Y2_PHY1_POWD = 1<<26, /* Set PHY 1 to Power Down (YUKON-2) */ PCI_Y2_PHY1_POWD = 1<<26, /* Set PHY 1 to Power Down (YUKON-2) */
PCI_Y2_PME_LEGACY= 1<<15, /* PCI Express legacy power management mode */
}; };
enum pci_dev_reg_2 { enum pci_dev_reg_2 {
@@ -370,12 +371,9 @@ enum {
/* B2_CHIP_ID 8 bit Chip Identification Number */ /* B2_CHIP_ID 8 bit Chip Identification Number */
enum { enum {
CHIP_ID_GENESIS = 0x0a, /* Chip ID for GENESIS */
CHIP_ID_YUKON = 0xb0, /* Chip ID for YUKON */
CHIP_ID_YUKON_LITE = 0xb1, /* Chip ID for YUKON-Lite (Rev. A1-A3) */
CHIP_ID_YUKON_LP = 0xb2, /* Chip ID for YUKON-LP */
CHIP_ID_YUKON_XL = 0xb3, /* Chip ID for YUKON-2 XL */ CHIP_ID_YUKON_XL = 0xb3, /* Chip ID for YUKON-2 XL */
CHIP_ID_YUKON_EC_U = 0xb4, /* Chip ID for YUKON-2 EC Ultra */ CHIP_ID_YUKON_EC_U = 0xb4, /* Chip ID for YUKON-2 EC Ultra */
CHIP_ID_YUKON_EX = 0xb5, /* Chip ID for YUKON-2 Extreme */
CHIP_ID_YUKON_EC = 0xb6, /* Chip ID for YUKON-2 EC */ CHIP_ID_YUKON_EC = 0xb6, /* Chip ID for YUKON-2 EC */
CHIP_ID_YUKON_FE = 0xb7, /* Chip ID for YUKON-2 FE */ CHIP_ID_YUKON_FE = 0xb7, /* Chip ID for YUKON-2 FE */
@@ -767,6 +765,24 @@ enum {
POLL_LIST_ADDR_HI= 0x0e2c,/* 32 bit Poll. List Start Addr (high) */ POLL_LIST_ADDR_HI= 0x0e2c,/* 32 bit Poll. List Start Addr (high) */
}; };
enum {
SMB_CFG = 0x0e40, /* 32 bit SMBus Config Register */
SMB_CSR = 0x0e44, /* 32 bit SMBus Control/Status Register */
};
enum {
CPU_WDOG = 0x0e48, /* 32 bit Watchdog Register */
CPU_CNTR = 0x0e4C, /* 32 bit Counter Register */
CPU_TIM = 0x0e50,/* 32 bit Timer Compare Register */
CPU_AHB_ADDR = 0x0e54, /* 32 bit CPU AHB Debug Register */
CPU_AHB_WDATA = 0x0e58, /* 32 bit CPU AHB Debug Register */
CPU_AHB_RDATA = 0x0e5C, /* 32 bit CPU AHB Debug Register */
HCU_MAP_BASE = 0x0e60, /* 32 bit Reset Mapping Base */
CPU_AHB_CTRL = 0x0e64, /* 32 bit CPU AHB Debug Register */
HCU_CCSR = 0x0e68, /* 32 bit CPU Control and Status Register */
HCU_HCSR = 0x0e6C, /* 32 bit Host Control and Status Register */
};
/* ASF Subsystem Registers (Yukon-2 only) */ /* ASF Subsystem Registers (Yukon-2 only) */
enum { enum {
B28_Y2_SMB_CONFIG = 0x0e40,/* 32 bit ASF SMBus Config Register */ B28_Y2_SMB_CONFIG = 0x0e40,/* 32 bit ASF SMBus Config Register */
@@ -837,33 +853,27 @@ enum {
GMAC_LINK_CTRL = 0x0f10,/* 16 bit Link Control Reg */ GMAC_LINK_CTRL = 0x0f10,/* 16 bit Link Control Reg */
/* Wake-up Frame Pattern Match Control Registers (YUKON only) */ /* Wake-up Frame Pattern Match Control Registers (YUKON only) */
WOL_REG_OFFS = 0x20,/* HW-Bug: Address is + 0x20 against spec. */
WOL_CTRL_STAT = 0x0f20,/* 16 bit WOL Control/Status Reg */ WOL_CTRL_STAT = 0x0f20,/* 16 bit WOL Control/Status Reg */
WOL_MATCH_CTL = 0x0f22,/* 8 bit WOL Match Control Reg */ WOL_MATCH_CTL = 0x0f22,/* 8 bit WOL Match Control Reg */
WOL_MATCH_RES = 0x0f23,/* 8 bit WOL Match Result Reg */ WOL_MATCH_RES = 0x0f23,/* 8 bit WOL Match Result Reg */
WOL_MAC_ADDR = 0x0f24,/* 32 bit WOL MAC Address */ WOL_MAC_ADDR = 0x0f24,/* 32 bit WOL MAC Address */
WOL_PATT_PME = 0x0f2a,/* 8 bit WOL PME Match Enable (Yukon-2) */
WOL_PATT_ASFM = 0x0f2b,/* 8 bit WOL ASF Match Enable (Yukon-2) */
WOL_PATT_RPTR = 0x0f2c,/* 8 bit WOL Pattern Read Pointer */ WOL_PATT_RPTR = 0x0f2c,/* 8 bit WOL Pattern Read Pointer */
/* WOL Pattern Length Registers (YUKON only) */ /* WOL Pattern Length Registers (YUKON only) */
WOL_PATT_LEN_LO = 0x0f30,/* 32 bit WOL Pattern Length 3..0 */ WOL_PATT_LEN_LO = 0x0f30,/* 32 bit WOL Pattern Length 3..0 */
WOL_PATT_LEN_HI = 0x0f34,/* 24 bit WOL Pattern Length 6..4 */ WOL_PATT_LEN_HI = 0x0f34,/* 24 bit WOL Pattern Length 6..4 */
/* WOL Pattern Counter Registers (YUKON only) */ /* WOL Pattern Counter Registers (YUKON only) */
WOL_PATT_CNT_0 = 0x0f38,/* 32 bit WOL Pattern Counter 3..0 */ WOL_PATT_CNT_0 = 0x0f38,/* 32 bit WOL Pattern Counter 3..0 */
WOL_PATT_CNT_4 = 0x0f3c,/* 24 bit WOL Pattern Counter 6..4 */ WOL_PATT_CNT_4 = 0x0f3c,/* 24 bit WOL Pattern Counter 6..4 */
}; };
#define WOL_REGS(port, x) (x + (port)*0x80)
enum { enum {
WOL_PATT_RAM_1 = 0x1000,/* WOL Pattern RAM Link 1 */ WOL_PATT_RAM_1 = 0x1000,/* WOL Pattern RAM Link 1 */
WOL_PATT_RAM_2 = 0x1400,/* WOL Pattern RAM Link 2 */ WOL_PATT_RAM_2 = 0x1400,/* WOL Pattern RAM Link 2 */
}; };
#define WOL_PATT_RAM_BASE(port) (WOL_PATT_RAM_1 + (port)*0x400)
enum { enum {
BASE_GMAC_1 = 0x2800,/* GMAC 1 registers */ BASE_GMAC_1 = 0x2800,/* GMAC 1 registers */
@@ -1654,6 +1664,39 @@ enum {
Y2_ASF_CLR_ASFI = 1<<1, /* Clear host IRQ */ Y2_ASF_CLR_ASFI = 1<<1, /* Clear host IRQ */
Y2_ASF_HOST_IRQ = 1<<0, /* Issue an IRQ to HOST system */ Y2_ASF_HOST_IRQ = 1<<0, /* Issue an IRQ to HOST system */
}; };
/* HCU_CCSR CPU Control and Status Register */
enum {
HCU_CCSR_SMBALERT_MONITOR= 1<<27, /* SMBALERT pin monitor */
HCU_CCSR_CPU_SLEEP = 1<<26, /* CPU sleep status */
/* Clock Stretching Timeout */
HCU_CCSR_CS_TO = 1<<25,
HCU_CCSR_WDOG = 1<<24, /* Watchdog Reset */
HCU_CCSR_CLR_IRQ_HOST = 1<<17, /* Clear IRQ_HOST */
HCU_CCSR_SET_IRQ_HCU = 1<<16, /* Set IRQ_HCU */
HCU_CCSR_AHB_RST = 1<<9, /* Reset AHB bridge */
HCU_CCSR_CPU_RST_MODE = 1<<8, /* CPU Reset Mode */
HCU_CCSR_SET_SYNC_CPU = 1<<5,
HCU_CCSR_CPU_CLK_DIVIDE_MSK = 3<<3,/* CPU Clock Divide */
HCU_CCSR_CPU_CLK_DIVIDE_BASE= 1<<3,
HCU_CCSR_OS_PRSNT = 1<<2, /* ASF OS Present */
/* Microcontroller State */
HCU_CCSR_UC_STATE_MSK = 3,
HCU_CCSR_UC_STATE_BASE = 1<<0,
HCU_CCSR_ASF_RESET = 0,
HCU_CCSR_ASF_HALTED = 1<<1,
HCU_CCSR_ASF_RUNNING = 1<<0,
};
/* HCU_HCSR Host Control and Status Register */
enum {
HCU_HCSR_SET_IRQ_CPU = 1<<16, /* Set IRQ_CPU */
HCU_HCSR_CLR_IRQ_HCU = 1<<1, /* Clear IRQ_HCU */
HCU_HCSR_SET_IRQ_HOST = 1<<0, /* Set IRQ_HOST */
};
/* STAT_CTRL 32 bit Status BMU control register (Yukon-2 only) */ /* STAT_CTRL 32 bit Status BMU control register (Yukon-2 only) */
enum { enum {
@@ -1715,14 +1758,17 @@ enum {
GM_IS_RX_COMPL = 1<<0, /* Frame Reception Complete */ GM_IS_RX_COMPL = 1<<0, /* Frame Reception Complete */
#define GMAC_DEF_MSK GM_IS_TX_FF_UR #define GMAC_DEF_MSK GM_IS_TX_FF_UR
};
/* GMAC_LINK_CTRL 16 bit GMAC Link Control Reg (YUKON only) */ /* GMAC_LINK_CTRL 16 bit GMAC Link Control Reg (YUKON only) */
/* Bits 15.. 2: reserved */ enum { /* Bits 15.. 2: reserved */
GMLC_RST_CLR = 1<<1, /* Clear GMAC Link Reset */ GMLC_RST_CLR = 1<<1, /* Clear GMAC Link Reset */
GMLC_RST_SET = 1<<0, /* Set GMAC Link Reset */ GMLC_RST_SET = 1<<0, /* Set GMAC Link Reset */
};
/* WOL_CTRL_STAT 16 bit WOL Control/Status Reg */ /* WOL_CTRL_STAT 16 bit WOL Control/Status Reg */
enum {
WOL_CTL_LINK_CHG_OCC = 1<<15, WOL_CTL_LINK_CHG_OCC = 1<<15,
WOL_CTL_MAGIC_PKT_OCC = 1<<14, WOL_CTL_MAGIC_PKT_OCC = 1<<14,
WOL_CTL_PATTERN_OCC = 1<<13, WOL_CTL_PATTERN_OCC = 1<<13,
@@ -1741,17 +1787,6 @@ enum {
WOL_CTL_DIS_PATTERN_UNIT = 1<<0, WOL_CTL_DIS_PATTERN_UNIT = 1<<0,
}; };
#define WOL_CTL_DEFAULT \
(WOL_CTL_DIS_PME_ON_LINK_CHG | \
WOL_CTL_DIS_PME_ON_PATTERN | \
WOL_CTL_DIS_PME_ON_MAGIC_PKT | \
WOL_CTL_DIS_LINK_CHG_UNIT | \
WOL_CTL_DIS_PATTERN_UNIT | \
WOL_CTL_DIS_MAGIC_PKT_UNIT)
/* WOL_MATCH_CTL 8 bit WOL Match Control Reg */
#define WOL_CTL_PATT_ENA(x) (1 << (x))
/* Control flags */ /* Control flags */
enum { enum {
@@ -1875,6 +1910,7 @@ struct sky2_port {
u8 autoneg; /* AUTONEG_ENABLE, AUTONEG_DISABLE */ u8 autoneg; /* AUTONEG_ENABLE, AUTONEG_DISABLE */
u8 duplex; /* DUPLEX_HALF, DUPLEX_FULL */ u8 duplex; /* DUPLEX_HALF, DUPLEX_FULL */
u8 rx_csum; u8 rx_csum;
u8 wol;
enum flow_control flow_mode; enum flow_control flow_mode;
enum flow_control flow_status; enum flow_control flow_status;
@@ -1887,7 +1923,6 @@ struct sky2_hw {
struct pci_dev *pdev; struct pci_dev *pdev;
struct net_device *dev[2]; struct net_device *dev[2];
int pm_cap;
u8 chip_id; u8 chip_id;
u8 chip_rev; u8 chip_rev;
u8 pmd_type; u8 pmd_type;
@@ -280,72 +280,67 @@ spider_net_free_chain(struct spider_net_card *card,
{ {
struct spider_net_descr *descr; struct spider_net_descr *descr;
for (descr = chain->tail; !descr->bus_addr; descr = descr->next) { descr = chain->ring;
pci_unmap_single(card->pdev, descr->bus_addr, do {
SPIDER_NET_DESCR_SIZE, PCI_DMA_BIDIRECTIONAL);
descr->bus_addr = 0; descr->bus_addr = 0;
} descr->next_descr_addr = 0;
descr = descr->next;
} while (descr != chain->ring);
dma_free_coherent(&card->pdev->dev, chain->num_desc,
chain->ring, chain->dma_addr);
} }
/** /**
* spider_net_init_chain - links descriptor chain * spider_net_init_chain - alloc and link descriptor chain
* @card: card structure * @card: card structure
* @chain: address of chain * @chain: address of chain
* @start_descr: address of descriptor array
* @no: number of descriptors
* *
* we manage a circular list that mirrors the hardware structure, * We manage a circular list that mirrors the hardware structure,
* except that the hardware uses bus addresses. * except that the hardware uses bus addresses.
* *
* returns 0 on success, <0 on failure * Returns 0 on success, <0 on failure
*/ */
static int static int
spider_net_init_chain(struct spider_net_card *card, spider_net_init_chain(struct spider_net_card *card,
struct spider_net_descr_chain *chain, struct spider_net_descr_chain *chain)
struct spider_net_descr *start_descr,
int no)
{ {
int i; int i;
struct spider_net_descr *descr; struct spider_net_descr *descr;
dma_addr_t buf; dma_addr_t buf;
size_t alloc_size;
descr = start_descr; alloc_size = chain->num_desc * sizeof (struct spider_net_descr);
memset(descr, 0, sizeof(*descr) * no);
/* set up the hardware pointers in each descriptor */ chain->ring = dma_alloc_coherent(&card->pdev->dev, alloc_size,
for (i=0; i<no; i++, descr++) { &chain->dma_addr, GFP_KERNEL);
if (!chain->ring)
return -ENOMEM;
descr = chain->ring;
memset(descr, 0, alloc_size);
/* Set up the hardware pointers in each descriptor */
buf = chain->dma_addr;
for (i=0; i < chain->num_desc; i++, descr++) {
descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
buf = pci_map_single(card->pdev, descr,
SPIDER_NET_DESCR_SIZE,
PCI_DMA_BIDIRECTIONAL);
if (pci_dma_mapping_error(buf))
goto iommu_error;
descr->bus_addr = buf; descr->bus_addr = buf;
descr->next_descr_addr = 0;
descr->next = descr + 1; descr->next = descr + 1;
descr->prev = descr - 1; descr->prev = descr - 1;
buf += sizeof(struct spider_net_descr);
} }
/* do actual circular list */ /* do actual circular list */
(descr-1)->next = start_descr; (descr-1)->next = chain->ring;
start_descr->prev = descr-1; chain->ring->prev = descr-1;
spin_lock_init(&chain->lock); spin_lock_init(&chain->lock);
chain->head = start_descr; chain->head = chain->ring;
chain->tail = start_descr; chain->tail = chain->ring;
return 0; return 0;
iommu_error:
descr = start_descr;
for (i=0; i < no; i++, descr++)
if (descr->bus_addr)
pci_unmap_single(card->pdev, descr->bus_addr,
SPIDER_NET_DESCR_SIZE,
PCI_DMA_BIDIRECTIONAL);
return -ENOMEM;
} }
/** /**
@@ -372,21 +367,20 @@ spider_net_free_rx_chain_contents(struct spider_net_card *card)
} }
/** /**
* spider_net_prepare_rx_descr - reinitializes a rx descriptor * spider_net_prepare_rx_descr - Reinitialize RX descriptor
* @card: card structure * @card: card structure
* @descr: descriptor to re-init * @descr: descriptor to re-init
* *
* return 0 on succes, <0 on failure * Return 0 on success, <0 on failure.
* *
* allocates a new rx skb, iommu-maps it and attaches it to the descriptor. * Allocates a new rx skb, iommu-maps it and attaches it to the
* Activate the descriptor state-wise * descriptor. Mark the descriptor as activated, ready-to-use.
*/ */
static int static int
spider_net_prepare_rx_descr(struct spider_net_card *card, spider_net_prepare_rx_descr(struct spider_net_card *card,
struct spider_net_descr *descr) struct spider_net_descr *descr)
{ {
dma_addr_t buf; dma_addr_t buf;
int error = 0;
int offset; int offset;
int bufsize; int bufsize;
@@ -414,7 +408,7 @@ spider_net_prepare_rx_descr(struct spider_net_card *card,
(SPIDER_NET_RXBUF_ALIGN - 1); (SPIDER_NET_RXBUF_ALIGN - 1);
if (offset) if (offset)
skb_reserve(descr->skb, SPIDER_NET_RXBUF_ALIGN - offset); skb_reserve(descr->skb, SPIDER_NET_RXBUF_ALIGN - offset);
/* io-mmu-map the skb */ /* iommu-map the skb */
buf = pci_map_single(card->pdev, descr->skb->data, buf = pci_map_single(card->pdev, descr->skb->data,
SPIDER_NET_MAX_FRAME, PCI_DMA_FROMDEVICE); SPIDER_NET_MAX_FRAME, PCI_DMA_FROMDEVICE);
descr->buf_addr = buf; descr->buf_addr = buf;
@@ -425,11 +419,16 @@ spider_net_prepare_rx_descr(struct spider_net_card *card,
card->spider_stats.rx_iommu_map_error++; card->spider_stats.rx_iommu_map_error++;
descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
} else { } else {
descr->next_descr_addr = 0;
wmb();
descr->dmac_cmd_status = SPIDER_NET_DESCR_CARDOWNED | descr->dmac_cmd_status = SPIDER_NET_DESCR_CARDOWNED |
SPIDER_NET_DMAC_NOINTR_COMPLETE; SPIDER_NET_DMAC_NOINTR_COMPLETE;
wmb();
descr->prev->next_descr_addr = descr->bus_addr;
} }
return error; return 0;
} }
/** /**
@@ -493,10 +492,10 @@ spider_net_refill_rx_chain(struct spider_net_card *card)
} }
/** /**
* spider_net_alloc_rx_skbs - allocates rx skbs in rx descriptor chains * spider_net_alloc_rx_skbs - Allocates rx skbs in rx descriptor chains
* @card: card structure * @card: card structure
* *
* returns 0 on success, <0 on failure * Returns 0 on success, <0 on failure.
*/ */
static int static int
spider_net_alloc_rx_skbs(struct spider_net_card *card) spider_net_alloc_rx_skbs(struct spider_net_card *card)
@@ -507,16 +506,16 @@ spider_net_alloc_rx_skbs(struct spider_net_card *card)
result = -ENOMEM; result = -ENOMEM;
chain = &card->rx_chain; chain = &card->rx_chain;
/* put at least one buffer into the chain. if this fails, /* Put at least one buffer into the chain. if this fails,
* we've got a problem. if not, spider_net_refill_rx_chain * we've got a problem. If not, spider_net_refill_rx_chain
* will do the rest at the end of this function */ * will do the rest at the end of this function. */
if (spider_net_prepare_rx_descr(card, chain->head)) if (spider_net_prepare_rx_descr(card, chain->head))
goto error; goto error;
else else
chain->head = chain->head->next; chain->head = chain->head->next;
/* this will allocate the rest of the rx buffers; if not, it's /* This will allocate the rest of the rx buffers;
* business as usual later on */ * if not, it's business as usual later on. */
spider_net_refill_rx_chain(card); spider_net_refill_rx_chain(card);
spider_net_enable_rxdmac(card); spider_net_enable_rxdmac(card);
return 0; return 0;
@@ -707,7 +706,7 @@ spider_net_set_low_watermark(struct spider_net_card *card)
} }
/* If TX queue is short, don't even bother with interrupts */ /* If TX queue is short, don't even bother with interrupts */
if (cnt < card->num_tx_desc/4) if (cnt < card->tx_chain.num_desc/4)
return cnt; return cnt;
/* Set low-watermark 3/4th's of the way into the queue. */ /* Set low-watermark 3/4th's of the way into the queue. */
@@ -915,16 +914,13 @@ spider_net_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
* spider_net_pass_skb_up - takes an skb from a descriptor and passes it on * spider_net_pass_skb_up - takes an skb from a descriptor and passes it on
* @descr: descriptor to process * @descr: descriptor to process
* @card: card structure * @card: card structure
* @napi: whether caller is in NAPI context
* *
* returns 1 on success, 0 if no packet was passed to the stack * Fills out skb structure and passes the data to the stack.
* * The descriptor state is not changed.
* iommu-unmaps the skb, fills out skb structure and passes the data to the
* stack. The descriptor state is not changed.
*/ */
static int static void
spider_net_pass_skb_up(struct spider_net_descr *descr, spider_net_pass_skb_up(struct spider_net_descr *descr,
struct spider_net_card *card, int napi) struct spider_net_card *card)
{ {
struct sk_buff *skb; struct sk_buff *skb;
struct net_device *netdev; struct net_device *netdev;
@@ -932,23 +928,8 @@ spider_net_pass_skb_up(struct spider_net_descr *descr,
data_status = descr->data_status; data_status = descr->data_status;
data_error = descr->data_error; data_error = descr->data_error;
netdev = card->netdev; netdev = card->netdev;
/* unmap descriptor */
pci_unmap_single(card->pdev, descr->buf_addr, SPIDER_NET_MAX_FRAME,
PCI_DMA_FROMDEVICE);
/* the cases we'll throw away the packet immediately */
if (data_error & SPIDER_NET_DESTROY_RX_FLAGS) {
if (netif_msg_rx_err(card))
pr_err("error in received descriptor found, "
"data_status=x%08x, data_error=x%08x\n",
data_status, data_error);
card->spider_stats.rx_desc_error++;
return 0;
}
skb = descr->skb; skb = descr->skb;
skb->dev = netdev; skb->dev = netdev;
skb_put(skb, descr->valid_size); skb_put(skb, descr->valid_size);
@@ -977,57 +958,72 @@ spider_net_pass_skb_up(struct spider_net_descr *descr,
} }
/* pass skb up to stack */ /* pass skb up to stack */
if (napi) netif_receive_skb(skb);
netif_receive_skb(skb);
else
netif_rx_ni(skb);
/* update netdevice statistics */ /* update netdevice statistics */
card->netdev_stats.rx_packets++; card->netdev_stats.rx_packets++;
card->netdev_stats.rx_bytes += skb->len; card->netdev_stats.rx_bytes += skb->len;
return 1;
} }
#ifdef DEBUG
static void show_rx_chain(struct spider_net_card *card)
{
struct spider_net_descr_chain *chain = &card->rx_chain;
struct spider_net_descr *start= chain->tail;
struct spider_net_descr *descr= start;
int status;
int cnt = 0;
int cstat = spider_net_get_descr_status(descr);
printk(KERN_INFO "RX chain tail at descr=%ld\n",
(start - card->descr) - card->tx_chain.num_desc);
status = cstat;
do
{
status = spider_net_get_descr_status(descr);
if (cstat != status) {
printk(KERN_INFO "Have %d descrs with stat=x%08x\n", cnt, cstat);
cstat = status;
cnt = 0;
}
cnt ++;
descr = descr->next;
} while (descr != start);
printk(KERN_INFO "Last %d descrs with stat=x%08x\n", cnt, cstat);
}
#endif
/** /**
* spider_net_decode_one_descr - processes an rx descriptor * spider_net_decode_one_descr - processes an rx descriptor
* @card: card structure * @card: card structure
* @napi: whether caller is in NAPI context
* *
* returns 1 if a packet has been sent to the stack, otherwise 0 * Returns 1 if a packet has been sent to the stack, otherwise 0
* *
* processes an rx descriptor by iommu-unmapping the data buffer and passing * Processes an rx descriptor by iommu-unmapping the data buffer and passing
* the packet up to the stack. This function is called in softirq * the packet up to the stack. This function is called in softirq
* context, e.g. either bottom half from interrupt or NAPI polling context * context, e.g. either bottom half from interrupt or NAPI polling context
*/ */
static int static int
spider_net_decode_one_descr(struct spider_net_card *card, int napi) spider_net_decode_one_descr(struct spider_net_card *card)
{ {
struct spider_net_descr_chain *chain = &card->rx_chain; struct spider_net_descr_chain *chain = &card->rx_chain;
struct spider_net_descr *descr = chain->tail; struct spider_net_descr *descr = chain->tail;
int status; int status;
int result;
status = spider_net_get_descr_status(descr); status = spider_net_get_descr_status(descr);
if (status == SPIDER_NET_DESCR_CARDOWNED) { /* Nothing in the descriptor, or ring must be empty */
/* nothing in the descriptor yet */ if ((status == SPIDER_NET_DESCR_CARDOWNED) ||
result=0; (status == SPIDER_NET_DESCR_NOT_IN_USE))
goto out; return 0;
}
if (status == SPIDER_NET_DESCR_NOT_IN_USE) {
/* not initialized yet, the ring must be empty */
spider_net_refill_rx_chain(card);
spider_net_enable_rxdmac(card);
result=0;
goto out;
}
/* descriptor definitively used -- move on tail */ /* descriptor definitively used -- move on tail */
chain->tail = descr->next; chain->tail = descr->next;
result = 0; /* unmap descriptor */
pci_unmap_single(card->pdev, descr->buf_addr,
SPIDER_NET_MAX_FRAME, PCI_DMA_FROMDEVICE);
if ( (status == SPIDER_NET_DESCR_RESPONSE_ERROR) || if ( (status == SPIDER_NET_DESCR_RESPONSE_ERROR) ||
(status == SPIDER_NET_DESCR_PROTECTION_ERROR) || (status == SPIDER_NET_DESCR_PROTECTION_ERROR) ||
(status == SPIDER_NET_DESCR_FORCE_END) ) { (status == SPIDER_NET_DESCR_FORCE_END) ) {
@@ -1035,31 +1031,55 @@ spider_net_decode_one_descr(struct spider_net_card *card, int napi)
pr_err("%s: dropping RX descriptor with state %d\n", pr_err("%s: dropping RX descriptor with state %d\n",
card->netdev->name, status); card->netdev->name, status);
card->netdev_stats.rx_dropped++; card->netdev_stats.rx_dropped++;
pci_unmap_single(card->pdev, descr->buf_addr, goto bad_desc;
SPIDER_NET_MAX_FRAME, PCI_DMA_FROMDEVICE);
dev_kfree_skb_irq(descr->skb);
goto refill;
} }
if ( (status != SPIDER_NET_DESCR_COMPLETE) && if ( (status != SPIDER_NET_DESCR_COMPLETE) &&
(status != SPIDER_NET_DESCR_FRAME_END) ) { (status != SPIDER_NET_DESCR_FRAME_END) ) {
if (netif_msg_rx_err(card)) { if (netif_msg_rx_err(card))
pr_err("%s: RX descriptor with state %d\n", pr_err("%s: RX descriptor with unknown state %d\n",
card->netdev->name, status); card->netdev->name, status);
card->spider_stats.rx_desc_unk_state++; card->spider_stats.rx_desc_unk_state++;
} goto bad_desc;
goto refill;
} }
/* ok, we've got a packet in descr */ /* The cases we'll throw away the packet immediately */
result = spider_net_pass_skb_up(descr, card, napi); if (descr->data_error & SPIDER_NET_DESTROY_RX_FLAGS) {
refill: if (netif_msg_rx_err(card))
pr_err("%s: error in received descriptor found, "
"data_status=x%08x, data_error=x%08x\n",
card->netdev->name,
descr->data_status, descr->data_error);
goto bad_desc;
}
if (descr->dmac_cmd_status & 0xfefe) {
pr_err("%s: bad status, cmd_status=x%08x\n",
card->netdev->name,
descr->dmac_cmd_status);
pr_err("buf_addr=x%08x\n", descr->buf_addr);
pr_err("buf_size=x%08x\n", descr->buf_size);
pr_err("next_descr_addr=x%08x\n", descr->next_descr_addr);
pr_err("result_size=x%08x\n", descr->result_size);
pr_err("valid_size=x%08x\n", descr->valid_size);
pr_err("data_status=x%08x\n", descr->data_status);
pr_err("data_error=x%08x\n", descr->data_error);
pr_err("bus_addr=x%08x\n", descr->bus_addr);
pr_err("which=%ld\n", descr - card->rx_chain.ring);
card->spider_stats.rx_desc_error++;
goto bad_desc;
}
/* Ok, we've got a packet in descr */
spider_net_pass_skb_up(descr, card);
descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
/* change the descriptor state: */ return 1;
if (!napi)
spider_net_refill_rx_chain(card); bad_desc:
out: dev_kfree_skb_irq(descr->skb);
return result; descr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
return 0;
} }
/** /**
@@ -1085,7 +1105,7 @@ spider_net_poll(struct net_device *netdev, int *budget)
packets_to_do = min(*budget, netdev->quota); packets_to_do = min(*budget, netdev->quota);
while (packets_to_do) { while (packets_to_do) {
if (spider_net_decode_one_descr(card, 1)) { if (spider_net_decode_one_descr(card)) {
packets_done++; packets_done++;
packets_to_do--; packets_to_do--;
} else { } else {
@@ -1098,6 +1118,7 @@ spider_net_poll(struct net_device *netdev, int *budget)
netdev->quota -= packets_done; netdev->quota -= packets_done;
*budget -= packets_done; *budget -= packets_done;
spider_net_refill_rx_chain(card); spider_net_refill_rx_chain(card);
spider_net_enable_rxdmac(card);
/* if all packets are in the stack, enable interrupts and return 0 */ /* if all packets are in the stack, enable interrupts and return 0 */
/* if not, return 1 */ /* if not, return 1 */
@@ -1226,24 +1247,6 @@ spider_net_set_mac(struct net_device *netdev, void *p)
return 0; return 0;
} }
/**
* spider_net_handle_rxram_full - cleans up RX ring upon RX RAM full interrupt
* @card: card structure
*
* spider_net_handle_rxram_full empties the RX ring so that spider can put
* more packets in it and empty its RX RAM. This is called in bottom half
* context
*/
static void
spider_net_handle_rxram_full(struct spider_net_card *card)
{
while (spider_net_decode_one_descr(card, 0))
;
spider_net_enable_rxchtails(card);
spider_net_enable_rxdmac(card);
netif_rx_schedule(card->netdev);
}
/** /**
* spider_net_handle_error_irq - handles errors raised by an interrupt * spider_net_handle_error_irq - handles errors raised by an interrupt
* @card: card structure * @card: card structure
@@ -1366,10 +1369,10 @@ spider_net_handle_error_irq(struct spider_net_card *card, u32 status_reg)
case SPIDER_NET_GRFAFLLINT: /* fallthrough */ case SPIDER_NET_GRFAFLLINT: /* fallthrough */
case SPIDER_NET_GRMFLLINT: case SPIDER_NET_GRMFLLINT:
if (netif_msg_intr(card) && net_ratelimit()) if (netif_msg_intr(card) && net_ratelimit())
pr_debug("Spider RX RAM full, incoming packets " pr_err("Spider RX RAM full, incoming packets "
"might be discarded!\n"); "might be discarded!\n");
spider_net_rx_irq_off(card); spider_net_rx_irq_off(card);
tasklet_schedule(&card->rxram_full_tl); netif_rx_schedule(card->netdev);
show_error = 0; show_error = 0;
break; break;
@@ -1384,7 +1387,7 @@ spider_net_handle_error_irq(struct spider_net_card *card, u32 status_reg)
case SPIDER_NET_GDCDCEINT: /* fallthrough */ case SPIDER_NET_GDCDCEINT: /* fallthrough */
case SPIDER_NET_GDBDCEINT: /* fallthrough */ case SPIDER_NET_GDBDCEINT: /* fallthrough */
case SPIDER_NET_GDADCEINT: case SPIDER_NET_GDADCEINT:
if (netif_msg_intr(card)) if (netif_msg_intr(card) && net_ratelimit())
pr_err("got descriptor chain end interrupt, " pr_err("got descriptor chain end interrupt, "
"restarting DMAC %c.\n", "restarting DMAC %c.\n",
'D'-(i-SPIDER_NET_GDDDCEINT)/3); 'D'-(i-SPIDER_NET_GDDDCEINT)/3);
@@ -1455,7 +1458,7 @@ spider_net_handle_error_irq(struct spider_net_card *card, u32 status_reg)
break; break;
} }
if ((show_error) && (netif_msg_intr(card))) if ((show_error) && (netif_msg_intr(card)) && net_ratelimit())
pr_err("Got error interrupt on %s, GHIINT0STS = 0x%08x, " pr_err("Got error interrupt on %s, GHIINT0STS = 0x%08x, "
"GHIINT1STS = 0x%08x, GHIINT2STS = 0x%08x\n", "GHIINT1STS = 0x%08x, GHIINT2STS = 0x%08x\n",
card->netdev->name, card->netdev->name,
@@ -1651,27 +1654,18 @@ int
 spider_net_open(struct net_device *netdev)
 {
 	struct spider_net_card *card = netdev_priv(netdev);
-	struct spider_net_descr *descr;
-	int i, result;
+	int result;
 
-	result = -ENOMEM;
-	if (spider_net_init_chain(card, &card->tx_chain, card->descr,
-			card->num_tx_desc))
+	result = spider_net_init_chain(card, &card->tx_chain);
+	if (result)
 		goto alloc_tx_failed;
 	card->low_watermark = NULL;
 
-	/* rx_chain is after tx_chain, so offset is descr + tx_count */
-	if (spider_net_init_chain(card, &card->rx_chain,
-			card->descr + card->num_tx_desc,
-			card->num_rx_desc))
+	result = spider_net_init_chain(card, &card->rx_chain);
+	if (result)
 		goto alloc_rx_failed;
 
-	descr = card->rx_chain.head;
-	for (i=0; i < card->num_rx_desc; i++, descr++)
-		descr->next_descr_addr = descr->next->bus_addr;
-
-	/* allocate rx skbs */
+	/* Allocate rx skbs */
 	if (spider_net_alloc_rx_skbs(card))
 		goto alloc_skbs_failed;
@@ -1902,7 +1896,6 @@ spider_net_stop(struct net_device *netdev)
 {
 	struct spider_net_card *card = netdev_priv(netdev);
 
-	tasklet_kill(&card->rxram_full_tl);
 	netif_poll_disable(netdev);
 	netif_carrier_off(netdev);
 	netif_stop_queue(netdev);
@@ -1924,6 +1917,7 @@ spider_net_stop(struct net_device *netdev)
 	/* release chains */
 	spider_net_release_tx_chain(card, 1);
+	spider_net_free_rx_chain_contents(card);
 	spider_net_free_rx_chain_contents(card);
@@ -2046,9 +2040,6 @@ spider_net_setup_netdev(struct spider_net_card *card)
 	pci_set_drvdata(card->pdev, netdev);
 
-	card->rxram_full_tl.data = (unsigned long) card;
-	card->rxram_full_tl.func =
-		(void (*)(unsigned long)) spider_net_handle_rxram_full;
 	init_timer(&card->tx_timer);
 	card->tx_timer.function =
 		(void (*)(unsigned long)) spider_net_cleanup_tx_ring;
@@ -2057,8 +2048,8 @@ spider_net_setup_netdev(struct spider_net_card *card)
 	card->options.rx_csum = SPIDER_NET_RX_CSUM_DEFAULT;
 
-	card->num_tx_desc = tx_descriptors;
-	card->num_rx_desc = rx_descriptors;
+	card->tx_chain.num_desc = tx_descriptors;
+	card->rx_chain.num_desc = rx_descriptors;
 
 	spider_net_setup_netdev_ops(netdev);
@@ -2107,12 +2098,8 @@ spider_net_alloc_card(void)
 {
 	struct net_device *netdev;
 	struct spider_net_card *card;
-	size_t alloc_size;
 
-	alloc_size = sizeof (*card) +
-		sizeof (struct spider_net_descr) * rx_descriptors +
-		sizeof (struct spider_net_descr) * tx_descriptors;
-	netdev = alloc_etherdev(alloc_size);
+	netdev = alloc_etherdev(sizeof(struct spider_net_card));
 	if (!netdev)
 		return NULL;

View File

@@ -24,7 +24,7 @@
 #ifndef _SPIDER_NET_H
 #define _SPIDER_NET_H
 
-#define VERSION "1.6 A"
+#define VERSION "1.6 B"
 
 #include "sungem_phy.h"
@@ -378,6 +378,9 @@ struct spider_net_descr_chain {
 	spinlock_t lock;
 	struct spider_net_descr *head;
 	struct spider_net_descr *tail;
+	struct spider_net_descr *ring;
+	int num_desc;
+	dma_addr_t dma_addr;
 };
 
 /* descriptor data_status bits */
@@ -397,8 +400,6 @@ struct spider_net_descr_chain {
  * 701b8000 would be correct, but every packets gets that flag */
 #define SPIDER_NET_DESTROY_RX_FLAGS 0x700b8000
 
-#define SPIDER_NET_DESCR_SIZE 32
-
 /* this will be bigger some time */
 struct spider_net_options {
 	int rx_csum; /* for rx: if 0 ip_summed=NONE,
@@ -441,25 +442,16 @@ struct spider_net_card {
 	struct spider_net_descr_chain rx_chain;
 	struct spider_net_descr *low_watermark;
 
-	struct net_device_stats netdev_stats;
-	struct spider_net_options options;
-	spinlock_t intmask_lock;
-	struct tasklet_struct rxram_full_tl;
 	struct timer_list tx_timer;
 	struct work_struct tx_timeout_task;
 	atomic_t tx_timeout_task_counter;
 	wait_queue_head_t waitq;
 
 	/* for ethtool */
 	int msg_enable;
-	int num_rx_desc;
-	int num_tx_desc;
+	struct net_device_stats netdev_stats;
 	struct spider_net_extra_stats spider_stats;
 
-	struct spider_net_descr descr[0];
+	struct spider_net_options options;
 };
 #define pr_err(fmt,arg...) \

View File

@@ -158,9 +158,9 @@ spider_net_ethtool_get_ringparam(struct net_device *netdev,
 	struct spider_net_card *card = netdev->priv;
 
 	ering->tx_max_pending = SPIDER_NET_TX_DESCRIPTORS_MAX;
-	ering->tx_pending = card->num_tx_desc;
+	ering->tx_pending = card->tx_chain.num_desc;
 	ering->rx_max_pending = SPIDER_NET_RX_DESCRIPTORS_MAX;
-	ering->rx_pending = card->num_rx_desc;
+	ering->rx_pending = card->rx_chain.num_desc;
 }
 
 static int spider_net_get_stats_count(struct net_device *netdev)

View File

@@ -58,11 +58,7 @@
 #define TG3_VLAN_TAG_USED 0
 #endif
 
-#ifdef NETIF_F_TSO
 #define TG3_TSO_SUPPORT 1
-#else
-#define TG3_TSO_SUPPORT 0
-#endif
 
 #include "tg3.h"
@@ -3873,7 +3869,6 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	entry = tp->tx_prod;
 	base_flags = 0;
-#if TG3_TSO_SUPPORT != 0
 	mss = 0;
 	if (skb->len > (tp->dev->mtu + ETH_HLEN) &&
 	    (mss = skb_shinfo(skb)->gso_size) != 0) {
@@ -3906,11 +3901,6 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 	else if (skb->ip_summed == CHECKSUM_PARTIAL)
 		base_flags |= TXD_FLAG_TCPUDP_CSUM;
-#else
-	mss = 0;
-	if (skb->ip_summed == CHECKSUM_PARTIAL)
-		base_flags |= TXD_FLAG_TCPUDP_CSUM;
-#endif
 #if TG3_VLAN_TAG_USED
 	if (tp->vlgrp != NULL && vlan_tx_tag_present(skb))
 		base_flags |= (TXD_FLAG_VLAN |
@@ -3970,7 +3960,6 @@ out_unlock:
 	return NETDEV_TX_OK;
 }
 
-#if TG3_TSO_SUPPORT != 0
 static int tg3_start_xmit_dma_bug(struct sk_buff *, struct net_device *);
 
 /* Use GSO to workaround a rare TSO bug that may be triggered when the
@@ -4002,7 +3991,6 @@ tg3_tso_bug_end:
 	return NETDEV_TX_OK;
 }
-#endif
 
 /* hard_start_xmit for devices that have the 4G bug and/or 40-bit bug and
  * support TG3_FLG2_HW_TSO_1 or firmware TSO only.
@@ -4036,7 +4024,6 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
 	base_flags = 0;
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		base_flags |= TXD_FLAG_TCPUDP_CSUM;
-#if TG3_TSO_SUPPORT != 0
 	mss = 0;
 	if (skb->len > (tp->dev->mtu + ETH_HLEN) &&
 	    (mss = skb_shinfo(skb)->gso_size) != 0) {
@@ -4091,9 +4078,6 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
 			}
 		}
 	}
-#else
-	mss = 0;
-#endif
 #if TG3_VLAN_TAG_USED
 	if (tp->vlgrp != NULL && vlan_tx_tag_present(skb))
 		base_flags |= (TXD_FLAG_VLAN |
@@ -5329,7 +5313,6 @@ static int tg3_load_5701_a0_firmware_fix(struct tg3 *tp)
 	return 0;
 }
 
-#if TG3_TSO_SUPPORT != 0
 #define TG3_TSO_FW_RELEASE_MAJOR	0x1
 #define TG3_TSO_FW_RELASE_MINOR		0x6
@@ -5906,7 +5889,6 @@ static int tg3_load_tso_firmware(struct tg3 *tp)
 	return 0;
 }
-#endif /* TG3_TSO_SUPPORT != 0 */
 
 /* tp->lock is held. */
 static void __tg3_set_mac_addr(struct tg3 *tp)
@@ -6120,7 +6102,6 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 		tw32(BUFMGR_DMA_DESC_POOL_ADDR, NIC_SRAM_DMA_DESC_POOL_BASE);
 		tw32(BUFMGR_DMA_DESC_POOL_SIZE, NIC_SRAM_DMA_DESC_POOL_SIZE);
 	}
-#if TG3_TSO_SUPPORT != 0
 	else if (tp->tg3_flags2 & TG3_FLG2_TSO_CAPABLE) {
 		int fw_len;
@@ -6135,7 +6116,6 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 		tw32(BUFMGR_MB_POOL_SIZE,
 		     NIC_SRAM_MBUF_POOL_SIZE5705 - fw_len - 0xa00);
 	}
-#endif
 
 	if (tp->dev->mtu <= ETH_DATA_LEN) {
 		tw32(BUFMGR_MB_RDMA_LOW_WATER,
@@ -6337,10 +6317,8 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 	if (tp->tg3_flags2 & TG3_FLG2_PCI_EXPRESS)
 		rdmac_mode |= RDMAC_MODE_FIFO_LONG_BURST;
 
-#if TG3_TSO_SUPPORT != 0
 	if (tp->tg3_flags2 & TG3_FLG2_HW_TSO)
 		rdmac_mode |= (1 << 27);
-#endif
 
 	/* Receive/send statistics. */
 	if (tp->tg3_flags2 & TG3_FLG2_5750_PLUS) {
@@ -6511,10 +6489,8 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 	tw32(RCVBDI_MODE, RCVBDI_MODE_ENABLE | RCVBDI_MODE_RCB_ATTN_ENAB);
 	tw32(RCVDBDI_MODE, RCVDBDI_MODE_ENABLE | RCVDBDI_MODE_INV_RING_SZ);
 	tw32(SNDDATAI_MODE, SNDDATAI_MODE_ENABLE);
-#if TG3_TSO_SUPPORT != 0
 	if (tp->tg3_flags2 & TG3_FLG2_HW_TSO)
 		tw32(SNDDATAI_MODE, SNDDATAI_MODE_ENABLE | 0x8);
-#endif
 	tw32(SNDBDI_MODE, SNDBDI_MODE_ENABLE | SNDBDI_MODE_ATTN_ENABLE);
 	tw32(SNDBDS_MODE, SNDBDS_MODE_ENABLE | SNDBDS_MODE_ATTN_ENABLE);
@@ -6524,13 +6500,11 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 			return err;
 	}
 
-#if TG3_TSO_SUPPORT != 0
 	if (tp->tg3_flags2 & TG3_FLG2_TSO_CAPABLE) {
 		err = tg3_load_tso_firmware(tp);
 		if (err)
 			return err;
 	}
-#endif
 
 	tp->tx_mode = TX_MODE_ENABLE;
 	tw32_f(MAC_TX_MODE, tp->tx_mode);
@@ -8062,7 +8036,6 @@ static void tg3_set_msglevel(struct net_device *dev, u32 value)
 	tp->msg_enable = value;
 }
 
-#if TG3_TSO_SUPPORT != 0
 static int tg3_set_tso(struct net_device *dev, u32 value)
 {
 	struct tg3 *tp = netdev_priv(dev);
@@ -8081,7 +8054,6 @@ static int tg3_set_tso(struct net_device *dev, u32 value)
 	}
 	return ethtool_op_set_tso(dev, value);
 }
-#endif
 
 static int tg3_nway_reset(struct net_device *dev)
 {
@@ -9212,10 +9184,8 @@ static const struct ethtool_ops tg3_ethtool_ops = {
 	.set_tx_csum = tg3_set_tx_csum,
 	.get_sg = ethtool_op_get_sg,
 	.set_sg = ethtool_op_set_sg,
-#if TG3_TSO_SUPPORT != 0
 	.get_tso = ethtool_op_get_tso,
 	.set_tso = tg3_set_tso,
-#endif
 	.self_test_count = tg3_get_test_count,
 	.self_test = tg3_self_test,
 	.get_strings = tg3_get_strings,
@@ -11856,7 +11826,6 @@ static int __devinit tg3_init_one(struct pci_dev *pdev,
 	tg3_init_bufmgr_config(tp);
 
-#if TG3_TSO_SUPPORT != 0
 	if (tp->tg3_flags2 & TG3_FLG2_HW_TSO) {
 		tp->tg3_flags2 |= TG3_FLG2_TSO_CAPABLE;
 	}
@@ -11881,7 +11850,6 @@ static int __devinit tg3_init_one(struct pci_dev *pdev,
 			dev->features |= NETIF_F_TSO6;
 	}
-#endif
 
 	if (tp->pci_chip_rev_id == CHIPREV_ID_5705_A1 &&
 	    !(tp->tg3_flags2 & TG3_FLG2_TSO_CAPABLE) &&

View File

@@ -2865,8 +2865,8 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
 		if (UCC_GETH_TX_BD_RING_ALIGNMENT > 4)
 			align = UCC_GETH_TX_BD_RING_ALIGNMENT;
 		ugeth->tx_bd_ring_offset[j] =
-		    (u32) (kmalloc((u32) (length + align),
-				   GFP_KERNEL));
+		    kmalloc((u32) (length + align), GFP_KERNEL);
 		if (ugeth->tx_bd_ring_offset[j] != 0)
 			ugeth->p_tx_bd_ring[j] =
 			    (void*)((ugeth->tx_bd_ring_offset[j] +
@@ -2901,7 +2901,7 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
 		if (UCC_GETH_RX_BD_RING_ALIGNMENT > 4)
 			align = UCC_GETH_RX_BD_RING_ALIGNMENT;
 		ugeth->rx_bd_ring_offset[j] =
-		    (u32) (kmalloc((u32) (length + align), GFP_KERNEL));
+		    kmalloc((u32) (length + align), GFP_KERNEL);
 		if (ugeth->rx_bd_ring_offset[j] != 0)
 			ugeth->p_rx_bd_ring[j] =
 			    (void*)((ugeth->rx_bd_ring_offset[j] +
@@ -2927,10 +2927,9 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
 	/* Init Tx bds */
 	for (j = 0; j < ug_info->numQueuesTx; j++) {
 		/* Setup the skbuff rings */
-		ugeth->tx_skbuff[j] =
-		    (struct sk_buff **)kmalloc(sizeof(struct sk_buff *) *
-					       ugeth->ug_info->bdRingLenTx[j],
-					       GFP_KERNEL);
+		ugeth->tx_skbuff[j] = kmalloc(sizeof(struct sk_buff *) *
+					      ugeth->ug_info->bdRingLenTx[j],
+					      GFP_KERNEL);
 
 		if (ugeth->tx_skbuff[j] == NULL) {
 			ugeth_err("%s: Could not allocate tx_skbuff",
@@ -2959,10 +2958,9 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
 	/* Init Rx bds */
 	for (j = 0; j < ug_info->numQueuesRx; j++) {
 		/* Setup the skbuff rings */
-		ugeth->rx_skbuff[j] =
-		    (struct sk_buff **)kmalloc(sizeof(struct sk_buff *) *
-					       ugeth->ug_info->bdRingLenRx[j],
-					       GFP_KERNEL);
+		ugeth->rx_skbuff[j] = kmalloc(sizeof(struct sk_buff *) *
+					      ugeth->ug_info->bdRingLenRx[j],
+					      GFP_KERNEL);
 
 		if (ugeth->rx_skbuff[j] == NULL) {
 			ugeth_err("%s: Could not allocate rx_skbuff",
@@ -3453,8 +3451,7 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
 	 * allocated resources can be released when the channel is freed.
 	 */
 	if (!(ugeth->p_init_enet_param_shadow =
-	      (struct ucc_geth_init_pram *) kmalloc(sizeof(struct ucc_geth_init_pram),
-	      GFP_KERNEL))) {
+	      kmalloc(sizeof(struct ucc_geth_init_pram), GFP_KERNEL))) {
 		ugeth_err
 		    ("%s: Can not allocate memory for"
 		     " p_UccInitEnetParamShadows.", __FUNCTION__);

View File

@@ -235,6 +235,19 @@ comment "Cyclades-PC300 MLPPP support is disabled."
 comment "Refer to the file README.mlppp, provided by PC300 package."
 	depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
 
+config PC300TOO
+	tristate "Cyclades PC300 RSV/X21 alternative support"
+	depends on HDLC && PCI
+	help
+	  Alternative driver for PC300 RSV/X21 PCI cards made by
+	  Cyclades, Inc. If you have such a card, say Y here and see
+	  <http://www.kernel.org/pub/linux/utils/net/hdlc/>.
+
+	  To compile this as a module, choose M here: the module
+	  will be called pc300too.
+
+	  If unsure, say N here.
+
 config N2
 	tristate "SDL RISCom/N2 support"
 	depends on HDLC && ISA
@@ -344,17 +357,6 @@ config DLCI
 	  To compile this driver as a module, choose M here: the
 	  module will be called dlci.
 
-config DLCI_COUNT
-	int "Max open DLCI"
-	depends on DLCI
-	default "24"
-	help
-	  Maximal number of logical point-to-point frame relay connections
-	  (the identifiers of which are called DCLIs) that the driver can
-	  handle.
-
-	  The default is probably fine.
-
 config DLCI_MAX
 	int "Max DLCI per device"
 	depends on DLCI

View File

@@ -41,6 +41,7 @@ obj-$(CONFIG_N2) += n2.o
 obj-$(CONFIG_C101) += c101.o
 obj-$(CONFIG_WANXL) += wanxl.o
 obj-$(CONFIG_PCI200SYN) += pci200syn.o
+obj-$(CONFIG_PC300TOO) += pc300too.o
 
 clean-files := wanxlfw.inc
 $(obj)/wanxl.o: $(obj)/wanxlfw.inc

Some files were not shown because too many files have changed in this diff.