Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
 "A bit has accumulated, but it's been a week or so since my last batch
  of post-merge-window fixes, so...

   1) Missing module license in netfilter reject module, from Pablo.
      Lots of people ran into this.

   2) Off by one in mac80211 baserate calculation, from Karl Beldan.

   3) Fix incorrect return value from ax88179_178a driver's set_mac_addr
      op, which broke use of it with bonding.  From Ian Morgan.

   4) Checking of skb_gso_segment()'s return value was not all
      encompassing: it can return an SKB pointer, an error pointer, or
      NULL.  Fix from Florian Westphal (see the first sketch after this
      list).

      This is crummy, and longer term will be fixed to just return error
      pointers or a real SKB.
   6) Encapsulation offloads not being handled by
      skb_gso_transport_seglen().  From Florian Westphal.

   7) Fix deadlock in TIPC stack, from Ying Xue.

   8) Fix performance regression from using rhashtable for netlink
      sockets.  The problem was the synchronize_net() invoked for every
      socket destroy.  From Thomas Graf.

   9) Fix bug in eBPF verifier, and remove the strong dependency of BPF
      on NET.  From Alexei Starovoitov.

  10) In qdisc_create(), use the correct interface to allocate
      ->cpu_bstats, otherwise the u64_stats_sync member isn't
      initialized properly.  From Sabrina Dubroca.  (See the second
      sketch after this list.)

  11) Off by one in ip_set_nfnl_get_byindex(), from Dan Carpenter.

  12) nf_tables_newchain() was erroneously expecting error pointers from
      netdev_alloc_pcpu_stats().  It only returns a valid pointer or
      NULL.  From Sabrina Dubroca.  (See the third sketch after this
      list.)

  13) Fix use-after-free in _decode_session6(), from Li RongQing.

  14) When we set the TX flow hash on a socket, we mistakenly do so
      before we've nailed down the final source port.  Move the setting
      deeper to fix this.  From Sathya Perla.

  15) NAPI budget accounting in amd-xgbe driver was counting descriptors
      instead of full packets, fix from Thomas Lendacky.

  16) Fix total_data_buflen calculation in hyperv driver, from Haiyang
      Zhang.

  17) Fix bcma driver build with OF_ADDRESS disabled, from Hauke
      Mehrtens.

  18) Fix misuse of per-cpu memory in TCP md5 code.  The problem is
      that something that ends up being vmalloc memory can't be passed
      to the crypto hash routines via scatter-gather lists.  From Eric
      Dumazet.  (See the last sketch after this list.)

  19) Fix regression in promiscuous mode enabling in cdc-ether, from
      Olivier Blin.

  20) Bucket eviction and frag entry killing can race with each other,
      causing an unlink of the object from the wrong list.  Fix from
      Nikolay Aleksandrov.

  21) Missing initialization of spinlock in cxgb4 driver, from Anish
      Bhatt.

  22) Do not cache ipv4 routing failures, otherwise if the sysctl for
      forwarding is subsequently enabled this won't be seen.  From
      Nicolas Cavallari"
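
To make item 4 concrete, a minimal sketch of the calling convention
involved (not the actual patch): skb_gso_segment() can hand back a
segment list, an error pointer, or NULL when no segmentation was
needed, so a robust caller has to distinguish all three cases.  The
drop label here is illustrative.

    struct sk_buff *segs;

    segs = skb_gso_segment(skb, features);
    if (IS_ERR(segs))
        goto drop;              /* a real error */
    if (segs) {
        consume_skb(skb);       /* skb was segmented; walk the list */
        skb = segs;
    }
    /* NULL: the skb did not need segmenting, keep using it as-is */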
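For item 10, a sketch of why the allocation interface matters, assuming
the qdisc types of this period: netdev_alloc_pcpu_stats() runs
u64_stats_init() on every CPU's copy, while a bare alloc_percpu()
leaves the embedded u64_stats_sync seqcount uninitialized.

    /* broken: the per-cpu syncp members are never initialized */
    sch->cpu_bstats = alloc_percpu(struct gnet_stats_basic_cpu);

    /* fixed: zeroed, with u64_stats_init() run for each possible cpu */
    sch->cpu_bstats = netdev_alloc_pcpu_stats(struct gnet_stats_basic_cpu);
    if (!sch->cpu_bstats)
        return -ENOMEM;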
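The matching check for item 12, again as a sketch (the variable name is
illustrative): netdev_alloc_pcpu_stats() never returns an error
pointer, so an IS_ERR() test can never see a failed allocation; the
failure case is NULL.

    stats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
    if (!stats)                 /* not: if (IS_ERR(stats)) */
        return -ENOMEM;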
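And the constraint behind item 18 in sketch form: scatterlists resolve
pages via virt_to_page(), which is only valid for directly mapped
(kmalloc-style) addresses, so data living in vmalloc or per-cpu space
has to be bounced through a lowmem buffer before the (old) crypto_hash
API may hash it.  This helper is hypothetical, not the actual fix:

    static int hash_bounced(struct hash_desc *desc,
                            const void *data, unsigned int len)
    {
        struct scatterlist sg;
        void *copy = kmemdup(data, len, GFP_ATOMIC);  /* lowmem copy */
        int err;

        if (!copy)
            return -ENOMEM;
        sg_init_one(&sg, copy, len);  /* needs virt_to_page()-able memory */
        err = crypto_hash_update(desc, &sg, len);
        kfree(copy);
        return err;
    }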

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (131 commits)
  drivers: net: cpsw: Support ALLMULTI and fix IFF_PROMISC in switch mode
  drivers: net: cpsw: Fix broken loop condition in switch mode
  net: ethtool: Return -EOPNOTSUPP if user space tries to read EEPROM with length 0
  stmmac: pci: set default of the filter bins
  net: smc91x: Fix gpios for device tree based booting
  mpls: Allow mpls_gso to be built as module
  mpls: Fix mpls_gso handler.
  r8152: stop submitting intr for -EPROTO
  netfilter: nft_reject_bridge: restrict reject to prerouting and input
  netfilter: nft_reject_bridge: don't use IP stack to reject traffic
  netfilter: nf_reject_ipv6: split nf_send_reset6() in smaller functions
  netfilter: nf_reject_ipv4: split nf_send_reset() in smaller functions
  netfilter: nf_tables_bridge: update hook_mask to allow {pre,post}routing
  drivers/net: macvtap and tun depend on INET
  drivers/net, ipv6: Select IPv6 fragment idents for virtio UFO packets
  drivers/net: Disable UFO through virtio
  net: skb_fclone_busy() needs to detect orphaned skb
  gre: Use inner mac length when computing tunnel length
  mlx4: Avoid leaking steering rules on flow creation error flow
  net/mlx4_en: Don't attempt to TX offload the outer UDP checksum for VXLAN
  ...
commit 89453379aa
Author: Linus Torvalds
Date:   2014-10-31 15:04:58 -07:00

164 changed files with 1959 additions and 805 deletions

@@ -11,3 +11,5 @@ Optional properties:
   are supported on the device. Valid value for SMSC LAN91c111 are
   1, 2 or 4. If it's omitted or invalid, the size would be 2 meaning
   16-bit access only.
+- power-gpios: GPIO to control the PWRDWN pin
+- reset-gpios: GPIO to control the RESET pin

@@ -0,0 +1,33 @@
+# PTP 1588 clock support - User space test program
+#
+# Copyright (C) 2010 OMICRON electronics GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+CC = $(CROSS_COMPILE)gcc
+INC = -I$(KBUILD_OUTPUT)/usr/include
+CFLAGS = -Wall $(INC)
+LDLIBS = -lrt
+PROGS = testptp
+
+all: $(PROGS)
+
+testptp: testptp.o
+
+clean:
+	rm -f testptp.o
+
+distclean: clean
+	rm -f $(PROGS)

@@ -668,6 +668,8 @@
     bank-width = <2>;
     pinctrl-names = "default";
     pinctrl-0 = <&ethernet_pins>;
+    power-gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>; /* gpio86 */
+    reset-gpios = <&gpio6 4 GPIO_ACTIVE_HIGH>; /* gpio164 */
     gpmc,device-width = <2>;
     gpmc,sync-clk-ps = <0>;
     gpmc,cs-on-ns = <0>;

@@ -252,9 +252,6 @@ static void __init nokia_n900_legacy_init(void)
         platform_device_register(&omap3_rom_rng_device);
     }
-
-    /* Only on some development boards */
-    gpio_request_one(164, GPIOF_OUT_INIT_LOW, "smc91x reset");
 }

 static void __init omap3_tao3530_legacy_init(void)

@@ -275,7 +275,7 @@ static SIMPLE_DEV_PM_OPS(bcma_pm_ops, bcma_host_pci_suspend,
 static const struct pci_device_id bcma_pci_bridge_tbl[] = {
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x0576) },
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4313) },
-    { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43224) },
+    { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43224) },  /* 0xa8d8 */
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4331) },
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4353) },
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4357) },
@@ -285,7 +285,8 @@ static const struct pci_device_id bcma_pci_bridge_tbl[] = {
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) },
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) },
     { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) },
-    { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43227) },  /* 0xA8DB */
+    { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43227) },  /* 0xa8db, BCM43217 (sic!) */
+    { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43228) },  /* 0xa8dc */
     { 0, },
 };
 MODULE_DEVICE_TABLE(pci, bcma_pci_bridge_tbl);

@@ -132,7 +132,7 @@ static bool bcma_is_core_needed_early(u16 core_id)
     return false;
 }

-#ifdef CONFIG_OF
+#if defined(CONFIG_OF) && defined(CONFIG_OF_ADDRESS)
 static struct device_node *bcma_of_find_child_device(struct platform_device *parent,
                                                      struct bcma_device *core)
 {

@@ -1173,18 +1173,24 @@ static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
         err = __mlx4_ib_create_flow(qp, flow_attr, domain, type[i],
                                     &mflow->reg_id[i]);
         if (err)
-            goto err_free;
+            goto err_create_flow;
         i++;
     }

     if (i < ARRAY_SIZE(type) && flow_attr->type == IB_FLOW_ATTR_NORMAL) {
         err = mlx4_ib_tunnel_steer_add(qp, flow_attr, &mflow->reg_id[i]);
         if (err)
-            goto err_free;
+            goto err_create_flow;
+        i++;
     }

     return &mflow->ibflow;

+err_create_flow:
+    while (i) {
+        (void)__mlx4_ib_destroy_flow(to_mdev(qp->device)->dev, mflow->reg_id[i]);
+        i--;
+    }
 err_free:
     kfree(mflow);
     return ERR_PTR(err);

@@ -135,6 +135,7 @@ config MACVLAN
 config MACVTAP
     tristate "MAC-VLAN based tap driver"
     depends on MACVLAN
+    depends on INET
     help
       This adds a specialized tap character device driver that is based
       on the MAC-VLAN network interface, called macvtap. A macvtap device
@@ -200,6 +201,7 @@ config RIONET_RX_SIZE

 config TUN
     tristate "Universal TUN/TAP device driver support"
+    depends on INET
     select CRC32
     ---help---
       TUN/TAP provides packet reception and transmission for user space

@@ -395,7 +395,7 @@ static int mv88e6171_get_sset_count(struct dsa_switch *ds)
 }

 struct dsa_switch_driver mv88e6171_switch_driver = {
-    .tag_protocol = DSA_TAG_PROTO_DSA,
+    .tag_protocol = DSA_TAG_PROTO_EDSA,
     .priv_size = sizeof(struct mv88e6xxx_priv_state),
     .probe = mv88e6171_probe,
     .setup = mv88e6171_setup,

@@ -1465,7 +1465,7 @@ static int xgbe_set_features(struct net_device *netdev,
 {
     struct xgbe_prv_data *pdata = netdev_priv(netdev);
     struct xgbe_hw_if *hw_if = &pdata->hw_if;
-    unsigned int rxcsum, rxvlan, rxvlan_filter;
+    netdev_features_t rxcsum, rxvlan, rxvlan_filter;

     rxcsum = pdata->netdev_features & NETIF_F_RXCSUM;
     rxvlan = pdata->netdev_features & NETIF_F_HW_VLAN_CTAG_RX;
@@ -1598,7 +1598,8 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
     struct skb_shared_hwtstamps *hwtstamps;
     unsigned int incomplete, error, context_next, context;
     unsigned int len, put_len, max_len;
-    int received = 0;
+    unsigned int received = 0;
+    int packet_count = 0;

     DBGPR("-->xgbe_rx_poll: budget=%d\n", budget);
@@ -1608,7 +1609,7 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
     rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
     packet = &ring->packet_data;
-    while (received < budget) {
+    while (packet_count < budget) {
         DBGPR(" cur = %d\n", ring->cur);

         /* First time in loop see if we need to restore state */
@@ -1662,7 +1663,7 @@ read_again:
             if (packet->errors)
                 DBGPR("Error in received packet\n");
             dev_kfree_skb(skb);
-            continue;
+            goto next_packet;
         }

         if (!context) {
@@ -1677,7 +1678,7 @@ read_again:
                 }

                 dev_kfree_skb(skb);
-                continue;
+                goto next_packet;
             }
             memcpy(skb_tail_pointer(skb), rdata->skb->data,
                    put_len);
@@ -1694,7 +1695,7 @@ read_again:
         /* Stray Context Descriptor? */
         if (!skb)
-            continue;
+            goto next_packet;

         /* Be sure we don't exceed the configured MTU */
         max_len = netdev->mtu + ETH_HLEN;
@@ -1705,7 +1706,7 @@ read_again:
         if (skb->len > max_len) {
             DBGPR("packet length exceeds configured MTU\n");
             dev_kfree_skb(skb);
-            continue;
+            goto next_packet;
         }

 #ifdef XGMAC_ENABLE_RX_PKT_DUMP
@@ -1739,6 +1740,9 @@ read_again:
         netdev->last_rx = jiffies;
         napi_gro_receive(&pdata->napi, skb);
+
+next_packet:
+        packet_count++;
     }

     /* Check if we need to save state before leaving */
@@ -1752,9 +1756,9 @@ read_again:
         rdata->state.error = error;
     }

-    DBGPR("<--xgbe_rx_poll: received = %d\n", received);
+    DBGPR("<--xgbe_rx_poll: packet_count = %d\n", packet_count);

-    return received;
+    return packet_count;
 }

 static int xgbe_poll(struct napi_struct *napi, int budget)

@@ -124,20 +124,18 @@ static int xgene_enet_ecc_init(struct xgene_enet_pdata *p)
 {
     struct net_device *ndev = p->ndev;
     u32 data;
-    int i;
+    int i = 0;

     xgene_enet_wr_diag_csr(p, ENET_CFG_MEM_RAM_SHUTDOWN_ADDR, 0);
-    for (i = 0; i < 10 && data != ~0U ; i++) {
+    do {
         usleep_range(100, 110);
         data = xgene_enet_rd_diag_csr(p, ENET_BLOCK_MEM_RDY_ADDR);
-    }
+        if (data == ~0U)
+            return 0;
+    } while (++i < 10);

-    if (data != ~0U) {
-        netdev_err(ndev, "Failed to release memory from shutdown\n");
-        return -ENODEV;
-    }
-
-    return 0;
+    netdev_err(ndev, "Failed to release memory from shutdown\n");
+    return -ENODEV;
 }

 static void xgene_enet_config_ring_if_assoc(struct xgene_enet_pdata *p)

@@ -1397,6 +1397,9 @@ static void bcm_sysport_netif_start(struct net_device *dev)
     /* Enable NAPI */
     napi_enable(&priv->napi);

+    /* Enable RX interrupt and TX ring full interrupt */
+    intrl2_0_mask_clear(priv, INTRL2_0_RDMA_MBDONE | INTRL2_0_TX_RING_FULL);
+
     phy_start(priv->phydev);

     /* Enable TX interrupts for the 32 TXQs */
@@ -1499,9 +1502,6 @@ static int bcm_sysport_open(struct net_device *dev)
     if (ret)
         goto out_free_rx_ring;

-    /* Enable RX interrupt and TX ring full interrupt */
-    intrl2_0_mask_clear(priv, INTRL2_0_RDMA_MBDONE | INTRL2_0_TX_RING_FULL);
-
     /* Turn on TDMA */
     ret = tdma_enable_set(priv, 1);
     if (ret)
@@ -1858,6 +1858,8 @@ static int bcm_sysport_resume(struct device *d)
     if (!netif_running(dev))
         return 0;

+    umac_reset(priv);
+
     /* We may have been suspended and never received a WOL event that
      * would turn off MPD detection, take care of that now
      */
@@ -1885,9 +1887,6 @@ static int bcm_sysport_resume(struct device *d)

     netif_device_attach(dev);

-    /* Enable RX interrupt and TX ring full interrupt */
-    intrl2_0_mask_clear(priv, INTRL2_0_RDMA_MBDONE | INTRL2_0_TX_RING_FULL);
-
     /* RX pipe enable */
     topctrl_writel(priv, 0, RX_FLUSH_CNTL);

@@ -382,10 +382,8 @@ static int cnic_iscsi_nl_msg_recv(struct cnic_dev *dev, u32 msg_type,
         if (l5_cid >= MAX_CM_SK_TBL_SZ)
             break;

-        rcu_read_lock();
         if (!rcu_access_pointer(cp->ulp_ops[CNIC_ULP_L4])) {
             rc = -ENODEV;
-            rcu_read_unlock();
             break;
         }
         csk = &cp->csk_tbl[l5_cid];
@@ -414,7 +412,6 @@ static int cnic_iscsi_nl_msg_recv(struct cnic_dev *dev, u32 msg_type,
             }
         }
         csk_put(csk);
-        rcu_read_unlock();
         rc = 0;
     }
 }
@@ -615,7 +612,7 @@ static int cnic_unregister_device(struct cnic_dev *dev, int ulp_type)
     cnic_send_nlmsg(cp, ISCSI_KEVENT_IF_DOWN, NULL);

     mutex_lock(&cnic_lock);
-    if (rcu_dereference(cp->ulp_ops[ulp_type])) {
+    if (rcu_access_pointer(cp->ulp_ops[ulp_type])) {
         RCU_INIT_POINTER(cp->ulp_ops[ulp_type], NULL);
         cnic_put(dev);
     } else {

@@ -60,6 +60,42 @@ void cxgb4_dcb_version_init(struct net_device *dev)
     dcb->dcb_version = FW_PORT_DCB_VER_AUTO;
 }

+static void cxgb4_dcb_cleanup_apps(struct net_device *dev)
+{
+    struct port_info *pi = netdev2pinfo(dev);
+    struct adapter *adap = pi->adapter;
+    struct port_dcb_info *dcb = &pi->dcb;
+    struct dcb_app app;
+    int i, err;
+
+    /* zero priority implies remove */
+    app.priority = 0;
+
+    for (i = 0; i < CXGB4_MAX_DCBX_APP_SUPPORTED; i++) {
+        /* Check if app list is exhausted */
+        if (!dcb->app_priority[i].protocolid)
+            break;
+
+        app.protocol = dcb->app_priority[i].protocolid;
+        if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE) {
+            app.selector = dcb->app_priority[i].sel_field + 1;
+            err = dcb_ieee_setapp(dev, &app);
+        } else {
+            app.selector = !!(dcb->app_priority[i].sel_field);
+            err = dcb_setapp(dev, &app);
+        }
+
+        if (err) {
+            dev_err(adap->pdev_dev,
+                "Failed DCB Clear %s Application Priority: sel=%d, prot=%d, , err=%d\n",
+                dcb_ver_array[dcb->dcb_version], app.selector,
+                app.protocol, -err);
+            break;
+        }
+    }
+}
+
 /* Finite State machine for Data Center Bridging.
  */
 void cxgb4_dcb_state_fsm(struct net_device *dev,
@@ -80,7 +116,6 @@ void cxgb4_dcb_state_fsm(struct net_device *dev,
             /* we're going to use Host DCB */
             dcb->state = CXGB4_DCB_STATE_HOST;
             dcb->supported = CXGB4_DCBX_HOST_SUPPORT;
-            dcb->enabled = 1;
             break;
         }
@@ -145,6 +180,7 @@ void cxgb4_dcb_state_fsm(struct net_device *dev,
              * state. We need to reset back to a ground state
              * of incomplete.
              */
+            cxgb4_dcb_cleanup_apps(dev);
             cxgb4_dcb_state_init(dev);
             dcb->state = CXGB4_DCB_STATE_FW_INCOMPLETE;
             dcb->supported = CXGB4_DCBX_FW_SUPPORT;
@@ -349,6 +385,12 @@ static u8 cxgb4_setstate(struct net_device *dev, u8 enabled)
 {
     struct port_info *pi = netdev2pinfo(dev);

+    /* If DCBx is host-managed, dcb is enabled by outside lldp agents */
+    if (pi->dcb.state == CXGB4_DCB_STATE_HOST) {
+        pi->dcb.enabled = enabled;
+        return 0;
+    }
+
     /* Firmware doesn't provide any mechanism to control the DCB state.
      */
     if (enabled != (pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED))
@@ -833,11 +875,16 @@ static int cxgb4_setapp(struct net_device *dev, u8 app_idtype, u16 app_id,
 /* Return whether IEEE Data Center Bridging has been negotiated.
  */
-static inline int cxgb4_ieee_negotiation_complete(struct net_device *dev)
+static inline int
+cxgb4_ieee_negotiation_complete(struct net_device *dev,
+                                enum cxgb4_dcb_fw_msgs dcb_subtype)
 {
     struct port_info *pi = netdev2pinfo(dev);
     struct port_dcb_info *dcb = &pi->dcb;

+    if (dcb_subtype && !(dcb->msgs & dcb_subtype))
+        return 0;
+
     return (dcb->state == CXGB4_DCB_STATE_FW_ALLSYNCED &&
             (dcb->supported & DCB_CAP_DCBX_VER_IEEE));
 }
@@ -850,7 +897,7 @@ static int cxgb4_ieee_getapp(struct net_device *dev, struct dcb_app *app)
 {
     int prio;

-    if (!cxgb4_ieee_negotiation_complete(dev))
+    if (!cxgb4_ieee_negotiation_complete(dev, CXGB4_DCB_FW_APP_ID))
         return -EINVAL;
     if (!(app->selector && app->protocol))
         return -EINVAL;
@@ -872,7 +919,7 @@ static int cxgb4_ieee_setapp(struct net_device *dev, struct dcb_app *app)
 {
     int ret;

-    if (!cxgb4_ieee_negotiation_complete(dev))
+    if (!cxgb4_ieee_negotiation_complete(dev, CXGB4_DCB_FW_APP_ID))
         return -EINVAL;
     if (!(app->selector && app->protocol))
         return -EINVAL;

@@ -694,7 +694,11 @@ int cxgb4_dcb_enabled(const struct net_device *dev)
 #ifdef CONFIG_CHELSIO_T4_DCB
     struct port_info *pi = netdev_priv(dev);

-    return pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED;
+    if (!pi->dcb.enabled)
+        return 0;
+
+    return ((pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED) ||
+            (pi->dcb.state == CXGB4_DCB_STATE_HOST));
 #else
     return 0;
 #endif
@@ -6610,6 +6614,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     spin_lock_init(&adapter->stats_lock);
     spin_lock_init(&adapter->tid_release_lock);
+    spin_lock_init(&adapter->win0_lock);

     INIT_WORK(&adapter->tid_release_task, process_tid_release_list);
     INIT_WORK(&adapter->db_full_task, process_db_full);

@@ -2929,14 +2929,14 @@ static const struct pci_device_id cxgb4vf_pci_tbl[] = {
     CH_DEVICE(0x480d), /* T480-cr */
     CH_DEVICE(0x480e), /* T440-lp-cr */
     CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
-    CH_DEVICE(0x4880),
+    CH_DEVICE(0x4881),
+    CH_DEVICE(0x4882),
+    CH_DEVICE(0x4883),
+    CH_DEVICE(0x4884),
+    CH_DEVICE(0x4885),
+    CH_DEVICE(0x4886),
+    CH_DEVICE(0x4887),
+    CH_DEVICE(0x4888),
     CH_DEVICE(0x5801), /* T520-cr */
     CH_DEVICE(0x5802), /* T522-cr */
     CH_DEVICE(0x5803), /* T540-cr */

@@ -86,7 +86,7 @@ void enic_rfs_flw_tbl_free(struct enic *enic)
     int i;

     enic_rfs_timer_stop(enic);
-    spin_lock(&enic->rfs_h.lock);
+    spin_lock_bh(&enic->rfs_h.lock);
     enic->rfs_h.free = 0;
     for (i = 0; i < (1 << ENIC_RFS_FLW_BITSHIFT); i++) {
         struct hlist_head *hhead;
@@ -100,7 +100,7 @@ void enic_rfs_flw_tbl_free(struct enic *enic)
             kfree(n);
         }
     }
-    spin_unlock(&enic->rfs_h.lock);
+    spin_unlock_bh(&enic->rfs_h.lock);
 }

 struct enic_rfs_fltr_node *htbl_fltr_search(struct enic *enic, u16 fltr_id)
@@ -128,7 +128,7 @@ void enic_flow_may_expire(unsigned long data)
     bool res;
     int j;

-    spin_lock(&enic->rfs_h.lock);
+    spin_lock_bh(&enic->rfs_h.lock);
     for (j = 0; j < ENIC_CLSF_EXPIRE_COUNT; j++) {
         struct hlist_head *hhead;
         struct hlist_node *tmp;
@@ -148,7 +148,7 @@ void enic_flow_may_expire(unsigned long data)
             }
         }
     }
-    spin_unlock(&enic->rfs_h.lock);
+    spin_unlock_bh(&enic->rfs_h.lock);
     mod_timer(&enic->rfs_h.rfs_may_expire, jiffies + HZ/4);
 }

@@ -183,7 +183,7 @@ int enic_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
         return -EPROTONOSUPPORT;

     tbl_idx = skb_get_hash_raw(skb) & ENIC_RFS_FLW_MASK;
-    spin_lock(&enic->rfs_h.lock);
+    spin_lock_bh(&enic->rfs_h.lock);
     n = htbl_key_search(&enic->rfs_h.ht_head[tbl_idx], &keys);

     if (n) { /* entry already present */
@@ -277,7 +277,7 @@ int enic_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
     }

 ret_unlock:
-    spin_unlock(&enic->rfs_h.lock);
+    spin_unlock_bh(&enic->rfs_h.lock);
     return res;
 }

@@ -1674,13 +1674,13 @@ static int enic_stop(struct net_device *netdev)

     enic_dev_disable(enic);

+    local_bh_disable();
     for (i = 0; i < enic->rq_count; i++) {
         napi_disable(&enic->napi[i]);
-        local_bh_disable();
         while (!enic_poll_lock_napi(&enic->rq[i]))
             mdelay(1);
-        local_bh_enable();
     }
+    local_bh_enable();

     netif_carrier_off(netdev);
     netif_tx_disable(netdev);

@@ -1581,7 +1581,8 @@ fec_enet_interrupt(int irq, void *dev_id)
         complete(&fep->mdio_done);
     }

-    fec_ptp_check_pps_event(fep);
+    if (fep->ptp_clock)
+        fec_ptp_check_pps_event(fep);

     return ret;
 }

@@ -341,6 +341,9 @@ static void restart(struct net_device *dev)
         FC(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD disable */
     }

+    /* Restore multicast and promiscuous settings */
+    set_multicast_list(dev);
+
     /*
      * Enable interrupts we wish to service.
      */

@@ -355,6 +355,9 @@ static void restart(struct net_device *dev)
     if (fep->phydev->duplex)
         S16(sccp, scc_psmr, SCC_PSMR_LPB | SCC_PSMR_FDE);

+    /* Restore multicast and promiscuous settings */
+    set_multicast_list(dev);
+
     S32(sccp, scc_gsmrl, SCC_GSMRL_ENR | SCC_GSMRL_ENT);
 }

@@ -1075,7 +1075,10 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
                NETIF_F_HW_CSUM |
                NETIF_F_SG);

-    netdev->priv_flags |= IFF_UNICAST_FLT;
+    /* Do not set IFF_UNICAST_FLT for VMWare's 82545EM */
+    if (hw->device_id != E1000_DEV_ID_82545EM_COPPER ||
+        hw->subsystem_vendor_id != PCI_VENDOR_ID_VMWARE)
+        netdev->priv_flags |= IFF_UNICAST_FLT;

     adapter->en_mng_pt = e1000_enable_mng_pass_thru(hw);

@@ -6151,7 +6151,7 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
                 I40E_GL_MDET_TX_PF_NUM_SHIFT;
         u8 vf_num = (reg & I40E_GL_MDET_TX_VF_NUM_MASK) >>
                 I40E_GL_MDET_TX_VF_NUM_SHIFT;
-        u8 event = (reg & I40E_GL_MDET_TX_EVENT_SHIFT) >>
+        u8 event = (reg & I40E_GL_MDET_TX_EVENT_MASK) >>
                 I40E_GL_MDET_TX_EVENT_SHIFT;
         u8 queue = (reg & I40E_GL_MDET_TX_QUEUE_MASK) >>
                 I40E_GL_MDET_TX_QUEUE_SHIFT;
@@ -6165,7 +6165,7 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
     if (reg & I40E_GL_MDET_RX_VALID_MASK) {
         u8 func = (reg & I40E_GL_MDET_RX_FUNCTION_MASK) >>
                 I40E_GL_MDET_RX_FUNCTION_SHIFT;
-        u8 event = (reg & I40E_GL_MDET_RX_EVENT_SHIFT) >>
+        u8 event = (reg & I40E_GL_MDET_RX_EVENT_MASK) >>
                 I40E_GL_MDET_RX_EVENT_SHIFT;
         u8 queue = (reg & I40E_GL_MDET_RX_QUEUE_MASK) >>
                 I40E_GL_MDET_RX_QUEUE_SHIFT;

@@ -6537,6 +6537,9 @@ static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
     if (unlikely(page_to_nid(page) != numa_node_id()))
         return false;

+    if (unlikely(page->pfmemalloc))
+        return false;
+
 #if (PAGE_SIZE < 8192)
     /* if we are only owner of page we can reuse it */
     if (unlikely(page_count(page) != 1))
@@ -6603,7 +6606,8 @@ static bool igb_add_rx_frag(struct igb_ring *rx_ring,
         memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));

         /* we can reuse buffer as-is, just make sure it is local */
-        if (likely(page_to_nid(page) == numa_node_id()))
+        if (likely((page_to_nid(page) == numa_node_id()) &&
+                   !page->pfmemalloc))
             return true;

         /* this page cannot be reused so discard it */

@@ -342,12 +342,16 @@ static int ixgbe_set_settings(struct net_device *netdev,
         if (old == advertised)
             return err;
         /* this sets the link speed and restarts auto-neg */
+        while (test_and_set_bit(__IXGBE_IN_SFP_INIT, &adapter->state))
+            usleep_range(1000, 2000);
+
         hw->mac.autotry_restart = true;
         err = hw->mac.ops.setup_link(hw, advertised, true);
         if (err) {
             e_info(probe, "setup link failed with code %d\n", err);
             hw->mac.ops.setup_link(hw, old, true);
         }
+        clear_bit(__IXGBE_IN_SFP_INIT, &adapter->state);
     } else {
         /* in this case we currently only support 10Gb/FULL */
         u32 speed = ethtool_cmd_speed(ecmd);

@@ -4321,8 +4321,8 @@ static void ixgbe_clean_rx_ring(struct ixgbe_ring *rx_ring)
             IXGBE_CB(skb)->page_released = false;
         }
         dev_kfree_skb(skb);
-        rx_buffer->skb = NULL;
     }
+    rx_buffer->skb = NULL;

     if (rx_buffer->dma)
         dma_unmap_page(dev, rx_buffer->dma,
                        ixgbe_rx_pg_size(rx_ring),

@@ -836,8 +836,11 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
      * whether LSO is used */
     tx_desc->ctrl.srcrb_flags = priv->ctrl_flags;
     if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
-        tx_desc->ctrl.srcrb_flags |= cpu_to_be32(MLX4_WQE_CTRL_IP_CSUM |
-                                                 MLX4_WQE_CTRL_TCP_UDP_CSUM);
+        if (!skb->encapsulation)
+            tx_desc->ctrl.srcrb_flags |= cpu_to_be32(MLX4_WQE_CTRL_IP_CSUM |
+                                                     MLX4_WQE_CTRL_TCP_UDP_CSUM);
+        else
+            tx_desc->ctrl.srcrb_flags |= cpu_to_be32(MLX4_WQE_CTRL_IP_CSUM);
         ring->tx_csum++;
     }

@@ -1026,6 +1026,7 @@ static void mlx4_free_eq(struct mlx4_dev *dev,
             pr_cont("\n");
         }
     }
+    synchronize_irq(eq->irq);

     mlx4_mtt_cleanup(dev, &eq->mtt);
     for (i = 0; i < npages; ++i)

@@ -955,6 +955,10 @@ static void mlx4_err_rule(struct mlx4_dev *dev, char *str,
                   cur->ib.dst_gid_msk);
         break;

+    case MLX4_NET_TRANS_RULE_ID_VXLAN:
+        len += snprintf(buf + len, BUF_SIZE - len,
+                "VNID = %d ", be32_to_cpu(cur->vxlan.vni));
+        break;
+
     case MLX4_NET_TRANS_RULE_ID_IPV6:
         break;

@@ -420,6 +420,7 @@ int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
     if (err)
         mlx5_core_warn(dev, "failed to destroy a previously created eq: eqn %d\n",
                        eq->eqn);
+    synchronize_irq(table->msix_arr[eq->irqn].vector);
     mlx5_buf_free(dev, &eq->buf);

     return err;

@@ -343,8 +343,6 @@ netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb)
     unsigned short dma_flags;
     int i = 0;

-    EFX_BUG_ON_PARANOID(tx_queue->write_count > tx_queue->insert_count);
-
     if (skb_shinfo(skb)->gso_size)
         return efx_enqueue_skb_tso(tx_queue, skb);
@@ -1258,8 +1256,6 @@ static int efx_enqueue_skb_tso(struct efx_tx_queue *tx_queue,
     /* Find the packet protocol and sanity-check it */
     state.protocol = efx_tso_check_protocol(skb);

-    EFX_BUG_ON_PARANOID(tx_queue->write_count > tx_queue->insert_count);
-
     rc = tso_start(&state, efx, skb);
     if (rc)
         goto mem_err;

@@ -81,6 +81,7 @@ static const char version[] =
 #include <linux/workqueue.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/of_gpio.h>
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
@@ -2188,6 +2189,41 @@ static const struct of_device_id smc91x_match[] = {
     {},
 };
 MODULE_DEVICE_TABLE(of, smc91x_match);
+
+/**
+ * of_try_set_control_gpio - configure a gpio if it exists
+ */
+static int try_toggle_control_gpio(struct device *dev,
+                                   struct gpio_desc **desc,
+                                   const char *name, int index,
+                                   int value, unsigned int nsdelay)
+{
+    struct gpio_desc *gpio = *desc;
+    int res;
+
+    gpio = devm_gpiod_get_index(dev, name, index);
+    if (IS_ERR(gpio)) {
+        if (PTR_ERR(gpio) == -ENOENT) {
+            *desc = NULL;
+            return 0;
+        }
+
+        return PTR_ERR(gpio);
+    }
+    res = gpiod_direction_output(gpio, !value);
+    if (res) {
+        dev_err(dev, "unable to toggle gpio %s: %i\n", name, res);
+        devm_gpiod_put(dev, gpio);
+        gpio = NULL;
+        return res;
+    }
+    if (nsdelay)
+        usleep_range(nsdelay, 2 * nsdelay);
+    gpiod_set_value_cansleep(gpio, value);
+    *desc = gpio;
+
+    return 0;
+}
 #endif

 /*
@@ -2237,6 +2273,28 @@ static int smc_drv_probe(struct platform_device *pdev)
         struct device_node *np = pdev->dev.of_node;
         u32 val;

+        /* Optional pwrdwn GPIO configured? */
+        ret = try_toggle_control_gpio(&pdev->dev, &lp->power_gpio,
+                                      "power", 0, 0, 100);
+        if (ret)
+            return ret;
+
+        /*
+         * Optional reset GPIO configured? Minimum 100 ns reset needed
+         * according to LAN91C96 datasheet page 14.
+         */
+        ret = try_toggle_control_gpio(&pdev->dev, &lp->reset_gpio,
+                                      "reset", 0, 0, 100);
+        if (ret)
+            return ret;
+
+        /*
+         * Need to wait for optional EEPROM to load, max 750 us according
+         * to LAN91C96 datasheet page 55.
+         */
+        if (lp->reset_gpio)
+            usleep_range(750, 1000);
+
         /* Combination of IO widths supported, default to 16-bit */
         if (!of_property_read_u32(np, "reg-io-width", &val)) {
             if (val & 1)

@@ -298,6 +298,9 @@ struct smc_local {
     struct sk_buff *pending_tx_skb;
     struct tasklet_struct tx_task;

+    struct gpio_desc *power_gpio;
+    struct gpio_desc *reset_gpio;
+
     /* version/revision of the SMC91x chip */
     int version;

@@ -33,6 +33,7 @@ static struct stmmac_dma_cfg dma_cfg;
 static void stmmac_default_data(void)
 {
     memset(&plat_dat, 0, sizeof(struct plat_stmmacenet_data));
+
     plat_dat.bus_id = 1;
     plat_dat.phy_addr = 0;
     plat_dat.interface = PHY_INTERFACE_MODE_GMII;
@@ -47,6 +48,12 @@ static void stmmac_default_data(void)
     dma_cfg.pbl = 32;
     dma_cfg.burst_len = DMA_AXI_BLEN_256;
     plat_dat.dma_cfg = &dma_cfg;
+
+    /* Set default value for multicast hash bins */
+    plat_dat.multicast_filter_bins = HASH_TABLE_SIZE;
+
+    /* Set default value for unicast filter entries */
+    plat_dat.unicast_filter_entries = 1;
 }

 /**

@@ -591,8 +591,8 @@ static void cpsw_set_promiscious(struct net_device *ndev, bool enable)
     if (enable) {
         unsigned long timeout = jiffies + HZ;

-        /* Disable Learn for all ports */
-        for (i = 0; i < priv->data.slaves; i++) {
+        /* Disable Learn for all ports (host is port 0 and slaves are port 1 and up */
+        for (i = 0; i <= priv->data.slaves; i++) {
             cpsw_ale_control_set(ale, i,
                                  ALE_PORT_NOLEARN, 1);
             cpsw_ale_control_set(ale, i,
@@ -616,11 +616,11 @@ static void cpsw_set_promiscious(struct net_device *ndev, bool enable)
         cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 1);

         dev_dbg(&ndev->dev, "promiscuity enabled\n");
     } else {
-        /* Flood All Unicast Packets to Host port */
+        /* Don't Flood All Unicast Packets to Host port */
         cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 0);

-        /* Enable Learn for all ports */
-        for (i = 0; i < priv->data.slaves; i++) {
+        /* Enable Learn for all ports (host is port 0 and slaves are port 1 and up */
+        for (i = 0; i <= priv->data.slaves; i++) {
             cpsw_ale_control_set(ale, i,
                                  ALE_PORT_NOLEARN, 0);
             cpsw_ale_control_set(ale, i,
@@ -638,12 +638,16 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
     if (ndev->flags & IFF_PROMISC) {
         /* Enable promiscuous mode */
         cpsw_set_promiscious(ndev, true);
+        cpsw_ale_set_allmulti(priv->ale, IFF_ALLMULTI);
         return;
     } else {
         /* Disable promiscuous mode */
         cpsw_set_promiscious(ndev, false);
     }

+    /* Restore allmulti on vlans if necessary */
+    cpsw_ale_set_allmulti(priv->ale, priv->ndev->flags & IFF_ALLMULTI);
+
     /* Clear all mcast from ALE */
     cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port);
@@ -1149,6 +1153,7 @@ static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
     const int port = priv->host_port;
     u32 reg;
     int i;
+    int unreg_mcast_mask;

     reg = (priv->version == CPSW_VERSION_1) ? CPSW1_PORT_VLAN :
            CPSW2_PORT_VLAN;
@@ -1158,9 +1163,14 @@ static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
     for (i = 0; i < priv->data.slaves; i++)
         slave_write(priv->slaves + i, vlan, reg);

+    if (priv->ndev->flags & IFF_ALLMULTI)
+        unreg_mcast_mask = ALE_ALL_PORTS;
+    else
+        unreg_mcast_mask = ALE_PORT_1 | ALE_PORT_2;
+
     cpsw_ale_add_vlan(priv->ale, vlan, ALE_ALL_PORTS << port,
                       ALE_ALL_PORTS << port, ALE_ALL_PORTS << port,
-                      (ALE_PORT_1 | ALE_PORT_2) << port);
+                      unreg_mcast_mask << port);
 }

 static void cpsw_init_host_port(struct cpsw_priv *priv)
@@ -1620,11 +1630,17 @@ static inline int cpsw_add_vlan_ale_entry(struct cpsw_priv *priv,
                                           unsigned short vid)
 {
     int ret;
+    int unreg_mcast_mask;
+
+    if (priv->ndev->flags & IFF_ALLMULTI)
+        unreg_mcast_mask = ALE_ALL_PORTS;
+    else
+        unreg_mcast_mask = ALE_PORT_1 | ALE_PORT_2;

     ret = cpsw_ale_add_vlan(priv->ale, vid,
                             ALE_ALL_PORTS << priv->host_port,
                             0, ALE_ALL_PORTS << priv->host_port,
-                            (ALE_PORT_1 | ALE_PORT_2) << priv->host_port);
+                            unreg_mcast_mask << priv->host_port);
     if (ret != 0)
         return ret;
@@ -2006,7 +2022,7 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
         parp = of_get_property(slave_node, "phy_id", &lenp);
         if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
             dev_err(&pdev->dev, "Missing slave[%d] phy_id property\n", i);
-            return -EINVAL;
+            goto no_phy_slave;
         }
         mdio_node = of_find_node_by_phandle(be32_to_cpup(parp));
         phyid = be32_to_cpup(parp+1);
@@ -2019,6 +2035,14 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
         snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
                  PHY_ID_FMT, mdio->name, phyid);

+        slave_data->phy_if = of_get_phy_mode(slave_node);
+        if (slave_data->phy_if < 0) {
+            dev_err(&pdev->dev, "Missing or malformed slave[%d] phy-mode property\n",
+                    i);
+            return slave_data->phy_if;
+        }
+
+no_phy_slave:
         mac_addr = of_get_mac_address(slave_node);
         if (mac_addr) {
             memcpy(slave_data->mac_addr, mac_addr, ETH_ALEN);
@@ -2030,14 +2054,6 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
                 return ret;
             }
         }
-        slave_data->phy_if = of_get_phy_mode(slave_node);
-        if (slave_data->phy_if < 0) {
-            dev_err(&pdev->dev, "Missing or malformed slave[%d] phy-mode property\n",
-                    i);
-            return slave_data->phy_if;
-        }
-
         if (data->dual_emac) {
             if (of_property_read_u32(slave_node, "dual_emac_res_vlan",
                                      &prop)) {

@@ -443,6 +443,35 @@ int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port_mask)
     return 0;
 }

+void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti)
+{
+    u32 ale_entry[ALE_ENTRY_WORDS];
+    int type, idx;
+    int unreg_mcast = 0;
+
+    /* Only bother doing the work if the setting is actually changing */
+    if (ale->allmulti == allmulti)
+        return;
+
+    /* Remember the new setting to check against next time */
+    ale->allmulti = allmulti;
+
+    for (idx = 0; idx < ale->params.ale_entries; idx++) {
+        cpsw_ale_read(ale, idx, ale_entry);
+        type = cpsw_ale_get_entry_type(ale_entry);
+        if (type != ALE_TYPE_VLAN)
+            continue;
+
+        unreg_mcast = cpsw_ale_get_vlan_unreg_mcast(ale_entry);
+        if (allmulti)
+            unreg_mcast |= 1;
+        else
+            unreg_mcast &= ~1;
+        cpsw_ale_set_vlan_unreg_mcast(ale_entry, unreg_mcast);
+        cpsw_ale_write(ale, idx, ale_entry);
+    }
+}
+
 struct ale_control_info {
     const char *name;
     int offset, port_offset;

@@ -27,6 +27,7 @@ struct cpsw_ale {
     struct cpsw_ale_params params;
     struct timer_list timer;
     unsigned long ageout;
+    int allmulti;
 };

 enum cpsw_ale_control {
@@ -103,6 +104,7 @@ int cpsw_ale_del_mcast(struct cpsw_ale *ale, u8 *addr, int port_mask,
 int cpsw_ale_add_vlan(struct cpsw_ale *ale, u16 vid, int port, int untag,
                       int reg_mcast, int unreg_mcast);
 int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port);
+void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti);

 int cpsw_ale_control_get(struct cpsw_ale *ale, int port, int control);
 int cpsw_ale_control_set(struct cpsw_ale *ale, int port,

@@ -550,6 +550,7 @@ do_lso:
 do_send:
     /* Start filling in the page buffers with the rndis hdr */
     rndis_msg->msg_len += rndis_msg_size;
+    packet->total_data_buflen = rndis_msg->msg_len;
     packet->page_buf_cnt = init_page_array(rndis_msg, rndis_msg_size,
                                            skb, &packet->page_buf[0]);

@@ -272,7 +272,7 @@ static void macvlan_process_broadcast(struct work_struct *w)
     struct sk_buff *skb;
     struct sk_buff_head list;

-    skb_queue_head_init(&list);
+    __skb_queue_head_init(&list);

     spin_lock_bh(&port->bc_queue.lock);
     skb_queue_splice_tail_init(&port->bc_queue, &list);
@@ -1082,9 +1082,15 @@ static void macvlan_port_destroy(struct net_device *dev)
 {
     struct macvlan_port *port = macvlan_port_get_rtnl(dev);

-    cancel_work_sync(&port->bc_work);
     dev->priv_flags &= ~IFF_MACVLAN_PORT;
     netdev_rx_handler_unregister(dev);

+    /* After this point, no packet can schedule bc_work anymore,
+     * but we need to cancel it and purge left skbs if any.
+     */
+    cancel_work_sync(&port->bc_work);
+    __skb_queue_purge(&port->bc_queue);
+
     kfree_rcu(port, rcu);
 }

@@ -16,6 +16,7 @@
 #include <linux/idr.h>
 #include <linux/fs.h>

+#include <net/ipv6.h>
 #include <net/net_namespace.h>
 #include <net/rtnetlink.h>
 #include <net/sock.h>
@@ -65,7 +66,7 @@ static struct cdev macvtap_cdev;
 static const struct proto_ops macvtap_socket_ops;

 #define TUN_OFFLOADS (NETIF_F_HW_CSUM | NETIF_F_TSO_ECN | NETIF_F_TSO | \
-                      NETIF_F_TSO6 | NETIF_F_UFO)
+                      NETIF_F_TSO6)
 #define RX_OFFLOADS (NETIF_F_GRO | NETIF_F_LRO)
 #define TAP_FEATURES (NETIF_F_GSO | NETIF_F_SG)
@@ -569,7 +570,11 @@ static int macvtap_skb_from_vnet_hdr(struct sk_buff *skb,
             gso_type = SKB_GSO_TCPV6;
             break;
         case VIRTIO_NET_HDR_GSO_UDP:
+            pr_warn_once("macvtap: %s: using disabled UFO feature; please fix this program\n",
+                         current->comm);
             gso_type = SKB_GSO_UDP;
+            if (skb->protocol == htons(ETH_P_IPV6))
+                ipv6_proxy_select_ident(skb);
             break;
         default:
             return -EINVAL;
@@ -614,8 +619,6 @@ static void macvtap_skb_to_vnet_hdr(const struct sk_buff *skb,
             vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
         else if (sinfo->gso_type & SKB_GSO_TCPV6)
             vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
-        else if (sinfo->gso_type & SKB_GSO_UDP)
-            vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
         else
             BUG();
         if (sinfo->gso_type & SKB_GSO_TCP_ECN)
@@ -950,9 +953,6 @@ static int set_offload(struct macvtap_queue *q, unsigned long arg)
             if (arg & TUN_F_TSO6)
                 feature_mask |= NETIF_F_TSO6;
         }
-
-        if (arg & TUN_F_UFO)
-            feature_mask |= NETIF_F_UFO;
     }

     /* tun/tap driver inverts the usage for TSO offloads, where
@@ -963,7 +963,7 @@ static int set_offload(struct macvtap_queue *q, unsigned long arg)
      * When user space turns off TSO, we turn off GSO/LRO so that
      * user-space will not receive TSO frames.
      */
-    if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_UFO))
+    if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6))
         features |= RX_OFFLOADS;
     else
         features &= ~RX_OFFLOADS;
@@ -1064,7 +1064,7 @@ static long macvtap_ioctl(struct file *file, unsigned int cmd,
     case TUNSETOFFLOAD:
         /* let the user check for future flags */
         if (arg & ~(TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6 |
-                    TUN_F_TSO_ECN | TUN_F_UFO))
+                    TUN_F_TSO_ECN))
             return -EINVAL;

         rtnl_lock();

@@ -50,10 +50,15 @@
 #define MII_M1011_PHY_SCR 0x10
 #define MII_M1011_PHY_SCR_AUTO_CROSS 0x0060

+#define MII_M1145_PHY_EXT_SR 0x1b
 #define MII_M1145_PHY_EXT_CR 0x14
 #define MII_M1145_RGMII_RX_DELAY 0x0080
 #define MII_M1145_RGMII_TX_DELAY 0x0002
+#define MII_M1145_HWCFG_MODE_SGMII_NO_CLK 0x4
+#define MII_M1145_HWCFG_MODE_MASK 0xf
+#define MII_M1145_HWCFG_FIBER_COPPER_AUTO 0x8000

 #define MII_M1111_PHY_LED_CONTROL 0x18
 #define MII_M1111_PHY_LED_DIRECT 0x4100
 #define MII_M1111_PHY_LED_COMBINE 0x411c
@@ -676,6 +681,20 @@ static int m88e1145_config_init(struct phy_device *phydev)
         }
     }

+    if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
+        int temp = phy_read(phydev, MII_M1145_PHY_EXT_SR);
+
+        if (temp < 0)
+            return temp;
+
+        temp &= ~MII_M1145_HWCFG_MODE_MASK;
+        temp |= MII_M1145_HWCFG_MODE_SGMII_NO_CLK;
+        temp |= MII_M1145_HWCFG_FIBER_COPPER_AUTO;
+
+        err = phy_write(phydev, MII_M1145_PHY_EXT_SR, temp);
+        if (err < 0)
+            return err;
+    }
+
     err = marvell_of_reg_init(phydev);
     if (err < 0)
         return err;

@@ -65,6 +65,7 @@
 #include <linux/nsproxy.h>
 #include <linux/virtio_net.h>
 #include <linux/rcupdate.h>
+#include <net/ipv6.h>
 #include <net/net_namespace.h>
 #include <net/netns/generic.h>
 #include <net/rtnetlink.h>
@@ -174,7 +175,7 @@ struct tun_struct {
     struct net_device *dev;
     netdev_features_t set_features;
 #define TUN_USER_FEATURES (NETIF_F_HW_CSUM|NETIF_F_TSO_ECN|NETIF_F_TSO| \
-                           NETIF_F_TSO6|NETIF_F_UFO)
+                           NETIF_F_TSO6)

     int vnet_hdr_sz;
     int sndbuf;
@@ -1139,6 +1140,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
         break;
     }

+    skb_reset_network_header(skb);
+
     if (gso.gso_type != VIRTIO_NET_HDR_GSO_NONE) {
         pr_debug("GSO!\n");
         switch (gso.gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
@@ -1149,8 +1152,20 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
             skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
             break;
         case VIRTIO_NET_HDR_GSO_UDP:
+        {
+            static bool warned;
+
+            if (!warned) {
+                warned = true;
+                netdev_warn(tun->dev,
+                            "%s: using disabled UFO feature; please fix this program\n",
+                            current->comm);
+            }
             skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+            if (skb->protocol == htons(ETH_P_IPV6))
+                ipv6_proxy_select_ident(skb);
             break;
+        }
         default:
             tun->dev->stats.rx_frame_errors++;
             kfree_skb(skb);
@@ -1179,7 +1194,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
         skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
     }

-    skb_reset_network_header(skb);
     skb_probe_transport_header(skb, 0);

     rxhash = skb_get_hash(skb);
@@ -1251,8 +1265,6 @@ static ssize_t tun_put_user(struct tun_struct *tun,
             gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
         else if (sinfo->gso_type & SKB_GSO_TCPV6)
             gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
-        else if (sinfo->gso_type & SKB_GSO_UDP)
-            gso.gso_type = VIRTIO_NET_HDR_GSO_UDP;
         else {
             pr_err("unexpected GSO type: "
                    "0x%x, gso_size %d, hdr_len %d\n",
@@ -1762,11 +1774,6 @@ static int set_offload(struct tun_struct *tun, unsigned long arg)
                 features |= NETIF_F_TSO6;
             arg &= ~(TUN_F_TSO4|TUN_F_TSO6);
         }
-
-        if (arg & TUN_F_UFO) {
-            features |= NETIF_F_UFO;
-            arg &= ~TUN_F_UFO;
-        }
     }

     /* This gives the user a way to test for new features in future by

@@ -937,6 +937,7 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p)
 {
     struct usbnet *dev = netdev_priv(net);
     struct sockaddr *addr = p;
+    int ret;

     if (netif_running(net))
         return -EBUSY;
@@ -946,8 +947,12 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p)
     memcpy(net->dev_addr, addr->sa_data, ETH_ALEN);

     /* Set the MAC address */
-    return ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN,
-                             ETH_ALEN, net->dev_addr);
+    ret = ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN,
+                            ETH_ALEN, net->dev_addr);
+    if (ret < 0)
+        return ret;
+
+    return 0;
 }

 static const struct net_device_ops ax88179_netdev_ops = {

View File

@@ -67,6 +67,35 @@ static const u8 mbm_guid[16] = {
 	0xa6, 0x07, 0xc0, 0xff, 0xcb, 0x7e, 0x39, 0x2a,
 };

+static void usbnet_cdc_update_filter(struct usbnet *dev)
+{
+	struct cdc_state *info = (void *) &dev->data;
+	struct usb_interface *intf = info->control;
+
+	u16 cdc_filter =
+	    USB_CDC_PACKET_TYPE_ALL_MULTICAST | USB_CDC_PACKET_TYPE_DIRECTED |
+	    USB_CDC_PACKET_TYPE_BROADCAST;
+
+	if (dev->net->flags & IFF_PROMISC)
+		cdc_filter |= USB_CDC_PACKET_TYPE_PROMISCUOUS;
+
+	/* FIXME cdc-ether has some multicast code too, though it complains
+	 * in routine cases.  info->ether describes the multicast support.
+	 * Implement that here, manipulating the cdc filter as needed.
+	 */
+
+	usb_control_msg(dev->udev,
+			usb_sndctrlpipe(dev->udev, 0),
+			USB_CDC_SET_ETHERNET_PACKET_FILTER,
+			USB_TYPE_CLASS | USB_RECIP_INTERFACE,
+			cdc_filter,
+			intf->cur_altsetting->desc.bInterfaceNumber,
+			NULL,
+			0,
+			USB_CTRL_SET_TIMEOUT
+		);
+}
+
 /* probes control interface, claims data interface, collects the bulk
  * endpoints, activates data interface (if needed), maybe sets MTU.
  * all pure cdc, except for certain firmware workarounds, and knowing
@@ -347,16 +376,8 @@ next_desc:
 	 * don't do reset all the way. So the packet filter should
 	 * be set to a sane initial value.
 	 */
-	usb_control_msg(dev->udev,
-			usb_sndctrlpipe(dev->udev, 0),
-			USB_CDC_SET_ETHERNET_PACKET_FILTER,
-			USB_TYPE_CLASS | USB_RECIP_INTERFACE,
-			USB_CDC_PACKET_TYPE_ALL_MULTICAST | USB_CDC_PACKET_TYPE_DIRECTED | USB_CDC_PACKET_TYPE_BROADCAST,
-			intf->cur_altsetting->desc.bInterfaceNumber,
-			NULL,
-			0,
-			USB_CTRL_SET_TIMEOUT
-	);
+	usbnet_cdc_update_filter(dev);

 	return 0;

 bad_desc:
@@ -468,10 +489,6 @@ int usbnet_cdc_bind(struct usbnet *dev, struct usb_interface *intf)
 		return status;
 	}

-	/* FIXME cdc-ether has some multicast code too, though it complains
-	 * in routine cases.  info->ether describes the multicast support.
-	 * Implement that here, manipulating the cdc filter as needed.
-	 */
 	return 0;
 }
 EXPORT_SYMBOL_GPL(usbnet_cdc_bind);
@@ -482,6 +499,7 @@ static const struct driver_info cdc_info = {
 	.bind =		usbnet_cdc_bind,
 	.unbind =	usbnet_cdc_unbind,
 	.status =	usbnet_cdc_status,
+	.set_rx_mode =	usbnet_cdc_update_filter,
 	.manage_power =	usbnet_manage_power,
 };

@@ -491,6 +509,7 @@ static const struct driver_info wwan_info = {
 	.bind =		usbnet_cdc_bind,
 	.unbind =	usbnet_cdc_unbind,
 	.status =	usbnet_cdc_status,
+	.set_rx_mode =	usbnet_cdc_update_filter,
 	.manage_power =	usbnet_manage_power,
 };
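
The helper added above recomputes the CDC Ethernet packet filter each time the rx mode changes: promiscuous mode simply ORs one extra bit into the default directed/broadcast/all-multicast set. A self-contained sketch of composing that filter word, with bit positions mirroring the USB_CDC_PACKET_TYPE_* definitions (repeated inline here only so the example compiles on its own):

    #include <stdio.h>

    #define PACKET_TYPE_PROMISCUOUS   (1 << 0)
    #define PACKET_TYPE_ALL_MULTICAST (1 << 1)
    #define PACKET_TYPE_DIRECTED      (1 << 2)
    #define PACKET_TYPE_BROADCAST     (1 << 3)

    int main(void)
    {
            int promisc = 1;        /* pretend IFF_PROMISC is set */
            unsigned short filter = PACKET_TYPE_ALL_MULTICAST |
                                    PACKET_TYPE_DIRECTED |
                                    PACKET_TYPE_BROADCAST;

            if (promisc)
                    filter |= PACKET_TYPE_PROMISCUOUS;

            printf("filter word: 0x%04x\n", filter);
            return 0;
    }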


@@ -1162,6 +1162,9 @@ static void intr_callback(struct urb *urb)
 	case -ESHUTDOWN:
 		netif_device_detach(tp->netdev);
 	case -ENOENT:
+	case -EPROTO:
+		netif_info(tp, intr, tp->netdev,
+			   "Stop submitting intr, status %d\n", status);
 		return;
 	case -EOVERFLOW:
 		netif_info(tp, intr, tp->netdev, "intr status -EOVERFLOW\n");
@@ -2891,6 +2894,9 @@ static int rtl8152_open(struct net_device *netdev)
 	if (res)
 		goto out;

+	/* set speed to 0 to avoid autoresume try to submit rx */
+	tp->speed = 0;
+
 	res = usb_autopm_get_interface(tp->intf);
 	if (res < 0) {
 		free_all_mem(tp);
@@ -2904,6 +2910,8 @@ static int rtl8152_open(struct net_device *netdev)
 		clear_bit(WORK_ENABLE, &tp->flags);
 		usb_kill_urb(tp->intr_urb);
 		cancel_delayed_work_sync(&tp->schedule);
+
+		/* disable the tx/rx, if the workqueue has enabled them. */
 		if (tp->speed & LINK_STATUS)
 			tp->rtl_ops.disable(tp);
 	}
@@ -2955,10 +2963,7 @@ static int rtl8152_close(struct net_device *netdev)
 		 * be disable when autoresume occurs, because the
 		 * netif_running() would be false.
 		 */
-		if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
-			rtl_runtime_suspend_enable(tp, false);
-			clear_bit(SELECTIVE_SUSPEND, &tp->flags);
-		}
+		rtl_runtime_suspend_enable(tp, false);

 		tasklet_disable(&tp->tl);
 		tp->rtl_ops.down(tp);
@@ -3205,7 +3210,7 @@ static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
 		netif_device_detach(netdev);
 	}

-	if (netif_running(netdev)) {
+	if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
 		clear_bit(WORK_ENABLE, &tp->flags);
 		usb_kill_urb(tp->intr_urb);
 		tasklet_disable(&tp->tl);
@@ -3253,6 +3258,8 @@ static int rtl8152_resume(struct usb_interface *intf)
 			set_bit(WORK_ENABLE, &tp->flags);
 		}
 		usb_submit_urb(tp->intr_urb, GFP_KERNEL);
+	} else if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
+		clear_bit(SELECTIVE_SUSPEND, &tp->flags);
 	}

 	mutex_unlock(&tp->control);
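
For context on the intr_callback() change above: a URB completion handler must decide, per status code, whether to resubmit. Terminal statuses (-ESHUTDOWN, -ENOENT, and now -EPROTO, which a surprise-removed device reports) must stop resubmission, or the driver keeps hammering dead hardware; transient errors such as -EOVERFLOW fall through and try again. An illustrative userspace sketch of just that decision (the resubmission itself is omitted):

    #include <errno.h>
    #include <stdio.h>

    static int should_resubmit(int status)
    {
            switch (status) {
            case 0:                 /* success: keep polling */
                    return 1;
            case -ESHUTDOWN:        /* device going away */
            case -ENOENT:           /* URB was killed */
            case -EPROTO:           /* e.g. device unplugged mid-transfer */
                    return 0;       /* terminal: stop submitting */
            default:
                    return 1;       /* transient error: try again */
            }
    }

    int main(void)
    {
            printf("-EPROTO resubmit? %d\n", should_resubmit(-EPROTO));
            printf("-EOVERFLOW resubmit? %d\n", should_resubmit(-EOVERFLOW));
            return 0;
    }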


@@ -1052,6 +1052,21 @@ static void __handle_link_change(struct usbnet *dev)
 	clear_bit(EVENT_LINK_CHANGE, &dev->flags);
 }

+static void usbnet_set_rx_mode(struct net_device *net)
+{
+	struct usbnet		*dev = netdev_priv(net);
+
+	usbnet_defer_kevent(dev, EVENT_SET_RX_MODE);
+}
+
+static void __handle_set_rx_mode(struct usbnet *dev)
+{
+	if (dev->driver_info->set_rx_mode)
+		(dev->driver_info->set_rx_mode)(dev);
+
+	clear_bit(EVENT_SET_RX_MODE, &dev->flags);
+}
+
 /* work that cannot be done in interrupt context uses keventd.
  *
  * NOTE: with 2.5 we could do more of this using completion callbacks,
@@ -1157,6 +1172,10 @@ skip_reset:
 	if (test_bit (EVENT_LINK_CHANGE, &dev->flags))
 		__handle_link_change(dev);

+	if (test_bit (EVENT_SET_RX_MODE, &dev->flags))
+		__handle_set_rx_mode(dev);
+
 	if (dev->flags)
 		netdev_dbg(dev->net, "kevent done, flags = 0x%lx\n", dev->flags);
 }
@@ -1525,6 +1544,7 @@ static const struct net_device_ops usbnet_netdev_ops = {
 	.ndo_stop		= usbnet_stop,
 	.ndo_start_xmit		= usbnet_start_xmit,
 	.ndo_tx_timeout		= usbnet_tx_timeout,
+	.ndo_set_rx_mode	= usbnet_set_rx_mode,
 	.ndo_change_mtu		= usbnet_change_mtu,
 	.ndo_set_mac_address 	= eth_mac_addr,
 	.ndo_validate_addr	= eth_validate_addr,
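
.ndo_set_rx_mode runs in atomic context (under the device's address-list lock), so usbnet cannot issue the blocking USB control transfer there; the hook above only records EVENT_SET_RX_MODE and leaves the actual filter update to the kevent worker, which then calls the per-driver set_rx_mode hook. A deliberately simplified, single-threaded sketch of that defer-to-worker shape (bit names mirror the patch; the real workqueue and locking are omitted):

    #include <stdio.h>

    #define EVENT_SET_RX_MODE (1 << 0)

    static unsigned long flags;

    static void update_filter(void)        /* may block (USB control msg) */
    {
            printf("filter reprogrammed\n");
    }

    static void ndo_set_rx_mode(void)      /* atomic context: no blocking */
    {
            flags |= EVENT_SET_RX_MODE;    /* kernel: usbnet_defer_kevent() */
    }

    static void kevent_worker(void)        /* process context */
    {
            if (flags & EVENT_SET_RX_MODE) {
                    update_filter();
                    flags &= ~EVENT_SET_RX_MODE;
            }
    }

    int main(void)
    {
            ndo_set_rx_mode();
            kevent_worker();
            return 0;
    }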


@@ -491,8 +491,17 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
 			skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
+		{
+			static bool warned;
+
+			if (!warned) {
+				warned = true;
+				netdev_warn(dev,
+					    "host using disabled UFO feature; please fix it\n");
+			}
 			skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
 			break;
+		}
 		case VIRTIO_NET_HDR_GSO_TCPV6:
 			skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
 			break;
@@ -881,8 +890,6 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
 			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
-		else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP)
-			hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		else
 			BUG();
 		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCP_ECN)
@@ -1705,7 +1712,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 		dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;

 		if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
-			dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO
+			dev->hw_features |= NETIF_F_TSO
 				| NETIF_F_TSO_ECN | NETIF_F_TSO6;
 		}
 		/* Individual feature bits: what can host handle? */
@@ -1715,11 +1722,9 @@ static int virtnet_probe(struct virtio_device *vdev)
 			dev->hw_features |= NETIF_F_TSO6;
 		if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN))
 			dev->hw_features |= NETIF_F_TSO_ECN;
-		if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO))
-			dev->hw_features |= NETIF_F_UFO;

 		if (gso)
-			dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO);
+			dev->features |= dev->hw_features & NETIF_F_ALL_TSO;
 		/* (!csum && gso) case will be fixed by register_netdev() */
 	}
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
@@ -1757,8 +1762,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 	/* If we can receive ANY GSO packets, we must allocate large ones. */
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
 	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
+	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN))
 		vi->big_packets = true;

 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
@@ -1952,9 +1956,9 @@ static struct virtio_device_id id_table[] = {
 static unsigned int features[] = {
 	VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM,
 	VIRTIO_NET_F_GSO, VIRTIO_NET_F_MAC,
-	VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6,
+	VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_TSO6,
 	VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6,
-	VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO,
+	VIRTIO_NET_F_GUEST_ECN,
 	VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ,
 	VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN,
 	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ,
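
The receive_buf() hunk above uses a function-local static flag so that a host which keeps announcing the now-disabled UFO feature triggers exactly one warning instead of one per packet. A standalone sketch of the warn-once idiom:

    #include <stdio.h>

    static void handle_udp_gso(void)
    {
            static int warned;      /* function-local, zero-initialized */

            if (!warned) {
                    warned = 1;
                    fprintf(stderr, "host using disabled UFO feature\n");
            }
            /* ...still process the packet... */
    }

    int main(void)
    {
            handle_udp_gso();
            handle_udp_gso();       /* second call stays silent */
            return 0;
    }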


@@ -80,6 +80,7 @@ struct reg_dmn_pair_mapping {

 struct ath_regulatory {
 	char alpha2[2];
+	enum nl80211_dfs_regions region;
 	u16 country_code;
 	u16 max_power_level;
 	u16 current_rd;


@@ -368,11 +368,11 @@ void ath9k_cmn_update_txpow(struct ath_hw *ah, u16 cur_txpow,
 {
 	struct ath_regulatory *reg = ath9k_hw_regulatory(ah);

-	if (reg->power_limit != new_txpow) {
+	if (reg->power_limit != new_txpow)
 		ath9k_hw_set_txpowerlimit(ah, new_txpow, false);
-		/* read back in case value is clamped */
-		*txpower = reg->max_power_level;
-	}
+
+	/* read back in case value is clamped */
+	*txpower = reg->max_power_level;
 }
 EXPORT_SYMBOL(ath9k_cmn_update_txpow);


@@ -455,7 +455,7 @@ static ssize_t read_file_dma(struct file *file, char __user *user_buf,
 			"%2d %2x %1x %2x %2x\n",
 			i, (*qcuBase & (0x7 << qcuOffset)) >> qcuOffset,
 			(*qcuBase & (0x8 << qcuOffset)) >> (qcuOffset + 3),
-			val[2] & (0x7 << (i * 3)) >> (i * 3),
+			(val[2] & (0x7 << (i * 3))) >> (i * 3),
 			(*dcuBase & (0x1f << dcuOffset)) >> dcuOffset);
 	}


@@ -734,6 +734,32 @@ static const struct ieee80211_iface_combination if_comb[] = {
 #endif
 };

+#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
+static void ath9k_set_mcc_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
+{
+	struct ath_hw *ah = sc->sc_ah;
+	struct ath_common *common = ath9k_hw_common(ah);
+
+	if (!ath9k_is_chanctx_enabled())
+		return;
+
+	hw->flags |= IEEE80211_HW_QUEUE_CONTROL;
+	hw->queues = ATH9K_NUM_TX_QUEUES;
+	hw->offchannel_tx_hw_queue = hw->queues - 1;
+	hw->wiphy->interface_modes &= ~ BIT(NL80211_IFTYPE_WDS);
+	hw->wiphy->iface_combinations = if_comb_multi;
+	hw->wiphy->n_iface_combinations = ARRAY_SIZE(if_comb_multi);
+	hw->wiphy->max_scan_ssids = 255;
+	hw->wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN;
+	hw->wiphy->max_remain_on_channel_duration = 10000;
+	hw->chanctx_data_size = sizeof(void *);
+	hw->extra_beacon_tailroom =
+		sizeof(struct ieee80211_p2p_noa_attr) + 9;
+
+	ath_dbg(common, CHAN_CTX, "Use channel contexts\n");
+}
+#endif /* CONFIG_ATH9K_CHANNEL_CONTEXT */
+
 static void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
 {
 	struct ath_hw *ah = sc->sc_ah;
@@ -746,7 +772,6 @@ static void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
 		IEEE80211_HW_SPECTRUM_MGMT |
 		IEEE80211_HW_REPORTS_TX_ACK_STATUS |
 		IEEE80211_HW_SUPPORTS_RC_TABLE |
-		IEEE80211_HW_QUEUE_CONTROL |
 		IEEE80211_HW_SUPPORTS_HT_CCK_RATES;

 	if (ath9k_ps_enable)
@@ -781,24 +806,6 @@ static void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
 		hw->wiphy->n_iface_combinations = ARRAY_SIZE(if_comb);
 	}

-#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
-	if (ath9k_is_chanctx_enabled()) {
-		hw->wiphy->interface_modes &= ~ BIT(NL80211_IFTYPE_WDS);
-		hw->wiphy->iface_combinations = if_comb_multi;
-		hw->wiphy->n_iface_combinations = ARRAY_SIZE(if_comb_multi);
-		hw->wiphy->max_scan_ssids = 255;
-		hw->wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN;
-		hw->wiphy->max_remain_on_channel_duration = 10000;
-		hw->chanctx_data_size = sizeof(void *);
-		hw->extra_beacon_tailroom =
-			sizeof(struct ieee80211_p2p_noa_attr) + 9;
-
-		ath_dbg(common, CHAN_CTX, "Use channel contexts\n");
-	}
-#endif /* CONFIG_ATH9K_CHANNEL_CONTEXT */
-
 	hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT;
 	hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN;
@@ -808,12 +815,7 @@ static void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
 	hw->wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH;
 	hw->wiphy->flags |= WIPHY_FLAG_AP_UAPSD;

-	/* allow 4 queues per channel context +
-	 * 1 cab queue + 1 offchannel tx queue
-	 */
-	hw->queues = ATH9K_NUM_TX_QUEUES;
-	/* last queue for offchannel */
-	hw->offchannel_tx_hw_queue = hw->queues - 1;
+	hw->queues = 4;
 	hw->max_rates = 4;
 	hw->max_listen_interval = 10;
 	hw->max_rate_tries = 10;
@@ -837,6 +839,9 @@ static void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
 	hw->wiphy->bands[IEEE80211_BAND_5GHZ] =
 		&common->sbands[IEEE80211_BAND_5GHZ];

+#ifdef CONFIG_ATH9K_CHANNEL_CONTEXT
+	ath9k_set_mcc_capab(sc, hw);
+#endif
+
 	ath9k_init_wow(hw);
 	ath9k_cmn_reload_chainmask(ah);


@@ -1162,6 +1162,9 @@ static void ath9k_assign_hw_queues(struct ieee80211_hw *hw,
 {
 	int i;

+	if (!ath9k_is_chanctx_enabled())
+		return;
+
 	for (i = 0; i < IEEE80211_NUM_ACS; i++)
 		vif->hw_queue[i] = i;


@@ -169,7 +169,10 @@ static void ath_txq_skb_done(struct ath_softc *sc, struct ath_txq *txq,

 	if (txq->stopped &&
 	    txq->pending_frames < sc->tx.txq_max_pending[q]) {
-		ieee80211_wake_queue(sc->hw, info->hw_queue);
+		if (ath9k_is_chanctx_enabled())
+			ieee80211_wake_queue(sc->hw, info->hw_queue);
+		else
+			ieee80211_wake_queue(sc->hw, q);
 		txq->stopped = false;
 	}
 }
@@ -2247,7 +2250,10 @@ int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb,
 		fi->txq = q;
 		if (++txq->pending_frames > sc->tx.txq_max_pending[q] &&
 		    !txq->stopped) {
-			ieee80211_stop_queue(sc->hw, info->hw_queue);
+			if (ath9k_is_chanctx_enabled())
+				ieee80211_stop_queue(sc->hw, info->hw_queue);
+			else
+				ieee80211_stop_queue(sc->hw, q);
 			txq->stopped = true;
 		}
 	}


@@ -515,6 +515,7 @@ void ath_reg_notifier_apply(struct wiphy *wiphy,
 	if (!request)
 		return;

+	reg->region = request->dfs_region;
+
 	switch (request->initiator) {
 	case NL80211_REGDOM_SET_BY_CORE:
 		/*
@@ -779,6 +780,19 @@ u32 ath_regd_get_band_ctl(struct ath_regulatory *reg,
 		return SD_NO_CTL;
 	}

+	if (ath_regd_get_eepromRD(reg) == CTRY_DEFAULT) {
+		switch (reg->region) {
+		case NL80211_DFS_FCC:
+			return CTL_FCC;
+		case NL80211_DFS_ETSI:
+			return CTL_ETSI;
+		case NL80211_DFS_JP:
+			return CTL_MKK;
+		default:
+			break;
+		}
+	}
+
 	switch (band) {
 	case IEEE80211_BAND_2GHZ:
 		return reg->regpair->reg_2ghz_ctl;


@@ -670,7 +670,6 @@ static int brcmf_sdio_get_fwnames(struct brcmf_chip *ci,
 				  struct brcmf_sdio_dev *sdiodev)
 {
 	int i;
-	uint fw_len, nv_len;
 	char end;

 	for (i = 0; i < ARRAY_SIZE(brcmf_fwname_data); i++) {
@@ -684,25 +683,25 @@ static int brcmf_sdio_get_fwnames(struct brcmf_chip *ci,
 		return -ENODEV;
 	}

-	fw_len = sizeof(sdiodev->fw_name) - 1;
-	nv_len = sizeof(sdiodev->nvram_name) - 1;
 	/* check if firmware path is provided by module parameter */
 	if (brcmf_firmware_path[0] != '\0') {
-		strncpy(sdiodev->fw_name, brcmf_firmware_path, fw_len);
-		strncpy(sdiodev->nvram_name, brcmf_firmware_path, nv_len);
-		fw_len -= strlen(sdiodev->fw_name);
-		nv_len -= strlen(sdiodev->nvram_name);
+		strlcpy(sdiodev->fw_name, brcmf_firmware_path,
+			sizeof(sdiodev->fw_name));
+		strlcpy(sdiodev->nvram_name, brcmf_firmware_path,
+			sizeof(sdiodev->nvram_name));

 		end = brcmf_firmware_path[strlen(brcmf_firmware_path) - 1];
 		if (end != '/') {
-			strncat(sdiodev->fw_name, "/", fw_len);
-			strncat(sdiodev->nvram_name, "/", nv_len);
-			fw_len--;
-			nv_len--;
+			strlcat(sdiodev->fw_name, "/",
+				sizeof(sdiodev->fw_name));
+			strlcat(sdiodev->nvram_name, "/",
+				sizeof(sdiodev->nvram_name));
 		}
 	}
-	strncat(sdiodev->fw_name, brcmf_fwname_data[i].bin, fw_len);
-	strncat(sdiodev->nvram_name, brcmf_fwname_data[i].nv, nv_len);
+	strlcat(sdiodev->fw_name, brcmf_fwname_data[i].bin,
+		sizeof(sdiodev->fw_name));
+	strlcat(sdiodev->nvram_name, brcmf_fwname_data[i].nv,
+		sizeof(sdiodev->nvram_name));

 	return 0;
 }
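
The rewrite above drops the error-prone manual length bookkeeping: strlcpy()/strlcat() take the full destination size and always NUL-terminate, unlike strncpy()/strncat(). A userspace sketch of the difference; glibc has no strlcpy(), so a minimal local version stands in for the kernel's lib/string.c implementation:

    #include <stdio.h>
    #include <string.h>

    static size_t my_strlcpy(char *dst, const char *src, size_t size)
    {
            size_t len = strlen(src);

            if (size) {
                    size_t n = len < size - 1 ? len : size - 1;
                    memcpy(dst, src, n);
                    dst[n] = '\0';          /* always terminated */
            }
            return len;                     /* lets caller detect truncation */
    }

    int main(void)
    {
            char a[8], b[8];

            strncpy(a, "0123456789", sizeof(a));    /* NOT NUL-terminated */
            my_strlcpy(b, "0123456789", sizeof(b)); /* "0123456" + NUL */
            printf("strlcpy result: %s\n", b);
            (void)a;        /* unsafe to print: no terminator */
            return 0;
    }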


@@ -1095,6 +1095,7 @@ static void iwlagn_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
 			     u32 queues, bool drop)
 {
 	struct iwl_priv *priv = IWL_MAC80211_GET_DVM(hw);
+	u32 scd_queues;

 	mutex_lock(&priv->mutex);
 	IWL_DEBUG_MAC80211(priv, "enter\n");
@@ -1108,18 +1109,19 @@ static void iwlagn_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
 		goto done;
 	}

-	/*
-	 * mac80211 will not push any more frames for transmit
-	 * until the flush is completed
-	 */
-	if (drop) {
-		IWL_DEBUG_MAC80211(priv, "send flush command\n");
-		if (iwlagn_txfifo_flush(priv, 0)) {
-			IWL_ERR(priv, "flush request fail\n");
-			goto done;
-		}
+	scd_queues = BIT(priv->cfg->base_params->num_of_queues) - 1;
+	scd_queues &= ~(BIT(IWL_IPAN_CMD_QUEUE_NUM) |
+			BIT(IWL_DEFAULT_CMD_QUEUE_NUM));
+
+	if (vif)
+		scd_queues &= ~BIT(vif->hw_queue[IEEE80211_AC_VO]);
+
+	IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", scd_queues);
+	if (iwlagn_txfifo_flush(priv, scd_queues)) {
+		IWL_ERR(priv, "flush request fail\n");
+		goto done;
 	}
-	IWL_DEBUG_MAC80211(priv, "wait transmit/flush all frames\n");
+	IWL_DEBUG_TX_QUEUES(priv, "wait transmit/flush all frames\n");
 	iwl_trans_wait_tx_queue_empty(priv->trans, 0xffffffff);
 done:
 	mutex_unlock(&priv->mutex);


@@ -82,7 +82,8 @@
 #define IWL8000_TX_POWER_VERSION	0xffff /* meaningless */

 #define IWL8000_FW_PRE "iwlwifi-8000"
-#define IWL8000_MODULE_FIRMWARE(api) IWL8000_FW_PRE __stringify(api) ".ucode"
+#define IWL8000_MODULE_FIRMWARE(api) \
+	IWL8000_FW_PRE "-" __stringify(api) ".ucode"

 #define NVM_HW_SECTION_NUM_FAMILY_8000		10
 #define DEFAULT_NVM_FILE_FAMILY_8000		"iwl_nvm_8000.bin"
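
The macro fix above matters because of how string literals concatenate: without the "-" the driver would request "iwlwifi-800010.ucode" for API version 10 instead of "iwlwifi-8000-10.ucode". A compilable sketch of the two-level __stringify() trick (the API value 10 is only an example):

    #include <stdio.h>

    #define __stringify_1(x) #x
    #define __stringify(x)   __stringify_1(x)  /* expands x before # */

    #define FW_PRE "iwlwifi-8000"
    #define MODULE_FIRMWARE(api) FW_PRE "-" __stringify(api) ".ucode"

    int main(void)
    {
            /* without the "-" this would read "iwlwifi-800010.ucode" */
            printf("%s\n", MODULE_FIRMWARE(10));
            return 0;
    }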


@@ -563,6 +563,7 @@ enum iwl_trans_state {
  *	Set during transport allocation.
  * @hw_id_str: a string with info about HW ID. Set during transport allocation.
  * @pm_support: set to true in start_hw if link pm is supported
+ * @ltr_enabled: set to true if the LTR is enabled
  * @dev_cmd_pool: pool for Tx cmd allocation - for internal use only.
  *	The user should use iwl_trans_{alloc,free}_tx_cmd.
  * @dev_cmd_headroom: room needed for the transport's private use before the
@@ -589,6 +590,7 @@ struct iwl_trans {
 	u8 rx_mpdu_cmd, rx_mpdu_cmd_hdr_size;

 	bool pm_support;
+	bool ltr_enabled;

 	/* The following fields are internal only */
 	struct kmem_cache *dev_cmd_pool;


@@ -303,8 +303,8 @@ static const __le64 iwl_ci_mask[][3] = {
 };

 static const __le32 iwl_bt_mprio_lut[BT_COEX_MULTI_PRIO_LUT_SIZE] = {
-	cpu_to_le32(0x28412201),
-	cpu_to_le32(0x11118451),
+	cpu_to_le32(0x2e402280),
+	cpu_to_le32(0x7711a751),
 };

 struct corunning_block_luts {


@@ -291,8 +291,8 @@ static const __le64 iwl_ci_mask[][3] = {
 };

 static const __le32 iwl_bt_mprio_lut[BT_COEX_MULTI_PRIO_LUT_SIZE] = {
-	cpu_to_le32(0x28412201),
-	cpu_to_le32(0x11118451),
+	cpu_to_le32(0x2e402280),
+	cpu_to_le32(0x7711a751),
 };

 struct corunning_block_luts {


@@ -68,13 +68,46 @@

 /* Power Management Commands, Responses, Notifications */

+/**
+ * enum iwl_ltr_config_flags - masks for LTR config command flags
+ * @LTR_CFG_FLAG_FEATURE_ENABLE: Feature operational status
+ * @LTR_CFG_FLAG_HW_DIS_ON_SHADOW_REG_ACCESS: allow LTR change on shadow
+ *	memory access
+ * @LTR_CFG_FLAG_HW_EN_SHRT_WR_THROUGH: allow LTR msg send on ANY LTR
+ *	reg change
+ * @LTR_CFG_FLAG_HW_DIS_ON_D0_2_D3: allow LTR msg send on transition from
+ *	D0 to D3
+ * @LTR_CFG_FLAG_SW_SET_SHORT: fixed static short LTR register
+ * @LTR_CFG_FLAG_SW_SET_LONG: fixed static short LONG register
+ * @LTR_CFG_FLAG_DENIE_C10_ON_PD: allow going into C10 on PD
+ */
+enum iwl_ltr_config_flags {
+	LTR_CFG_FLAG_FEATURE_ENABLE = BIT(0),
+	LTR_CFG_FLAG_HW_DIS_ON_SHADOW_REG_ACCESS = BIT(1),
+	LTR_CFG_FLAG_HW_EN_SHRT_WR_THROUGH = BIT(2),
+	LTR_CFG_FLAG_HW_DIS_ON_D0_2_D3 = BIT(3),
+	LTR_CFG_FLAG_SW_SET_SHORT = BIT(4),
+	LTR_CFG_FLAG_SW_SET_LONG = BIT(5),
+	LTR_CFG_FLAG_DENIE_C10_ON_PD = BIT(6),
+};
+
+/**
+ * struct iwl_ltr_config_cmd - configures the LTR
+ * @flags: See %enum iwl_ltr_config_flags
+ */
+struct iwl_ltr_config_cmd {
+	__le32 flags;
+	__le32 static_long;
+	__le32 static_short;
+} __packed;
+
 /* Radio LP RX Energy Threshold measured in dBm */
 #define POWER_LPRX_RSSI_THRESHOLD	75
 #define POWER_LPRX_RSSI_THRESHOLD_MAX	94
 #define POWER_LPRX_RSSI_THRESHOLD_MIN	30

 /**
- * enum iwl_scan_flags - masks for power table command flags
+ * enum iwl_power_flags - masks for power table command flags
  * @POWER_FLAGS_POWER_SAVE_ENA_MSK: '1' Allow to save power by turning off
  *	receiver and transmitter. '0' - does not allow.
  * @POWER_FLAGS_POWER_MANAGEMENT_ENA_MSK: '0' Driver disables power management,


@@ -157,6 +157,7 @@ enum {
 	/* Power - legacy power table command */
 	POWER_TABLE_CMD = 0x77,
 	PSM_UAPSD_AP_MISBEHAVING_NOTIFICATION = 0x78,
+	LTR_CONFIG = 0xee,

 	/* Thermal Throttling*/
 	REPLY_THERMAL_MNG_BACKOFF = 0x7e,


@@ -480,6 +480,15 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
 	/* Initialize tx backoffs to the minimal possible */
 	iwl_mvm_tt_tx_backoff(mvm, 0);

+	if (mvm->trans->ltr_enabled) {
+		struct iwl_ltr_config_cmd cmd = {
+			.flags = cpu_to_le32(LTR_CFG_FLAG_FEATURE_ENABLE),
+		};
+
+		WARN_ON(iwl_mvm_send_cmd_pdu(mvm, LTR_CONFIG, 0,
+					     sizeof(cmd), &cmd));
+	}
+
 	ret = iwl_mvm_power_update_device(mvm);
 	if (ret)
 		goto error;


@@ -526,7 +526,8 @@ static void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
 	}

 	if (IEEE80211_SKB_CB(skb)->hw_queue == IWL_MVM_OFFCHANNEL_QUEUE &&
-	    !test_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status))
+	    !test_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status) &&
+	    !test_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status))
 		goto drop;

 	/* treat non-bufferable MMPDUs as broadcast if sta is sleeping */
@@ -1734,6 +1735,13 @@ iwl_mvm_bss_info_changed_ap_ibss(struct iwl_mvm *mvm,
 	if (changes & BSS_CHANGED_BEACON &&
 	    iwl_mvm_mac_ctxt_beacon_changed(mvm, vif))
 		IWL_WARN(mvm, "Failed updating beacon data\n");
+
+	if (changes & BSS_CHANGED_TXPOWER) {
+		IWL_DEBUG_CALIB(mvm, "Changing TX Power to %d\n",
+				bss_conf->txpower);
+		iwl_mvm_set_tx_power(mvm, vif, bss_conf->txpower);
+	}
 }

 static void iwl_mvm_bss_info_changed(struct ieee80211_hw *hw,
@@ -2367,14 +2375,19 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
 	/* Set the node address */
 	memcpy(aux_roc_req.node_addr, vif->addr, ETH_ALEN);

+	lockdep_assert_held(&mvm->mutex);
+
+	spin_lock_bh(&mvm->time_event_lock);
+
+	if (WARN_ON(te_data->id == HOT_SPOT_CMD)) {
+		spin_unlock_bh(&mvm->time_event_lock);
+		return -EIO;
+	}
+
 	te_data->vif = vif;
 	te_data->duration = duration;
+	te_data->id = HOT_SPOT_CMD;

-	lockdep_assert_held(&mvm->mutex);
-
-	spin_lock_bh(&mvm->time_event_lock);
-	te_data->id = HOT_SPOT_CMD;
 	list_add_tail(&te_data->list, &mvm->time_event_list);
 	spin_unlock_bh(&mvm->time_event_lock);

 	/*
@@ -2430,22 +2443,23 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
 	IWL_DEBUG_MAC80211(mvm, "enter (%d, %d, %d)\n", channel->hw_value,
 			   duration, type);

+	mutex_lock(&mvm->mutex);
+
 	switch (vif->type) {
 	case NL80211_IFTYPE_STATION:
 		/* Use aux roc framework (HS20) */
 		ret = iwl_mvm_send_aux_roc_cmd(mvm, channel,
 					       vif, duration);
-		return ret;
+		goto out_unlock;
 	case NL80211_IFTYPE_P2P_DEVICE:
 		/* handle below */
 		break;
 	default:
 		IWL_ERR(mvm, "vif isn't P2P_DEVICE: %d\n", vif->type);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out_unlock;
 	}

-	mutex_lock(&mvm->mutex);
-
 	for (i = 0; i < NUM_PHY_CTX; i++) {
 		phy_ctxt = &mvm->phy_ctxts[i];
 		if (phy_ctxt->ref == 0 || mvmvif->phy_ctxt == phy_ctxt)


@@ -336,6 +336,7 @@ static const char *const iwl_mvm_cmd_strings[REPLY_MAX] = {
 	CMD(DTS_MEASUREMENT_NOTIFICATION),
 	CMD(REPLY_THERMAL_MNG_BACKOFF),
 	CMD(MAC_PM_POWER_TABLE),
+	CMD(LTR_CONFIG),
 	CMD(BT_COEX_CI),
 	CMD(BT_COEX_UPDATE_SW_BOOST),
 	CMD(BT_COEX_UPDATE_CORUN_LUT),


@@ -459,7 +459,8 @@ int iwl_mvm_scan_request(struct iwl_mvm *mvm,
 					basic_ssid ? 1 : 0);

 	cmd->tx_cmd.tx_flags = cpu_to_le32(TX_CMD_FLG_SEQ_CTL |
-					   TX_CMD_FLG_BT_DIS);
+					   3 << TX_CMD_FLG_BT_PRIO_POS);
+
 	cmd->tx_cmd.sta_id = mvm->aux_sta.sta_id;
 	cmd->tx_cmd.life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE);
 	cmd->tx_cmd.rate_n_flags =


@@ -305,8 +305,8 @@ static int iwl_mvm_aux_roc_te_handle_notif(struct iwl_mvm *mvm,
 		te_data->running = false;
 		te_data->vif = NULL;
 		te_data->uid = 0;
+		te_data->id = TE_MAX;
 	} else if (le32_to_cpu(notif->action) == TE_V2_NOTIF_HOST_EVENT_START) {
-		set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);
 		set_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
 		te_data->running = true;
 		ieee80211_ready_on_channel(mvm->hw); /* Start TE */


@@ -175,14 +175,10 @@ static void iwl_mvm_set_tx_cmd_rate(struct iwl_mvm *mvm,

 	/*
 	 * for data packets, rate info comes from the table inside the fw. This
-	 * table is controlled by LINK_QUALITY commands. Exclude ctrl port
-	 * frames like EAPOLs which should be treated as mgmt frames. This
-	 * avoids them being sent initially in high rates which increases the
-	 * chances for completion of the 4-Way handshake.
+	 * table is controlled by LINK_QUALITY commands
 	 */

-	if (ieee80211_is_data(fc) && sta &&
-	    !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO)) {
+	if (ieee80211_is_data(fc) && sta) {
 		tx_cmd->initial_rate_index = 0;
 		tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE);
 		return;


@@ -174,6 +174,7 @@ static void iwl_pcie_apm_config(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	u16 lctl;
+	u16 cap;

 	/*
 	 * HW bug W/A for instability in PCIe bus L0S->L1 transition.
@@ -184,16 +185,17 @@ static void iwl_pcie_apm_config(struct iwl_trans *trans)
 	 * power savings, even without L1.
 	 */
 	pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_LNKCTL, &lctl);
-	if (lctl & PCI_EXP_LNKCTL_ASPM_L1) {
-		/* L1-ASPM enabled; disable(!) L0S */
+	if (lctl & PCI_EXP_LNKCTL_ASPM_L1)
 		iwl_set_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED);
-		dev_info(trans->dev, "L1 Enabled; Disabling L0S\n");
-	} else {
-		/* L1-ASPM disabled; enable(!) L0S */
+	else
 		iwl_clear_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED);
-		dev_info(trans->dev, "L1 Disabled; Enabling L0S\n");
-	}
 	trans->pm_support = !(lctl & PCI_EXP_LNKCTL_ASPM_L0S);
+
+	pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_DEVCTL2, &cap);
+	trans->ltr_enabled = cap & PCI_EXP_DEVCTL2_LTR_EN;
+	dev_info(trans->dev, "L1 %sabled - LTR %sabled\n",
+		 (lctl & PCI_EXP_LNKCTL_ASPM_L1) ? "En" : "Dis",
+		 trans->ltr_enabled ? "En" : "Dis");
 }

 /*
@@ -428,7 +430,7 @@ static int iwl_pcie_apm_stop_master(struct iwl_trans *trans)
 	ret = iwl_poll_bit(trans, CSR_RESET,
 			   CSR_RESET_REG_FLAG_MASTER_DISABLED,
 			   CSR_RESET_REG_FLAG_MASTER_DISABLED, 100);
-	if (ret)
+	if (ret < 0)
 		IWL_WARN(trans, "Master Disable Timed Out, 100 usec\n");

 	IWL_DEBUG_INFO(trans, "stop master\n");
@@ -544,7 +546,7 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
 		msleep(25);
 	}

-	IWL_DEBUG_INFO(trans, "got NIC after %d iterations\n", iter);
+	IWL_ERR(trans, "Couldn't prepare the card\n");

 	return ret;
 }
@@ -1043,7 +1045,7 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
 			   CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
 			   CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY,
 			   25000);
-	if (ret) {
+	if (ret < 0) {
 		IWL_ERR(trans, "Failed to resume the device (mac ready)\n");
 		return ret;
 	}


@@ -196,6 +196,7 @@ mwifiex_del_rx_reorder_entry(struct mwifiex_private *priv,
 	mwifiex_11n_dispatch_pkt_until_start_win(priv, tbl, start_win);

 	del_timer_sync(&tbl->timer_context.timer);
+	tbl->timer_context.timer_is_set = false;

 	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	list_del(&tbl->list);
@@ -297,6 +298,7 @@ mwifiex_flush_data(unsigned long context)
 		(struct reorder_tmr_cnxt *) context;
 	int start_win, seq_num;

+	ctx->timer_is_set = false;
 	seq_num = mwifiex_11n_find_last_seq_num(ctx);

 	if (seq_num < 0)
@@ -385,6 +387,7 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,

 	new_node->timer_context.ptr = new_node;
 	new_node->timer_context.priv = priv;
+	new_node->timer_context.timer_is_set = false;

 	init_timer(&new_node->timer_context.timer);
 	new_node->timer_context.timer.function = mwifiex_flush_data;
@@ -399,6 +402,22 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
 	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 }

+static void
+mwifiex_11n_rxreorder_timer_restart(struct mwifiex_rx_reorder_tbl *tbl)
+{
+	u32 min_flush_time;
+
+	if (tbl->win_size >= MWIFIEX_BA_WIN_SIZE_32)
+		min_flush_time = MIN_FLUSH_TIMER_15_MS;
+	else
+		min_flush_time = MIN_FLUSH_TIMER_MS;
+
+	mod_timer(&tbl->timer_context.timer,
+		  jiffies + msecs_to_jiffies(min_flush_time * tbl->win_size));
+
+	tbl->timer_context.timer_is_set = true;
+}
+
 /*
  * This function prepares command for adding a BA request.
  *
@@ -523,31 +542,31 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 			       u8 *ta, u8 pkt_type, void *payload)
 {
 	struct mwifiex_rx_reorder_tbl *tbl;
-	int start_win, end_win, win_size;
+	int prev_start_win, start_win, end_win, win_size;
 	u16 pkt_index;
 	bool init_window_shift = false;
+	int ret = 0;

 	tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta);
 	if (!tbl) {
 		if (pkt_type != PKT_TYPE_BAR)
 			mwifiex_11n_dispatch_pkt(priv, payload);
-		return 0;
+		return ret;
 	}

 	if ((pkt_type == PKT_TYPE_AMSDU) && !tbl->amsdu) {
 		mwifiex_11n_dispatch_pkt(priv, payload);
-		return 0;
+		return ret;
 	}

 	start_win = tbl->start_win;
+	prev_start_win = start_win;
 	win_size = tbl->win_size;
 	end_win = ((start_win + win_size) - 1) & (MAX_TID_VALUE - 1);
 	if (tbl->flags & RXREOR_INIT_WINDOW_SHIFT) {
 		init_window_shift = true;
 		tbl->flags &= ~RXREOR_INIT_WINDOW_SHIFT;
 	}
-	mod_timer(&tbl->timer_context.timer,
-		  jiffies + msecs_to_jiffies(MIN_FLUSH_TIMER_MS * win_size));

 	if (tbl->flags & RXREOR_FORCE_NO_DROP) {
 		dev_dbg(priv->adapter->dev,
@@ -568,11 +587,14 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 		if ((start_win + TWOPOW11) > (MAX_TID_VALUE - 1)) {
 			if (seq_num >= ((start_win + TWOPOW11) &
 					(MAX_TID_VALUE - 1)) &&
-			    seq_num < start_win)
-				return -1;
+			    seq_num < start_win) {
+				ret = -1;
+				goto done;
+			}
 		} else if ((seq_num < start_win) ||
-			   (seq_num > (start_win + TWOPOW11))) {
-			return -1;
+			   (seq_num >= (start_win + TWOPOW11))) {
+			ret = -1;
+			goto done;
 		}
 	}

@@ -601,8 +623,10 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 		else
 			pkt_index = (seq_num+MAX_TID_VALUE) - start_win;

-		if (tbl->rx_reorder_ptr[pkt_index])
-			return -1;
+		if (tbl->rx_reorder_ptr[pkt_index]) {
+			ret = -1;
+			goto done;
+		}

 		tbl->rx_reorder_ptr[pkt_index] = payload;
 	}
@@ -613,7 +637,11 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 	 */
 	mwifiex_11n_scan_and_dispatch(priv, tbl);

-	return 0;
+done:
+	if (!tbl->timer_context.timer_is_set ||
+	    prev_start_win != tbl->start_win)
+		mwifiex_11n_rxreorder_timer_restart(tbl);
+	return ret;
 }

 /*


@@ -21,6 +21,8 @@
 #define _MWIFIEX_11N_RXREORDER_H_

 #define MIN_FLUSH_TIMER_MS		50
+#define MIN_FLUSH_TIMER_15_MS		15
+#define MWIFIEX_BA_WIN_SIZE_32		32

 #define PKT_TYPE_BAR 0xE7
 #define MAX_TID_VALUE			(2 << 11)
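
These constants feed the mwifiex_11n_rxreorder_timer_restart() helper added in the previous file: instead of rearming the flush timer on every received frame, the driver now restarts it only when no timer is pending or the reorder window actually advanced, and uses the shorter 15 ms period for large block-ack windows. A self-contained sketch of that conditional restart (the timer itself is reduced to a printout; constants mirror the patch):

    #include <stdio.h>

    #define MIN_FLUSH_TIMER_MS    50
    #define MIN_FLUSH_TIMER_15_MS 15
    #define BA_WIN_SIZE_32        32

    struct tbl {
            int start_win, win_size;
            int timer_is_set;
    };

    static void timer_restart(struct tbl *t)
    {
            unsigned int per_slot = t->win_size >= BA_WIN_SIZE_32 ?
                                    MIN_FLUSH_TIMER_15_MS : MIN_FLUSH_TIMER_MS;

            printf("rearm flush timer: %u ms\n", per_slot * t->win_size);
            t->timer_is_set = 1;
    }

    int main(void)
    {
            struct tbl t = { .start_win = 0, .win_size = 64 };
            int prev_start_win = t.start_win;

            t.start_win = 4;        /* pretend the window slid forward */
            if (!t.timer_is_set || prev_start_win != t.start_win)
                    timer_restart(&t);
            return 0;
    }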


@@ -592,6 +592,7 @@ struct reorder_tmr_cnxt {
 	struct timer_list timer;
 	struct mwifiex_rx_reorder_tbl *ptr;
 	struct mwifiex_private *priv;
+	u8 timer_is_set;
 };

 struct mwifiex_rx_reorder_tbl {


@@ -1111,6 +1111,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
 	/* Ovislink */
 	{ USB_DEVICE(0x1b75, 0x3071) },
 	{ USB_DEVICE(0x1b75, 0x3072) },
+	{ USB_DEVICE(0x1b75, 0xa200) },
 	/* Para */
 	{ USB_DEVICE(0x20b8, 0x8888) },
 	/* Pegatron */


@@ -467,7 +467,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
 		    rtl_easy_concurrent_retrytimer_callback, (unsigned long)hw);
 	/* <2> work queue */
 	rtlpriv->works.hw = hw;
-	rtlpriv->works.rtl_wq = alloc_workqueue(rtlpriv->cfg->name, 0, 0);
+	rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
 	INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq,
 			  (void *)rtl_watchdog_wq_callback);
 	INIT_DELAYED_WORK(&rtlpriv->works.ips_nic_off_wq,


@@ -1828,3 +1828,9 @@ const struct ieee80211_ops rtl_ops = {
 	.flush = rtl_op_flush,
 };
 EXPORT_SYMBOL_GPL(rtl_ops);
+
+bool rtl_btc_status_false(void)
+{
+	return false;
+}
+EXPORT_SYMBOL_GPL(rtl_btc_status_false);


@@ -42,5 +42,6 @@ void rtl_rfreg_delay(struct ieee80211_hw *hw, enum radio_path rfpath, u32 addr,
 		     u32 mask, u32 data);
 void rtl_bb_delay(struct ieee80211_hw *hw, u32 addr, u32 data);
 bool rtl_cmd_send_packet(struct ieee80211_hw *hw, struct sk_buff *skb);
+bool rtl_btc_status_false(void);

 #endif


@@ -1796,7 +1796,8 @@ static int rtl_pci_start(struct ieee80211_hw *hw)
 	rtl_pci_reset_trx_ring(hw);

 	rtlpci->driver_is_goingto_unload = false;
-	if (rtlpriv->cfg->ops->get_btc_status()) {
+	if (rtlpriv->cfg->ops->get_btc_status &&
+	    rtlpriv->cfg->ops->get_btc_status()) {
 		rtlpriv->btcoexist.btc_ops->btc_init_variables(rtlpriv);
 		rtlpriv->btcoexist.btc_ops->btc_init_hal_vars(rtlpriv);
 	}


@@ -656,7 +656,8 @@ static u8 reserved_page_packet[TOTAL_RESERVED_PKT_LEN] = {
 	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 };

-void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
+	bool (*cmd_send_packet)(struct ieee80211_hw *, struct sk_buff *))
 {
 	struct rtl_priv *rtlpriv = rtl_priv(hw);
 	struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
@@ -722,7 +723,10 @@ void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
 	memcpy((u8 *)skb_put(skb, totalpacketlen),
 	       &reserved_page_packet, totalpacketlen);

-	rtstatus = rtl_cmd_send_packet(hw, skb);
+	if (cmd_send_packet)
+		rtstatus = cmd_send_packet(hw, skb);
+	else
+		rtstatus = rtl_cmd_send_packet(hw, skb);

 	if (rtstatus)
 		b_dlok = true;
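
Turning the send routine into a function-pointer parameter lets the shared rtl8192c code keep its default (PCI) path when the caller passes NULL, while the USB variant injects its own sender. A standalone sketch of that callback-with-fallback shape; both senders here are stand-ins for the driver's real routines:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    static bool pci_send(void) { printf("PCI send\n"); return true; }
    static bool usb_send(void) { printf("USB send\n"); return true; }

    static void set_fw_rsvdpagepkt(bool (*cmd_send_packet)(void))
    {
            bool ok;

            if (cmd_send_packet)
                    ok = cmd_send_packet();  /* caller-supplied transport */
            else
                    ok = pci_send();         /* default path */
            printf("download %s\n", ok ? "ok" : "failed");
    }

    int main(void)
    {
            set_fw_rsvdpagepkt(NULL);       /* as rtl8192ce now calls it */
            set_fw_rsvdpagepkt(usb_send);   /* as rtl8192cu now calls it */
            return 0;
    }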


@@ -109,7 +109,9 @@ void rtl92c_fill_h2c_cmd(struct ieee80211_hw *hw, u8 element_id,
 			 u32 cmd_len, u8 *p_cmdbuffer);
 void rtl92c_firmware_selfreset(struct ieee80211_hw *hw);
 void rtl92c_set_fw_pwrmode_cmd(struct ieee80211_hw *hw, u8 mode);
-void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished);
+void rtl92c_set_fw_rsvdpagepkt
+	(struct ieee80211_hw *hw,
+	 bool (*cmd_send_packet)(struct ieee80211_hw *, struct sk_buff *));
 void rtl92c_set_fw_joinbss_report_cmd(struct ieee80211_hw *hw, u8 mstatus);
 void usb_writeN_async(struct rtl_priv *rtlpriv, u32 addr, void *data, u16 len);
 void rtl92c_set_p2p_ps_offload_cmd(struct ieee80211_hw *hw, u8 p2p_ps_state);


@@ -114,6 +114,8 @@
 	LE_BITS_TO_4BYTE(((__pcmdfbhdr) + 4), 16, 4)
 #define GET_C2H_CMD_FEEDBACK_CCX_SEQ(__pcmdfbhdr)	\
 	LE_BITS_TO_4BYTE(((__pcmdfbhdr) + 4), 20, 12)
+#define GET_RX_STATUS_DESC_BUFF_ADDR(__pdesc)		\
+	SHIFT_AND_MASK_LE(__pdesc + 24, 0, 32)

 #define CHIP_VER_B			BIT(4)
 #define CHIP_BONDING_IDENTIFIER(_value) (((_value) >> 22) & 0x3)


@@ -459,7 +459,7 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 				rtl_write_byte(rtlpriv, REG_FWHW_TXQ_CTRL + 2,
 					       tmp_reg422 & (~BIT(6)));

-				rtl92c_set_fw_rsvdpagepkt(hw, 0);
+				rtl92c_set_fw_rsvdpagepkt(hw, NULL);

 				_rtl92ce_set_bcn_ctrl_reg(hw, BIT(3), 0);
 				_rtl92ce_set_bcn_ctrl_reg(hw, 0, BIT(4));


@@ -244,6 +244,7 @@ static struct rtl_hal_ops rtl8192ce_hal_ops = {
 	.phy_lc_calibrate = _rtl92ce_phy_lc_calibrate,
 	.phy_set_bw_mode_callback = rtl92ce_phy_set_bw_mode_callback,
 	.dm_dynamic_txpower = rtl92ce_dm_dynamic_txpower,
+	.get_btc_status = rtl_btc_status_false,
 };

 static struct rtl_mod_params rtl92ce_mod_params = {


@@ -728,6 +728,9 @@ u32 rtl92ce_get_desc(u8 *p_desc, bool istx, u8 desc_name)
 		case HW_DESC_RXPKT_LEN:
 			ret = GET_RX_DESC_PKT_LEN(pdesc);
 			break;
+		case HW_DESC_RXBUFF_ADDR:
+			ret = GET_RX_STATUS_DESC_BUFF_ADDR(pdesc);
+			break;
 		default:
 			RT_ASSERT(false, "ERR rxdesc :%d not process\n",
 				  desc_name);


@@ -1592,6 +1592,20 @@ void rtl92cu_get_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 	}
 }

+bool usb_cmd_send_packet(struct ieee80211_hw *hw, struct sk_buff *skb)
+{
+	/* Currently nothing happens here.
+	 * Traffic stops after some seconds in WPA2 802.11n mode.
+	 * Maybe because rtl8192cu chip should be set from here?
+	 * If I understand correctly, the realtek vendor driver sends some urbs
+	 * if its "here".
+	 *
+	 * This is maybe necessary:
+	 * rtlpriv->cfg->ops->fill_tx_cmddesc(hw, buffer, 1, 1, skb);
+	 */
+	return true;
+}
+
 void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 {
 	struct rtl_priv *rtlpriv = rtl_priv(hw);
@@ -1939,7 +1953,8 @@ void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 					recover = true;
 				rtl_write_byte(rtlpriv, REG_FWHW_TXQ_CTRL + 2,
 					       tmp_reg422 & (~BIT(6)));
-				rtl92c_set_fw_rsvdpagepkt(hw, 0);
+				rtl92c_set_fw_rsvdpagepkt(hw,
+							  &usb_cmd_send_packet);
 				_rtl92cu_set_bcn_ctrl_reg(hw, BIT(3), 0);
 				_rtl92cu_set_bcn_ctrl_reg(hw, 0, BIT(4));
 				if (recover)


@@ -104,7 +104,6 @@ bool rtl92cu_gpio_radio_on_off_checking(struct ieee80211_hw *hw, u8 * valid);
 void rtl92cu_set_check_bssid(struct ieee80211_hw *hw, bool check_bssid);
 int rtl92c_download_fw(struct ieee80211_hw *hw);
 void rtl92c_set_fw_pwrmode_cmd(struct ieee80211_hw *hw, u8 mode);
-void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool dl_finished);
 void rtl92c_set_fw_joinbss_report_cmd(struct ieee80211_hw *hw, u8 mstatus);
 void rtl92c_fill_h2c_cmd(struct ieee80211_hw *hw,
 			 u8 element_id, u32 cmd_len, u8 *p_cmdbuffer);


@@ -101,6 +101,12 @@ static void rtl92cu_deinit_sw_vars(struct ieee80211_hw *hw)
 	}
 }

+/* get bt coexist status */
+static bool rtl92cu_get_btc_status(void)
+{
+	return false;
+}
+
 static struct rtl_hal_ops rtl8192cu_hal_ops = {
 	.init_sw_vars = rtl92cu_init_sw_vars,
 	.deinit_sw_vars = rtl92cu_deinit_sw_vars,
@@ -148,6 +154,7 @@ static struct rtl_hal_ops rtl8192cu_hal_ops = {
 	.phy_set_bw_mode_callback = rtl92cu_phy_set_bw_mode_callback,
 	.dm_dynamic_txpower = rtl92cu_dm_dynamic_txpower,
 	.fill_h2c_cmd = rtl92c_fill_h2c_cmd,
+	.get_btc_status = rtl92cu_get_btc_status,
 };

 static struct rtl_mod_params rtl92cu_mod_params = {


@@ -251,6 +251,7 @@ static struct rtl_hal_ops rtl8192de_hal_ops = {
 	.get_rfreg = rtl92d_phy_query_rf_reg,
 	.set_rfreg = rtl92d_phy_set_rf_reg,
 	.linked_set_reg = rtl92d_linked_set_reg,
+	.get_btc_status = rtl_btc_status_false,
 };

 static struct rtl_mod_params rtl92de_mod_params = {


@@ -362,7 +362,7 @@ void rtl92ee_get_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 		}
 		break;
 	default:
-		RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+		RT_TRACE(rtlpriv, COMP_ERR, DBG_DMESG,
 			 "switch case not process %x\n", variable);
 		break;
 	}
@@ -591,7 +591,7 @@ void rtl92ee_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 			acm_ctrl &= (~ACMHW_BEQEN);
 			break;
 		default:
-			RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+			RT_TRACE(rtlpriv, COMP_ERR, DBG_DMESG,
 				 "switch case not process\n");
 			break;
 		}
@@ -710,7 +710,7 @@ void rtl92ee_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
 		}
 		break;
 	default:
-		RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+		RT_TRACE(rtlpriv, COMP_ERR, DBG_DMESG,
 			 "switch case not process %x\n", variable);
 		break;
 	}
@@ -2424,7 +2424,7 @@ void rtl92ee_set_key(struct ieee80211_hw *hw, u32 key_index,
 			enc_algo = CAM_AES;
 			break;
 		default:
-			RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+			RT_TRACE(rtlpriv, COMP_ERR, DBG_DMESG,
 				 "switch case not process\n");
 			enc_algo = CAM_TKIP;
 			break;


@@ -446,6 +446,8 @@
 /* DWORD 6 */
 #define SET_RX_STATUS__DESC_BUFF_ADDR(__pdesc, __val)	\
 	SET_BITS_OFFSET_LE(__pdesc + 24, 0, 32, __val)
+#define GET_RX_STATUS_DESC_BUFF_ADDR(__pdesc)		\
+	SHIFT_AND_MASK_LE(__pdesc + 24, 0, 32)

 #define SE_RX_HAL_IS_CCK_RATE(_pdesc)\
 	(GET_RX_STATUS_DESC_RX_MCS(_pdesc) == DESC92_RATE1M ||	\


@@ -87,11 +87,8 @@ static void rtl92s_init_aspm_vars(struct ieee80211_hw *hw)
 static void rtl92se_fw_cb(const struct firmware *firmware, void *context)
 {
 	struct ieee80211_hw *hw = context;
-	struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
 	struct rtl_priv *rtlpriv = rtl_priv(hw);
-	struct rtl_pci *rtlpci = rtl_pcidev(pcipriv);
 	struct rt_firmware *pfirmware = NULL;
-	int err;

 	RT_TRACE(rtlpriv, COMP_ERR, DBG_LOUD,
 		 "Firmware callback routine entered!\n");
@@ -112,20 +109,6 @@ static void rtl92se_fw_cb(const struct firmware *firmware, void *context)
 	memcpy(pfirmware->sz_fw_tmpbuffer, firmware->data, firmware->size);
 	pfirmware->sz_fw_tmpbufferlen = firmware->size;
 	release_firmware(firmware);
-
-	err = ieee80211_register_hw(hw);
-	if (err) {
-		RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-			 "Can't register mac80211 hw\n");
-		return;
-	} else {
-		rtlpriv->mac80211.mac80211_registered = 1;
-	}
-	rtlpci->irq_alloc = 1;
-	set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
-
-	/*init rfkill */
-	rtl_init_rfkill(hw);
 }

 static int rtl92s_init_sw_vars(struct ieee80211_hw *hw)
@@ -226,8 +209,8 @@ static int rtl92s_init_sw_vars(struct ieee80211_hw *hw)
 	if (!rtlpriv->rtlhal.pfirmware)
 		return 1;

-	rtlpriv->max_fw_size = RTL8190_MAX_RAW_FIRMWARE_CODE_SIZE;
-
+	rtlpriv->max_fw_size = RTL8190_MAX_FIRMWARE_CODE_SIZE*2 +
+			       sizeof(struct fw_hdr);
 	pr_info("Driver for Realtek RTL8192SE/RTL8191SE\n"
 		"Loading firmware %s\n", rtlpriv->cfg->fw_name);
 	/* request fw */
@@ -294,6 +277,7 @@ static struct rtl_hal_ops rtl8192se_hal_ops = {
 	.set_bbreg = rtl92s_phy_set_bb_reg,
 	.get_rfreg = rtl92s_phy_query_rf_reg,
 	.set_rfreg = rtl92s_phy_set_rf_reg,
+	.get_btc_status = rtl_btc_status_false,
 };

 static struct rtl_mod_params rtl92se_mod_params = {


@@ -640,6 +640,9 @@ u32 rtl92se_get_desc(u8 *desc, bool istx, u8 desc_name)
 		case HW_DESC_RXPKT_LEN:
 			ret = GET_RX_STATUS_DESC_PKT_LEN(desc);
 			break;
+		case HW_DESC_RXBUFF_ADDR:
+			ret = GET_RX_STATUS_DESC_BUFF_ADDR(desc);
+			break;
 		default:
 			RT_ASSERT(false, "ERR rxdesc :%d not process\n",
 				  desc_name);


@@ -1889,15 +1889,18 @@ static void _rtl8821ae_store_tx_power_by_rate(struct ieee80211_hw *hw,
 	struct rtl_phy *rtlphy = &rtlpriv->phy;
 	u8 rate_section = _rtl8821ae_get_rate_section_index(regaddr);

-	if (band != BAND_ON_2_4G && band != BAND_ON_5G)
+	if (band != BAND_ON_2_4G && band != BAND_ON_5G) {
 		RT_TRACE(rtlpriv, COMP_INIT, DBG_WARNING, "Invalid Band %d\n", band);
-
-	if (rfpath >= MAX_RF_PATH)
+		band = BAND_ON_2_4G;
+	}
+	if (rfpath >= MAX_RF_PATH) {
 		RT_TRACE(rtlpriv, COMP_INIT, DBG_WARNING, "Invalid RfPath %d\n", rfpath);
-
-	if (txnum >= MAX_RF_PATH)
+		rfpath = MAX_RF_PATH - 1;
+	}
+	if (txnum >= MAX_RF_PATH) {
 		RT_TRACE(rtlpriv, COMP_INIT, DBG_WARNING, "Invalid TxNum %d\n", txnum);
+		txnum = MAX_RF_PATH - 1;
+	}

 	rtlphy->tx_power_by_rate_offset[band][rfpath][txnum][rate_section] = data;
 	RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
 		 "TxPwrByRateOffset[Band %d][RfPath %d][TxNum %d][RateSection %d] = 0x%x\n",

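Before this change the warnings above fired but the out-of-range band, rfpath, or txnum was still used to index tx_power_by_rate_offset, corrupting memory past the array. The fix clamps each index to a safe value before the write. A standalone sketch of the clamp-before-index pattern:

    #include <stdio.h>

    #define MAX_RF_PATH 4

    static int table[MAX_RF_PATH];

    static void store(int rfpath, int data)
    {
            if (rfpath >= MAX_RF_PATH) {
                    fprintf(stderr, "Invalid RfPath %d\n", rfpath);
                    rfpath = MAX_RF_PATH - 1;   /* clamp, don't overflow */
            }
            table[rfpath] = data;
    }

    int main(void)
    {
            store(7, 0x1234);   /* would have written past the array */
            printf("table[%d] = 0x%x\n", MAX_RF_PATH - 1,
                   table[MAX_RF_PATH - 1]);
            return 0;
    }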

@@ -1117,7 +1117,18 @@ int rtl_usb_probe(struct usb_interface *intf,
 	}
 	rtlpriv->cfg->ops->init_sw_leds(hw);
 
+	err = ieee80211_register_hw(hw);
+	if (err) {
+		RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+			 "Can't register mac80211 hw.\n");
+		err = -ENODEV;
+		goto error_out;
+	}
+	rtlpriv->mac80211.mac80211_registered = 1;
+
+	set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
 	return 0;
+
 error_out:
 	rtl_deinit_core(hw);
 	_rtl_usb_io_handler_release(hw);
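
The probe hunk above registers the hw as the final step and unwinds through an error label on failure. A compilable sketch of that goto-unwind shape (all functions are hypothetical stand-ins):

#include <stdio.h>

static int setup_io(void)       { return 0; }
static void teardown_io(void)   { puts("io released"); }
static int setup_core(void)     { return 0; }
static void teardown_core(void) { puts("core deinited"); }
static int register_hw(void)    { return -1; /* simulate failure */ }

static int probe(void)
{
	int err;

	err = setup_io();
	if (err)
		return err;
	err = setup_core();
	if (err)
		goto err_io;

	/* Register last, once everything the callbacks rely on exists. */
	err = register_hw();
	if (err)
		goto err_core;

	return 0;

err_core:			/* undo in reverse order of setup */
	teardown_core();
err_io:
	teardown_io();
	return err;
}

int main(void)
{
	printf("probe: %d\n", probe());
	return 0;
}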

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h

@@ -176,10 +176,11 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
-	RING_IDX rx_last_skb_slots;
-	unsigned long status;
-	struct timer_list rx_stalled;
+	unsigned int rx_queue_max;
+	unsigned int rx_queue_len;
+	unsigned long last_rx_time;
+	bool stalled;
 
 	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
@@ -199,18 +200,14 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 	struct xenvif_stats stats;
 };
 
+/* Maximum number of Rx slots a to-guest packet may use, including the
+ * slot needed for GSO meta-data.
+ */
+#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
+
 enum state_bit_shift {
 	/* This bit marks that the vif is connected */
 	VIF_STATUS_CONNECTED,
-	/* This bit signals the RX thread that queuing was stopped (in
-	 * start_xmit), and either the timer fired or an RX interrupt came
-	 */
-	QUEUE_STATUS_RX_PURGE_EVENT,
-	/* This bit tells the interrupt handler that this queue was the reason
-	 * for the carrier off, so it should kick the thread. Only queues which
-	 * brought it down can turn on the carrier.
-	 */
-	QUEUE_STATUS_RX_STALLED
 };
 
 struct xenvif {
@@ -228,9 +225,6 @@ struct xenvif {
 	u8 ip_csum:1;
 	u8 ipv6_csum:1;
 
-	/* Internal feature information. */
-	u8 can_queue:1; /* can queue packets for receiver? */
-
 	/* Is this interface disabled? True when backend discovers
 	 * frontend is rogue.
 	 */
@@ -240,6 +234,9 @@ struct xenvif {
 	/* Queues */
 	struct xenvif_queue *queues;
 	unsigned int num_queues; /* active queues, resource allocated */
+	unsigned int stalled_queues;
+
+	spinlock_t lock;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *xenvif_dbg_root;
@@ -249,6 +246,14 @@ struct xenvif {
 	struct net_device *dev;
 };
 
+struct xenvif_rx_cb {
+	unsigned long expires;
+	int meta_slots_used;
+	bool full_coalesce;
+};
+
+#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)
+
 static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
 {
 	return to_xenbus_device(vif->dev->dev.parent);
@@ -272,8 +277,6 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif_queue *queue);
-
 int xenvif_queue_stopped(struct xenvif_queue *queue);
 void xenvif_wake_queue(struct xenvif_queue *queue);
 
@@ -296,6 +299,8 @@ void xenvif_kick_thread(struct xenvif_queue *queue);
 
 int xenvif_dealloc_kthread(void *data);
 
+void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
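
struct xenvif_rx_cb above is overlaid on the skb's control block (skb->cb), a small scratch area each layer may reuse while it owns the skb. A userspace sketch of the pattern, including the compile-time size check that keeps the overlay from overflowing the buffer (sizes hypothetical):

#include <stdio.h>
#include <string.h>

struct fake_skb {
	char cb[48];		/* scratch area, like skb->cb */
	unsigned int len;
};

struct rx_cb {
	unsigned long expires;
	int meta_slots_used;
};

/* Refuse to compile if the overlay outgrows the scratch area. */
_Static_assert(sizeof(struct rx_cb) <= sizeof(((struct fake_skb *)0)->cb),
	       "rx_cb too large for cb[]");

#define RX_CB(skb) ((struct rx_cb *)(skb)->cb)

int main(void)
{
	struct fake_skb skb;

	memset(&skb, 0, sizeof(skb));
	RX_CB(&skb)->expires = 1000;	/* stash per-skb state in cb */
	printf("expires=%lu\n", RX_CB(&skb)->expires);
	return 0;
}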

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c

@@ -43,6 +43,9 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+/* Number of bytes allowed on the internal guest Rx queue. */
+#define XENVIF_RX_QUEUE_BYTES (XEN_NETIF_RX_RING_SIZE/2 * PAGE_SIZE)
+
 /* This function is used to set SKBTX_DEV_ZEROCOPY as well as
  * increasing the inflight counter. We need to increase the inflight
  * counter because core driver calls into xenvif_zerocopy_callback
@@ -60,20 +63,11 @@ void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
 	atomic_dec(&queue->inflight_packets);
 }
 
-static inline void xenvif_stop_queue(struct xenvif_queue *queue)
-{
-	struct net_device *dev = queue->vif->dev;
-
-	if (!queue->vif->can_queue)
-		return;
-
-	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
-}
-
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) &&
-		test_bit(VIF_STATUS_CONNECTED, &vif->status);
+		test_bit(VIF_STATUS_CONNECTED, &vif->status) &&
+		!vif->disabled;
 }
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
@@ -114,16 +108,7 @@ int xenvif_poll(struct napi_struct *napi, int budget)
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif_queue *queue = dev_id;
-	struct netdev_queue *net_queue =
-		netdev_get_tx_queue(queue->vif->dev, queue->id);
 
-	/* QUEUE_STATUS_RX_PURGE_EVENT is only set if either QDisc was off OR
-	 * the carrier went down and this queue was previously blocked
-	 */
-	if (unlikely(netif_tx_queue_stopped(net_queue) ||
-		     (!netif_carrier_ok(queue->vif->dev) &&
-		      test_bit(QUEUE_STATUS_RX_STALLED, &queue->status))))
-		set_bit(QUEUE_STATUS_RX_PURGE_EVENT, &queue->status);
 	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
@@ -151,24 +136,13 @@ void xenvif_wake_queue(struct xenvif_queue *queue)
 	netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
 }
 
-/* Callback to wake the queue's thread and turn the carrier off on timeout */
-static void xenvif_rx_stalled(unsigned long data)
-{
-	struct xenvif_queue *queue = (struct xenvif_queue *)data;
-
-	if (xenvif_queue_stopped(queue)) {
-		set_bit(QUEUE_STATUS_RX_PURGE_EVENT, &queue->status);
-		xenvif_kick_thread(queue);
-	}
-}
-
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
 	struct xenvif_queue *queue = NULL;
 	unsigned int num_queues = vif->num_queues;
 	u16 index;
-	int min_slots_needed;
+	struct xenvif_rx_cb *cb;
 
 	BUG_ON(skb->dev != dev);
 
@@ -191,30 +165,10 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	    !xenvif_schedulable(vif))
 		goto drop;
 
-	/* At best we'll need one slot for the header and one for each
-	 * frag.
-	 */
-	min_slots_needed = 1 + skb_shinfo(skb)->nr_frags;
-
-	/* If the skb is GSO then we'll also need an extra slot for the
-	 * metadata.
-	 */
-	if (skb_is_gso(skb))
-		min_slots_needed++;
-
-	/* If the skb can't possibly fit in the remaining slots
-	 * then turn off the queue to give the ring a chance to
-	 * drain.
-	 */
-	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed)) {
-		queue->rx_stalled.function = xenvif_rx_stalled;
-		queue->rx_stalled.data = (unsigned long)queue;
-		xenvif_stop_queue(queue);
-		mod_timer(&queue->rx_stalled,
-			  jiffies + rx_drain_timeout_jiffies);
-	}
-
-	skb_queue_tail(&queue->rx_queue, skb);
+	cb = XENVIF_RX_CB(skb);
+	cb->expires = jiffies + rx_drain_timeout_jiffies;
+
+	xenvif_rx_queue_tail(queue, skb);
 	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
@@ -465,6 +419,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->queues = NULL;
 	vif->num_queues = 0;
 
+	spin_lock_init(&vif->lock);
+
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
@@ -508,6 +464,8 @@ int xenvif_init_queue(struct xenvif_queue *queue)
 	init_timer(&queue->credit_timeout);
 	queue->credit_window_start = get_jiffies_64();
 
+	queue->rx_queue_max = XENVIF_RX_QUEUE_BYTES;
+
 	skb_queue_head_init(&queue->rx_queue);
 	skb_queue_head_init(&queue->tx_queue);
 
@@ -539,8 +497,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
 		queue->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
 	}
 
-	init_timer(&queue->rx_stalled);
-
 	return 0;
 }
 
@@ -551,7 +507,6 @@ void xenvif_carrier_on(struct xenvif *vif)
 		dev_set_mtu(vif->dev, ETH_DATA_LEN);
 	netdev_update_features(vif->dev);
 	set_bit(VIF_STATUS_CONNECTED, &vif->status);
-	netif_carrier_on(vif->dev);
 	if (netif_running(vif->dev))
 		xenvif_up(vif);
 	rtnl_unlock();
@@ -611,6 +566,8 @@ int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		disable_irq(queue->rx_irq);
 	}
 
+	queue->stalled = true;
+
 	task = kthread_create(xenvif_kthread_guest_rx,
 			      (void *)queue, "%s-guest-rx", queue->name);
 	if (IS_ERR(task)) {
@@ -674,7 +631,6 @@ void xenvif_disconnect(struct xenvif *vif)
 		netif_napi_del(&queue->napi);
 
 		if (queue->task) {
-			del_timer_sync(&queue->rx_stalled);
 			kthread_stop(queue->task);
 			queue->task = NULL;
 		}
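
With the changes above, xenvif_start_xmit no longer juggles timers: it stamps each packet with a deadline and queues it, and expired packets are dropped later from the head of the FIFO. Because the queue is FIFO, deadlines are non-decreasing, so only the head ever needs checking. A runnable sketch (a plain ring buffer stands in for the skb list; names hypothetical):

#include <stdio.h>
#include <time.h>

#define QLEN 8
#define DRAIN_TIMEOUT 10	/* seconds, like rx_drain_timeout */

struct pkt {
	time_t expires;
	int id;
};

static struct pkt queue[QLEN];
static int head, tail;

static void enqueue(int id)
{
	queue[tail % QLEN].expires = time(NULL) + DRAIN_TIMEOUT;
	queue[tail % QLEN].id = id;
	tail++;
}

static void drop_expired(void)
{
	time_t now = time(NULL);

	/* Oldest packet is at the head; stop at the first live one. */
	while (head != tail && queue[head % QLEN].expires <= now) {
		printf("dropping expired pkt %d\n", queue[head % QLEN].id);
		head++;
	}
}

int main(void)
{
	enqueue(1);
	enqueue(2);
	drop_expired();		/* nothing has expired yet */
	printf("queued: %d\n", tail - head);
	return 0;
}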

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c

@@ -55,13 +55,20 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
-/* When guest ring is filled up, qdisc queues the packets for us, but we have
- * to timeout them, otherwise other guests' packets can get stuck there
+/* The time that packets can stay on the guest Rx internal queue
+ * before they are dropped.
  */
 unsigned int rx_drain_timeout_msecs = 10000;
 module_param(rx_drain_timeout_msecs, uint, 0444);
 unsigned int rx_drain_timeout_jiffies;
 
+/* The length of time before the frontend is considered unresponsive
+ * because it isn't providing Rx slots.
+ */
+static unsigned int rx_stall_timeout_msecs = 60000;
+module_param(rx_stall_timeout_msecs, uint, 0444);
+static unsigned int rx_stall_timeout_jiffies;
+
 unsigned int xenvif_max_queues;
 module_param_named(max_queues, xenvif_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
@@ -83,7 +90,6 @@ static void make_tx_response(struct xenvif_queue *queue,
 			     s8       st);
 
 static inline int tx_work_todo(struct xenvif_queue *queue);
-static inline int rx_work_todo(struct xenvif_queue *queue);
 
 static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 						      u16      id,
@@ -163,6 +169,69 @@ bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 	return false;
 }
 
+void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&queue->rx_queue.lock, flags);
+
+	__skb_queue_tail(&queue->rx_queue, skb);
+
+	queue->rx_queue_len += skb->len;
+	if (queue->rx_queue_len > queue->rx_queue_max)
+		netif_tx_stop_queue(netdev_get_tx_queue(queue->vif->dev, queue->id));
+
+	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+}
+
+static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
+{
+	struct sk_buff *skb;
+
+	spin_lock_irq(&queue->rx_queue.lock);
+
+	skb = __skb_dequeue(&queue->rx_queue);
+	if (skb)
+		queue->rx_queue_len -= skb->len;
+
+	spin_unlock_irq(&queue->rx_queue.lock);
+
+	return skb;
+}
+
+static void xenvif_rx_queue_maybe_wake(struct xenvif_queue *queue)
+{
+	spin_lock_irq(&queue->rx_queue.lock);
+
+	if (queue->rx_queue_len < queue->rx_queue_max)
+		netif_tx_wake_queue(netdev_get_tx_queue(queue->vif->dev, queue->id));
+
+	spin_unlock_irq(&queue->rx_queue.lock);
+}
+
+static void xenvif_rx_queue_purge(struct xenvif_queue *queue)
+{
+	struct sk_buff *skb;
+
+	while ((skb = xenvif_rx_dequeue(queue)) != NULL)
+		kfree_skb(skb);
+}
+
+static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
+{
+	struct sk_buff *skb;
+
+	for(;;) {
+		skb = skb_peek(&queue->rx_queue);
+		if (!skb)
+			break;
+		if (time_before(jiffies, XENVIF_RX_CB(skb)->expires))
+			break;
+		xenvif_rx_dequeue(queue);
+		kfree_skb(skb);
+	}
+}
+
 /*
  * Returns true if we should start a new receive buffer instead of
  * adding 'size' bytes to a buffer which currently contains 'offset'
@@ -237,13 +306,6 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 	return meta;
 }
 
-struct xenvif_rx_cb {
-	int meta_slots_used;
-	bool full_coalesce;
-};
-
-#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)
-
 /*
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
@@ -587,12 +649,15 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
+	while (xenvif_rx_ring_slots_available(queue, XEN_NETBK_RX_SLOTS_MAX)
+	       && (skb = xenvif_rx_dequeue(queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		RING_IDX old_req_cons;
 		RING_IDX ring_slots_used;
 		int i;
 
+		queue->last_rx_time = jiffies;
+
 		/* We need a cheap worse case estimate for the number of
 		 * slots we'll use.
 		 */
@@ -634,15 +699,6 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 			     skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6))
 			max_slots_needed++;
 
-		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
-			skb_queue_head(&queue->rx_queue, skb);
-			need_to_notify = true;
-			queue->rx_last_skb_slots = max_slots_needed;
-			break;
-		} else
-			queue->rx_last_skb_slots = 0;
-
 		old_req_cons = queue->rx.req_cons;
 		XENVIF_RX_CB(skb)->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		ring_slots_used = queue->rx.req_cons - old_req_cons;
@@ -1869,12 +1925,6 @@ void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
 	}
 }
 
-static inline int rx_work_todo(struct xenvif_queue *queue)
-{
-	return (!skb_queue_empty(&queue->rx_queue) &&
-		xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots));
-}
-
 static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)))
@@ -1931,92 +1981,121 @@ err:
 	return err;
 }
 
-static void xenvif_start_queue(struct xenvif_queue *queue)
-{
-	if (xenvif_schedulable(queue->vif))
-		xenvif_wake_queue(queue);
-}
-
-/* Only called from the queue's thread, it handles the situation when the guest
- * doesn't post enough requests on the receiving ring.
- * First xenvif_start_xmit disables QDisc and start a timer, and then either the
- * timer fires, or the guest send an interrupt after posting new request. If it
- * is the timer, the carrier is turned off here.
- * */
-static void xenvif_rx_purge_event(struct xenvif_queue *queue)
-{
-	/* Either the last unsuccesful skb or at least 1 slot should fit */
-	int needed = queue->rx_last_skb_slots ?
-		     queue->rx_last_skb_slots : 1;
-
-	/* It is assumed that if the guest post new slots after this, the RX
-	 * interrupt will set the QUEUE_STATUS_RX_PURGE_EVENT bit and wake up
-	 * the thread again
-	 */
-	set_bit(QUEUE_STATUS_RX_STALLED, &queue->status);
-	if (!xenvif_rx_ring_slots_available(queue, needed)) {
-		rtnl_lock();
-		if (netif_carrier_ok(queue->vif->dev)) {
-			/* Timer fired and there are still no slots. Turn off
-			 * everything except the interrupts
-			 */
-			netif_carrier_off(queue->vif->dev);
-			skb_queue_purge(&queue->rx_queue);
-			queue->rx_last_skb_slots = 0;
-			if (net_ratelimit())
-				netdev_err(queue->vif->dev, "Carrier off due to lack of guest response on queue %d\n", queue->id);
-		} else {
-			/* Probably an another queue already turned the carrier
-			 * off, make sure nothing is stucked in the internal
-			 * queue of this queue
-			 */
-			skb_queue_purge(&queue->rx_queue);
-			queue->rx_last_skb_slots = 0;
-		}
-		rtnl_unlock();
-	} else if (!netif_carrier_ok(queue->vif->dev)) {
-		unsigned int num_queues = queue->vif->num_queues;
-		unsigned int i;
-		/* The carrier was down, but an interrupt kicked
-		 * the thread again after new requests were
-		 * posted
-		 */
-		clear_bit(QUEUE_STATUS_RX_STALLED,
-			  &queue->status);
-		rtnl_lock();
-		netif_carrier_on(queue->vif->dev);
-		netif_tx_wake_all_queues(queue->vif->dev);
-		rtnl_unlock();
-
-		for (i = 0; i < num_queues; i++) {
-			struct xenvif_queue *temp = &queue->vif->queues[i];
-
-			xenvif_napi_schedule_or_enable_events(temp);
-		}
-		if (net_ratelimit())
-			netdev_err(queue->vif->dev, "Carrier on again\n");
-	} else {
-		/* Queuing were stopped, but the guest posted
-		 * new requests and sent an interrupt
-		 */
-		clear_bit(QUEUE_STATUS_RX_STALLED,
-			  &queue->status);
-		del_timer_sync(&queue->rx_stalled);
-		xenvif_start_queue(queue);
-	}
-}
+static void xenvif_queue_carrier_off(struct xenvif_queue *queue)
+{
+	struct xenvif *vif = queue->vif;
+
+	queue->stalled = true;
+
+	/* At least one queue has stalled? Disable the carrier. */
+	spin_lock(&vif->lock);
+	if (vif->stalled_queues++ == 0) {
+		netdev_info(vif->dev, "Guest Rx stalled");
+		netif_carrier_off(vif->dev);
+	}
+	spin_unlock(&vif->lock);
+}
+
+static void xenvif_queue_carrier_on(struct xenvif_queue *queue)
+{
+	struct xenvif *vif = queue->vif;
+
+	queue->last_rx_time = jiffies; /* Reset Rx stall detection. */
+	queue->stalled = false;
+
+	/* All queues are ready? Enable the carrier. */
+	spin_lock(&vif->lock);
+	if (--vif->stalled_queues == 0) {
+		netdev_info(vif->dev, "Guest Rx ready");
+		netif_carrier_on(vif->dev);
+	}
+	spin_unlock(&vif->lock);
+}
+
+static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
+{
+	RING_IDX prod, cons;
+
+	prod = queue->rx.sring->req_prod;
+	cons = queue->rx.req_cons;
+
+	return !queue->stalled
+		&& prod - cons < XEN_NETBK_RX_SLOTS_MAX
+		&& time_after(jiffies,
+			      queue->last_rx_time + rx_stall_timeout_jiffies);
+}
+
+static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
+{
+	RING_IDX prod, cons;
+
+	prod = queue->rx.sring->req_prod;
+	cons = queue->rx.req_cons;
+
+	return queue->stalled
+		&& prod - cons >= XEN_NETBK_RX_SLOTS_MAX;
+}
+
+static bool xenvif_have_rx_work(struct xenvif_queue *queue)
+{
+	return (!skb_queue_empty(&queue->rx_queue)
+		&& xenvif_rx_ring_slots_available(queue, XEN_NETBK_RX_SLOTS_MAX))
+		|| xenvif_rx_queue_stalled(queue)
+		|| xenvif_rx_queue_ready(queue)
+		|| kthread_should_stop()
+		|| queue->vif->disabled;
+}
+
+static long xenvif_rx_queue_timeout(struct xenvif_queue *queue)
+{
+	struct sk_buff *skb;
+	long timeout;
+
+	skb = skb_peek(&queue->rx_queue);
+	if (!skb)
+		return MAX_SCHEDULE_TIMEOUT;
+
+	timeout = XENVIF_RX_CB(skb)->expires - jiffies;
+	return timeout < 0 ? 0 : timeout;
+}
+
+/* Wait until the guest Rx thread has work.
+ *
+ * The timeout needs to be adjusted based on the current head of the
+ * queue (and not just the head at the beginning).  In particular, if
+ * the queue is initially empty an infinite timeout is used and this
+ * needs to be reduced when a skb is queued.
+ *
+ * This cannot be done with wait_event_timeout() because it only
+ * calculates the timeout once.
+ */
+static void xenvif_wait_for_rx_work(struct xenvif_queue *queue)
+{
+	DEFINE_WAIT(wait);
+
+	if (xenvif_have_rx_work(queue))
+		return;
+
+	for (;;) {
+		long ret;
+
+		prepare_to_wait(&queue->wq, &wait, TASK_INTERRUPTIBLE);
+		if (xenvif_have_rx_work(queue))
+			break;
+		ret = schedule_timeout(xenvif_rx_queue_timeout(queue));
+		if (!ret)
+			break;
+	}
+	finish_wait(&queue->wq, &wait);
+}
 
 int xenvif_kthread_guest_rx(void *data)
 {
 	struct xenvif_queue *queue = data;
-	struct sk_buff *skb;
+	struct xenvif *vif = queue->vif;
 
-	while (!kthread_should_stop()) {
-		wait_event_interruptible(queue->wq,
-					 rx_work_todo(queue) ||
-					 queue->vif->disabled ||
-					 test_bit(QUEUE_STATUS_RX_PURGE_EVENT, &queue->status) ||
-					 kthread_should_stop());
+	for (;;) {
+		xenvif_wait_for_rx_work(queue);
 
 		if (kthread_should_stop())
 			break;
@@ -2028,35 +2107,38 @@ int xenvif_kthread_guest_rx(void *data)
 		 * context so we defer it here, if this thread is
 		 * associated with queue 0.
 		 */
-		if (unlikely(queue->vif->disabled && queue->id == 0)) {
-			xenvif_carrier_off(queue->vif);
-		} else if (unlikely(queue->vif->disabled)) {
-			/* kthread_stop() would be called upon this thread soon,
-			 * be a bit proactive
-			 */
-			skb_queue_purge(&queue->rx_queue);
-			queue->rx_last_skb_slots = 0;
-		} else if (unlikely(test_and_clear_bit(QUEUE_STATUS_RX_PURGE_EVENT,
-						       &queue->status))) {
-			xenvif_rx_purge_event(queue);
-		} else if (!netif_carrier_ok(queue->vif->dev)) {
-			/* Another queue stalled and turned the carrier off, so
-			 * purge the internal queue of queues which were not
-			 * blocked
-			 */
-			skb_queue_purge(&queue->rx_queue);
-			queue->rx_last_skb_slots = 0;
+		if (unlikely(vif->disabled && queue->id == 0)) {
+			xenvif_carrier_off(vif);
+			xenvif_rx_queue_purge(queue);
+			continue;
 		}
 
 		if (!skb_queue_empty(&queue->rx_queue))
 			xenvif_rx_action(queue);
 
+		/* If the guest hasn't provided any Rx slots for a
+		 * while it's probably not responsive, drop the
+		 * carrier so packets are dropped earlier.
+		 */
+		if (xenvif_rx_queue_stalled(queue))
+			xenvif_queue_carrier_off(queue);
+		else if (xenvif_rx_queue_ready(queue))
+			xenvif_queue_carrier_on(queue);
+
+		/* Queued packets may have foreign pages from other
+		 * domains.  These cannot be queued indefinitely as
+		 * this would starve guests of grant refs and transmit
+		 * slots.
+		 */
+		xenvif_rx_queue_drop_expired(queue);
+
+		xenvif_rx_queue_maybe_wake(queue);
+
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
-		dev_kfree_skb(skb);
+	xenvif_rx_queue_purge(queue);
 
 	return 0;
 }
@@ -2113,6 +2195,7 @@ static int __init netback_init(void)
 		goto failed_init;
 
 	rx_drain_timeout_jiffies = msecs_to_jiffies(rx_drain_timeout_msecs);
+	rx_stall_timeout_jiffies = msecs_to_jiffies(rx_stall_timeout_msecs);
 
 #ifdef CONFIG_DEBUG_FS
 	xen_netback_dbg_root = debugfs_create_dir("xen-netback", NULL);
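
xenvif_wait_for_rx_work() above re-derives its timeout from the queue head on every wakeup, which wait_event_timeout() cannot do (it computes the timeout once). A pthread sketch of the same loop shape, recomputing the deadline each iteration before sleeping (all names hypothetical, not kernel APIs):

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool have_work;
static time_t head_expiry;	/* 0 when the queue is empty */

static void wait_for_work(void)
{
	pthread_mutex_lock(&lock);
	while (!have_work) {
		if (head_expiry == 0) {
			/* Empty queue: sleep with no deadline. */
			pthread_cond_wait(&cond, &lock);
		} else {
			/* Recompute the deadline from the current head;
			 * a newly queued packet may have shortened it. */
			struct timespec ts = { .tv_sec = head_expiry };

			if (pthread_cond_timedwait(&cond, &lock, &ts) != 0)
				break;	/* timed out: head has expired */
		}
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	have_work = true;	/* pre-set so the demo returns immediately */
	wait_for_work();
	return 0;
}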

diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c

@@ -52,6 +52,7 @@ static int xenvif_read_io_ring(struct seq_file *m, void *v)
 	struct xenvif_queue *queue = m->private;
 	struct xen_netif_tx_back_ring *tx_ring = &queue->tx;
 	struct xen_netif_rx_back_ring *rx_ring = &queue->rx;
+	struct netdev_queue *dev_queue;
 
 	if (tx_ring->sring) {
 		struct xen_netif_tx_sring *sring = tx_ring->sring;
@@ -112,6 +113,13 @@ static int xenvif_read_io_ring(struct seq_file *m, void *v)
 		   queue->credit_timeout.expires,
 		   jiffies);
 
+	dev_queue = netdev_get_tx_queue(queue->vif->dev, queue->id);
+
+	seq_printf(m, "\nRx internal queue: len %u max %u pkts %u %s\n",
+		   queue->rx_queue_len, queue->rx_queue_max,
+		   skb_queue_len(&queue->rx_queue),
+		   netif_tx_queue_stopped(dev_queue) ? "stopped" : "running");
+
 	return 0;
 }
 
@@ -703,6 +711,7 @@ static void connect(struct backend_info *be)
 	be->vif->queues = vzalloc(requested_num_queues *
				  sizeof(struct xenvif_queue));
 	be->vif->num_queues = requested_num_queues;
+	be->vif->stalled_queues = requested_num_queues;
 
 	for (queue_index = 0; queue_index < requested_num_queues; ++queue_index) {
		queue = &be->vif->queues[queue_index];
@@ -873,15 +882,10 @@ static int read_xenbus_vif_flags(struct backend_info *be)
 	if (!rx_copy)
 		return -EOPNOTSUPP;
 
-	if (vif->dev->tx_queue_len != 0) {
-		if (xenbus_scanf(XBT_NIL, dev->otherend,
-				 "feature-rx-notify", "%d", &val) < 0)
-			val = 0;
-		if (val)
-			vif->can_queue = 1;
-		else
-			/* Must be non-zero for pfifo_fast to work. */
-			vif->dev->tx_queue_len = 1;
+	if (xenbus_scanf(XBT_NIL, dev->otherend,
+			 "feature-rx-notify", "%d", &val) < 0 || val == 0) {
+		xenbus_dev_fatal(dev, -EINVAL, "feature-rx-notify is mandatory");
+		return -EINVAL;
 	}
 
 	if (xenbus_scanf(XBT_NIL, dev->otherend, "feature-sg",

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h

@@ -557,7 +557,9 @@ struct sk_buff {
 	/* fields enclosed in headers_start/headers_end are copied
	 * using a single memcpy() in __copy_skb_header()
	 */
+	/* private: */
 	__u32			headers_start[0];
+	/* public: */
 
 /* if you move pkt_type around you also must adapt those constants */
 #ifdef __BIG_ENDIAN_BITFIELD
@@ -642,7 +644,9 @@ struct sk_buff {
 	__u16			network_header;
 	__u16			mac_header;
 
+	/* private: */
 	__u32			headers_end[0];
+	/* public: */
 
 	/* These elements must be at the end, see alloc_skb() for details. */
 	sk_buff_data_t		tail;
@@ -795,15 +799,19 @@ struct sk_buff_fclones {
 *	@skb: buffer
 *
 *	Returns true is skb is a fast clone, and its clone is not freed.
+ *	Some drivers call skb_orphan() in their ndo_start_xmit(),
+ *	so we also check that this didnt happen.
 */
-static inline bool skb_fclone_busy(const struct sk_buff *skb)
+static inline bool skb_fclone_busy(const struct sock *sk,
+				   const struct sk_buff *skb)
 {
 	const struct sk_buff_fclones *fclones;
 
 	fclones = container_of(skb, struct sk_buff_fclones, skb1);
 
 	return skb->fclone == SKB_FCLONE_ORIG &&
-	       fclones->skb2.fclone == SKB_FCLONE_CLONE;
+	       fclones->skb2.fclone == SKB_FCLONE_CLONE &&
+	       fclones->skb2.sk == sk;
 }
 
 static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
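
skb_fclone_busy() works by mapping an skb back to the two-skb sk_buff_fclones pair it is embedded in, then inspecting its companion. A userspace sketch of that container_of step (structs simplified and hypothetical):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct buf {
	int state;
};

struct buf_pair {
	struct buf b1;
	struct buf b2;
};

int main(void)
{
	struct buf_pair pair = { .b1 = { 1 }, .b2 = { 2 } };
	struct buf *first = &pair.b1;

	/* Recover the pair from the first member, then read its twin,
	 * just as skb_fclone_busy() checks fclones->skb2. */
	struct buf_pair *p = container_of(first, struct buf_pair, b1);

	printf("companion state: %d\n", p->b2.state);
	return 0;
}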

diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h

@@ -78,6 +78,7 @@ struct usbnet {
 #		define EVENT_NO_RUNTIME_PM	9
 #		define EVENT_RX_KILL		10
 #		define EVENT_LINK_CHANGE	11
+#		define EVENT_SET_RX_MODE	12
 };
 
 static inline struct usb_driver *driver_of(struct usb_interface *intf)
@@ -159,6 +160,9 @@ struct driver_info {
 	/* called by minidriver when receiving indication */
 	void	(*indication)(struct usbnet *dev, void *ind, int indlen);
 
+	/* rx mode change (device changes address list filtering) */
+	void	(*set_rx_mode)(struct usbnet *dev);
+
 	/* for new devices, use the descriptor-reading code instead */
 	int	in;		/* rx endpoint */
 	int	out;		/* tx endpoint */
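
set_rx_mode above is an optional per-device callback, so the core must check for NULL before dispatching. A compact sketch of that optional ops-hook pattern (types and names hypothetical):

#include <stdio.h>

struct dev;

struct drv_info {
	const char *name;
	void (*set_rx_mode)(struct dev *d);	/* may be NULL */
};

struct dev {
	const struct drv_info *info;
};

static void dev_set_rx_mode(struct dev *d)
{
	if (d->info->set_rx_mode)	/* dispatch only when implemented */
		d->info->set_rx_mode(d);
	else
		printf("%s: no rx-mode hook, skipping\n", d->info->name);
}

static void demo_rx_mode(struct dev *d)
{
	printf("%s: applying rx filter\n", d->info->name);
}

int main(void)
{
	struct drv_info with = { "demo-eth", demo_rx_mode };
	struct drv_info without = { "plain-eth", NULL };
	struct dev a = { &with }, b = { &without };

	dev_set_rx_mode(&a);
	dev_set_rx_mode(&b);
	return 0;
}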

diff --git a/include/net/ipv6.h b/include/net/ipv6.h

@@ -671,6 +671,8 @@ static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_add
 	return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr));
 }
 
+void ipv6_proxy_select_ident(struct sk_buff *skb);
+
 int ip6_dst_hoplimit(struct dst_entry *dst);
 
 static inline int ip6_sk_dst_hoplimit(struct ipv6_pinfo *np, struct flowi6 *fl6,
