Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

 1) Missing netns pointer init in arp_tables, from Florian Westphal.

 2) Fix normal tcp SACK being treated as D-SACK, from Pengcheng Yang.

 3) Fix divide by zero in sch_cake, from Wen Yang.

 4) Len passed to skb_put_padto() is wrong in qrtr code, from Carl
    Huang.

 5) cmd->obj.chunk is leaked in sctp code error paths, from Xin Long.

 6) cgroup bpf programs can be released out of order, fix from Roman
    Gushchin.

 7) Make sure stmmac debugfs entry name is changed when device name
    changes, from Jiping Ma.

 8) Fix memory leak in vlan_dev_set_egress_priority(), from Eric
    Dumazet.

 9) SKB leak in lan78xx usb driver, also from Eric Dumazet.

10) Ridiculous TCA_FQ_QUANTUM values configured can cause loops in fq
    packet scheduler, reject them. From Eric Dumazet.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (69 commits)
  tipc: fix wrong connect() return code
  tipc: fix link overflow issue at socket shutdown
  netfilter: ipset: avoid null deref when IPSET_ATTR_LINENO is present
  netfilter: conntrack: dccp, sctp: handle null timeout argument
  atm: eni: fix uninitialized variable warning
  macvlan: do not assume mac_header is set in macvlan_broadcast()
  net: sch_prio: When ungrafting, replace with FIFO
  mlxsw: spectrum_qdisc: Ignore grafting of invisible FIFO
  MAINTAINERS: Remove myself as co-maintainer for qcom-ethqos
  gtp: fix bad unlock balance in gtp_encap_enable_socket
  pkt_sched: fq: do not accept silly TCA_FQ_QUANTUM
  tipc: remove meaningless assignment in Makefile
  tipc: do not add socket.o to tipc-y twice
  net: stmmac: dwmac-sun8i: Allow all RGMII modes
  net: stmmac: dwmac-sunxi: Allow all RGMII modes
  net: usb: lan78xx: fix possible skb leak
  net: stmmac: Fixed link does not need MDIO Bus
  vlan: vlan_changelink() should propagate errors
  vlan: fix memory leak in vlan_dev_set_egress_priority
  stmmac: debugfs entry name is not be changed when udev rename device name.
  ...
This commit is contained in:
Linus Torvalds 2020-01-09 10:34:07 -08:00
commit a5f48c7878
71 changed files with 527 additions and 287 deletions

@@ -603,7 +603,7 @@ tcp_synack_retries - INTEGER
 	with the current initial RTO of 1second. With this the final timeout
 	for a passive TCP connection will happen after 63seconds.
 
-tcp_syncookies - BOOLEAN
+tcp_syncookies - INTEGER
 	Only valid when the kernel was compiled with CONFIG_SYN_COOKIES
 	Send out syncookies when the syn backlog queue of a socket
 	overflows. This is to prevent against the common 'SYN flood attack'

@@ -34,8 +34,8 @@ the names, the ``net`` tree is for fixes to existing code already in the
 mainline tree from Linus, and ``net-next`` is where the new code goes
 for the future release. You can find the trees here:
 
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 
 Q: How often do changes from these trees make it to the mainline Linus tree?
 ----------------------------------------------------------------------------

@@ -11460,8 +11460,8 @@ M:	"David S. Miller" <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 S:	Odd Fixes
 F:	Documentation/devicetree/bindings/net/
 F:	drivers/net/
@@ -11502,8 +11502,8 @@ M:	"David S. Miller" <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 B:	mailto:netdev@vger.kernel.org
 S:	Maintained
 F:	net/
@@ -11548,7 +11548,7 @@ M:	"David S. Miller" <davem@davemloft.net>
 M:	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
 M:	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 L:	netdev@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 S:	Maintained
 F:	net/ipv4/
 F:	net/ipv6/
@@ -13679,7 +13679,6 @@ F:	drivers/net/ethernet/qualcomm/emac/
 
 QUALCOMM ETHQOS ETHERNET DRIVER
 M:	Vinod Koul <vkoul@kernel.org>
-M:	Niklas Cassel <niklas.cassel@linaro.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
@@ -14549,8 +14548,6 @@ F:	include/linux/platform_data/spi-s3c64xx.h
 
 SAMSUNG SXGBE DRIVERS
 M:	Byungho An <bh74.an@samsung.com>
-M:	Girish K S <ks.giri@samsung.com>
-M:	Vipul Pandya <vipul.pandya@samsung.com>
 S:	Supported
 L:	netdev@vger.kernel.org
 F:	drivers/net/ethernet/samsung/sxgbe/

@@ -374,7 +374,7 @@ static int do_rx_dma(struct atm_vcc *vcc,struct sk_buff *skb,
 			here = (eni_vcc->descr+skip) & (eni_vcc->words-1);
 			dma[j++] = (here << MID_DMA_COUNT_SHIFT) | (vcc->vci
 			    << MID_DMA_VCI_SHIFT) | MID_DT_JK;
-			j++;
+			dma[j++] = 0;
 		}
 		here = (eni_vcc->descr+size+skip) & (eni_vcc->words-1);
 		if (!eff) size += skip;
@@ -447,7 +447,7 @@ static int do_rx_dma(struct atm_vcc *vcc,struct sk_buff *skb,
 		if (size != eff) {
 			dma[j++] = (here << MID_DMA_COUNT_SHIFT) |
 			    (vcc->vci << MID_DMA_VCI_SHIFT) | MID_DT_JK;
-			j++;
+			dma[j++] = 0;
 		}
 	if (!j || j > 2*RX_DMA_BUF) {
 		printk(KERN_CRIT DEV_LABEL "!j or j too big!!!\n");

@@ -215,7 +215,6 @@ static int tee_bnxt_fw_probe(struct device *dev)
 	fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ,
 				    TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
 	if (IS_ERR(fw_shm_pool)) {
-		tee_client_close_context(pvt_data.ctx);
 		dev_err(pvt_data.dev, "tee_shm_alloc failed\n");
 		err = PTR_ERR(fw_shm_pool);
 		goto out_sess;

@@ -102,6 +102,7 @@
 #define TCAN4X5X_MODE_NORMAL BIT(7)
 
 #define TCAN4X5X_DISABLE_WAKE_MSK	(BIT(31) | BIT(30))
+#define TCAN4X5X_DISABLE_INH_MSK	BIT(9)
 
 #define TCAN4X5X_SW_RESET BIT(2)
 
@@ -166,6 +167,28 @@ static void tcan4x5x_check_wake(struct tcan4x5x_priv *priv)
 	}
 }
 
+static int tcan4x5x_reset(struct tcan4x5x_priv *priv)
+{
+	int ret = 0;
+
+	if (priv->reset_gpio) {
+		gpiod_set_value(priv->reset_gpio, 1);
+
+		/* tpulse_width minimum 30us */
+		usleep_range(30, 100);
+		gpiod_set_value(priv->reset_gpio, 0);
+	} else {
+		ret = regmap_write(priv->regmap, TCAN4X5X_CONFIG,
+				   TCAN4X5X_SW_RESET);
+		if (ret)
+			return ret;
+	}
+
+	usleep_range(700, 1000);
+
+	return ret;
+}
+
 static int regmap_spi_gather_write(void *context, const void *reg,
 				   size_t reg_len, const void *val,
 				   size_t val_len)
@@ -348,14 +371,23 @@ static int tcan4x5x_disable_wake(struct m_can_classdev *cdev)
 				  TCAN4X5X_DISABLE_WAKE_MSK, 0x00);
 }
 
+static int tcan4x5x_disable_state(struct m_can_classdev *cdev)
+{
+	struct tcan4x5x_priv *tcan4x5x = cdev->device_data;
+
+	return regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
+				  TCAN4X5X_DISABLE_INH_MSK, 0x01);
+}
+
 static int tcan4x5x_parse_config(struct m_can_classdev *cdev)
 {
 	struct tcan4x5x_priv *tcan4x5x = cdev->device_data;
+	int ret;
 
 	tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake",
 						    GPIOD_OUT_HIGH);
 	if (IS_ERR(tcan4x5x->device_wake_gpio)) {
-		if (PTR_ERR(tcan4x5x->power) == -EPROBE_DEFER)
+		if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER)
 			return -EPROBE_DEFER;
 
 		tcan4x5x_disable_wake(cdev);
@@ -366,18 +398,17 @@ static int tcan4x5x_parse_config(struct m_can_classdev *cdev)
 	if (IS_ERR(tcan4x5x->reset_gpio))
 		tcan4x5x->reset_gpio = NULL;
 
-	usleep_range(700, 1000);
+	ret = tcan4x5x_reset(tcan4x5x);
+	if (ret)
+		return ret;
 
 	tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
 							      "device-state",
 							      GPIOD_IN);
-	if (IS_ERR(tcan4x5x->device_state_gpio))
+	if (IS_ERR(tcan4x5x->device_state_gpio)) {
 		tcan4x5x->device_state_gpio = NULL;
-
-	tcan4x5x->power = devm_regulator_get_optional(cdev->dev,
-						      "vsup");
-	if (PTR_ERR(tcan4x5x->power) == -EPROBE_DEFER)
-		return -EPROBE_DEFER;
+		tcan4x5x_disable_state(cdev);
+	}
 
 	return 0;
 }
@@ -412,6 +443,12 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
 	if (!priv)
 		return -ENOMEM;
 
+	priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
+	if (PTR_ERR(priv->power) == -EPROBE_DEFER)
+		return -EPROBE_DEFER;
+	else
+		priv->power = NULL;
+
 	mcan_class->device_data = priv;
 
 	m_can_class_get_clocks(mcan_class);
@@ -451,11 +488,17 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
 	priv->regmap = devm_regmap_init(&spi->dev, &tcan4x5x_bus,
 					&spi->dev, &tcan4x5x_regmap);
 
-	ret = tcan4x5x_parse_config(mcan_class);
+	ret = tcan4x5x_power_enable(priv->power, 1);
 	if (ret)
 		goto out_clk;
 
-	tcan4x5x_power_enable(priv->power, 1);
+	ret = tcan4x5x_parse_config(mcan_class);
+	if (ret)
+		goto out_power;
+
+	ret = tcan4x5x_init(mcan_class);
+	if (ret)
+		goto out_power;
 
 	ret = m_can_class_register(mcan_class);
 	if (ret)

@@ -381,13 +381,12 @@ static int mscan_rx_poll(struct napi_struct *napi, int quota)
 	struct net_device *dev = napi->dev;
 	struct mscan_regs __iomem *regs = priv->reg_base;
 	struct net_device_stats *stats = &dev->stats;
-	int npackets = 0;
-	int ret = 1;
+	int work_done = 0;
 	struct sk_buff *skb;
 	struct can_frame *frame;
 	u8 canrflg;
 
-	while (npackets < quota) {
+	while (work_done < quota) {
 		canrflg = in_8(&regs->canrflg);
 		if (!(canrflg & (MSCAN_RXF | MSCAN_ERR_IF)))
 			break;
@@ -408,18 +407,18 @@ static int mscan_rx_poll(struct napi_struct *napi, int quota)
 
 		stats->rx_packets++;
 		stats->rx_bytes += frame->can_dlc;
-		npackets++;
+		work_done++;
 		netif_receive_skb(skb);
 	}
 
-	if (!(in_8(&regs->canrflg) & (MSCAN_RXF | MSCAN_ERR_IF))) {
-		napi_complete(&priv->napi);
-		clear_bit(F_RX_PROGRESS, &priv->flags);
-		if (priv->can.state < CAN_STATE_BUS_OFF)
-			out_8(&regs->canrier, priv->shadow_canrier);
-		ret = 0;
+	if (work_done < quota) {
+		if (likely(napi_complete_done(&priv->napi, work_done))) {
+			clear_bit(F_RX_PROGRESS, &priv->flags);
+			if (priv->can.state < CAN_STATE_BUS_OFF)
+				out_8(&regs->canrier, priv->shadow_canrier);
+		}
 	}
-	return ret;
+	return work_done;
 }
 
 static irqreturn_t mscan_isr(int irq, void *dev_id)

@@ -918,7 +918,7 @@ static int gs_usb_probe(struct usb_interface *intf,
 			     GS_USB_BREQ_HOST_FORMAT,
 			     USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
 			     1,
-			     intf->altsetting[0].desc.bInterfaceNumber,
+			     intf->cur_altsetting->desc.bInterfaceNumber,
 			     hconf,
 			     sizeof(*hconf),
 			     1000);
@@ -941,7 +941,7 @@ static int gs_usb_probe(struct usb_interface *intf,
 			     GS_USB_BREQ_DEVICE_CONFIG,
 			     USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
 			     1,
-			     intf->altsetting[0].desc.bInterfaceNumber,
+			     intf->cur_altsetting->desc.bInterfaceNumber,
 			     dconf,
 			     sizeof(*dconf),
 			     1000);

@@ -1590,7 +1590,7 @@ static int kvaser_usb_hydra_setup_endpoints(struct kvaser_usb *dev)
 	struct usb_endpoint_descriptor *ep;
 	int i;
 
-	iface_desc = &dev->intf->altsetting[0];
+	iface_desc = dev->intf->cur_altsetting;
 
 	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
 		ep = &iface_desc->endpoint[i].desc;

@@ -1310,7 +1310,7 @@ static int kvaser_usb_leaf_setup_endpoints(struct kvaser_usb *dev)
 	struct usb_endpoint_descriptor *endpoint;
 	int i;
 
-	iface_desc = &dev->intf->altsetting[0];
+	iface_desc = dev->intf->cur_altsetting;
 
 	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
 		endpoint = &iface_desc->endpoint[i].desc;

@@ -360,6 +360,11 @@ int mv88e6390_g1_set_cpu_port(struct mv88e6xxx_chip *chip, int port)
 {
 	u16 ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST;
 
+	/* Use the default high priority for management frames sent to
+	 * the CPU.
+	 */
+	port |= MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI;
+
 	return mv88e6390_g1_monitor_write(chip, ptr, port);
 }

@@ -211,6 +211,7 @@
 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_INGRESS_DEST	0x2000
 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_EGRESS_DEST	0x2100
 #define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST	0x3000
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI	0x00e0
 #define MV88E6390_G1_MONITOR_MGMT_CTL_DATA_MASK		0x00ff
 
 /* Offset 0x1C: Global Control 2 */

@@ -393,7 +393,7 @@ phy_interface_t mv88e6390x_port_max_speed_mode(int port)
 }
 
 static int mv88e6xxx_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
-				    phy_interface_t mode)
+				    phy_interface_t mode, bool force)
 {
 	u8 lane;
 	u16 cmode;
@@ -427,8 +427,8 @@ static int mv88e6xxx_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
 		cmode = 0;
 	}
 
-	/* cmode doesn't change, nothing to do for us */
-	if (cmode == chip->ports[port].cmode)
+	/* cmode doesn't change, nothing to do for us unless forced */
+	if (cmode == chip->ports[port].cmode && !force)
 		return 0;
 
 	lane = mv88e6xxx_serdes_get_lane(chip, port);
@@ -484,7 +484,7 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
 	if (port != 9 && port != 10)
 		return -EOPNOTSUPP;
 
-	return mv88e6xxx_port_set_cmode(chip, port, mode);
+	return mv88e6xxx_port_set_cmode(chip, port, mode, false);
 }
 
 int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
@@ -504,7 +504,7 @@ int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
 		break;
 	}
 
-	return mv88e6xxx_port_set_cmode(chip, port, mode);
+	return mv88e6xxx_port_set_cmode(chip, port, mode, false);
 }
 
 static int mv88e6341_port_set_cmode_writable(struct mv88e6xxx_chip *chip,
@@ -555,7 +555,7 @@ int mv88e6341_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
 	if (err)
 		return err;
 
-	return mv88e6xxx_port_set_cmode(chip, port, mode);
+	return mv88e6xxx_port_set_cmode(chip, port, mode, true);
 }
 
 int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode)

@@ -403,6 +403,8 @@ int aq_nic_start(struct aq_nic_s *self)
 	if (err < 0)
 		goto err_exit;
 
+	aq_nic_set_loopback(self);
+
 	err = self->aq_hw_ops->hw_start(self->aq_hw);
 	if (err < 0)
 		goto err_exit;
@@ -413,8 +415,6 @@ int aq_nic_start(struct aq_nic_s *self)
 
 	INIT_WORK(&self->service_task, aq_nic_service_task);
 
-	aq_nic_set_loopback(self);
-
 	timer_setup(&self->service_timer, aq_nic_service_timer_cb, 0);
 	aq_nic_service_timer_cb(&self->service_timer);

@@ -1525,9 +1525,6 @@ const struct aq_hw_ops hw_atl_ops_b0 = {
 	.rx_extract_ts           = hw_atl_b0_rx_extract_ts,
 	.extract_hwts            = hw_atl_b0_extract_hwts,
 	.hw_set_offload          = hw_atl_b0_hw_offload_set,
-	.hw_get_hw_stats         = hw_atl_utils_get_hw_stats,
-	.hw_get_fw_version       = hw_atl_utils_get_fw_version,
-	.hw_set_offload          = hw_atl_b0_hw_offload_set,
 	.hw_set_loopback         = hw_atl_b0_set_loopback,
 	.hw_set_fc               = hw_atl_b0_set_fc,
 };

@@ -667,9 +667,7 @@ int hw_atl_utils_mpi_get_link_status(struct aq_hw_s *self)
 		u32 speed;
 
 		mpi_state = hw_atl_utils_mpi_get_state(self);
-		speed = mpi_state & (FW2X_RATE_100M | FW2X_RATE_1G |
-				     FW2X_RATE_2G5 | FW2X_RATE_5G |
-				     FW2X_RATE_10G);
+		speed = mpi_state >> HW_ATL_MPI_SPEED_SHIFT;
 
 		if (!speed) {
 			link_status->mbps = 0U;

@@ -1516,8 +1516,10 @@ static int b44_magic_pattern(u8 *macaddr, u8 *ppattern, u8 *pmask, int offset)
 	int ethaddr_bytes = ETH_ALEN;
 
 	memset(ppattern + offset, 0xff, magicsync);
-	for (j = 0; j < magicsync; j++)
-		set_bit(len++, (unsigned long *) pmask);
+	for (j = 0; j < magicsync; j++) {
+		pmask[len >> 3] |= BIT(len & 7);
+		len++;
+	}
 
 	for (j = 0; j < B44_MAX_PATTERNS; j++) {
 		if ((B44_PATTERN_SIZE - len) >= ETH_ALEN)
@@ -1529,7 +1531,8 @@ static int b44_magic_pattern(u8 *macaddr, u8 *ppattern, u8 *pmask, int offset)
 		for (k = 0; k< ethaddr_bytes; k++) {
 			ppattern[offset + magicsync +
 				(j * ETH_ALEN) + k] = macaddr[k];
-			set_bit(len++, (unsigned long *) pmask);
+			pmask[len >> 3] |= BIT(len & 7);
+			len++;
 		}
 	}
 	return len - 1;

@@ -4088,7 +4088,7 @@ static int fu540_c000_clk_init(struct platform_device *pdev, struct clk **pclk,
 	mgmt->rate = 0;
 	mgmt->hw.init = &init;
 
-	*tx_clk = clk_register(NULL, &mgmt->hw);
+	*tx_clk = devm_clk_register(&pdev->dev, &mgmt->hw);
 	if (IS_ERR(*tx_clk))
 		return PTR_ERR(*tx_clk);
 
@@ -4416,7 +4416,6 @@ err_out_free_netdev:
 
 err_disable_clocks:
 	clk_disable_unprepare(tx_clk);
-	clk_unregister(tx_clk);
 	clk_disable_unprepare(hclk);
 	clk_disable_unprepare(pclk);
 	clk_disable_unprepare(rx_clk);
@@ -4446,7 +4445,6 @@ static int macb_remove(struct platform_device *pdev)
 		pm_runtime_dont_use_autosuspend(&pdev->dev);
 		if (!pm_runtime_suspended(&pdev->dev)) {
 			clk_disable_unprepare(bp->tx_clk);
-			clk_unregister(bp->tx_clk);
 			clk_disable_unprepare(bp->hclk);
 			clk_disable_unprepare(bp->pclk);
 			clk_disable_unprepare(bp->rx_clk);

@@ -2199,8 +2199,14 @@ static void fec_enet_get_regs(struct net_device *ndev,
 {
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	u32 __iomem *theregs = (u32 __iomem *)fep->hwp;
+	struct device *dev = &fep->pdev->dev;
 	u32 *buf = (u32 *)regbuf;
 	u32 i, off;
+	int ret;
+
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0)
+		return;
 
 	regs->version = fec_enet_register_version;
 
@@ -2216,6 +2222,9 @@ static void fec_enet_get_regs(struct net_device *ndev,
 		off >>= 2;
 		buf[off] = readl(&theregs[off]);
 	}
+
+	pm_runtime_mark_last_busy(dev);
+	pm_runtime_put_autosuspend(dev);
 }
 
 static int fec_enet_get_ts_info(struct net_device *ndev,

@@ -418,8 +418,6 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 	rx->cnt = cnt;
 	rx->fill_cnt += work_done;
 
-	/* restock desc ring slots */
-	dma_wmb(); /* Ensure descs are visible before ringing doorbell */
 	gve_rx_write_doorbell(priv, rx);
 	return gve_rx_work_pending(rx);
 }

@@ -487,10 +487,6 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 		 * may have added descriptors without ringing the doorbell.
 		 */
 
-		/* Ensure tx descs from a prior gve_tx are visible before
-		 * ringing doorbell.
-		 */
-		dma_wmb();
 		gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
 		return NETDEV_TX_BUSY;
 	}
@@ -505,8 +501,6 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 	if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more())
 		return NETDEV_TX_OK;
 
-	/* Ensure tx descs are visible before ringing doorbell */
-	dma_wmb();
 	gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
 	return NETDEV_TX_OK;
 }

@@ -122,6 +122,22 @@ enum {
 #endif
 };
 
+#define MLX5E_TTC_NUM_GROUPS	3
+#define MLX5E_TTC_GROUP1_SIZE	(BIT(3) + MLX5E_NUM_TUNNEL_TT)
+#define MLX5E_TTC_GROUP2_SIZE	BIT(1)
+#define MLX5E_TTC_GROUP3_SIZE	BIT(0)
+#define MLX5E_TTC_TABLE_SIZE	(MLX5E_TTC_GROUP1_SIZE +\
+				 MLX5E_TTC_GROUP2_SIZE +\
+				 MLX5E_TTC_GROUP3_SIZE)
+
+#define MLX5E_INNER_TTC_NUM_GROUPS	3
+#define MLX5E_INNER_TTC_GROUP1_SIZE	BIT(3)
+#define MLX5E_INNER_TTC_GROUP2_SIZE	BIT(1)
+#define MLX5E_INNER_TTC_GROUP3_SIZE	BIT(0)
+#define MLX5E_INNER_TTC_TABLE_SIZE	(MLX5E_INNER_TTC_GROUP1_SIZE +\
+					 MLX5E_INNER_TTC_GROUP2_SIZE +\
+					 MLX5E_INNER_TTC_GROUP3_SIZE)
+
 #ifdef CONFIG_MLX5_EN_RXNFC
 
 struct mlx5e_ethtool_table {

@@ -197,9 +197,10 @@ int mlx5e_health_report(struct mlx5e_priv *priv,
 			struct devlink_health_reporter *reporter, char *err_str,
 			struct mlx5e_err_ctx *err_ctx)
 {
-	if (!reporter) {
-		netdev_err(priv->netdev, err_str);
+	netdev_err(priv->netdev, err_str);
+
+	if (!reporter)
 		return err_ctx->recover(&err_ctx->ctx);
-	}
 
 	return devlink_health_report(reporter, err_str, err_ctx);
 }

@@ -904,22 +904,6 @@ del_rules:
 	return err;
 }
 
-#define MLX5E_TTC_NUM_GROUPS	3
-#define MLX5E_TTC_GROUP1_SIZE	(BIT(3) + MLX5E_NUM_TUNNEL_TT)
-#define MLX5E_TTC_GROUP2_SIZE	BIT(1)
-#define MLX5E_TTC_GROUP3_SIZE	BIT(0)
-#define MLX5E_TTC_TABLE_SIZE	(MLX5E_TTC_GROUP1_SIZE +\
-				 MLX5E_TTC_GROUP2_SIZE +\
-				 MLX5E_TTC_GROUP3_SIZE)
-
-#define MLX5E_INNER_TTC_NUM_GROUPS	3
-#define MLX5E_INNER_TTC_GROUP1_SIZE	BIT(3)
-#define MLX5E_INNER_TTC_GROUP2_SIZE	BIT(1)
-#define MLX5E_INNER_TTC_GROUP3_SIZE	BIT(0)
-#define MLX5E_INNER_TTC_TABLE_SIZE	(MLX5E_INNER_TTC_GROUP1_SIZE +\
-					 MLX5E_INNER_TTC_GROUP2_SIZE +\
-					 MLX5E_INNER_TTC_GROUP3_SIZE)
-
 static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc,
 					 bool use_ipv)
 {

@@ -592,7 +592,7 @@ static void mlx5e_hairpin_set_ttc_params(struct mlx5e_hairpin *hp,
 	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
 		ttc_params->indir_tirn[tt] = hp->indir_tirn[tt];
 
-	ft_attr->max_fte = MLX5E_NUM_TT;
+	ft_attr->max_fte = MLX5E_TTC_TABLE_SIZE;
 	ft_attr->level = MLX5E_TC_TTC_FT_LEVEL;
 	ft_attr->prio = MLX5E_TC_PRIO;
 }
@@ -2999,6 +2999,25 @@ static struct ip_tunnel_info *dup_tun_info(const struct ip_tunnel_info *tun_info
 	return kmemdup(tun_info, tun_size, GFP_KERNEL);
 }
 
+static bool is_duplicated_encap_entry(struct mlx5e_priv *priv,
+				      struct mlx5e_tc_flow *flow,
+				      int out_index,
+				      struct mlx5e_encap_entry *e,
+				      struct netlink_ext_ack *extack)
+{
+	int i;
+
+	for (i = 0; i < out_index; i++) {
+		if (flow->encaps[i].e != e)
+			continue;
+
+		NL_SET_ERR_MSG_MOD(extack, "can't duplicate encap action");
+		netdev_err(priv->netdev, "can't duplicate encap action\n");
+		return true;
+	}
+
+	return false;
+}
+
 static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 			      struct mlx5e_tc_flow *flow,
 			      struct net_device *mirred_dev,
@@ -3034,6 +3053,12 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 
 	/* must verify if encap is valid or not */
 	if (e) {
+		/* Check that entry was not already attached to this flow */
+		if (is_duplicated_encap_entry(priv, flow, out_index, e, extack)) {
+			err = -EOPNOTSUPP;
+			goto out_err;
+		}
+
 		mutex_unlock(&esw->offloads.encap_tbl_lock);
 		wait_for_completion(&e->res_ready);
 
@@ -3220,6 +3245,26 @@ bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv,
 	       same_hw_devs(priv, netdev_priv(out_dev));
 }
 
+static bool is_duplicated_output_device(struct net_device *dev,
+					struct net_device *out_dev,
+					int *ifindexes, int if_count,
+					struct netlink_ext_ack *extack)
+{
+	int i;
+
+	for (i = 0; i < if_count; i++) {
+		if (ifindexes[i] == out_dev->ifindex) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "can't duplicate output to same device");
+			netdev_err(dev, "can't duplicate output to same device: %s\n",
+				   out_dev->name);
+			return true;
+		}
+	}
+
+	return false;
+}
+
 static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 				struct flow_action *flow_action,
 				struct mlx5e_tc_flow *flow,
@@ -3231,11 +3276,12 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 	struct mlx5e_tc_flow_parse_attr *parse_attr = attr->parse_attr;
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	const struct ip_tunnel_info *info = NULL;
+	int ifindexes[MLX5_MAX_FLOW_FWD_VPORTS];
 	bool ft_flow = mlx5e_is_ft_flow(flow);
 	const struct flow_action_entry *act;
+	int err, i, if_count = 0;
 	bool encap = false;
 	u32 action = 0;
-	int err, i;
 
 	if (!flow_action_has_entries(flow_action))
 		return -EINVAL;
@@ -3312,6 +3358,16 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 			struct net_device *uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
 			struct net_device *uplink_upper;
 
+			if (is_duplicated_output_device(priv->netdev,
+							out_dev,
+							ifindexes,
+							if_count,
+							extack))
+				return -EOPNOTSUPP;
+
+			ifindexes[if_count] = out_dev->ifindex;
+			if_count++;
+
 			rcu_read_lock();
 			uplink_upper =
 				netdev_master_upper_dev_get_rcu(uplink_dev);

View File

@ -531,16 +531,9 @@ static void del_hw_fte(struct fs_node *node)
} }
} }
static void del_sw_fte_rcu(struct rcu_head *head)
{
struct fs_fte *fte = container_of(head, struct fs_fte, rcu);
struct mlx5_flow_steering *steering = get_steering(&fte->node);
kmem_cache_free(steering->ftes_cache, fte);
}
static void del_sw_fte(struct fs_node *node) static void del_sw_fte(struct fs_node *node)
{ {
struct mlx5_flow_steering *steering = get_steering(node);
struct mlx5_flow_group *fg; struct mlx5_flow_group *fg;
struct fs_fte *fte; struct fs_fte *fte;
int err; int err;
@ -553,8 +546,7 @@ static void del_sw_fte(struct fs_node *node)
rhash_fte); rhash_fte);
WARN_ON(err); WARN_ON(err);
ida_simple_remove(&fg->fte_allocator, fte->index - fg->start_index); ida_simple_remove(&fg->fte_allocator, fte->index - fg->start_index);
kmem_cache_free(steering->ftes_cache, fte);
call_rcu(&fte->rcu, del_sw_fte_rcu);
} }
static void del_hw_flow_group(struct fs_node *node) static void del_hw_flow_group(struct fs_node *node)
@@ -1633,67 +1625,35 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
}

static struct fs_fte *
-lookup_fte_for_write_locked(struct mlx5_flow_group *g, const u32 *match_value)
+lookup_fte_locked(struct mlx5_flow_group *g,
+		  const u32 *match_value,
+		  bool take_write)
{
	struct fs_fte *fte_tmp;

-	nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
-
-	fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value, rhash_fte);
-	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
-		fte_tmp = NULL;
-		goto out;
-	}
-	if (!fte_tmp->node.active) {
-		tree_put_node(&fte_tmp->node, false);
-		fte_tmp = NULL;
-		goto out;
-	}
-	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
-
-out:
-	up_write_ref_node(&g->node, false);
-	return fte_tmp;
-}
-
-static struct fs_fte *
-lookup_fte_for_read_locked(struct mlx5_flow_group *g, const u32 *match_value)
-{
-	struct fs_fte *fte_tmp;
-
-	if (!tree_get_node(&g->node))
-		return NULL;
-
-	rcu_read_lock();
-	fte_tmp = rhashtable_lookup(&g->ftes_hash, match_value, rhash_fte);
-	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
-		rcu_read_unlock();
-		fte_tmp = NULL;
-		goto out;
-	}
-	rcu_read_unlock();
-
-	if (!fte_tmp->node.active) {
-		tree_put_node(&fte_tmp->node, false);
-		fte_tmp = NULL;
-		goto out;
-	}
-
-	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
-
-out:
-	tree_put_node(&g->node, false);
-	return fte_tmp;
-}
-
-static struct fs_fte *
-lookup_fte_locked(struct mlx5_flow_group *g, const u32 *match_value, bool write)
-{
-	if (write)
-		return lookup_fte_for_write_locked(g, match_value);
+	if (take_write)
+		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
	else
-		return lookup_fte_for_read_locked(g, match_value);
+		nested_down_read_ref_node(&g->node, FS_LOCK_PARENT);
+	fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value,
+					 rhash_fte);
+	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
+		fte_tmp = NULL;
+		goto out;
+	}
+	if (!fte_tmp->node.active) {
+		tree_put_node(&fte_tmp->node, false);
+		fte_tmp = NULL;
+		goto out;
+	}
+
+	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+out:
+	if (take_write)
+		up_write_ref_node(&g->node, false);
+	else
+		up_read_ref_node(&g->node);
	return fte_tmp;
}

static struct mlx5_flow_handle *


@@ -203,7 +203,6 @@ struct fs_fte {
	enum fs_fte_status status;
	struct mlx5_fc *counter;
	struct rhash_head hash;
-	struct rcu_head rcu;
	int modify_mask;
};


@@ -1193,6 +1193,12 @@ int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
	if (err)
		goto err_load;

+	if (boot) {
+		err = mlx5_devlink_register(priv_to_devlink(dev), dev->device);
+		if (err)
+			goto err_devlink_reg;
+	}
+
	if (mlx5_device_registered(dev)) {
		mlx5_attach_device(dev);
	} else {
@@ -1210,6 +1216,9 @@ out:
	return err;

err_reg_dev:
+	if (boot)
+		mlx5_devlink_unregister(priv_to_devlink(dev));
+err_devlink_reg:
	mlx5_unload(dev);
err_load:
	if (boot)
@@ -1347,10 +1356,6 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
	request_module_nowait(MLX5_IB_MOD);

-	err = mlx5_devlink_register(devlink, &pdev->dev);
-	if (err)
-		goto clean_load;
-
	err = mlx5_crdump_enable(dev);
	if (err)
		dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
@@ -1358,9 +1363,6 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
	pci_save_state(pdev);
	return 0;

-clean_load:
-	mlx5_unload_one(dev, true);
-
err_load_one:
	mlx5_pci_close(dev);
pci_init_err:


@@ -209,7 +209,7 @@ static void dr_rule_rehash_copy_ste_ctrl(struct mlx5dr_matcher *matcher,
	/* We need to copy the refcount since this ste
	 * may have been traversed several times
	 */
-	refcount_set(&new_ste->refcount, refcount_read(&cur_ste->refcount));
+	new_ste->refcount = cur_ste->refcount;

	/* Link old STEs rule_mem list to the new ste */
	mlx5dr_rule_update_rule_member(cur_ste, new_ste);
@@ -638,6 +638,9 @@ static int dr_rule_add_member(struct mlx5dr_rule_rx_tx *nic_rule,
	if (!rule_mem)
		return -ENOMEM;

+	INIT_LIST_HEAD(&rule_mem->list);
+	INIT_LIST_HEAD(&rule_mem->use_ste_list);
+
	rule_mem->ste = ste;
	list_add_tail(&rule_mem->list, &nic_rule->rule_members_list);


@@ -348,7 +348,7 @@ static void dr_ste_replace(struct mlx5dr_ste *dst, struct mlx5dr_ste *src)
	if (dst->next_htbl)
		dst->next_htbl->pointing_ste = dst;

-	refcount_set(&dst->refcount, refcount_read(&src->refcount));
+	dst->refcount = src->refcount;

	INIT_LIST_HEAD(&dst->rule_list);
	list_splice_tail_init(&src->rule_list, &dst->rule_list);
@@ -565,7 +565,7 @@ bool mlx5dr_ste_is_not_valid_entry(u8 *p_hw_ste)
bool mlx5dr_ste_not_used_ste(struct mlx5dr_ste *ste)
{
-	return !refcount_read(&ste->refcount);
+	return !ste->refcount;
}

/* Init one ste as a pattern for ste data array */
@@ -689,14 +689,14 @@ struct mlx5dr_ste_htbl *mlx5dr_ste_htbl_alloc(struct mlx5dr_icm_pool *pool,
	htbl->ste_arr = chunk->ste_arr;
	htbl->hw_ste_arr = chunk->hw_ste_arr;
	htbl->miss_list = chunk->miss_list;
-	refcount_set(&htbl->refcount, 0);
+	htbl->refcount = 0;

	for (i = 0; i < chunk->num_of_entries; i++) {
		struct mlx5dr_ste *ste = &htbl->ste_arr[i];

		ste->hw_ste = htbl->hw_ste_arr + i * DR_STE_SIZE_REDUCED;
		ste->htbl = htbl;
-		refcount_set(&ste->refcount, 0);
+		ste->refcount = 0;
		INIT_LIST_HEAD(&ste->miss_list_node);
		INIT_LIST_HEAD(&htbl->miss_list[i]);
		INIT_LIST_HEAD(&ste->rule_list);
@@ -713,7 +713,7 @@ out_free_htbl:
int mlx5dr_ste_htbl_free(struct mlx5dr_ste_htbl *htbl)
{
-	if (refcount_read(&htbl->refcount))
+	if (htbl->refcount)
		return -EBUSY;

	mlx5dr_icm_free_chunk(htbl->chunk);


@@ -123,7 +123,7 @@ struct mlx5dr_matcher_rx_tx;
struct mlx5dr_ste {
	u8 *hw_ste;
	/* refcount: indicates the num of rules that using this ste */
-	refcount_t refcount;
+	u32 refcount;

	/* attached to the miss_list head at each htbl entry */
	struct list_head miss_list_node;
@@ -155,7 +155,7 @@ struct mlx5dr_ste_htbl_ctrl {
struct mlx5dr_ste_htbl {
	u8 lu_type;
	u16 byte_mask;
-	refcount_t refcount;
+	u32 refcount;
	struct mlx5dr_icm_chunk *chunk;
	struct mlx5dr_ste *ste_arr;
	u8 *hw_ste_arr;
@@ -206,13 +206,14 @@ int mlx5dr_ste_htbl_free(struct mlx5dr_ste_htbl *htbl);
static inline void mlx5dr_htbl_put(struct mlx5dr_ste_htbl *htbl)
{
-	if (refcount_dec_and_test(&htbl->refcount))
+	htbl->refcount--;
+	if (!htbl->refcount)
		mlx5dr_ste_htbl_free(htbl);
}

static inline void mlx5dr_htbl_get(struct mlx5dr_ste_htbl *htbl)
{
-	refcount_inc(&htbl->refcount);
+	htbl->refcount++;
}

/* STE utils */
@@ -254,14 +255,15 @@ static inline void mlx5dr_ste_put(struct mlx5dr_ste *ste,
				  struct mlx5dr_matcher *matcher,
				  struct mlx5dr_matcher_rx_tx *nic_matcher)
{
-	if (refcount_dec_and_test(&ste->refcount))
+	ste->refcount--;
+	if (!ste->refcount)
		mlx5dr_ste_free(ste, matcher, nic_matcher);
}

/* initial as 0, increased only when ste appears in a new rule */
static inline void mlx5dr_ste_get(struct mlx5dr_ste *ste)
{
-	refcount_inc(&ste->refcount);
+	ste->refcount++;
}

void mlx5dr_ste_set_hit_addr_by_next_htbl(u8 *hw_ste,
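The refcount_t to u32 conversion above works because every STE and hash-table refcount in this driver is already manipulated under a single domain lock, so the atomic refcount_t machinery only adds overhead. A minimal sketch of that pattern, with illustrative names only (the lock is modeled as a flag so the invariant stays visible; this is not the mlx5 API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy STE with a plain (non-atomic) counter, valid only because every
 * get/put runs while the domain lock is held. */
struct toy_ste {
	unsigned int refcount;	/* plain u32-style counter, not refcount_t */
	bool freed;		/* set when the last reference is dropped */
};

static bool domain_locked;	/* stands in for the SW steering domain lock */

static void toy_ste_get(struct toy_ste *ste)
{
	assert(domain_locked);	/* caller must hold the domain lock */
	ste->refcount++;
}

static void toy_ste_put(struct toy_ste *ste)
{
	assert(domain_locked);
	ste->refcount--;
	if (!ste->refcount)
		ste->freed = true;	/* kernel code would free the STE here */
}
```

The trade-off is the same one the patch makes: cheaper increments and decrements, at the cost of relying on an external lock instead of the counter's own atomicity.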


@@ -651,6 +651,13 @@ mlxsw_sp_qdisc_prio_graft(struct mlxsw_sp_port *mlxsw_sp_port,
	    mlxsw_sp_port->tclass_qdiscs[tclass_num].handle == p->child_handle)
		return 0;

+	if (!p->child_handle) {
+		/* This is an invisible FIFO replacing the original Qdisc.
+		 * Ignore it--the original Qdisc's destroy will follow.
+		 */
+		return 0;
+	}
+
	/* See if the grafted qdisc is already offloaded on any tclass. If so,
	 * unoffload it.
	 */


@@ -2296,7 +2296,7 @@ __setup("sxgbeeth=", sxgbe_cmdline_opt);

-MODULE_DESCRIPTION("SAMSUNG 10G/2.5G/1G Ethernet PLATFORM driver");
+MODULE_DESCRIPTION("Samsung 10G/2.5G/1G Ethernet PLATFORM driver");
MODULE_PARM_DESC(debug, "Message Level (-1: default, 0: no output, 16: all)");
MODULE_PARM_DESC(eee_timer, "EEE-LPI Default LS timer value");


@@ -957,6 +957,9 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
		/* default */
		break;
	case PHY_INTERFACE_MODE_RGMII:
+	case PHY_INTERFACE_MODE_RGMII_ID:
+	case PHY_INTERFACE_MODE_RGMII_RXID:
+	case PHY_INTERFACE_MODE_RGMII_TXID:
		reg |= SYSCON_EPIT | SYSCON_ETCS_INT_GMII;
		break;
	case PHY_INTERFACE_MODE_RMII:


@@ -44,7 +44,7 @@ static int sun7i_gmac_init(struct platform_device *pdev, void *priv)
	 * rate, which then uses the auto-reparenting feature of the
	 * clock driver, and enabling/disabling the clock.
	 */
-	if (gmac->interface == PHY_INTERFACE_MODE_RGMII) {
+	if (phy_interface_mode_is_rgmii(gmac->interface)) {
		clk_set_rate(gmac->tx_clk, SUN7I_GMAC_GMII_RGMII_RATE);
		clk_prepare_enable(gmac->tx_clk);
		gmac->clk_enabled = 1;


@@ -106,6 +106,7 @@ MODULE_PARM_DESC(chain_mode, "To use chain instead of ring mode");
static irqreturn_t stmmac_interrupt(int irq, void *dev_id);

#ifdef CONFIG_DEBUG_FS
+static const struct net_device_ops stmmac_netdev_ops;
static void stmmac_init_fs(struct net_device *dev);
static void stmmac_exit_fs(struct net_device *dev);
#endif
@@ -4256,6 +4257,34 @@ static int stmmac_dma_cap_show(struct seq_file *seq, void *v)
}
DEFINE_SHOW_ATTRIBUTE(stmmac_dma_cap);

+/* Use network device events to rename debugfs file entries.
+ */
+static int stmmac_device_event(struct notifier_block *unused,
+			       unsigned long event, void *ptr)
+{
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct stmmac_priv *priv = netdev_priv(dev);
+
+	if (dev->netdev_ops != &stmmac_netdev_ops)
+		goto done;
+
+	switch (event) {
+	case NETDEV_CHANGENAME:
+		if (priv->dbgfs_dir)
+			priv->dbgfs_dir = debugfs_rename(stmmac_fs_dir,
+							 priv->dbgfs_dir,
+							 stmmac_fs_dir,
+							 dev->name);
+		break;
+	}
+done:
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block stmmac_notifier = {
+	.notifier_call = stmmac_device_event,
+};
+
static void stmmac_init_fs(struct net_device *dev)
{
	struct stmmac_priv *priv = netdev_priv(dev);
@@ -4270,12 +4299,15 @@ static void stmmac_init_fs(struct net_device *dev)
	/* Entry to report the DMA HW features */
	debugfs_create_file("dma_cap", 0444, priv->dbgfs_dir, dev,
			    &stmmac_dma_cap_fops);
+
+	register_netdevice_notifier(&stmmac_notifier);
}

static void stmmac_exit_fs(struct net_device *dev)
{
	struct stmmac_priv *priv = netdev_priv(dev);

+	unregister_netdevice_notifier(&stmmac_notifier);
	debugfs_remove_recursive(priv->dbgfs_dir);
}
#endif /* CONFIG_DEBUG_FS */


@@ -320,7 +320,7 @@ out:
static int stmmac_dt_phy(struct plat_stmmacenet_data *plat,
			 struct device_node *np, struct device *dev)
{
-	bool mdio = false;
+	bool mdio = !of_phy_is_fixed_link(np);
	static const struct of_device_id need_mdio_ids[] = {
		{ .compatible = "snps,dwc-qos-ethernet-4.10" },
		{},


@@ -813,7 +813,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
	lock_sock(sock->sk);
	if (sock->sk->sk_user_data) {
		sk = ERR_PTR(-EBUSY);
-		goto out_sock;
+		goto out_rel_sock;
	}

	sk = sock->sk;
@@ -826,8 +826,9 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,

	setup_udp_tunnel_sock(sock_net(sock->sk), sock, &tuncfg);

-out_sock:
+out_rel_sock:
	release_sock(sock->sk);
+out_sock:
	sockfd_put(sock);
	return sk;
}


@@ -259,7 +259,7 @@ static void macvlan_broadcast(struct sk_buff *skb,
			      struct net_device *src,
			      enum macvlan_mode mode)
{
-	const struct ethhdr *eth = eth_hdr(skb);
+	const struct ethhdr *eth = skb_eth_hdr(skb);
	const struct macvlan_dev *vlan;
	struct sk_buff *nskb;
	unsigned int i;


@@ -566,6 +566,9 @@ static int phylink_register_sfp(struct phylink *pl,
	struct sfp_bus *bus;
	int ret;

+	if (!fwnode)
+		return 0;
+
	bus = sfp_bus_find_fwnode(fwnode);
	if (IS_ERR(bus)) {
		ret = PTR_ERR(bus);


@@ -2724,11 +2724,6 @@ static int lan78xx_stop(struct net_device *net)
	return 0;
}

-static int lan78xx_linearize(struct sk_buff *skb)
-{
-	return skb_linearize(skb);
-}
-
static struct sk_buff *lan78xx_tx_prep(struct lan78xx_net *dev,
				       struct sk_buff *skb, gfp_t flags)
{
@@ -2740,8 +2735,10 @@ static struct sk_buff *lan78xx_tx_prep(struct lan78xx_net *dev,
		return NULL;
	}

-	if (lan78xx_linearize(skb) < 0)
+	if (skb_linearize(skb)) {
+		dev_kfree_skb_any(skb);
		return NULL;
+	}

	tx_cmd_a = (u32)(skb->len & TX_CMD_A_LEN_MASK_) | TX_CMD_A_FCS_;


@@ -2541,7 +2541,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
		ndst = &rt->dst;
		skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);

-		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+		tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
		ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
		err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr),
				      vni, md, flags, udp_sum);
@@ -2581,7 +2581,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
		skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);

-		tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+		tos = ip_tunnel_ecn_encap(RT_TOS(tos), old_iph, skb);
		ttl = ttl ? : ip6_dst_hoplimit(ndst);
		skb_scrub_packet(skb, xnet);
		err = vxlan_build_skb(skb, ndst, sizeof(struct ipv6hdr),


@@ -708,7 +708,7 @@ static netdev_tx_t sdla_transmit(struct sk_buff *skb,
			spin_lock_irqsave(&sdla_lock, flags);
			SDLA_WINDOW(dev, addr);
-			pbuf = (void *)(((int) dev->mem_start) + (addr & SDLA_ADDR_MASK));
+			pbuf = (void *)(dev->mem_start + (addr & SDLA_ADDR_MASK));
			__sdla_write(dev, pbuf->buf_addr, skb->data, skb->len);
			SDLA_WINDOW(dev, addr);
			pbuf->opp_flag = 1;


@@ -18,6 +18,7 @@
#include <linux/can/error.h>
#include <linux/can/led.h>
#include <linux/can/netlink.h>
+#include <linux/can/skb.h>
#include <linux/netdevice.h>

/*
@@ -91,6 +92,36 @@ struct can_priv {
#define get_can_dlc(i)		(min_t(__u8, (i), CAN_MAX_DLC))
#define get_canfd_dlc(i)	(min_t(__u8, (i), CANFD_MAX_DLC))

+/* Check for outgoing skbs that have not been created by the CAN subsystem */
+static inline bool can_skb_headroom_valid(struct net_device *dev,
+					  struct sk_buff *skb)
+{
+	/* af_packet creates a headroom of HH_DATA_MOD bytes which is fine */
+	if (WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct can_skb_priv)))
+		return false;
+
+	/* af_packet does not apply CAN skb specific settings */
+	if (skb->ip_summed == CHECKSUM_NONE) {
+		/* init headroom */
+		can_skb_prv(skb)->ifindex = dev->ifindex;
+		can_skb_prv(skb)->skbcnt = 0;
+
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+		/* perform proper loopback on capable devices */
+		if (dev->flags & IFF_ECHO)
+			skb->pkt_type = PACKET_LOOPBACK;
+		else
+			skb->pkt_type = PACKET_HOST;
+
+		skb_reset_mac_header(skb);
+		skb_reset_network_header(skb);
+		skb_reset_transport_header(skb);
+	}
+
+	return true;
+}
+
/* Drop a given socketbuffer if it does not contain a valid CAN frame. */
static inline bool can_dropped_invalid_skb(struct net_device *dev,
					   struct sk_buff *skb)
@@ -108,6 +139,9 @@ static inline bool can_dropped_invalid_skb(struct net_device *dev,
	} else
		goto inval_skb;

+	if (!can_skb_headroom_valid(dev, skb))
+		goto inval_skb;
+
	return false;

inval_skb:


@@ -24,6 +24,14 @@ static inline struct ethhdr *eth_hdr(const struct sk_buff *skb)
	return (struct ethhdr *)skb_mac_header(skb);
}

+/* Prefer this version in TX path, instead of
+ * skb_reset_mac_header() + eth_hdr()
+ */
+static inline struct ethhdr *skb_eth_hdr(const struct sk_buff *skb)
+{
+	return (struct ethhdr *)skb->data;
+}
+
static inline struct ethhdr *inner_eth_hdr(const struct sk_buff *skb)
{
	return (struct ethhdr *)skb_inner_mac_header(skb);
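skb_eth_hdr() exists because on the TX path skb->data already points at the Ethernet header, whereas eth_hdr() dereferences a recorded mac_header offset that senders such as af_packet may never have set; the macvlan_broadcast() fix in this pull relies on exactly that distinction. A toy model with stand-in types (not the real sk_buff internals):

```c
#include <stdint.h>

/* Miniature skb: head is the buffer start, data points at the current
 * header, mac_header is an offset from head that may be unset. */
struct toy_skb {
	unsigned char *head;
	unsigned char *data;
	uint16_t mac_header;		/* offset from head; may be unset */
};
#define TOY_MAC_HEADER_UNSET 0xffff

static unsigned char *toy_eth_hdr(const struct toy_skb *skb)
{
	/* mirrors eth_hdr(): wrong if mac_header was never recorded */
	return skb->head + skb->mac_header;
}

static unsigned char *toy_skb_eth_hdr(const struct toy_skb *skb)
{
	/* mirrors skb_eth_hdr(): on TX, skb->data is the Ethernet header */
	return skb->data;
}
```

With mac_header left at its unset sentinel, the two helpers disagree, which is precisely the class of bug the macvlan change avoids.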


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * 10G controller driver for Samsung EXYNOS SoCs
+ * 10G controller driver for Samsung Exynos SoCs
 *
 * Copyright (C) 2013 Samsung Electronics Co., Ltd.
 *		http://www.samsung.com


@@ -106,6 +106,12 @@ struct flow_offload {
};

#define NF_FLOW_TIMEOUT (30 * HZ)
+#define nf_flowtable_time_stamp	(u32)jiffies
+
+static inline __s32 nf_flow_timeout_delta(unsigned int timeout)
+{
+	return (__s32)(timeout - nf_flowtable_time_stamp);
+}

struct nf_flow_route {
	struct {
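The helper above stays correct across jiffies wraparound because the unsigned subtraction is reinterpreted as signed: as long as the real distance between the two tick counts fits in 31 bits, the sign of the result tells which one is "later". A standalone sketch of the same arithmetic with plain integers (now_ticks stands in for jiffies):

```c
#include <stdint.h>

/* Signed delta between a deadline and the current tick counter.
 * Positive: the timeout lies in the future; negative: it has expired.
 * Correct across uint32_t wraparound for distances under 2^31 ticks. */
static int32_t flow_timeout_delta(uint32_t timeout, uint32_t now_ticks)
{
	return (int32_t)(timeout - now_ticks);
}
```

This is the same trick behind the kernel's time_after()/time_before() macros.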


@@ -35,8 +35,8 @@ void cgroup_bpf_offline(struct cgroup *cgrp)
 */
static void cgroup_bpf_release(struct work_struct *work)
{
-	struct cgroup *cgrp = container_of(work, struct cgroup,
-					   bpf.release_work);
+	struct cgroup *p, *cgrp = container_of(work, struct cgroup,
+					       bpf.release_work);
	enum bpf_cgroup_storage_type stype;
	struct bpf_prog_array *old_array;
	unsigned int type;
@@ -65,6 +65,9 @@ static void cgroup_bpf_release(struct work_struct *work)

	mutex_unlock(&cgroup_mutex);

+	for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))
+		cgroup_bpf_put(p);
+
	percpu_ref_exit(&cgrp->bpf.refcnt);
	cgroup_put(cgrp);
}
@@ -199,6 +202,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
 */
#define	NR ARRAY_SIZE(cgrp->bpf.effective)
	struct bpf_prog_array *arrays[NR] = {};
+	struct cgroup *p;
	int ret, i;

	ret = percpu_ref_init(&cgrp->bpf.refcnt, cgroup_bpf_release_fn, 0,
@@ -206,6 +210,9 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
	if (ret)
		return ret;

+	for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))
+		cgroup_bpf_get(p);
+
	for (i = 0; i < NR; i++)
		INIT_LIST_HEAD(&cgrp->bpf.progs[i]);


@@ -6264,6 +6264,7 @@ static bool may_access_skb(enum bpf_prog_type type)
static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
{
	struct bpf_reg_state *regs = cur_regs(env);
+	static const int ctx_reg = BPF_REG_6;
	u8 mode = BPF_MODE(insn->code);
	int i, err;
@@ -6297,7 +6298,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
	}

	/* check whether implicit source operand (register R6) is readable */
-	err = check_reg_arg(env, BPF_REG_6, SRC_OP);
+	err = check_reg_arg(env, ctx_reg, SRC_OP);
	if (err)
		return err;
@@ -6316,7 +6317,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
		return -EINVAL;
	}

-	if (regs[BPF_REG_6].type != PTR_TO_CTX) {
+	if (regs[ctx_reg].type != PTR_TO_CTX) {
		verbose(env,
			"at the time of BPF_LD_ABS|IND R6 != pointer to skb\n");
		return -EINVAL;
@@ -6329,6 +6330,10 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
			return err;
	}

+	err = check_ctx_reg(env, &regs[ctx_reg], ctx_reg);
+	if (err < 0)
+		return err;
+
	/* reset caller saved regs to unreadable */
	for (i = 0; i < CALLER_SAVED_REGS; i++) {
		mark_reg_not_init(env, regs, caller_saved[i]);


@@ -126,6 +126,7 @@ int vlan_check_real_dev(struct net_device *real_dev,
void vlan_setup(struct net_device *dev);
int register_vlan_dev(struct net_device *dev, struct netlink_ext_ack *extack);
void unregister_vlan_dev(struct net_device *dev, struct list_head *head);
+void vlan_dev_uninit(struct net_device *dev);
bool vlan_dev_inherit_address(struct net_device *dev,
			      struct net_device *real_dev);


@@ -586,7 +586,8 @@ static int vlan_dev_init(struct net_device *dev)
	return 0;
}

-static void vlan_dev_uninit(struct net_device *dev)
+/* Note: this function might be called multiple times for the same device. */
+void vlan_dev_uninit(struct net_device *dev)
{
	struct vlan_priority_tci_mapping *pm;
	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);


@@ -108,11 +108,13 @@ static int vlan_changelink(struct net_device *dev, struct nlattr *tb[],
	struct ifla_vlan_flags *flags;
	struct ifla_vlan_qos_mapping *m;
	struct nlattr *attr;
-	int rem;
+	int rem, err;

	if (data[IFLA_VLAN_FLAGS]) {
		flags = nla_data(data[IFLA_VLAN_FLAGS]);
-		vlan_dev_change_flags(dev, flags->flags, flags->mask);
+		err = vlan_dev_change_flags(dev, flags->flags, flags->mask);
+		if (err)
+			return err;
	}
	if (data[IFLA_VLAN_INGRESS_QOS]) {
		nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) {
@@ -123,7 +125,9 @@ static int vlan_changelink(struct net_device *dev, struct nlattr *tb[],
	if (data[IFLA_VLAN_EGRESS_QOS]) {
		nla_for_each_nested(attr, data[IFLA_VLAN_EGRESS_QOS], rem) {
			m = nla_data(attr);
-			vlan_dev_set_egress_priority(dev, m->from, m->to);
+			err = vlan_dev_set_egress_priority(dev, m->from, m->to);
+			if (err)
+				return err;
		}
	}
	return 0;
@@ -179,10 +183,11 @@ static int vlan_newlink(struct net *src_net, struct net_device *dev,
		return -EINVAL;

	err = vlan_changelink(dev, tb, data, extack);
-	if (err < 0)
-		return err;
-
-	return register_vlan_dev(dev, extack);
+	if (!err)
+		err = register_vlan_dev(dev, extack);
+	if (err)
+		vlan_dev_uninit(dev);
+	return err;
}

static inline size_t vlan_qos_map_size(unsigned int n)
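The vlan_newlink() change above follows a common rollback shape: configure first, register second, and undo the configuration if either step fails, so resources such as the egress priority maps are not leaked. A generic, self-contained sketch of that shape (all names are illustrative stand-ins, not the VLAN code):

```c
#include <stdbool.h>

/* Toy device with two setup stages. */
struct toy_dev {
	bool configured;
	bool registered;
};

static int toy_configure(struct toy_dev *d, bool fail)
{
	if (fail)
		return -1;
	d->configured = true;
	return 0;
}

static int toy_register(struct toy_dev *d, bool fail)
{
	if (fail)
		return -1;
	d->registered = true;
	return 0;
}

static void toy_uninit(struct toy_dev *d)
{
	d->configured = false;	/* release what configuration acquired */
}

/* Mirrors the fixed flow: any failure rolls back the configuration. */
static int toy_newlink(struct toy_dev *d, bool cfg_fail, bool reg_fail)
{
	int err = toy_configure(d, cfg_fail);
	if (!err)
		err = toy_register(d, reg_fail);
	if (err)
		toy_uninit(d);	/* plays the role of vlan_dev_uninit() */
	return err;
}
```

The original code returned early on a changelink error, so a later register_vlan_dev() failure left the configured state (and its allocations) behind; funneling both outcomes through one cleanup path closes that leak.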


@@ -384,10 +384,11 @@ next:	;
	return 1;
}

-static inline int check_target(struct arpt_entry *e, const char *name)
+static int check_target(struct arpt_entry *e, struct net *net, const char *name)
{
	struct xt_entry_target *t = arpt_get_target(e);
	struct xt_tgchk_param par = {
+		.net	= net,
		.table	= name,
		.entryinfo = e,
		.target	= t->u.kernel.target,
@@ -399,8 +400,9 @@ static inline int check_target(struct arpt_entry *e, const char *name)
	return xt_check_target(&par, t->u.target_size - sizeof(*t), 0, false);
}

-static inline int
-find_check_entry(struct arpt_entry *e, const char *name, unsigned int size,
+static int
+find_check_entry(struct arpt_entry *e, struct net *net, const char *name,
+		 unsigned int size,
		 struct xt_percpu_counter_alloc_state *alloc_state)
{
	struct xt_entry_target *t;
@@ -419,7 +421,7 @@ find_check_entry(struct arpt_entry *e, const char *name, unsigned int size,
	}
	t->u.kernel.target = target;

-	ret = check_target(e, name);
+	ret = check_target(e, net, name);
	if (ret)
		goto err;
	return 0;
@@ -512,7 +514,9 @@ static inline void cleanup_entry(struct arpt_entry *e)
/* Checks and translates the user-supplied table segment (held in
 * newinfo).
 */
-static int translate_table(struct xt_table_info *newinfo, void *entry0,
+static int translate_table(struct net *net,
+			   struct xt_table_info *newinfo,
+			   void *entry0,
			   const struct arpt_replace *repl)
{
	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
@@ -569,7 +573,7 @@ static int translate_table(struct xt_table_info *newinfo, void *entry0,
	/* Finally, each sanity check must pass */
	i = 0;
	xt_entry_foreach(iter, entry0, newinfo->size) {
-		ret = find_check_entry(iter, repl->name, repl->size,
+		ret = find_check_entry(iter, net, repl->name, repl->size,
				       &alloc_state);
		if (ret != 0)
			break;
@@ -974,7 +978,7 @@ static int do_replace(struct net *net, const void __user *user,
		goto free_newinfo;
	}

-	ret = translate_table(newinfo, loc_cpu_entry, &tmp);
+	ret = translate_table(net, newinfo, loc_cpu_entry, &tmp);
	if (ret != 0)
		goto free_newinfo;
@@ -1149,7 +1153,8 @@ compat_copy_entry_from_user(struct compat_arpt_entry *e, void **dstptr,
	}
}

-static int translate_compat_table(struct xt_table_info **pinfo,
+static int translate_compat_table(struct net *net,
+				  struct xt_table_info **pinfo,
				  void **pentry0,
				  const struct compat_arpt_replace *compatr)
{
@@ -1217,7 +1222,7 @@ static int translate_compat_table(struct xt_table_info **pinfo,
	repl.num_counters = 0;
	repl.counters = NULL;
	repl.size = newinfo->size;
-	ret = translate_table(newinfo, entry1, &repl);
+	ret = translate_table(net, newinfo, entry1, &repl);
	if (ret)
		goto free_newinfo;
@@ -1270,7 +1275,7 @@ static int compat_do_replace(struct net *net, void __user *user,
		goto free_newinfo;
	}

-	ret = translate_compat_table(&newinfo, &loc_cpu_entry, &tmp);
+	ret = translate_compat_table(net, &newinfo, &loc_cpu_entry, &tmp);
	if (ret != 0)
		goto free_newinfo;
@@ -1546,7 +1551,7 @@ int arpt_register_table(struct net *net,
	loc_cpu_entry = newinfo->entries;
	memcpy(loc_cpu_entry, repl->entries, repl->size);

-	ret = translate_table(newinfo, loc_cpu_entry, repl);
+	ret = translate_table(net, newinfo, loc_cpu_entry, repl);
	if (ret != 0)
		goto out_free;


@@ -1727,8 +1727,11 @@ tcp_sacktag_write_queue(struct sock *sk, const struct sk_buff *ack_skb,
		}

		/* Ignore very old stuff early */
-		if (!after(sp[used_sacks].end_seq, prior_snd_una))
+		if (!after(sp[used_sacks].end_seq, prior_snd_una)) {
+			if (i == 0)
+				first_sack_index = -1;
			continue;
+		}

		used_sacks++;
	}


@@ -1848,6 +1848,7 @@ static int ip_set_utest(struct net *net, struct sock *ctnl, struct sk_buff *skb,
	struct ip_set *set;
	struct nlattr *tb[IPSET_ATTR_ADT_MAX + 1] = {};
	int ret = 0;
+	u32 lineno;

	if (unlikely(protocol_min_failed(attr) ||
		     !attr[IPSET_ATTR_SETNAME] ||
@@ -1864,7 +1865,7 @@ static int ip_set_utest(struct net *net, struct sock *ctnl, struct sk_buff *skb,
		return -IPSET_ERR_PROTOCOL;

	rcu_read_lock_bh();
-	ret = set->variant->uadt(set, tb, IPSET_TEST, NULL, 0, 0);
+	ret = set->variant->uadt(set, tb, IPSET_TEST, &lineno, 0, 0);
	rcu_read_unlock_bh();

	/* Userspace can't trigger element to be re-added */
	if (ret == -EAGAIN)


@@ -677,6 +677,9 @@ static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[],
 	unsigned int *timeouts = data;
 	int i;
 
+	if (!timeouts)
+		timeouts = dn->dccp_timeout;
+
 	/* set default DCCP timeouts. */
 	for (i=0; i<CT_DCCP_MAX; i++)
 		timeouts[i] = dn->dccp_timeout[i];


@@ -594,6 +594,9 @@ static int sctp_timeout_nlattr_to_obj(struct nlattr *tb[],
 	struct nf_sctp_net *sn = nf_sctp_pernet(net);
 	int i;
 
+	if (!timeouts)
+		timeouts = sn->timeouts;
+
 	/* set default SCTP timeouts. */
 	for (i=0; i<SCTP_CONNTRACK_MAX; i++)
 		timeouts[i] = sn->timeouts[i];
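Both the DCCP and SCTP hunks apply the same defensive pattern: when the caller passes a NULL output array (as happens when only the defaults should be applied), fall back to the per-netns default table instead of dereferencing NULL. The pattern in miniature, with hypothetical names standing in for the conntrack structures:

```c
#include <stddef.h>

enum { STATE_MAX = 4 }; /* stand-in for CT_DCCP_MAX / SCTP_CONNTRACK_MAX */

/* Sketch of the fixed nlattr_to_obj() shape: if no destination was
 * supplied, write the defaults into the default table itself, which
 * is a harmless no-op rather than a NULL dereference. */
static void fill_timeouts(unsigned int *timeouts,
			  unsigned int defaults[STATE_MAX])
{
	int i;

	if (!timeouts)
		timeouts = defaults;

	for (i = 0; i < STATE_MAX; i++)
		timeouts[i] = defaults[i];
}
```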


@@ -134,11 +134,6 @@ static void flow_offload_fixup_tcp(struct ip_ct_tcp *tcp)
 #define NF_FLOWTABLE_TCP_PICKUP_TIMEOUT	(120 * HZ)
 #define NF_FLOWTABLE_UDP_PICKUP_TIMEOUT	(30 * HZ)
 
-static inline __s32 nf_flow_timeout_delta(unsigned int timeout)
-{
-	return (__s32)(timeout - (u32)jiffies);
-}
-
 static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
 {
 	const struct nf_conntrack_l4proto *l4proto;
@@ -232,7 +227,7 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)
 {
 	int err;
 
-	flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;
+	flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT;
 
 	err = rhashtable_insert_fast(&flow_table->rhashtable,
 				     &flow->tuplehash[0].node,
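The helper moved out of this file, `nf_flow_timeout_delta()`, encodes the standard jiffies idiom: store an absolute 32-bit deadline and recover a signed distance to "now" by unsigned subtraction, which stays correct across counter wraparound. A userspace sketch of that arithmetic (names illustrative):

```c
#include <stdint.h>

/* Same idiom as nf_flow_timeout_delta(): the unsigned difference
 * between a stored 32-bit deadline and the current tick count,
 * reinterpreted as signed, is positive while the deadline is in
 * the future and negative once it has passed -- even when the
 * tick counter has wrapped in between. */
static int32_t timeout_delta(uint32_t timeout, uint32_t now)
{
	return (int32_t)(timeout - now);
}

static int timed_out(uint32_t timeout, uint32_t now)
{
	return timeout_delta(timeout, now) <= 0;
}
```

Comparing `timeout > now` directly would break for any deadline scheduled across the wraparound boundary, which is why the series funnels every comparison through this helper.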


@@ -280,7 +280,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 	if (nf_flow_nat_ip(flow, skb, thoff, dir) < 0)
 		return NF_DROP;
 
-	flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;
+	flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT;
 	iph = ip_hdr(skb);
 	ip_decrease_ttl(iph);
 	skb->tstamp = 0;
@@ -509,7 +509,7 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 	if (nf_flow_nat_ipv6(flow, skb, dir) < 0)
 		return NF_DROP;
 
-	flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;
+	flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT;
 	ip6h = ipv6_hdr(skb);
 	ip6h->hop_limit--;
 	skb->tstamp = 0;


@@ -166,24 +166,38 @@ static int flow_offload_eth_dst(struct net *net,
 				enum flow_offload_tuple_dir dir,
 				struct nf_flow_rule *flow_rule)
 {
-	const struct flow_offload_tuple *tuple = &flow->tuplehash[dir].tuple;
 	struct flow_action_entry *entry0 = flow_action_entry_next(flow_rule);
 	struct flow_action_entry *entry1 = flow_action_entry_next(flow_rule);
+	const void *daddr = &flow->tuplehash[!dir].tuple.src_v4;
+	const struct dst_entry *dst_cache;
+	unsigned char ha[ETH_ALEN];
 	struct neighbour *n;
 	u32 mask, val;
+	u8 nud_state;
 	u16 val16;
 
-	n = dst_neigh_lookup(tuple->dst_cache, &tuple->dst_v4);
+	dst_cache = flow->tuplehash[dir].tuple.dst_cache;
+	n = dst_neigh_lookup(dst_cache, daddr);
 	if (!n)
 		return -ENOENT;
 
+	read_lock_bh(&n->lock);
+	nud_state = n->nud_state;
+	ether_addr_copy(ha, n->ha);
+	read_unlock_bh(&n->lock);
+
+	if (!(nud_state & NUD_VALID)) {
+		neigh_release(n);
+		return -ENOENT;
+	}
+
 	mask = ~0xffffffff;
-	memcpy(&val, n->ha, 4);
+	memcpy(&val, ha, 4);
 	flow_offload_mangle(entry0, FLOW_ACT_MANGLE_HDR_TYPE_ETH, 0,
 			    &val, &mask);
 	mask = ~0x0000ffff;
-	memcpy(&val16, n->ha + 4, 2);
+	memcpy(&val16, ha + 4, 2);
 	val = val16;
 	flow_offload_mangle(entry1, FLOW_ACT_MANGLE_HDR_TYPE_ETH, 4,
 			    &val, &mask);
@@ -335,22 +349,26 @@ static void flow_offload_port_snat(struct net *net,
 				   struct nf_flow_rule *flow_rule)
 {
 	struct flow_action_entry *entry = flow_action_entry_next(flow_rule);
-	u32 mask = ~htonl(0xffff0000), port;
+	u32 mask, port;
 	u32 offset;
 
 	switch (dir) {
 	case FLOW_OFFLOAD_DIR_ORIGINAL:
 		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port);
 		offset = 0; /* offsetof(struct tcphdr, source); */
+		port = htonl(port << 16);
+		mask = ~htonl(0xffff0000);
 		break;
 	case FLOW_OFFLOAD_DIR_REPLY:
 		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port);
 		offset = 0; /* offsetof(struct tcphdr, dest); */
+		port = htonl(port);
+		mask = ~htonl(0xffff);
 		break;
 	default:
 		return;
 	}
-	port = htonl(port << 16);
+
 	flow_offload_mangle(entry, flow_offload_l4proto(flow), offset,
 			    &port, &mask);
 }
@@ -361,22 +379,26 @@ static void flow_offload_port_dnat(struct net *net,
 				   struct nf_flow_rule *flow_rule)
 {
 	struct flow_action_entry *entry = flow_action_entry_next(flow_rule);
-	u32 mask = ~htonl(0xffff), port;
+	u32 mask, port;
 	u32 offset;
 
 	switch (dir) {
 	case FLOW_OFFLOAD_DIR_ORIGINAL:
-		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port);
-		offset = 0; /* offsetof(struct tcphdr, source); */
+		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_port);
+		offset = 0; /* offsetof(struct tcphdr, dest); */
+		port = htonl(port);
+		mask = ~htonl(0xffff);
 		break;
 	case FLOW_OFFLOAD_DIR_REPLY:
-		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port);
-		offset = 0; /* offsetof(struct tcphdr, dest); */
+		port = ntohs(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_port);
+		offset = 0; /* offsetof(struct tcphdr, source); */
+		port = htonl(port << 16);
+		mask = ~htonl(0xffff0000);
 		break;
 	default:
 		return;
 	}
-	port = htonl(port);
+
 	flow_offload_mangle(entry, flow_offload_l4proto(flow), offset,
 			    &port, &mask);
 }
@@ -759,9 +781,9 @@ void nf_flow_offload_stats(struct nf_flowtable *flowtable,
 			   struct flow_offload *flow)
 {
 	struct flow_offload_work *offload;
-	s64 delta;
+	__s32 delta;
 
-	delta = flow->timeout - jiffies;
+	delta = nf_flow_timeout_delta(flow->timeout);
 	if ((delta >= (9 * NF_FLOW_TIMEOUT) / 10) ||
 	    flow->flags & FLOW_OFFLOAD_HW_DYING)
 		return;
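The snat/dnat hunks above turn on a detail of the TCP header layout: source and destination port share the first 32-bit word, so a 32-bit mangle at offset 0 must place the new port in the correct half and clear only that half with the mask. A userspace sketch of the packing, operating on the word exactly as it sits on the wire (helper names are illustrative):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Rewrite the source port: it occupies the high 16 bits of the
 * first header word, so shift it up before byte-swapping and mask
 * out only that half. */
static uint32_t mangle_src_port(uint32_t word, uint16_t port)
{
	uint32_t mask = ~htonl(0xffff0000);

	return (word & mask) | htonl((uint32_t)port << 16);
}

/* Rewrite the destination port: low 16 bits, complementary mask.
 * Swapping these two halves (or their masks) rewrites the wrong
 * port -- the bug the hunks above fix. */
static uint32_t mangle_dst_port(uint32_t word, uint16_t port)
{
	uint32_t mask = ~htonl(0x0000ffff);

	return (word & mask) | htonl(port);
}
```

Because every constant and value goes through `htonl()`, the assertions below hold regardless of host endianness.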


@@ -5984,6 +5984,7 @@ nft_flowtable_type_get(struct net *net, u8 family)
 	return ERR_PTR(-ENOENT);
 }
 
+/* Only called from error and netdev event paths. */
 static void nft_unregister_flowtable_hook(struct net *net,
 					  struct nft_flowtable *flowtable,
 					  struct nft_hook *hook)
@@ -5999,7 +6000,7 @@ static void nft_unregister_flowtable_net_hooks(struct net *net,
 	struct nft_hook *hook;
 
 	list_for_each_entry(hook, &flowtable->hook_list, list)
-		nft_unregister_flowtable_hook(net, flowtable, hook);
+		nf_unregister_net_hook(net, &hook->ops);
 }
 
 static int nft_register_flowtable_net_hooks(struct net *net,
@@ -6448,12 +6449,14 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
 {
 	struct nft_hook *hook, *next;
 
-	flowtable->data.type->free(&flowtable->data);
 	list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+		flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+					    FLOW_BLOCK_UNBIND);
 		list_del_rcu(&hook->list);
 		kfree(hook);
 	}
 	kfree(flowtable->name);
+	flowtable->data.type->free(&flowtable->data);
 	module_put(flowtable->data.type->owner);
 	kfree(flowtable);
 }
@@ -6497,6 +6500,7 @@ static void nft_flowtable_event(unsigned long event, struct net_device *dev,
 		if (hook->ops.dev != dev)
 			continue;
 
+		/* flow_offload_netdev_event() cleans up entries for us. */
 		nft_unregister_flowtable_hook(dev_net(dev), flowtable, hook);
 		list_del_rcu(&hook->list);
 		kfree_rcu(hook, rcu);


@@ -200,9 +200,6 @@ static void nft_flow_offload_activate(const struct nft_ctx *ctx,
 static void nft_flow_offload_destroy(const struct nft_ctx *ctx,
 				     const struct nft_expr *expr)
 {
-	struct nft_flow_offload *priv = nft_expr_priv(expr);
-
-	priv->flowtable->use--;
 	nf_ct_netns_put(ctx->net, ctx->family);
 }


@@ -196,7 +196,7 @@ static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb,
 	hdr->size = cpu_to_le32(len);
 	hdr->confirm_rx = 0;
 
-	skb_put_padto(skb, ALIGN(len, 4));
+	skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
 
 	mutex_lock(&node->ep_lock);
 	if (node->ep)
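The qrtr bug is pure length accounting: `len` counts only the payload, but the skb already contains the header, so padding to `ALIGN(len, 4)` asked for a total shorter than the data already queued. The corrected arithmetic, sketched in userspace (macro and helper names are illustrative):

```c
#include <stddef.h>

/* Round x up to a multiple of power-of-two a, like the kernel's
 * ALIGN() macro. */
#define QRTR_ALIGN(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* The pad target handed to skb_put_padto() must cover everything in
 * the buffer: the fixed header plus the payload rounded up to 4
 * bytes. Padding to QRTR_ALIGN(payload_len, 4) alone undercounts by
 * exactly sizeof(header). */
static size_t padded_total(size_t hdr_len, size_t payload_len)
{
	return QRTR_ALIGN(payload_len, 4) + hdr_len;
}
```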


@@ -1769,7 +1769,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 						      q->avg_window_begin));
 		u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
 
-		do_div(b, window_interval);
+		b = div64_u64(b, window_interval);
 		q->avg_peak_bandwidth =
 			cake_ewma(q->avg_peak_bandwidth, b,
 				  b > q->avg_peak_bandwidth ? 2 : 8);
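`do_div()` takes a 32-bit divisor, so a `window_interval` wider than 32 bits was silently truncated; when its low 32 bits happened to be zero, that truncation became a divide by zero. `div64_u64()` keeps the full 64-bit divisor. A userspace sketch of the failure mode (helper names are illustrative; `div64_u64_sketch` stands in for the kernel helper):

```c
#include <stdint.h>

/* What do_div() effectively sees: only the low 32 bits of the
 * divisor survive, so a divisor of exactly 2^32 becomes 0. */
static uint32_t truncated_divisor(uint64_t d)
{
	return (uint32_t)d;
}

/* Userspace stand-in for div64_u64(): the full 64-bit divisor is
 * used, so large intervals divide correctly instead of crashing. */
static uint64_t div64_u64_sketch(uint64_t n, uint64_t d)
{
	return n / d;
}
```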


@@ -786,10 +786,12 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
 	if (tb[TCA_FQ_QUANTUM]) {
 		u32 quantum = nla_get_u32(tb[TCA_FQ_QUANTUM]);
 
-		if (quantum > 0)
+		if (quantum > 0 && quantum <= (1 << 20)) {
 			q->quantum = quantum;
-		else
+		} else {
+			NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
 			err = -EINVAL;
+		}
 	}
 
 	if (tb[TCA_FQ_INITIAL_QUANTUM])


@@ -292,8 +292,14 @@ static int prio_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
 	struct tc_prio_qopt_offload graft_offload;
 	unsigned long band = arg - 1;
 
-	if (new == NULL)
-		new = &noop_qdisc;
+	if (!new) {
+		new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+					TC_H_MAKE(sch->handle, arg), extack);
+		if (!new)
+			new = &noop_qdisc;
+		else
+			qdisc_hash_add(new, true);
+	}
 
 	*old = qdisc_replace(sch, new, &q->queues[band]);


@@ -1363,8 +1363,10 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
 			/* Generate an INIT ACK chunk. */
 			new_obj = sctp_make_init_ack(asoc, chunk, GFP_ATOMIC,
 						     0);
-			if (!new_obj)
-				goto nomem;
+			if (!new_obj) {
+				error = -ENOMEM;
+				break;
+			}
 
 			sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 					SCTP_CHUNK(new_obj));
@@ -1386,7 +1388,8 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
 			if (!new_obj) {
 				if (cmd->obj.chunk)
 					sctp_chunk_free(cmd->obj.chunk);
-				goto nomem;
+				error = -ENOMEM;
+				break;
 			}
 			sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 					SCTP_CHUNK(new_obj));
@@ -1433,8 +1436,10 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
 
 			/* Generate a SHUTDOWN chunk. */
 			new_obj = sctp_make_shutdown(asoc, chunk);
-			if (!new_obj)
-				goto nomem;
+			if (!new_obj) {
+				error = -ENOMEM;
+				break;
+			}
 			sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 					SCTP_CHUNK(new_obj));
 			break;
@@ -1770,11 +1775,17 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
 			break;
 		}
 
-		if (error)
+		if (error) {
+			cmd = sctp_next_cmd(commands);
+			while (cmd) {
+				if (cmd->verb == SCTP_CMD_REPLY)
+					sctp_chunk_free(cmd->obj.chunk);
+				cmd = sctp_next_cmd(commands);
+			}
 			break;
+		}
 	}
 
-out:
 	/* If this is in response to a received chunk, wait until
 	 * we are done with the packet to open the queue so that we don't
 	 * send multiple packets in response to a single request.
@@ -1789,7 +1800,4 @@ out:
 		sp->data_ready_signalled = 0;
 
 	return error;
-
-nomem:
-	error = -ENOMEM;
-	goto out;
 }


@@ -9,7 +9,7 @@ tipc-y	+= addr.o bcast.o bearer.o \
 	   core.o link.o discover.o msg.o  \
 	   name_distr.o subscr.o monitor.o name_table.o net.o  \
 	   netlink.o netlink_compat.o node.o socket.o eth_media.o \
-	   topsrv.o socket.o group.o trace.o
+	   topsrv.o group.o trace.o
 
 CFLAGS_trace.o += -I$(src)
@@ -20,5 +20,3 @@ tipc-$(CONFIG_TIPC_CRYPTO)	+= crypto.o
 obj-$(CONFIG_TIPC_DIAG)	+= diag.o
-
-tipc_diag-y	:= diag.o


@@ -204,8 +204,8 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
 		return -ENOMEM;
 	}
 
-	attrbuf = kmalloc_array(tipc_genl_family.maxattr + 1,
-				sizeof(struct nlattr *), GFP_KERNEL);
+	attrbuf = kcalloc(tipc_genl_family.maxattr + 1,
+			  sizeof(struct nlattr *), GFP_KERNEL);
 	if (!attrbuf) {
 		err = -ENOMEM;
 		goto err_out;


@@ -287,12 +287,12 @@ static void tipc_sk_respond(struct sock *sk, struct sk_buff *skb, int err)
  *
  * Caller must hold socket lock
  */
-static void tsk_rej_rx_queue(struct sock *sk)
+static void tsk_rej_rx_queue(struct sock *sk, int error)
 {
 	struct sk_buff *skb;
 
 	while ((skb = __skb_dequeue(&sk->sk_receive_queue)))
-		tipc_sk_respond(sk, skb, TIPC_ERR_NO_PORT);
+		tipc_sk_respond(sk, skb, error);
 }
 
 static bool tipc_sk_connected(struct sock *sk)
@@ -545,34 +545,45 @@ static void __tipc_shutdown(struct socket *sock, int error)
 	/* Remove pending SYN */
 	__skb_queue_purge(&sk->sk_write_queue);
 
-	/* Reject all unreceived messages, except on an active connection
-	 * (which disconnects locally & sends a 'FIN+' to peer).
-	 */
-	while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
-		if (TIPC_SKB_CB(skb)->bytes_read) {
-			kfree_skb(skb);
-			continue;
-		}
-		if (!tipc_sk_type_connectionless(sk) &&
-		    sk->sk_state != TIPC_DISCONNECTING) {
-			tipc_set_sk_state(sk, TIPC_DISCONNECTING);
-			tipc_node_remove_conn(net, dnode, tsk->portid);
-		}
-		tipc_sk_respond(sk, skb, error);
+	/* Remove partially received buffer if any */
+	skb = skb_peek(&sk->sk_receive_queue);
+	if (skb && TIPC_SKB_CB(skb)->bytes_read) {
+		__skb_unlink(skb, &sk->sk_receive_queue);
+		kfree_skb(skb);
 	}
 
-	if (tipc_sk_type_connectionless(sk))
+	/* Reject all unreceived messages if connectionless */
+	if (tipc_sk_type_connectionless(sk)) {
+		tsk_rej_rx_queue(sk, error);
 		return;
+	}
 
-	if (sk->sk_state != TIPC_DISCONNECTING) {
+	switch (sk->sk_state) {
+	case TIPC_CONNECTING:
+	case TIPC_ESTABLISHED:
+		tipc_set_sk_state(sk, TIPC_DISCONNECTING);
+		tipc_node_remove_conn(net, dnode, tsk->portid);
+		/* Send a FIN+/- to its peer */
+		skb = __skb_dequeue(&sk->sk_receive_queue);
+		if (skb) {
+			__skb_queue_purge(&sk->sk_receive_queue);
+			tipc_sk_respond(sk, skb, error);
+			break;
+		}
 		skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE,
 				      TIPC_CONN_MSG, SHORT_H_SIZE, 0, dnode,
 				      tsk_own_node(tsk), tsk_peer_port(tsk),
 				      tsk->portid, error);
 		if (skb)
 			tipc_node_xmit_skb(net, skb, dnode, tsk->portid);
-		tipc_node_remove_conn(net, dnode, tsk->portid);
-		tipc_set_sk_state(sk, TIPC_DISCONNECTING);
+		break;
+	case TIPC_LISTEN:
+		/* Reject all SYN messages */
+		tsk_rej_rx_queue(sk, error);
+		break;
+	default:
+		__skb_queue_purge(&sk->sk_receive_queue);
+		break;
 	}
 }
@@ -2432,8 +2443,8 @@ static int tipc_wait_for_connect(struct socket *sock, long *timeo_p)
 			return sock_intr_errno(*timeo_p);
 
 		add_wait_queue(sk_sleep(sk), &wait);
-		done = sk_wait_event(sk, timeo_p,
-				     sk->sk_state != TIPC_CONNECTING, &wait);
+		done = sk_wait_event(sk, timeo_p, tipc_sk_connected(sk),
+				     &wait);
 		remove_wait_queue(sk_sleep(sk), &wait);
 	} while (!done);
 	return 0;
@@ -2643,7 +2654,7 @@ static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags,
 	 * Reject any stray messages received by new socket
 	 * before the socket lock was taken (very, very unlikely)
 	 */
-	tsk_rej_rx_queue(new_sk);
+	tsk_rej_rx_queue(new_sk, TIPC_ERR_NO_PORT);
 
 	/* Connect new socket to it's peer */
 	tipc_sk_finish_conn(new_tsock, msg_origport(msg), msg_orignode(msg));


@@ -1,6 +1,9 @@
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
 ALL_TESTS="loopback_test"
 NUM_NETIFS=2
 source tc_common.sh
@@ -72,6 +75,11 @@ setup_prepare()
 
 	h1_create
 	h2_create
+
+	if ethtool -k $h1 | grep loopback | grep -q fixed; then
+		log_test "SKIP: dev $h1 does not support loopback feature"
+		exit $ksft_skip
+	fi
 }
 
 cleanup()