Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

 1) Disable RISCV BPF JIT builds when !MMU, from Björn Töpel.

 2) nf_tables leaves dangling pointer after free, fix from Eric Dumazet.

 3) Out of boundary write in __xsk_rcv_memcpy(), fix from Li RongQing.

 4) Adjust icmp6 message source address selection when routes have a
    preferred source address set, from Tim Stallard.

 5) Be sure to validate HSR protocol version when creating new links,
    from Taehee Yoo.

 6) CAP_NET_ADMIN should be sufficient to manage l2tp tunnels even in
    non-initial namespaces, from Michael Weiß.

 7) Missing release firmware call in mlx5, from Eran Ben Elisha.

 8) Fix variable type in macsec_changelink(), caught by KASAN. Fix from
    Taehee Yoo.

 9) Fix pause frame negotiation in marvell phy driver, from Clemens
    Gruber.

10) Record RX queue early enough in tun packet paths such that XDP
    programs will see the correct RX queue index, from Gilberto Bertin.

11) Fix double unlock in mptcp, from Florian Westphal.

12) Fix offset overflow in ARM bpf JIT, from Luke Nelson.

13) marvell10g needs to soft reset PHY when coming out of low power
    mode, from Russell King.

14) Fix MTU setting regression in stmmac for some chip types, from
    Florian Fainelli.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (101 commits)
  amd-xgbe: Use __napi_schedule() in BH context
  mISDN: make dmril and dmrim static
  net: stmmac: dwmac-sunxi: Provide TX and RX fifo sizes
  net: dsa: mt7530: fix tagged frames pass-through in VLAN-unaware mode
  tipc: fix incorrect increasing of link window
  Documentation: Fix tcp_challenge_ack_limit default value
  net: tulip: make early_486_chipsets static
  dt-bindings: net: ethernet-phy: add description for ethernet-phy-id1234.d400
  ipv6: remove redundant assignment to variable err
  net/rds: Use ERR_PTR for rds_message_alloc_sgs()
  net: mscc: ocelot: fix untagged packet drops when enslaving to vlan aware bridge
  selftests/bpf: Check for correct program attach/detach in xdp_attach test
  libbpf: Fix type of old_fd in bpf_xdp_set_link_opts
  libbpf: Always specify expected_attach_type on program load if supported
  xsk: Add missing check on user supplied headroom size
  mac80211: fix channel switch trigger from unknown mesh peer
  mac80211: fix race in ieee80211_register_hw()
  net: marvell10g: soft-reset the PHY when coming out of low power
  net: marvell10g: report firmware version
  net/cxgb4: Check the return from t4_query_params properly
  ...
Linus Torvalds 2020-04-16 14:52:29 -07:00
commit c8372665b4
108 changed files with 1078 additions and 690 deletions


@ -43,6 +43,9 @@ properties:
second group of digits is the Phy Identifier 2 register,
this is the chip vendor OUI bits 19:24, followed by 10
bits of a vendor specific ID.
- items:
- pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
- const: ethernet-phy-ieee802.3-c22
- items:
- pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
- const: ethernet-phy-ieee802.3-c45
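(For illustration only: a board device tree would pair such an ID-specific
compatible with the generic fallback, e.g.
compatible = "ethernet-phy-id1234.d400", "ethernet-phy-ieee802.3-c22";
where the ID value shown is the placeholder used by the binding itself.)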


@ -22,6 +22,8 @@ Optional properties:
- fsl,err006687-workaround-present: If present indicates that the system has
the hardware workaround for ERR006687 applied and does not need a software
workaround.
- gpr: phandle of SoC general purpose register mode. Required for wake on LAN
on some SoCs
- interrupt-names: names of the interrupts listed in interrupts property in
the same order. The defaults if not specified are
__Number of interrupts__ __Default__


@ -257,6 +257,8 @@ drivers:
* :doc:`netdevsim`
* :doc:`mlxsw`
.. _Generic-Packet-Trap-Groups:
Generic Packet Trap Groups
==========================


@ -22,6 +22,7 @@ Contents:
z8530book
msg_zerocopy
failover
net_dim
net_failover
phy
sfp-phylink


@ -812,7 +812,7 @@ tcp_limit_output_bytes - INTEGER
tcp_challenge_ack_limit - INTEGER
Limits number of Challenge ACK sent per second, as recommended
in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
Default: 100
Default: 1000
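A quick way to confirm the active value on a running kernel is to read the
procfs node that backs this sysctl; a minimal userspace sketch (the path
follows the usual net.ipv4 layout):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/net/ipv4/tcp_challenge_ack_limit", "r");
		int limit;

		if (!f || fscanf(f, "%d", &limit) != 1)
			return 1;	/* sysctl absent or unreadable */
		printf("tcp_challenge_ack_limit = %d\n", limit);
		fclose(f);
		return 0;
	}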
tcp_rx_skb_cache - BOOLEAN
Controls a per TCP socket cache of one skb, that might help


@ -1,28 +1,20 @@
======================================================
Net DIM - Generic Network Dynamic Interrupt Moderation
======================================================
Author:
Tal Gilboa <talgi@mellanox.com>
:Author: Tal Gilboa <talgi@mellanox.com>
.. contents:: :depth: 2
Contents
=========
- Assumptions
- Introduction
- The Net DIM Algorithm
- Registering a Network Device to DIM
- Example
Part 0: Assumptions
======================
Assumptions
===========
This document assumes the reader has basic knowledge in network drivers
and in general interrupt moderation.
Part I: Introduction
======================
Introduction
============
Dynamic Interrupt Moderation (DIM) (in networking) refers to changing the
interrupt moderation configuration of a channel in order to optimize packet
@ -41,14 +33,15 @@ number of wanted packets per event. The Net DIM algorithm ascribes importance to
increase bandwidth over reducing interrupt rate.
Part II: The Net DIM Algorithm
===============================
Net DIM Algorithm
=================
Each iteration of the Net DIM algorithm follows these steps:
1. Calculates new data sample.
2. Compares it to previous sample.
3. Makes a decision - suggests interrupt moderation configuration fields.
4. Applies a schedule work function, which applies suggested configuration.
#. Calculates new data sample.
#. Compares it to previous sample.
#. Makes a decision - suggests interrupt moderation configuration fields.
#. Applies a schedule work function, which applies suggested configuration.
The first two steps are straightforward, both the new and the previous data are
supplied by the driver registered to Net DIM. The previous data is the new data
@ -89,19 +82,21 @@ manoeuvre as it may provide partial data or ignore the algorithm suggestion
under some conditions.
Part III: Registering a Network Device to DIM
==============================================
Registering a Network Device to DIM
===================================
Net DIM API exposes the main function net_dim(struct dim *dim,
struct dim_sample end_sample). This function is the entry point to the Net
Net DIM API exposes the main function net_dim().
This function is the entry point to the Net
DIM algorithm and has to be called every time the driver would like to check if
it should change interrupt moderation parameters. The driver should provide two
data structures: struct dim and struct dim_sample. Struct dim
data structures: :c:type:`struct dim <dim>` and
:c:type:`struct dim_sample <dim_sample>`. :c:type:`struct dim <dim>`
describes the state of DIM for a specific object (RX queue, TX queue,
other queues, etc.). This includes the current selected profile, previous data
samples, the callback function provided by the driver and more.
Struct dim_sample describes a data sample, which will be compared to the
data sample stored in struct dim in order to decide on the algorithm's next
:c:type:`struct dim_sample <dim_sample>` describes a data sample,
which will be compared to the data sample stored in :c:type:`struct dim <dim>`
in order to decide on the algorithm's next
step. The sample should include bytes, packets and interrupts, measured by
the driver.
@ -110,9 +105,10 @@ main net_dim() function. The recommended method is to call net_dim() on each
interrupt. Since Net DIM has a built-in moderation and it might decide to skip
iterations under certain conditions, there is no need to moderate the net_dim()
calls as well. As mentioned above, the driver needs to provide an object of type
struct dim to the net_dim() function call. It is advised for each entity
using Net DIM to hold a struct dim as part of its data structure and use it
as the main Net DIM API object. The struct dim_sample should hold the latest
:c:type:`struct dim <dim>` to the net_dim() function call. It is advised for
each entity using Net DIM to hold a :c:type:`struct dim <dim>` as part of its
data structure and use it as the main Net DIM API object.
The :c:type:`struct dim_sample <dim_sample>` should hold the latest
bytes, packets and interrupts count. No need to perform any calculations, just
include the raw data.
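As a sketch of that last point, include/linux/dim.h provides
dim_update_sample() for storing the raw counters (the my_entity fields
below are placeholders for whatever the driver tracks):

.. code-block:: c

	struct dim_sample dim_sample;

	/* Cumulative raw counters only - Net DIM derives the rates itself */
	dim_update_sample(my_entity->events, my_entity->packets,
			  my_entity->bytes, &dim_sample);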
@ -124,19 +120,19 @@ the data flow. After the work is done, Net DIM algorithm needs to be set to
the proper state in order to move to the next iteration.
Part IV: Example
=================
Example
=======
The following code demonstrates how to register a driver to Net DIM. The actual
usage is not complete but it should make the outline of the usage clear.
my_driver.c:
.. code-block:: c
#include <linux/dim.h>
#include <linux/dim.h>
/* Callback for net DIM to schedule on a decision to change moderation */
void my_driver_do_dim_work(struct work_struct *work)
{
/* Callback for net DIM to schedule on a decision to change moderation */
void my_driver_do_dim_work(struct work_struct *work)
{
/* Get struct dim from struct work_struct */
struct dim *dim = container_of(work, struct dim,
work);
@ -145,11 +141,11 @@ void my_driver_do_dim_work(struct work_struct *work)
/* Signal net DIM work is done and it should move to next iteration */
dim->state = DIM_START_MEASURE;
}
}
/* My driver's interrupt handler */
int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
{
/* My driver's interrupt handler */
int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
{
...
/* A struct to hold current measured data */
struct dim_sample dim_sample;
@ -162,13 +158,19 @@ int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
/* Call net DIM */
net_dim(&my_entity->dim, dim_sample);
...
}
}
/* My entity's initialization function (my_entity was already allocated) */
int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
{
/* My entity's initialization function (my_entity was already allocated) */
int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
{
...
/* Initiate struct work_struct with my driver's callback function */
INIT_WORK(&my_entity->dim.work, my_driver_do_dim_work);
...
}
}
Dynamic Interrupt Moderation (DIM) library API
==============================================
.. kernel-doc:: include/linux/dim.h
:internal:


@ -5934,6 +5934,7 @@ M: Tal Gilboa <talgi@mellanox.com>
S: Maintained
F: include/linux/dim.h
F: lib/dim/
F: Documentation/networking/net_dim.rst
DZ DECSTATION DZ11 SERIAL DRIVER
M: "Maciej W. Rozycki" <macro@linux-mips.org>


@ -1039,13 +1039,13 @@
compatible = "fsl,imx6q-fec";
reg = <0x02188000 0x4000>;
interrupt-names = "int0", "pps";
interrupts-extended =
<&intc 0 118 IRQ_TYPE_LEVEL_HIGH>,
<&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
<0 119 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks IMX6QDL_CLK_ENET>,
<&clks IMX6QDL_CLK_ENET>,
<&clks IMX6QDL_CLK_ENET_REF>;
clock-names = "ipg", "ahb", "ptp";
gpr = <&gpr>;
status = "disabled";
};


@ -77,7 +77,6 @@
};
&fec {
/delete-property/interrupts-extended;
interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
<0 119 IRQ_TYPE_LEVEL_HIGH>;
};


@ -929,7 +929,11 @@ static inline void emit_a32_rsh_i64(const s8 dst[],
rd = arm_bpf_get_reg64(dst, tmp, ctx);
/* Do LSR operation */
if (val < 32) {
if (val == 0) {
/* An immediate value of 0 encodes a shift amount of 32
* for LSR. To shift by 0, don't do anything.
*/
} else if (val < 32) {
emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_LSR, val), ctx);
@ -955,7 +959,11 @@ static inline void emit_a32_arsh_i64(const s8 dst[],
rd = arm_bpf_get_reg64(dst, tmp, ctx);
/* Do ARSH operation */
if (val < 32) {
if (val == 0) {
/* An immediate value of 0 encodes a shift amount of 32
* for ASR. To shift by 0, don't do anything.
*/
} else if (val < 32) {
emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_ASR, val), ctx);
@ -992,21 +1000,35 @@ static inline void emit_a32_mul_r64(const s8 dst[], const s8 src[],
arm_bpf_put_reg32(dst_hi, rd[0], ctx);
}
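/* Check whether @off fits the signed immediate offset field of an ARM
 * load/store instruction for the given BPF access size.
 */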
static bool is_ldst_imm(s16 off, const u8 size)
{
s16 off_max = 0;
switch (size) {
case BPF_B:
case BPF_W:
off_max = 0xfff;
break;
case BPF_H:
off_max = 0xff;
break;
case BPF_DW:
/* Need to make sure off+4 does not overflow. */
off_max = 0xfff - 4;
break;
}
return -off_max <= off && off <= off_max;
}
/* *(size *)(dst + off) = src */
static inline void emit_str_r(const s8 dst, const s8 src[],
s32 off, struct jit_ctx *ctx, const u8 sz){
s16 off, struct jit_ctx *ctx, const u8 sz){
const s8 *tmp = bpf2a32[TMP_REG_1];
s32 off_max;
s8 rd;
rd = arm_bpf_get_reg32(dst, tmp[1], ctx);
if (sz == BPF_H)
off_max = 0xff;
else
off_max = 0xfff;
if (off < 0 || off > off_max) {
if (!is_ldst_imm(off, sz)) {
emit_a32_mov_i(tmp[0], off, ctx);
emit(ARM_ADD_R(tmp[0], tmp[0], rd), ctx);
rd = tmp[0];
@ -1035,18 +1057,12 @@ static inline void emit_str_r(const s8 dst, const s8 src[],
/* dst = *(size*)(src + off) */
static inline void emit_ldx_r(const s8 dst[], const s8 src,
s32 off, struct jit_ctx *ctx, const u8 sz){
s16 off, struct jit_ctx *ctx, const u8 sz){
const s8 *tmp = bpf2a32[TMP_REG_1];
const s8 *rd = is_stacked(dst_lo) ? tmp : dst;
s8 rm = src;
s32 off_max;
if (sz == BPF_H)
off_max = 0xff;
else
off_max = 0xfff;
if (off < 0 || off > off_max) {
if (!is_ldst_imm(off, sz)) {
emit_a32_mov_i(tmp[0], off, ctx);
emit(ARM_ADD_R(tmp[0], tmp[0], src), ctx);
rm = tmp[0];


@ -55,7 +55,7 @@ config RISCV
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_MMIOWB
select ARCH_HAS_DEBUG_VIRTUAL
select HAVE_EBPF_JIT
select HAVE_EBPF_JIT if MMU
select EDAC_SUPPORT
select ARCH_HAS_GIGANTIC_PAGE
select ARCH_HAS_SET_DIRECT_MAP


@ -110,6 +110,16 @@ static bool is_32b_int(s64 val)
return -(1L << 31) <= val && val < (1L << 31);
}
static bool in_auipc_jalr_range(s64 val)
{
/*
* auipc+jalr can reach any signed PC-relative offset in the range
* [-2^31 - 2^11, 2^31 - 2^11).
*/
return (-(1L << 31) - (1L << 11)) <= val &&
val < ((1L << 31) - (1L << 11));
}
static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
{
/* Note that the immediate from the add is sign-extended,
@ -380,20 +390,24 @@ static void emit_sext_32_rd(u8 *rd, struct rv_jit_context *ctx)
*rd = RV_REG_T2;
}
static void emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
struct rv_jit_context *ctx)
static int emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
struct rv_jit_context *ctx)
{
s64 upper, lower;
if (rvoff && is_21b_int(rvoff) && !force_jalr) {
emit(rv_jal(rd, rvoff >> 1), ctx);
return;
return 0;
} else if (in_auipc_jalr_range(rvoff)) {
upper = (rvoff + (1 << 11)) >> 12;
lower = rvoff & 0xfff;
emit(rv_auipc(RV_REG_T1, upper), ctx);
emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
return 0;
}
upper = (rvoff + (1 << 11)) >> 12;
lower = rvoff & 0xfff;
emit(rv_auipc(RV_REG_T1, upper), ctx);
emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
pr_err("bpf-jit: target offset 0x%llx is out of range\n", rvoff);
return -ERANGE;
}
static bool is_signed_bpf_cond(u8 cond)
@ -407,18 +421,16 @@ static int emit_call(bool fixed, u64 addr, struct rv_jit_context *ctx)
s64 off = 0;
u64 ip;
u8 rd;
int ret;
if (addr && ctx->insns) {
ip = (u64)(long)(ctx->insns + ctx->ninsns);
off = addr - ip;
if (!is_32b_int(off)) {
pr_err("bpf-jit: target call addr %pK is out of range\n",
(void *)addr);
return -ERANGE;
}
}
emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
ret = emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
if (ret)
return ret;
rd = bpf_to_rv_reg(BPF_REG_0, ctx);
emit(rv_addi(rd, RV_REG_A0, 0), ctx);
return 0;
@ -429,7 +441,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
{
bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
BPF_CLASS(insn->code) == BPF_JMP;
int s, e, rvoff, i = insn - ctx->prog->insnsi;
int s, e, rvoff, ret, i = insn - ctx->prog->insnsi;
struct bpf_prog_aux *aux = ctx->prog->aux;
u8 rd = -1, rs = -1, code = insn->code;
s16 off = insn->off;
@ -699,7 +711,9 @@ out_be:
/* JUMP off */
case BPF_JMP | BPF_JA:
rvoff = rv_offset(i, off, ctx);
emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
if (ret)
return ret;
break;
/* IF (dst COND src) JUMP off */
@ -801,7 +815,6 @@ out_be:
case BPF_JMP | BPF_CALL:
{
bool fixed;
int ret;
u64 addr;
mark_call(ctx);
@ -826,7 +839,9 @@ out_be:
break;
rvoff = epilogue_offset(ctx);
emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
if (ret)
return ret;
break;
/* dst = imm64 */


@ -743,10 +743,10 @@ check_send(struct isar_hw *isar, u8 rdm)
}
}
const char *dmril[] = {"NO SPEED", "1200/75", "NODEF2", "75/1200", "NODEF4",
static const char *dmril[] = {"NO SPEED", "1200/75", "NODEF2", "75/1200", "NODEF4",
"300", "600", "1200", "2400", "4800", "7200",
"9600nt", "9600t", "12000", "14400", "WRONG"};
const char *dmrim[] = {"NO MOD", "NO DEF", "V32/V32b", "V22", "V21",
static const char *dmrim[] = {"NO MOD", "NO DEF", "V32/V32b", "V22", "V21",
"Bell103", "V23", "Bell202", "V17", "V29", "V27ter"};
static void


@ -66,58 +66,6 @@ static const struct mt7530_mib_desc mt7530_mib[] = {
MIB_DESC(1, 0xb8, "RxArlDrop"),
};
static int
mt7623_trgmii_write(struct mt7530_priv *priv, u32 reg, u32 val)
{
int ret;
ret = regmap_write(priv->ethernet, TRGMII_BASE(reg), val);
if (ret < 0)
dev_err(priv->dev,
"failed to priv write register\n");
return ret;
}
static u32
mt7623_trgmii_read(struct mt7530_priv *priv, u32 reg)
{
int ret;
u32 val;
ret = regmap_read(priv->ethernet, TRGMII_BASE(reg), &val);
if (ret < 0) {
dev_err(priv->dev,
"failed to priv read register\n");
return ret;
}
return val;
}
static void
mt7623_trgmii_rmw(struct mt7530_priv *priv, u32 reg,
u32 mask, u32 set)
{
u32 val;
val = mt7623_trgmii_read(priv, reg);
val &= ~mask;
val |= set;
mt7623_trgmii_write(priv, reg, val);
}
static void
mt7623_trgmii_set(struct mt7530_priv *priv, u32 reg, u32 val)
{
mt7623_trgmii_rmw(priv, reg, 0, val);
}
static void
mt7623_trgmii_clear(struct mt7530_priv *priv, u32 reg, u32 val)
{
mt7623_trgmii_rmw(priv, reg, val, 0);
}
static int
core_read_mmd_indirect(struct mt7530_priv *priv, int prtad, int devad)
{
@ -530,27 +478,6 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, int mode)
for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
mt7530_rmw(priv, MT7530_TRGMII_RD(i),
RD_TAP_MASK, RD_TAP(16));
else
if (priv->id != ID_MT7621)
mt7623_trgmii_set(priv, GSW_INTF_MODE,
INTF_MODE_TRGMII);
return 0;
}
static int
mt7623_pad_clk_setup(struct dsa_switch *ds)
{
struct mt7530_priv *priv = ds->priv;
int i;
for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
mt7623_trgmii_write(priv, GSW_TRGMII_TD_ODT(i),
TD_DM_DRVP(8) | TD_DM_DRVN(8));
mt7623_trgmii_set(priv, GSW_TRGMII_RCK_CTRL, RX_RST | RXC_DQSISEL);
mt7623_trgmii_clear(priv, GSW_TRGMII_RCK_CTRL, RX_RST);
return 0;
}
@ -846,8 +773,9 @@ mt7530_port_set_vlan_unaware(struct dsa_switch *ds, int port)
*/
mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
MT7530_PORT_MATRIX_MODE);
mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
VLAN_ATTR(MT7530_VLAN_TRANSPARENT));
mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
VLAN_ATTR(MT7530_VLAN_TRANSPARENT) |
PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
for (i = 0; i < MT7530_NUM_PORTS; i++) {
if (dsa_is_user_port(ds, i) &&
@ -863,8 +791,8 @@ mt7530_port_set_vlan_unaware(struct dsa_switch *ds, int port)
if (all_user_ports_removed) {
mt7530_write(priv, MT7530_PCR_P(MT7530_CPU_PORT),
PCR_MATRIX(dsa_user_ports(priv->ds)));
mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT),
PORT_SPEC_TAG);
mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT), PORT_SPEC_TAG
| PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
}
}
@ -890,8 +818,9 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
/* Set the port as a user port which is to be able to recognize VID
* from incoming packets before fetching entry within the VLAN table.
*/
mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
VLAN_ATTR(MT7530_VLAN_USER));
mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
VLAN_ATTR(MT7530_VLAN_USER) |
PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
}
static void
@ -1303,10 +1232,6 @@ mt7530_setup(struct dsa_switch *ds)
dn = dsa_to_port(ds, MT7530_CPU_PORT)->master->dev.of_node->parent;
if (priv->id == ID_MT7530) {
priv->ethernet = syscon_node_to_regmap(dn);
if (IS_ERR(priv->ethernet))
return PTR_ERR(priv->ethernet);
regulator_set_voltage(priv->core_pwr, 1000000, 1000000);
ret = regulator_enable(priv->core_pwr);
if (ret < 0) {
@ -1380,6 +1305,10 @@ mt7530_setup(struct dsa_switch *ds)
mt7530_cpu_port_enable(priv, i);
else
mt7530_port_disable(ds, i);
/* Enable consistent egress tag */
mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
}
/* Setup port 5 */
@ -1468,14 +1397,6 @@ static void mt7530_phylink_mac_config(struct dsa_switch *ds, int port,
/* Setup TX circuit incluing relevant PAD and driving */
mt7530_pad_clk_setup(ds, state->interface);
if (priv->id == ID_MT7530) {
/* Setup RX circuit, relevant PAD and driving on the
* host which must be placed after the setup on the
* device side is all finished.
*/
mt7623_pad_clk_setup(ds);
}
priv->p6_interface = state->interface;
break;
default:


@ -172,9 +172,16 @@ enum mt7530_port_mode {
/* Register for port vlan control */
#define MT7530_PVC_P(x) (0x2010 + ((x) * 0x100))
#define PORT_SPEC_TAG BIT(5)
#define PVC_EG_TAG(x) (((x) & 0x7) << 8)
#define PVC_EG_TAG_MASK PVC_EG_TAG(7)
#define VLAN_ATTR(x) (((x) & 0x3) << 6)
#define VLAN_ATTR_MASK VLAN_ATTR(3)
enum mt7530_vlan_port_eg_tag {
MT7530_VLAN_EG_DISABLED = 0,
MT7530_VLAN_EG_CONSISTENT = 1,
};
enum mt7530_vlan_port_attr {
MT7530_VLAN_USER = 0,
MT7530_VLAN_TRANSPARENT = 3,
@ -277,7 +284,6 @@ enum mt7530_vlan_port_attr {
/* Registers for TRGMII on the both side */
#define MT7530_TRGMII_RCK_CTRL 0x7a00
#define GSW_TRGMII_RCK_CTRL 0x300
#define RX_RST BIT(31)
#define RXC_DQSISEL BIT(30)
#define DQSI1_TAP_MASK (0x7f << 8)
@ -286,31 +292,24 @@ enum mt7530_vlan_port_attr {
#define DQSI0_TAP(x) ((x) & 0x7f)
#define MT7530_TRGMII_RCK_RTT 0x7a04
#define GSW_TRGMII_RCK_RTT 0x304
#define DQS1_GATE BIT(31)
#define DQS0_GATE BIT(30)
#define MT7530_TRGMII_RD(x) (0x7a10 + (x) * 8)
#define GSW_TRGMII_RD(x) (0x310 + (x) * 8)
#define BSLIP_EN BIT(31)
#define EDGE_CHK BIT(30)
#define RD_TAP_MASK 0x7f
#define RD_TAP(x) ((x) & 0x7f)
#define GSW_TRGMII_TXCTRL 0x340
#define MT7530_TRGMII_TXCTRL 0x7a40
#define TRAIN_TXEN BIT(31)
#define TXC_INV BIT(30)
#define TX_RST BIT(28)
#define MT7530_TRGMII_TD_ODT(i) (0x7a54 + 8 * (i))
#define GSW_TRGMII_TD_ODT(i) (0x354 + 8 * (i))
#define TD_DM_DRVP(x) ((x) & 0xf)
#define TD_DM_DRVN(x) (((x) & 0xf) << 4)
#define GSW_INTF_MODE 0x390
#define INTF_MODE_TRGMII BIT(1)
#define MT7530_TRGMII_TCK_CTRL 0x7a78
#define TCK_TAP(x) (((x) & 0xf) << 8)
@ -443,7 +442,6 @@ static const char *p5_intf_modes(unsigned int p5_interface)
* @ds: The pointer to the dsa core structure
* @bus: The bus used for the device and built-in PHY
* @rstc: The pointer to reset control used by MCM
* @ethernet: The regmap used for access TRGMII-based registers
* @core_pwr: The power supplied into the core
* @io_pwr: The power supplied into the I/O
* @reset: The descriptor for GPIO line tied to its reset pin
@ -460,7 +458,6 @@ struct mt7530_priv {
struct dsa_switch *ds;
struct mii_bus *bus;
struct reset_control *rstc;
struct regmap *ethernet;
struct regulator *core_pwr;
struct regulator *io_pwr;
struct gpio_desc *reset;


@ -709,7 +709,8 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
ops = chip->info->ops;
mv88e6xxx_reg_lock(chip);
if (!mv88e6xxx_port_ppu_updates(chip, port) && ops->port_set_link)
if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
mode == MLO_AN_FIXED) && ops->port_set_link)
err = ops->port_set_link(chip, port, LINK_FORCED_DOWN);
mv88e6xxx_reg_unlock(chip);
@ -731,7 +732,7 @@ static void mv88e6xxx_mac_link_up(struct dsa_switch *ds, int port,
ops = chip->info->ops;
mv88e6xxx_reg_lock(chip);
if (!mv88e6xxx_port_ppu_updates(chip, port)) {
if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
/* FIXME: for an automedia port, should we force the link
* down here - what if the link comes up due to "other" media
* while we're bringing the port up, how is the exclusivity


@ -46,11 +46,8 @@ static int felix_fdb_add(struct dsa_switch *ds, int port,
const unsigned char *addr, u16 vid)
{
struct ocelot *ocelot = ds->priv;
bool vlan_aware;
vlan_aware = dsa_port_is_vlan_filtering(dsa_to_port(ds, port));
return ocelot_fdb_add(ocelot, port, addr, vid, vlan_aware);
return ocelot_fdb_add(ocelot, port, addr, vid);
}
static int felix_fdb_del(struct dsa_switch *ds, int port,


@ -514,7 +514,7 @@ static void xgbe_isr_task(unsigned long data)
xgbe_disable_rx_tx_ints(pdata);
/* Turn on polling */
__napi_schedule_irqoff(&pdata->napi);
__napi_schedule(&pdata->napi);
}
} else {
/* Don't clear Rx/Tx status if doing per channel DMA


@ -3742,7 +3742,7 @@ int t4_phy_fw_ver(struct adapter *adap, int *phy_fw_ver)
FW_PARAMS_PARAM_Z_V(FW_PARAMS_PARAM_DEV_PHYFW_VERSION));
ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
&param, &val);
if (ret < 0)
if (ret)
return ret;
*phy_fw_ver = val;
return 0;


@ -1277,7 +1277,7 @@ static const struct net_device_ops tulip_netdev_ops = {
#endif
};
const struct pci_device_id early_486_chipsets[] = {
static const struct pci_device_id early_486_chipsets[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82424) },
{ PCI_DEVICE(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496) },
{ },


@ -488,6 +488,12 @@ struct fec_enet_priv_rx_q {
struct sk_buff *rx_skbuff[RX_RING_SIZE];
};
struct fec_stop_mode_gpr {
struct regmap *gpr;
u8 reg;
u8 bit;
};
/* The FEC buffer descriptors track the ring buffers. The rx_bd_base and
* tx_bd_base always point to the base of the buffer descriptors. The
* cur_rx and cur_tx point to the currently available buffer.
@ -562,6 +568,7 @@ struct fec_enet_private {
int hwts_tx_en;
struct delayed_work time_keep;
struct regulator *reg_phy;
struct fec_stop_mode_gpr stop_gpr;
unsigned int tx_align;
unsigned int rx_align;


@ -62,6 +62,8 @@
#include <linux/if_vlan.h>
#include <linux/pinctrl/consumer.h>
#include <linux/prefetch.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include <soc/imx/cpuidle.h>
#include <asm/cacheflush.h>
@ -84,6 +86,56 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
#define FEC_ENET_OPD_V 0xFFF0
#define FEC_MDIO_PM_TIMEOUT 100 /* ms */
struct fec_devinfo {
u32 quirks;
u8 stop_gpr_reg;
u8 stop_gpr_bit;
};
static const struct fec_devinfo fec_imx25_info = {
.quirks = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
FEC_QUIRK_HAS_FRREG,
};
static const struct fec_devinfo fec_imx27_info = {
.quirks = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
};
static const struct fec_devinfo fec_imx28_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_FRREG,
};
static const struct fec_devinfo fec_imx6q_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
FEC_QUIRK_HAS_RACC,
.stop_gpr_reg = 0x34,
.stop_gpr_bit = 27,
};
static const struct fec_devinfo fec_mvf600_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
};
static const struct fec_devinfo fec_imx6x_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
};
static const struct fec_devinfo fec_imx6ul_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_COALESCE,
};
static struct platform_device_id fec_devtype[] = {
{
/* keep it for coldfire */
@ -91,39 +143,25 @@ static struct platform_device_id fec_devtype[] = {
.driver_data = 0,
}, {
.name = "imx25-fec",
.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
FEC_QUIRK_HAS_FRREG,
.driver_data = (kernel_ulong_t)&fec_imx25_info,
}, {
.name = "imx27-fec",
.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
.driver_data = (kernel_ulong_t)&fec_imx27_info,
}, {
.name = "imx28-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_FRREG,
.driver_data = (kernel_ulong_t)&fec_imx28_info,
}, {
.name = "imx6q-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
FEC_QUIRK_HAS_RACC,
.driver_data = (kernel_ulong_t)&fec_imx6q_info,
}, {
.name = "mvf600-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
.driver_data = (kernel_ulong_t)&fec_mvf600_info,
}, {
.name = "imx6sx-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
.driver_data = (kernel_ulong_t)&fec_imx6x_info,
}, {
.name = "imx6ul-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_COALESCE,
.driver_data = (kernel_ulong_t)&fec_imx6ul_info,
}, {
/* sentinel */
}
@ -1092,11 +1130,28 @@ fec_restart(struct net_device *ndev)
}
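/* Drive the SoC stop-mode signal: use the GPR regmap when one was set up,
 * otherwise fall back to the legacy platform sleep_mode_enable() callback.
 */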
static void fec_enet_stop_mode(struct fec_enet_private *fep, bool enabled)
{
struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
struct fec_stop_mode_gpr *stop_gpr = &fep->stop_gpr;
if (stop_gpr->gpr) {
if (enabled)
regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
BIT(stop_gpr->bit),
BIT(stop_gpr->bit));
else
regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
BIT(stop_gpr->bit), 0);
} else if (pdata && pdata->sleep_mode_enable) {
pdata->sleep_mode_enable(enabled);
}
}
static void
fec_stop(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
u32 val;
@ -1125,9 +1180,7 @@ fec_stop(struct net_device *ndev)
val = readl(fep->hwp + FEC_ECNTRL);
val |= (FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
writel(val, fep->hwp + FEC_ECNTRL);
if (pdata && pdata->sleep_mode_enable)
pdata->sleep_mode_enable(true);
fec_enet_stop_mode(fep, true);
}
writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
@ -3398,6 +3451,37 @@ static int fec_enet_get_irq_cnt(struct platform_device *pdev)
return irq_cnt;
}
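/* Look up the optional "gpr" syscon phandle and record which register and
 * bit drive stop mode; a missing property is not an error.
 */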
static int fec_enet_init_stop_mode(struct fec_enet_private *fep,
struct fec_devinfo *dev_info,
struct device_node *np)
{
struct device_node *gpr_np;
int ret = 0;
if (!dev_info)
return 0;
gpr_np = of_parse_phandle(np, "gpr", 0);
if (!gpr_np)
return 0;
fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np);
if (IS_ERR(fep->stop_gpr.gpr)) {
dev_err(&fep->pdev->dev, "could not find gpr regmap\n");
ret = PTR_ERR(fep->stop_gpr.gpr);
fep->stop_gpr.gpr = NULL;
goto out;
}
fep->stop_gpr.reg = dev_info->stop_gpr_reg;
fep->stop_gpr.bit = dev_info->stop_gpr_bit;
out:
of_node_put(gpr_np);
return ret;
}
static int
fec_probe(struct platform_device *pdev)
{
@ -3413,6 +3497,7 @@ fec_probe(struct platform_device *pdev)
int num_rx_qs;
char irq_name[8];
int irq_cnt;
struct fec_devinfo *dev_info;
fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
@ -3430,7 +3515,9 @@ fec_probe(struct platform_device *pdev)
of_id = of_match_device(fec_dt_ids, &pdev->dev);
if (of_id)
pdev->id_entry = of_id->data;
fep->quirks = pdev->id_entry->driver_data;
dev_info = (struct fec_devinfo *)pdev->id_entry->driver_data;
if (dev_info)
fep->quirks = dev_info->quirks;
fep->netdev = ndev;
fep->num_rx_queues = num_rx_qs;
@ -3464,6 +3551,10 @@ fec_probe(struct platform_device *pdev)
if (of_get_property(np, "fsl,magic-packet", NULL))
fep->wol_flag |= FEC_WOL_HAS_MAGIC_PACKET;
ret = fec_enet_init_stop_mode(fep, dev_info, np);
if (ret)
goto failed_stop_mode;
phy_node = of_parse_phandle(np, "phy-handle", 0);
if (!phy_node && of_phy_is_fixed_link(np)) {
ret = of_phy_register_fixed_link(np);
@ -3632,6 +3723,7 @@ failed_clk:
if (of_phy_is_fixed_link(np))
of_phy_deregister_fixed_link(np);
of_node_put(phy_node);
failed_stop_mode:
failed_phy:
dev_id--;
failed_ioremap:
@ -3709,7 +3801,6 @@ static int __maybe_unused fec_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct fec_enet_private *fep = netdev_priv(ndev);
struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
int ret;
int val;
@ -3727,8 +3818,8 @@ static int __maybe_unused fec_resume(struct device *dev)
goto failed_clk;
}
if (fep->wol_flag & FEC_WOL_FLAG_ENABLE) {
if (pdata && pdata->sleep_mode_enable)
pdata->sleep_mode_enable(false);
fec_enet_stop_mode(fep, false);
val = readl(fep->hwp + FEC_ECNTRL);
val &= ~(FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
writel(val, fep->hwp + FEC_ECNTRL);


@ -5383,7 +5383,7 @@ static int __init mvneta_driver_init(void)
{
int ret;
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvmeta:online",
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvneta:online",
mvneta_cpu_online,
mvneta_cpu_down_prepare);
if (ret < 0)


@ -65,6 +65,17 @@ u32 mtk_r32(struct mtk_eth *eth, unsigned reg)
return __raw_readl(eth->base + reg);
}
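/* Read-modify-write helper: clear @mask, then set @set, in register @reg. */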
u32 mtk_m32(struct mtk_eth *eth, u32 mask, u32 set, unsigned reg)
{
u32 val;
val = mtk_r32(eth, reg);
val &= ~mask;
val |= set;
mtk_w32(eth, val, reg);
return reg;
}
static int mtk_mdio_busy_wait(struct mtk_eth *eth)
{
unsigned long t_start = jiffies;
@ -193,7 +204,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
struct mtk_mac *mac = container_of(config, struct mtk_mac,
phylink_config);
struct mtk_eth *eth = mac->hw;
u32 mcr_cur, mcr_new, sid;
u32 mcr_cur, mcr_new, sid, i;
int val, ge_mode, err;
/* MT76x8 has no hardware settings between for the MAC */
@ -255,6 +266,17 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
PHY_INTERFACE_MODE_TRGMII)
mtk_gmac0_rgmii_adjust(mac->hw,
state->speed);
/* mt7623_pad_clk_setup */
for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
mtk_w32(mac->hw,
TD_DM_DRVP(8) | TD_DM_DRVN(8),
TRGMII_TD_ODT(i));
/* Assert/release MT7623 RXC reset */
mtk_m32(mac->hw, 0, RXC_RST | RXC_DQSISEL,
TRGMII_RCK_CTRL);
mtk_m32(mac->hw, RXC_RST, 0, TRGMII_RCK_CTRL);
}
}


@ -352,10 +352,13 @@
#define DQSI0(x) ((x << 0) & GENMASK(6, 0))
#define DQSI1(x) ((x << 8) & GENMASK(14, 8))
#define RXCTL_DMWTLAT(x) ((x << 16) & GENMASK(18, 16))
#define RXC_RST BIT(31)
#define RXC_DQSISEL BIT(30)
#define RCK_CTRL_RGMII_1000 (RXC_DQSISEL | RXCTL_DMWTLAT(2) | DQSI1(16))
#define RCK_CTRL_RGMII_10_100 RXCTL_DMWTLAT(2)
#define NUM_TRGMII_CTRL 5
/* TRGMII RXC control register */
#define TRGMII_TCK_CTRL 0x10340
#define TXCTL_DMWTLAT(x) ((x << 16) & GENMASK(18, 16))
@ -363,6 +366,11 @@
#define TCK_CTRL_RGMII_1000 TXCTL_DMWTLAT(2)
#define TCK_CTRL_RGMII_10_100 (TXC_INV | TXCTL_DMWTLAT(2))
/* TRGMII TX Drive Strength */
#define TRGMII_TD_ODT(i) (0x10354 + 8 * (i))
#define TD_DM_DRVP(x) ((x) & 0xf)
#define TD_DM_DRVN(x) (((x) & 0xf) << 4)
/* TRGMII Interface mode register */
#define INTF_MODE 0x10390
#define TRGMII_INTF_DIS BIT(0)


@ -23,7 +23,10 @@ static int mlx5_devlink_flash_update(struct devlink *devlink,
if (err)
return err;
return mlx5_firmware_flash(dev, fw, extack);
err = mlx5_firmware_flash(dev, fw, extack);
release_firmware(fw);
return err;
}
static u8 mlx5_fw_ver_major(u32 version)


@ -67,11 +67,9 @@ struct mlx5_ct_ft {
struct nf_flowtable *nf_ft;
struct mlx5_tc_ct_priv *ct_priv;
struct rhashtable ct_entries_ht;
struct list_head ct_entries_list;
};
struct mlx5_ct_entry {
struct list_head list;
u16 zone;
struct rhash_head node;
struct flow_rule *flow_rule;
@ -617,8 +615,6 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
if (err)
goto err_insert;
list_add(&entry->list, &ft->ct_entries_list);
return 0;
err_insert:
@ -646,7 +642,6 @@ mlx5_tc_ct_block_flow_offload_del(struct mlx5_ct_ft *ft,
WARN_ON(rhashtable_remove_fast(&ft->ct_entries_ht,
&entry->node,
cts_ht_params));
list_del(&entry->list);
kfree(entry);
return 0;
@ -818,7 +813,6 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone,
ft->zone = zone;
ft->nf_ft = nf_ft;
ft->ct_priv = ct_priv;
INIT_LIST_HEAD(&ft->ct_entries_list);
refcount_set(&ft->refcount, 1);
err = rhashtable_init(&ft->ct_entries_ht, &cts_ht_params);
@ -847,12 +841,12 @@ err_init:
}
static void
mlx5_tc_ct_flush_ft(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
{
struct mlx5_ct_entry *entry;
struct mlx5_tc_ct_priv *ct_priv = arg;
struct mlx5_ct_entry *entry = ptr;
list_for_each_entry(entry, &ft->ct_entries_list, list)
mlx5_tc_ct_entry_del_rules(ft->ct_priv, entry);
mlx5_tc_ct_entry_del_rules(ct_priv, entry);
}
static void
@ -863,9 +857,10 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
nf_flow_table_offload_del_cb(ft->nf_ft,
mlx5_tc_ct_block_flow_offload, ft);
mlx5_tc_ct_flush_ft(ct_priv, ft);
rhashtable_remove_fast(&ct_priv->zone_ht, &ft->node, zone_params);
rhashtable_destroy(&ft->ct_entries_ht);
rhashtable_free_and_destroy(&ft->ct_entries_ht,
mlx5_tc_ct_flush_ft_entry,
ct_priv);
kfree(ft);
}


@ -5526,8 +5526,8 @@ static void mlx5e_remove(struct mlx5_core_dev *mdev, void *vpriv)
#ifdef CONFIG_MLX5_CORE_EN_DCB
mlx5e_dcbnl_delete_app(priv);
#endif
mlx5e_devlink_port_unregister(priv);
unregister_netdev(priv->netdev);
mlx5e_devlink_port_unregister(priv);
mlx5e_detach(mdev, vpriv);
mlx5e_destroy_netdev(priv);
}


@ -2050,29 +2050,30 @@ static int register_devlink_port(struct mlx5_core_dev *dev,
struct mlx5_eswitch_rep *rep = rpriv->rep;
struct netdev_phys_item_id ppid = {};
unsigned int dl_port_index = 0;
u16 pfnum;
if (!is_devlink_port_supported(dev, rpriv))
return 0;
mlx5e_rep_get_port_parent_id(rpriv->netdev, &ppid);
pfnum = PCI_FUNC(dev->pdev->devfn);
if (rep->vport == MLX5_VPORT_UPLINK) {
devlink_port_attrs_set(&rpriv->dl_port,
DEVLINK_PORT_FLAVOUR_PHYSICAL,
PCI_FUNC(dev->pdev->devfn), false, 0,
pfnum, false, 0,
&ppid.id[0], ppid.id_len);
dl_port_index = vport_to_devlink_port_index(dev, rep->vport);
} else if (rep->vport == MLX5_VPORT_PF) {
devlink_port_attrs_pci_pf_set(&rpriv->dl_port,
&ppid.id[0], ppid.id_len,
dev->pdev->devfn);
pfnum);
dl_port_index = rep->vport;
} else if (mlx5_eswitch_is_vf_vport(dev->priv.eswitch,
rpriv->rep->vport)) {
devlink_port_attrs_pci_vf_set(&rpriv->dl_port,
&ppid.id[0], ppid.id_len,
dev->pdev->devfn,
rep->vport - 1);
pfnum, rep->vport - 1);
dl_port_index = vport_to_devlink_port_index(dev, rep->vport);
}


@ -1343,7 +1343,8 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
if (err)
return err;
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
!(attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR)) {
err = mlx5e_attach_mod_hdr(priv, flow, parse_attr);
dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts);
if (err)
@ -3558,12 +3559,13 @@ static int add_vlan_pop_action(struct mlx5e_priv *priv,
struct mlx5_esw_flow_attr *attr,
u32 *action)
{
int nest_level = attr->parse_attr->filter_dev->lower_level;
struct flow_action_entry vlan_act = {
.id = FLOW_ACTION_VLAN_POP,
};
int err = 0;
int nest_level, err = 0;
nest_level = attr->parse_attr->filter_dev->lower_level -
priv->netdev->lower_level;
while (nest_level--) {
err = parse_tc_vlan_action(priv, &vlan_act, attr, action);
if (err)


@ -403,7 +403,6 @@ enum {
MLX5_ESW_ATTR_FLAG_VLAN_HANDLED = BIT(0),
MLX5_ESW_ATTR_FLAG_SLOW_PATH = BIT(1),
MLX5_ESW_ATTR_FLAG_NO_IN_PORT = BIT(2),
MLX5_ESW_ATTR_FLAG_HAIRPIN = BIT(3),
};
struct mlx5_esw_flow_attr {


@ -300,7 +300,6 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
bool split = !!(attr->split_count);
struct mlx5_flow_handle *rule;
struct mlx5_flow_table *fdb;
bool hairpin = false;
int j, i = 0;
if (esw->mode != MLX5_ESWITCH_OFFLOADS)
@ -398,21 +397,16 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
goto err_esw_get;
}
if (mlx5_eswitch_termtbl_required(esw, attr, &flow_act, spec)) {
if (mlx5_eswitch_termtbl_required(esw, attr, &flow_act, spec))
rule = mlx5_eswitch_add_termtbl_rule(esw, fdb, spec, attr,
&flow_act, dest, i);
hairpin = true;
} else {
else
rule = mlx5_add_flow_rules(fdb, spec, &flow_act, dest, i);
}
if (IS_ERR(rule))
goto err_add_rule;
else
atomic64_inc(&esw->offloads.num_flows);
if (hairpin)
attr->flags |= MLX5_ESW_ATTR_FLAG_HAIRPIN;
return rule;
err_add_rule:
@ -501,7 +495,7 @@ __mlx5_eswitch_del_rule(struct mlx5_eswitch *esw,
mlx5_del_flow_rules(rule);
if (attr->flags & MLX5_ESW_ATTR_FLAG_HAIRPIN) {
if (!(attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)) {
/* unref the term table */
for (i = 0; i < MLX5_MAX_FLOW_FWD_VPORTS; i++) {
if (attr->dests[i].termtbl)


@ -243,7 +243,7 @@ recover_from_sw_reset:
if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_DISABLED)
break;
cond_resched();
msleep(20);
} while (!time_after(jiffies, end));
if (mlx5_get_nic_state(dev) != MLX5_NIC_IFC_DISABLED) {


@ -183,58 +183,11 @@ static void ocelot_vlan_mode(struct ocelot *ocelot, int port,
ocelot_write(ocelot, val, ANA_VLANMASK);
}
void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
bool vlan_aware)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
u32 val;
if (vlan_aware)
val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1);
else
val = 0;
ocelot_rmw_gix(ocelot, val,
ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M,
ANA_PORT_VLAN_CFG, port);
if (vlan_aware && !ocelot_port->vid)
/* If port is vlan-aware and tagged, drop untagged and priority
* tagged frames.
*/
val = ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA;
else
val = 0;
ocelot_rmw_gix(ocelot, val,
ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA,
ANA_PORT_DROP_CFG, port);
if (vlan_aware) {
if (ocelot_port->vid)
/* Tag all frames except when VID == DEFAULT_VLAN */
val |= REW_TAG_CFG_TAG_CFG(1);
else
/* Tag all frames */
val |= REW_TAG_CFG_TAG_CFG(3);
} else {
/* Port tagging disabled. */
val = REW_TAG_CFG_TAG_CFG(0);
}
ocelot_rmw_gix(ocelot, val,
REW_TAG_CFG_TAG_CFG_M,
REW_TAG_CFG, port);
}
EXPORT_SYMBOL(ocelot_port_vlan_filtering);
static int ocelot_port_set_native_vlan(struct ocelot *ocelot, int port,
u16 vid)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
u32 val = 0;
if (ocelot_port->vid != vid) {
/* Always permit deleting the native VLAN (vid = 0) */
@ -251,9 +204,59 @@ static int ocelot_port_set_native_vlan(struct ocelot *ocelot, int port,
REW_PORT_VLAN_CFG_PORT_VID_M,
REW_PORT_VLAN_CFG, port);
if (ocelot_port->vlan_aware && !ocelot_port->vid)
/* If port is vlan-aware and tagged, drop untagged and priority
* tagged frames.
*/
val = ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA;
ocelot_rmw_gix(ocelot, val,
ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA,
ANA_PORT_DROP_CFG, port);
if (ocelot_port->vlan_aware) {
if (ocelot_port->vid)
/* Tag all frames except when VID == DEFAULT_VLAN */
val = REW_TAG_CFG_TAG_CFG(1);
else
/* Tag all frames */
val = REW_TAG_CFG_TAG_CFG(3);
} else {
/* Port tagging disabled. */
val = REW_TAG_CFG_TAG_CFG(0);
}
ocelot_rmw_gix(ocelot, val,
REW_TAG_CFG_TAG_CFG_M,
REW_TAG_CFG, port);
return 0;
}
void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
bool vlan_aware)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
u32 val;
ocelot_port->vlan_aware = vlan_aware;
if (vlan_aware)
val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1);
else
val = 0;
ocelot_rmw_gix(ocelot, val,
ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M,
ANA_PORT_VLAN_CFG, port);
ocelot_port_set_native_vlan(ocelot, port, ocelot_port->vid);
}
EXPORT_SYMBOL(ocelot_port_vlan_filtering);
/* Default vlan to clasify for untagged frames (may be zero) */
static void ocelot_port_set_pvid(struct ocelot *ocelot, int port, u16 pvid)
{
@ -873,12 +876,12 @@ static void ocelot_get_stats64(struct net_device *dev,
}
int ocelot_fdb_add(struct ocelot *ocelot, int port,
const unsigned char *addr, u16 vid, bool vlan_aware)
const unsigned char *addr, u16 vid)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
if (!vid) {
if (!vlan_aware)
if (!ocelot_port->vlan_aware)
/* If the bridge is not VLAN aware and no VID was
* provided, set it to pvid to ensure the MAC entry
* matches incoming untagged packets
@ -905,7 +908,7 @@ static int ocelot_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
struct ocelot *ocelot = priv->port.ocelot;
int port = priv->chip_port;
return ocelot_fdb_add(ocelot, port, addr, vid, priv->vlan_aware);
return ocelot_fdb_add(ocelot, port, addr, vid);
}
int ocelot_fdb_del(struct ocelot *ocelot, int port,
@ -1496,8 +1499,8 @@ static int ocelot_port_attr_set(struct net_device *dev,
ocelot_port_attr_ageing_set(ocelot, port, attr->u.ageing_time);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
priv->vlan_aware = attr->u.vlan_filtering;
ocelot_port_vlan_filtering(ocelot, port, priv->vlan_aware);
ocelot_port_vlan_filtering(ocelot, port,
attr->u.vlan_filtering);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED:
ocelot_port_attr_mc_set(ocelot, port, !attr->u.mc_disabled);
@ -1868,7 +1871,6 @@ static int ocelot_netdevice_port_event(struct net_device *dev,
} else {
err = ocelot_port_bridge_leave(ocelot, port,
info->upper_dev);
priv->vlan_aware = false;
}
}
if (netif_is_lag_master(info->upper_dev)) {


@ -56,8 +56,6 @@ struct ocelot_port_private {
struct phy_device *phy;
u8 chip_port;
u8 vlan_aware;
struct phy *serdes;
struct ocelot_port_tc tc;


@ -5155,7 +5155,7 @@ static int do_s2io_delete_unicast_mc(struct s2io_nic *sp, u64 addr)
/* read mac entries from CAM */
static u64 do_s2io_read_unicast_mc(struct s2io_nic *sp, int offset)
{
u64 tmp64 = 0xffffffffffff0000ULL, val64;
u64 tmp64, val64;
struct XENA_dev_config __iomem *bar0 = sp->bar0;
/* read mac addr */


@ -2127,6 +2127,8 @@ static void ionic_lif_handle_fw_up(struct ionic_lif *lif)
if (lif->registered)
ionic_lif_set_netdev_info(lif);
ionic_rx_filter_replay(lif);
if (netif_running(lif->netdev)) {
err = ionic_txrx_alloc(lif);
if (err)
@ -2206,9 +2208,9 @@ static void ionic_lif_deinit(struct ionic_lif *lif)
if (!test_bit(IONIC_LIF_F_FW_RESET, lif->state)) {
cancel_work_sync(&lif->deferred.work);
cancel_work_sync(&lif->tx_timeout_work);
ionic_rx_filters_deinit(lif);
}
ionic_rx_filters_deinit(lif);
if (lif->netdev->features & NETIF_F_RXHASH)
ionic_lif_rss_deinit(lif);
@ -2339,24 +2341,30 @@ static int ionic_station_set(struct ionic_lif *lif)
err = ionic_adminq_post_wait(lif, &ctx);
if (err)
return err;
netdev_dbg(lif->netdev, "found initial MAC addr %pM\n",
ctx.comp.lif_getattr.mac);
if (is_zero_ether_addr(ctx.comp.lif_getattr.mac))
return 0;
memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len);
addr.sa_family = AF_INET;
err = eth_prepare_mac_addr_change(netdev, &addr);
if (err) {
netdev_warn(lif->netdev, "ignoring bad MAC addr from NIC %pM - err %d\n",
addr.sa_data, err);
return 0;
if (!ether_addr_equal(ctx.comp.lif_getattr.mac, netdev->dev_addr)) {
memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len);
addr.sa_family = AF_INET;
err = eth_prepare_mac_addr_change(netdev, &addr);
if (err) {
netdev_warn(lif->netdev, "ignoring bad MAC addr from NIC %pM - err %d\n",
addr.sa_data, err);
return 0;
}
if (!is_zero_ether_addr(netdev->dev_addr)) {
netdev_dbg(lif->netdev, "deleting station MAC addr %pM\n",
netdev->dev_addr);
ionic_lif_addr(lif, netdev->dev_addr, false);
}
eth_commit_mac_addr_change(netdev, &addr);
}
netdev_dbg(lif->netdev, "deleting station MAC addr %pM\n",
netdev->dev_addr);
ionic_lif_addr(lif, netdev->dev_addr, false);
eth_commit_mac_addr_change(netdev, &addr);
netdev_dbg(lif->netdev, "adding station MAC addr %pM\n",
netdev->dev_addr);
ionic_lif_addr(lif, netdev->dev_addr, true);
@ -2421,9 +2429,11 @@ static int ionic_lif_init(struct ionic_lif *lif)
if (err)
goto err_out_notifyq_deinit;
err = ionic_rx_filters_init(lif);
if (err)
goto err_out_notifyq_deinit;
if (!test_bit(IONIC_LIF_F_FW_RESET, lif->state)) {
err = ionic_rx_filters_init(lif);
if (err)
goto err_out_notifyq_deinit;
}
err = ionic_station_set(lif);
if (err)


@ -2,6 +2,7 @@
/* Copyright(c) 2017 - 2019 Pensando Systems, Inc */
#include <linux/netdevice.h>
#include <linux/dynamic_debug.h>
#include <linux/etherdevice.h>
#include "ionic.h"
@ -17,17 +18,49 @@ void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f)
devm_kfree(dev, f);
}
int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f)
void ionic_rx_filter_replay(struct ionic_lif *lif)
{
struct ionic_admin_ctx ctx = {
.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
.cmd.rx_filter_del = {
.opcode = IONIC_CMD_RX_FILTER_DEL,
.filter_id = cpu_to_le32(f->filter_id),
},
};
struct ionic_rx_filter_add_cmd *ac;
struct ionic_admin_ctx ctx;
struct ionic_rx_filter *f;
struct hlist_head *head;
struct hlist_node *tmp;
unsigned int i;
int err;
return ionic_adminq_post_wait(lif, &ctx);
ac = &ctx.cmd.rx_filter_add;
for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) {
head = &lif->rx_filters.by_id[i];
hlist_for_each_entry_safe(f, tmp, head, by_id) {
ctx.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work);
memcpy(ac, &f->cmd, sizeof(f->cmd));
dev_dbg(&lif->netdev->dev, "replay filter command:\n");
dynamic_hex_dump("cmd ", DUMP_PREFIX_OFFSET, 16, 1,
&ctx.cmd, sizeof(ctx.cmd), true);
err = ionic_adminq_post_wait(lif, &ctx);
if (err) {
switch (le16_to_cpu(ac->match)) {
case IONIC_RX_FILTER_MATCH_VLAN:
netdev_info(lif->netdev, "Replay failed - %d: vlan %d\n",
err,
le16_to_cpu(ac->vlan.vlan));
break;
case IONIC_RX_FILTER_MATCH_MAC:
netdev_info(lif->netdev, "Replay failed - %d: mac %pM\n",
err, ac->mac.addr);
break;
case IONIC_RX_FILTER_MATCH_MAC_VLAN:
netdev_info(lif->netdev, "Replay failed - %d: vlan %d mac %pM\n",
err,
le16_to_cpu(ac->vlan.vlan),
ac->mac.addr);
break;
}
}
}
}
}
int ionic_rx_filters_init(struct ionic_lif *lif)


@ -24,7 +24,7 @@ struct ionic_rx_filters {
};
void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f);
int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f);
void ionic_rx_filter_replay(struct ionic_lif *lif);
int ionic_rx_filters_init(struct ionic_lif *lif);
void ionic_rx_filters_deinit(struct ionic_lif *lif);
int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,


@ -241,6 +241,8 @@ static int socfpga_set_phy_mode_common(int phymode, u32 *val)
switch (phymode) {
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID:
case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_TXID:
*val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
break;
case PHY_INTERFACE_MODE_MII:


@ -150,6 +150,8 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
plat_dat->init = sun7i_gmac_init;
plat_dat->exit = sun7i_gmac_exit;
plat_dat->fix_mac_speed = sun7i_fix_speed;
plat_dat->tx_fifo_size = 4096;
plat_dat->rx_fifo_size = 16384;
ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv);
if (ret)


@ -1372,7 +1372,7 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
err:
i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common);
if (i) {
dev_err(dev, "failed to add free_tx_chns action %d", i);
dev_err(dev, "Failed to add free_tx_chns action %d\n", i);
return i;
}
@ -1481,7 +1481,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
err:
i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common);
if (i) {
dev_err(dev, "failed to add free_rx_chns action %d", i);
dev_err(dev, "Failed to add free_rx_chns action %d\n", i);
return i;
}
@ -1691,7 +1691,7 @@ static int am65_cpsw_nuss_init_ndev_2g(struct am65_cpsw_common *common)
ret = devm_add_action_or_reset(dev, am65_cpsw_pcpu_stats_free,
ndev_priv->stats);
if (ret) {
dev_err(dev, "failed to add percpu stat free action %d", ret);
dev_err(dev, "Failed to add percpu stat free action %d\n", ret);
return ret;
}


@ -297,14 +297,13 @@ static void ipa_modem_crashed(struct ipa *ipa)
ret = ipa_endpoint_modem_exception_reset_all(ipa);
if (ret)
dev_err(dev, "error %d resetting exception endpoint",
ret);
dev_err(dev, "error %d resetting exception endpoint\n", ret);
ipa_endpoint_modem_pause_all(ipa, false);
ret = ipa_modem_stop(ipa);
if (ret)
dev_err(dev, "error %d stopping modem", ret);
dev_err(dev, "error %d stopping modem\n", ret);
/* Now prepare for the next modem boot */
ret = ipa_mem_zero_modem(ipa);


@ -3809,7 +3809,7 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack)
{
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_tx_sa tx_sc;
struct macsec_tx_sc tx_sc;
struct macsec_secy secy;
int ret;


@ -1263,6 +1263,30 @@ static int marvell_read_status_page_an(struct phy_device *phydev,
int lpa;
int err;
if (!(status & MII_M1011_PHY_STATUS_RESOLVED)) {
phydev->link = 0;
return 0;
}
if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
phydev->duplex = DUPLEX_FULL;
else
phydev->duplex = DUPLEX_HALF;
switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
case MII_M1011_PHY_STATUS_1000:
phydev->speed = SPEED_1000;
break;
case MII_M1011_PHY_STATUS_100:
phydev->speed = SPEED_100;
break;
default:
phydev->speed = SPEED_10;
break;
}
if (!fiber) {
err = genphy_read_lpa(phydev);
if (err < 0)
@ -1291,28 +1315,6 @@ static int marvell_read_status_page_an(struct phy_device *phydev,
}
}
if (!(status & MII_M1011_PHY_STATUS_RESOLVED))
return 0;
if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
phydev->duplex = DUPLEX_FULL;
else
phydev->duplex = DUPLEX_HALF;
switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
case MII_M1011_PHY_STATUS_1000:
phydev->speed = SPEED_1000;
break;
case MII_M1011_PHY_STATUS_100:
phydev->speed = SPEED_100;
break;
default:
phydev->speed = SPEED_10;
break;
}
return 0;
}


@ -33,6 +33,8 @@
#define MV_PHY_ALASKA_NBT_QUIRK_REV (MARVELL_PHY_ID_88X3310 | 0xa)
enum {
MV_PMA_FW_VER0 = 0xc011,
MV_PMA_FW_VER1 = 0xc012,
MV_PMA_BOOT = 0xc050,
MV_PMA_BOOT_FATAL = BIT(0),
@ -73,7 +75,8 @@ enum {
/* Vendor2 MMD registers */
MV_V2_PORT_CTRL = 0xf001,
MV_V2_PORT_CTRL_PWRDOWN = 0x0800,
MV_V2_PORT_CTRL_SWRST = BIT(15),
MV_V2_PORT_CTRL_PWRDOWN = BIT(11),
MV_V2_TEMP_CTRL = 0xf08a,
MV_V2_TEMP_CTRL_MASK = 0xc000,
MV_V2_TEMP_CTRL_SAMPLE = 0x0000,
@ -83,6 +86,8 @@ enum {
};
struct mv3310_priv {
u32 firmware_ver;
struct device *hwmon_dev;
char *hwmon_name;
};
@ -235,8 +240,17 @@ static int mv3310_power_down(struct phy_device *phydev)
static int mv3310_power_up(struct phy_device *phydev)
{
return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
MV_V2_PORT_CTRL_PWRDOWN);
struct mv3310_priv *priv = dev_get_drvdata(&phydev->mdio.dev);
int ret;
ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
MV_V2_PORT_CTRL_PWRDOWN);
if (priv->firmware_ver < 0x00030000)
return ret;
return phy_set_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,
MV_V2_PORT_CTRL_SWRST);
}
static int mv3310_reset(struct phy_device *phydev, u32 unit)
@ -355,6 +369,22 @@ static int mv3310_probe(struct phy_device *phydev)
dev_set_drvdata(&phydev->mdio.dev, priv);
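/* Combine the two 16-bit version registers into one 32-bit value; the
 * power-up path later compares it against 0x00030000.
 */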
ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER0);
if (ret < 0)
return ret;
priv->firmware_ver = ret << 16;
ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER1);
if (ret < 0)
return ret;
priv->firmware_ver |= ret;
phydev_info(phydev, "Firmware version %u.%u.%u.%u\n",
priv->firmware_ver >> 24, (priv->firmware_ver >> 16) & 255,
(priv->firmware_ver >> 8) & 255, priv->firmware_ver & 255);
/* Powering down the port when not in use saves about 600mW */
ret = mv3310_power_down(phydev);
if (ret)


@ -464,7 +464,7 @@ static struct class mdio_bus_class = {
/**
* mdio_find_bus - Given the name of a mdiobus, find the mii_bus.
* @mdio_bus_np: Pointer to the mii_bus.
* @mdio_name: The name of a mdiobus.
*
* Returns a reference to the mii_bus, or NULL if none found. The
* embedded struct device will have its reference count incremented,


@ -1204,7 +1204,7 @@ static struct phy_driver ksphy_driver[] = {
.driver_data = &ksz9021_type,
.probe = kszphy_probe,
.config_init = ksz9131_config_init,
.read_status = ksz9031_read_status,
.read_status = genphy_read_status,
.ack_interrupt = kszphy_ack_interrupt,
.config_intr = kszphy_config_intr,
.get_sset_count = kszphy_get_sset_count,


@ -1888,6 +1888,7 @@ drop:
skb_reset_network_header(skb);
skb_probe_transport_header(skb);
skb_record_rx_queue(skb, tfile->queue_index);
if (skb_xdp) {
struct bpf_prog *xdp_prog;
@ -2459,6 +2460,7 @@ build:
skb->protocol = eth_type_trans(skb, tun->dev);
skb_reset_network_header(skb);
skb_probe_transport_header(skb);
skb_record_rx_queue(skb, tfile->queue_index);
if (skb_xdp) {
err = do_xdp_generic(xdp_prog, skb);
@ -2470,7 +2472,6 @@ build:
!tfile->detached)
rxhash = __skb_get_hash_symmetric(skb);
skb_record_rx_queue(skb, tfile->queue_index);
netif_receive_skb(skb);
/* No need for get_cpu_ptr() here since this function is

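The tun hunks above move skb_record_rx_queue() ahead of the XDP hooks; previously the queue index was recorded only after do_xdp_generic(), so an attached program always saw queue 0 on tun devices. A minimal XDP program that depends on the new ordering (illustrative, not part of the patch):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int log_rx_queue(struct xdp_md *ctx)
{
        /* meaningful on tun only now that the queue is recorded
         * before the program runs; it used to read as 0 here
         */
        bpf_printk("rx on queue %u", ctx->rx_queue_index);
        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";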

@ -36,12 +36,13 @@ static inline int ath11k_thermal_register(struct ath11k_base *sc)
return 0;
}
static inline void ath11k_thermal_unregister(struct ath11k *ar)
static inline void ath11k_thermal_unregister(struct ath11k_base *sc)
{
}
static inline int ath11k_thermal_set_throttling(struct ath11k *ar, u32 throttle_state)
{
return 0;
}
static inline void ath11k_thermal_event_temperature(struct ath11k *ar,


@ -729,9 +729,18 @@ static int brcmf_net_mon_stop(struct net_device *ndev)
return err;
}
static netdev_tx_t brcmf_net_mon_start_xmit(struct sk_buff *skb,
struct net_device *ndev)
{
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
static const struct net_device_ops brcmf_netdev_ops_mon = {
.ndo_open = brcmf_net_mon_open,
.ndo_stop = brcmf_net_mon_stop,
.ndo_start_xmit = brcmf_net_mon_start_xmit,
};
int brcmf_net_mon_attach(struct brcmf_if *ifp)


@ -3669,9 +3669,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
}
if (info->attrs[HWSIM_ATTR_RADIO_NAME]) {
hwname = kasprintf(GFP_KERNEL, "%.*s",
nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
(char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]));
hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]),
nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
GFP_KERNEL);
if (!hwname)
return -ENOMEM;
param.hwname = hwname;
@ -3691,9 +3691,9 @@ static int hwsim_del_radio_nl(struct sk_buff *msg, struct genl_info *info)
if (info->attrs[HWSIM_ATTR_RADIO_ID]) {
idx = nla_get_u32(info->attrs[HWSIM_ATTR_RADIO_ID]);
} else if (info->attrs[HWSIM_ATTR_RADIO_NAME]) {
hwname = kasprintf(GFP_KERNEL, "%.*s",
nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
(char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]));
hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]),
nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
GFP_KERNEL);
if (!hwname)
return -ENOMEM;
} else


@ -1338,22 +1338,17 @@ static void rtw_pci_phy_cfg(struct rtw_dev *rtwdev)
rtw_pci_link_cfg(rtwdev);
}
#ifdef CONFIG_PM
static int rtw_pci_suspend(struct device *dev)
static int __maybe_unused rtw_pci_suspend(struct device *dev)
{
return 0;
}
static int rtw_pci_resume(struct device *dev)
static int __maybe_unused rtw_pci_resume(struct device *dev)
{
return 0;
}
static SIMPLE_DEV_PM_OPS(rtw_pm_ops, rtw_pci_suspend, rtw_pci_resume);
#define RTW_PM_OPS (&rtw_pm_ops)
#else
#define RTW_PM_OPS NULL
#endif
static int rtw_pci_claim(struct rtw_dev *rtwdev, struct pci_dev *pdev)
{
@ -1582,7 +1577,7 @@ static struct pci_driver rtw_pci_driver = {
.id_table = rtw_pci_id_table,
.probe = rtw_pci_probe,
.remove = rtw_pci_remove,
.driver.pm = RTW_PM_OPS,
.driver.pm = &rtw_pm_ops,
};
module_pci_driver(rtw_pci_driver);


@ -905,6 +905,8 @@ struct survey_info {
* protocol frames.
* @control_port_over_nl80211: TRUE if userspace expects to exchange control
* port frames over NL80211 instead of the network interface.
* @control_port_no_preauth: disables pre-auth rx over the nl80211 control
* port for mac80211
* @wep_keys: static WEP keys, if not NULL points to an array of
* CFG80211_MAX_WEP_KEYS WEP keys
* @wep_tx_key: key index (0..3) of the default TX static WEP key
@ -1222,6 +1224,7 @@ struct sta_txpwr {
* @he_capa: HE capabilities of station
* @he_capa_len: the length of the HE capabilities
* @airtime_weight: airtime scheduler weight for this station
* @txpwr: transmit power for an associated station
*/
struct station_parameters {
const u8 *supported_rates;
@ -4666,6 +4669,9 @@ struct wiphy_iftype_akm_suites {
* @txq_memory_limit: configuration internal TX queue memory limit
* @txq_quantum: configuration of internal TX queue scheduler quantum
*
* @tx_queue_len: allow setting transmit queue len for drivers not using
* wake_tx_queue
*
* @support_mbssid: can HW support association with nontransmitted AP
* @support_only_he_mbssid: don't parse MBSSID elements if it is not
* HE AP, in order to avoid compatibility issues.
@ -4681,6 +4687,10 @@ struct wiphy_iftype_akm_suites {
* supported by the driver for each peer
* @tid_config_support.max_retry: maximum supported retry count for
* long/short retry configuration
*
* @max_data_retry_count: maximum supported per TID retry count for
* configuration through the %NL80211_TID_CONFIG_ATTR_RETRY_SHORT and
* %NL80211_TID_CONFIG_ATTR_RETRY_LONG attributes
*/
struct wiphy {
/* assign these fields before you register the wiphy */


@ -254,6 +254,7 @@ static inline bool ipv6_anycast_destination(const struct dst_entry *dst,
return rt->rt6i_flags & RTF_ANYCAST ||
(rt->rt6i_dst.plen < 127 &&
!(rt->rt6i_flags & (RTF_GATEWAY | RTF_NONEXTHOP)) &&
ipv6_addr_equal(&rt->rt6i_dst.addr, daddr));
}


@ -901,7 +901,7 @@ static inline void nft_set_elem_update_expr(const struct nft_set_ext *ext,
{
struct nft_expr *expr;
if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) {
if (__nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) {
expr = nft_set_ext_expr(ext);
expr->ops->eval(expr, regs, pkt);
}


@ -2553,9 +2553,9 @@ sk_is_refcounted(struct sock *sk)
}
/**
* skb_steal_sock
* @skb to steal the socket from
* @refcounted is set to true if the socket is reference-counted
* skb_steal_sock - steal a socket from an sk_buff
* @skb: sk_buff to steal the socket from
* @refcounted: is set to true if the socket is reference-counted
*/
static inline struct sock *
skb_steal_sock(struct sk_buff *skb, bool *refcounted)


@ -476,6 +476,8 @@ struct ocelot_port {
void __iomem *regs;
bool vlan_aware;
/* Ingress default VLAN (pvid) */
u16 pvid;
@ -610,7 +612,7 @@ int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
int ocelot_fdb_dump(struct ocelot *ocelot, int port,
dsa_fdb_dump_cb_t *cb, void *data);
int ocelot_fdb_add(struct ocelot *ocelot, int port,
const unsigned char *addr, u16 vid, bool vlan_aware);
const unsigned char *addr, u16 vid);
int ocelot_fdb_del(struct ocelot *ocelot, int port,
const unsigned char *addr, u16 vid);
int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,


@ -276,6 +276,7 @@ enum nft_rule_compat_attributes {
* @NFT_SET_TIMEOUT: set uses timeouts
* @NFT_SET_EVAL: set can be updated from the evaluation path
* @NFT_SET_OBJECT: set contains stateful objects
* @NFT_SET_CONCAT: set contains a concatenation
*/
enum nft_set_flags {
NFT_SET_ANONYMOUS = 0x1,
@ -285,6 +286,7 @@ enum nft_set_flags {
NFT_SET_TIMEOUT = 0x10,
NFT_SET_EVAL = 0x20,
NFT_SET_OBJECT = 0x40,
NFT_SET_CONCAT = 0x80,
};
/**


@ -48,6 +48,7 @@ struct idletimer_tg_info_v1 {
char label[MAX_IDLETIMER_LABEL_SIZE];
__u8 send_nl_msg; /* unused: for compatibility with Android */
__u8 timer_type;
/* for kernel module internal use only */


@ -30,7 +30,7 @@ struct bpf_lru_node {
struct bpf_lru_list {
struct list_head lists[NR_BPF_LRU_LIST_T];
unsigned int counts[NR_BPF_LRU_LIST_COUNT];
/* The next inacitve list rotation starts from here */
/* The next inactive list rotation starts from here */
struct list_head *next_inactive_rotation;
raw_spinlock_t lock ____cacheline_aligned_in_smp;


@ -586,9 +586,7 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
{
struct bpf_map *map = vma->vm_file->private_data;
bpf_map_inc_with_uref(map);
if (vma->vm_flags & VM_WRITE) {
if (vma->vm_flags & VM_MAYWRITE) {
mutex_lock(&map->freeze_mutex);
map->writecnt++;
mutex_unlock(&map->freeze_mutex);
@ -600,13 +598,11 @@ static void bpf_map_mmap_close(struct vm_area_struct *vma)
{
struct bpf_map *map = vma->vm_file->private_data;
if (vma->vm_flags & VM_WRITE) {
if (vma->vm_flags & VM_MAYWRITE) {
mutex_lock(&map->freeze_mutex);
map->writecnt--;
mutex_unlock(&map->freeze_mutex);
}
bpf_map_put_with_uref(map);
}
static const struct vm_operations_struct bpf_map_default_vmops = {
@ -635,14 +631,16 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
/* set default open/close callbacks */
vma->vm_ops = &bpf_map_default_vmops;
vma->vm_private_data = map;
vma->vm_flags &= ~VM_MAYEXEC;
if (!(vma->vm_flags & VM_WRITE))
/* disallow re-mapping with PROT_WRITE */
vma->vm_flags &= ~VM_MAYWRITE;
err = map->ops->map_mmap(map, vma);
if (err)
goto out;
bpf_map_inc_with_uref(map);
if (vma->vm_flags & VM_WRITE)
if (vma->vm_flags & VM_MAYWRITE)
map->writecnt++;
out:
mutex_unlock(&map->freeze_mutex);

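The subtlety behind swapping VM_WRITE for VM_MAYWRITE: VM_WRITE reflects only the current page protection, which userspace can change after mmap(), while VM_MAYWRITE records whether the mapping may ever become writable and is fixed at mmap() time. Counting map writers by VM_WRITE therefore missed mappings created read-only and upgraded later. A userspace sketch of that path (anonymous shared memory used for illustration):

#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
        /* created read-only: a VM_WRITE check at mmap() time sees no writer */
        void *p = mmap(NULL, 4096, PROT_READ,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);

        /* ...but the VMA keeps VM_MAYWRITE, so the upgrade succeeds and
         * the mapping turns writable without writecnt ever being bumped
         */
        assert(mprotect(p, 4096, PROT_READ | PROT_WRITE) == 0);
        return 0;
}

That is also why the mmap handler above clears VM_MAYWRITE for read-only requests: it is the one flag that reliably forbids a later PROT_WRITE re-mapping.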

@ -1255,8 +1255,7 @@ static void __mark_reg_unknown(const struct bpf_verifier_env *env,
reg->type = SCALAR_VALUE;
reg->var_off = tnum_unknown;
reg->frameno = 0;
reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ?
true : false;
reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks;
__mark_reg_unbounded(reg);
}


@ -242,6 +242,8 @@ config DEBUG_INFO_DWARF4
config DEBUG_INFO_BTF
bool "Generate BTF typeinfo"
depends on DEBUG_INFO
depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED
depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST
help
Generate deduplicated BTF type information from DWARF debug info.
Turning this on expects presence of pahole tool, which will convert


@ -4140,7 +4140,8 @@ EXPORT_SYMBOL(netdev_max_backlog);
int netdev_tstamp_prequeue __read_mostly = 1;
int netdev_budget __read_mostly = 300;
unsigned int __read_mostly netdev_budget_usecs = 2000;
/* Must be at least 2 jiffies to guarantee 1 jiffy timeout */
unsigned int __read_mostly netdev_budget_usecs = 2 * USEC_PER_SEC / HZ;
int weight_p __read_mostly = 64; /* old backlog weight */
int dev_weight_rx_bias __read_mostly = 1; /* bias for backlog weight */
int dev_weight_tx_bias __read_mostly = 1; /* bias for output_queue quota */
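The new expression is simply "two jiffies in microseconds": on a HZ=1000 build, 2 * USEC_PER_SEC / HZ = 2 * 1000000 / 1000 = 2000, matching the old constant, but on HZ=100 it evaluates to 20000. The hard-coded 2000 µs was a fifth of one 10000 µs jiffy at HZ=100, so jiffy-granular time checks could see the NAPI budget as already expired; deriving the default from HZ preserves the comment's two-jiffy guarantee on every configuration.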
@ -8666,8 +8667,8 @@ int dev_change_xdp_fd(struct net_device *dev, struct netlink_ext_ack *extack,
const struct net_device_ops *ops = dev->netdev_ops;
enum bpf_netdev_command query;
u32 prog_id, expected_id = 0;
struct bpf_prog *prog = NULL;
bpf_op_t bpf_op, bpf_chk;
struct bpf_prog *prog;
bool offload;
int err;
@ -8733,6 +8734,7 @@ int dev_change_xdp_fd(struct net_device *dev, struct netlink_ext_ack *extack,
} else {
if (!prog_id)
return 0;
prog = NULL;
}
err = dev_xdp_install(dev, bpf_op, extack, flags, prog);


@ -5925,7 +5925,7 @@ BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags)
return -EOPNOTSUPP;
if (unlikely(dev_net(skb->dev) != sock_net(sk)))
return -ENETUNREACH;
if (unlikely(sk->sk_reuseport))
if (unlikely(sk_fullsock(sk) && sk->sk_reuseport))
return -ESOCKTNOSUPPORT;
if (sk_is_refcounted(sk) &&
unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))


@ -80,7 +80,7 @@ static ssize_t netdev_store(struct device *dev, struct device_attribute *attr,
struct net_device *netdev = to_net_dev(dev);
struct net *net = dev_net(netdev);
unsigned long new;
int ret = -EINVAL;
int ret;
if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
return -EPERM;


@ -1872,7 +1872,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
* as not suitable for copying when cloning.
*/
if (sk_user_data_is_nocopy(newsk))
RCU_INIT_POINTER(newsk->sk_user_data, NULL);
newsk->sk_user_data = NULL;
newsk->sk_err = 0;
newsk->sk_err_soft = 0;


@ -670,11 +670,16 @@ int dsa_port_link_register_of(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
struct device_node *phy_np;
int port = dp->index;
if (!ds->ops->adjust_link) {
phy_np = of_parse_phandle(dp->dn, "phy-handle", 0);
if (of_phy_is_fixed_link(dp->dn) || phy_np)
if (of_phy_is_fixed_link(dp->dn) || phy_np) {
if (ds->ops->phylink_mac_link_down)
ds->ops->phylink_mac_link_down(ds, port,
MLO_AN_FIXED, PHY_INTERFACE_MODE_NA);
return dsa_port_phylink_register(dp);
}
return 0;
}


@ -69,10 +69,16 @@ static int hsr_newlink(struct net *src_net, struct net_device *dev,
else
multicast_spec = nla_get_u8(data[IFLA_HSR_MULTICAST_SPEC]);
if (!data[IFLA_HSR_VERSION])
if (!data[IFLA_HSR_VERSION]) {
hsr_version = 0;
else
} else {
hsr_version = nla_get_u8(data[IFLA_HSR_VERSION]);
if (hsr_version > 1) {
NL_SET_ERR_MSG_MOD(extack,
"Only versions 0..1 are supported");
return -EINVAL;
}
}
return hsr_dev_finalize(dev, link, multicast_spec, hsr_version, extack);
}


@ -614,12 +614,15 @@ struct in_ifaddr *inet_ifa_byprefix(struct in_device *in_dev, __be32 prefix,
return NULL;
}
static int ip_mc_config(struct sock *sk, bool join, const struct in_ifaddr *ifa)
static int ip_mc_autojoin_config(struct net *net, bool join,
const struct in_ifaddr *ifa)
{
#if defined(CONFIG_IP_MULTICAST)
struct ip_mreqn mreq = {
.imr_multiaddr.s_addr = ifa->ifa_address,
.imr_ifindex = ifa->ifa_dev->dev->ifindex,
};
struct sock *sk = net->ipv4.mc_autojoin_sk;
int ret;
ASSERT_RTNL();
@ -632,6 +635,9 @@ static int ip_mc_config(struct sock *sk, bool join, const struct in_ifaddr *ifa)
release_sock(sk);
return ret;
#else
return -EOPNOTSUPP;
#endif
}
static int inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
@ -675,7 +681,7 @@ static int inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
continue;
if (ipv4_is_multicast(ifa->ifa_address))
ip_mc_config(net->ipv4.mc_autojoin_sk, false, ifa);
ip_mc_autojoin_config(net, false, ifa);
__inet_del_ifa(in_dev, ifap, 1, nlh, NETLINK_CB(skb).portid);
return 0;
}
@ -940,8 +946,7 @@ static int inet_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh,
*/
set_ifa_lifetime(ifa, valid_lft, prefered_lft);
if (ifa->ifa_flags & IFA_F_MCAUTOJOIN) {
int ret = ip_mc_config(net->ipv4.mc_autojoin_sk,
true, ifa);
int ret = ip_mc_autojoin_config(net, true, ifa);
if (ret < 0) {
inet_free_ifa(ifa);


@ -229,6 +229,25 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
return res;
}
static bool icmpv6_rt_has_prefsrc(struct sock *sk, u8 type,
struct flowi6 *fl6)
{
struct net *net = sock_net(sk);
struct dst_entry *dst;
bool res = false;
dst = ip6_route_output(net, sk, fl6);
if (!dst->error) {
struct rt6_info *rt = (struct rt6_info *)dst;
struct in6_addr prefsrc;
rt6_get_prefsrc(rt, &prefsrc);
res = !ipv6_addr_any(&prefsrc);
}
dst_release(dst);
return res;
}
/*
* an inline helper for the "simple" if statement below
* checks if parameter problem report is caused by an
@ -527,7 +546,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
saddr = force_saddr;
if (saddr) {
fl6.saddr = *saddr;
} else {
} else if (!icmpv6_rt_has_prefsrc(sk, type, &fl6)) {
/* select a more meaningful saddr from input if */
struct net_device *in_netdev;


@ -434,7 +434,7 @@ static struct genl_family seg6_genl_family __ro_after_init = {
int __init seg6_init(void)
{
int err = -ENOMEM;
int err;
err = genl_register_family(&seg6_genl_family);
if (err)


@ -920,51 +920,51 @@ static const struct genl_ops l2tp_nl_ops[] = {
.cmd = L2TP_CMD_TUNNEL_CREATE,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_tunnel_create,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_TUNNEL_DELETE,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_tunnel_delete,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_TUNNEL_MODIFY,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_tunnel_modify,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_TUNNEL_GET,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_tunnel_get,
.dumpit = l2tp_nl_cmd_tunnel_dump,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_SESSION_CREATE,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_session_create,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_SESSION_DELETE,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_session_delete,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_SESSION_MODIFY,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_session_modify,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
{
.cmd = L2TP_CMD_SESSION_GET,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = l2tp_nl_cmd_session_get,
.dumpit = l2tp_nl_cmd_session_dump,
.flags = GENL_ADMIN_PERM,
.flags = GENL_UNS_ADMIN_PERM,
},
};


@ -1069,7 +1069,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
local->hw.wiphy->signal_type = CFG80211_SIGNAL_TYPE_UNSPEC;
if (hw->max_signal <= 0) {
result = -EINVAL;
goto fail_wiphy_register;
goto fail_workqueue;
}
}
@ -1135,7 +1135,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
result = ieee80211_init_cipher_suites(local);
if (result < 0)
goto fail_wiphy_register;
goto fail_workqueue;
if (!local->ops->remain_on_channel)
local->hw.wiphy->max_remain_on_channel_duration = 5000;
@ -1161,10 +1161,6 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
local->hw.wiphy->max_num_csa_counters = IEEE80211_MAX_CSA_COUNTERS_NUM;
result = wiphy_register(local->hw.wiphy);
if (result < 0)
goto fail_wiphy_register;
/*
* We use the number of queues for feature tests (QoS, HT) internally
* so restrict them appropriately.
@ -1217,9 +1213,9 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
goto fail_flows;
rtnl_lock();
result = ieee80211_init_rate_ctrl_alg(local,
hw->rate_control_algorithm);
rtnl_unlock();
if (result < 0) {
wiphy_debug(local->hw.wiphy,
"Failed to initialize rate control algorithm\n");
@ -1273,6 +1269,12 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
local->sband_allocated |= BIT(band);
}
result = wiphy_register(local->hw.wiphy);
if (result < 0)
goto fail_wiphy_register;
rtnl_lock();
/* add one default STA interface if supported */
if (local->hw.wiphy->interface_modes & BIT(NL80211_IFTYPE_STATION) &&
!ieee80211_hw_check(hw, NO_AUTO_VIF)) {
@ -1312,17 +1314,17 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
#if defined(CONFIG_INET) || defined(CONFIG_IPV6)
fail_ifa:
#endif
wiphy_unregister(local->hw.wiphy);
fail_wiphy_register:
rtnl_lock();
rate_control_deinitialize(local);
ieee80211_remove_interfaces(local);
fail_rate:
rtnl_unlock();
fail_rate:
fail_flows:
ieee80211_led_exit(local);
destroy_workqueue(local->workqueue);
fail_workqueue:
wiphy_unregister(local->hw.wiphy);
fail_wiphy_register:
if (local->wiphy_ciphers_allocated)
kfree(local->hw.wiphy->cipher_suites);
kfree(local->int_scan_req);
@ -1372,8 +1374,8 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
skb_queue_purge(&local->skb_queue_unreliable);
skb_queue_purge(&local->skb_queue_tdls_chsw);
destroy_workqueue(local->workqueue);
wiphy_unregister(local->hw.wiphy);
destroy_workqueue(local->workqueue);
ieee80211_led_exit(local);
kfree(local->int_scan_req);
}

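The shuffle above is the standard goto-unwind discipline: wiphy_register() moves to the end of setup so a half-initialized device is never user-visible, and the failure labels are reordered so each one undoes exactly what has already succeeded, in reverse. Reduced to a compilable skeleton (all names invented):

static int init_queues(void) { return 0; }
static int init_rate_ctrl(void) { return 0; }
static int do_register(void) { return 0; }
static void deinit_rate_ctrl(void) { }
static void deinit_queues(void) { }

static int setup(void)
{
        int err;

        err = init_queues();
        if (err)
                goto fail_queues;
        err = init_rate_ctrl();
        if (err)
                goto fail_rate;
        err = do_register();    /* registration last, like wiphy_register() */
        if (err)
                goto fail_register;
        return 0;

fail_register:
        deinit_rate_ctrl();
fail_rate:
        deinit_queues();
fail_queues:
        return err;
}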

@ -1257,15 +1257,15 @@ static void ieee80211_mesh_rx_bcn_presp(struct ieee80211_sub_if_data *sdata,
sdata->u.mesh.mshcfg.rssi_threshold < rx_status->signal)
mesh_neighbour_update(sdata, mgmt->sa, &elems,
rx_status);
if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT &&
!sdata->vif.csa_active)
ieee80211_mesh_process_chnswitch(sdata, &elems, true);
}
if (ifmsh->sync_ops)
ifmsh->sync_ops->rx_bcn_presp(sdata,
stype, mgmt, &elems, rx_status);
if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT &&
!sdata->vif.csa_active)
ieee80211_mesh_process_chnswitch(sdata, &elems, true);
}
int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata)
@ -1373,6 +1373,9 @@ static void mesh_rx_csa_frame(struct ieee80211_sub_if_data *sdata,
ieee802_11_parse_elems(pos, len - baselen, true, &elems,
mgmt->bssid, NULL);
if (!mesh_matches_local(sdata, &elems))
return;
ifmsh->chsw_ttl = elems.mesh_chansw_params_ie->mesh_ttl;
if (!--ifmsh->chsw_ttl)
fwd_csa = false;


@ -97,12 +97,7 @@ static struct socket *__mptcp_tcp_fallback(struct mptcp_sock *msk)
if (likely(!__mptcp_needs_tcp_fallback(msk)))
return NULL;
if (msk->subflow) {
release_sock((struct sock *)msk);
return msk->subflow;
}
return NULL;
return msk->subflow;
}
static bool __mptcp_can_create_subflow(const struct mptcp_sock *msk)
@ -734,9 +729,10 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
goto out;
}
fallback:
ssock = __mptcp_tcp_fallback(msk);
if (unlikely(ssock)) {
fallback:
release_sock(sk);
pr_debug("fallback passthrough");
ret = sock_sendmsg(ssock, msg);
return ret >= 0 ? ret + copied : (copied ? copied : ret);
@ -769,8 +765,14 @@ fallback:
if (ret < 0)
break;
if (ret == 0 && unlikely(__mptcp_needs_tcp_fallback(msk))) {
/* Can happen for passive sockets:
* 3WHS negotiated MPTCP, but first packet after is
* plain TCP (e.g. due to middlebox filtering unknown
* options).
*
* Fall back to TCP.
*/
release_sock(ssk);
ssock = __mptcp_tcp_fallback(msk);
goto fallback;
}
@ -883,6 +885,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
ssock = __mptcp_tcp_fallback(msk);
if (unlikely(ssock)) {
fallback:
release_sock(sk);
pr_debug("fallback-read subflow=%p",
mptcp_subflow_ctx(ssock->sk));
copied = sock_recvmsg(ssock, msg, flags);
@ -1467,12 +1470,11 @@ static int mptcp_setsockopt(struct sock *sk, int level, int optname,
*/
lock_sock(sk);
ssock = __mptcp_tcp_fallback(msk);
release_sock(sk);
if (ssock)
return tcp_setsockopt(ssock->sk, level, optname, optval,
optlen);
release_sock(sk);
return -EOPNOTSUPP;
}
@ -1492,12 +1494,11 @@ static int mptcp_getsockopt(struct sock *sk, int level, int optname,
*/
lock_sock(sk);
ssock = __mptcp_tcp_fallback(msk);
release_sock(sk);
if (ssock)
return tcp_getsockopt(ssock->sk, level, optname, optval,
option);
release_sock(sk);
return -EOPNOTSUPP;
}

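The root cause of the double unlock: __mptcp_tcp_fallback() sometimes dropped the socket lock itself while its callers also called release_sock(). The fix makes the helper lock-neutral and moves every unlock to the call sites, restoring the rule that a lock is released by the function that took it. In outline (the helper and the two fallback calls are invented; lock_sock()/release_sock() are the real kernel API):

#include <net/sock.h>

/* after the fix: the helper never touches the lock */
static struct socket *tcp_fallback(struct sock *sk)
{
        if (!needs_fallback(sk))        /* invented predicate */
                return NULL;
        return fallback_socket(sk);     /* invented accessor */
}

static int an_op(struct sock *sk)
{
        struct socket *ssock;

        lock_sock(sk);
        ssock = tcp_fallback(sk);
        release_sock(sk);               /* exactly one unlock, same scope */
        return ssock ? do_fallback_op(ssock) : -EOPNOTSUPP;
}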

@ -86,7 +86,8 @@ find_set_type(const char *name, u8 family, u8 revision)
{
struct ip_set_type *type;
list_for_each_entry_rcu(type, &ip_set_type_list, list)
list_for_each_entry_rcu(type, &ip_set_type_list, list,
lockdep_is_held(&ip_set_type_mutex))
if (STRNCMP(type->name, name) &&
(type->family == family ||
type->family == NFPROTO_UNSPEC) &&


@ -3542,6 +3542,7 @@ cont:
continue;
if (!strcmp(set->name, i->name)) {
kfree(set->name);
set->name = NULL;
return -ENFILE;
}
}
@ -3961,8 +3962,8 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
if (flags & ~(NFT_SET_ANONYMOUS | NFT_SET_CONSTANT |
NFT_SET_INTERVAL | NFT_SET_TIMEOUT |
NFT_SET_MAP | NFT_SET_EVAL |
NFT_SET_OBJECT))
return -EINVAL;
NFT_SET_OBJECT | NFT_SET_CONCAT))
return -EOPNOTSUPP;
/* Only one of these operations is supported */
if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) ==
(NFT_SET_MAP | NFT_SET_OBJECT))
@ -4000,7 +4001,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
objtype = ntohl(nla_get_be32(nla[NFTA_SET_OBJ_TYPE]));
if (objtype == NFT_OBJECT_UNSPEC ||
objtype > NFT_OBJECT_MAX)
return -EINVAL;
return -EOPNOTSUPP;
} else if (flags & NFT_SET_OBJECT)
return -EINVAL;
else


@ -29,7 +29,7 @@ void nft_lookup_eval(const struct nft_expr *expr,
{
const struct nft_lookup *priv = nft_expr_priv(expr);
const struct nft_set *set = priv->set;
const struct nft_set_ext *ext;
const struct nft_set_ext *ext = NULL;
bool found;
found = set->ops->lookup(nft_net(pkt), set, &regs->data[priv->sreg],
@ -39,11 +39,13 @@ void nft_lookup_eval(const struct nft_expr *expr,
return;
}
if (set->flags & NFT_SET_MAP)
nft_data_copy(&regs->data[priv->dreg],
nft_set_ext_data(ext), set->dlen);
if (ext) {
if (set->flags & NFT_SET_MAP)
nft_data_copy(&regs->data[priv->dreg],
nft_set_ext_data(ext), set->dlen);
nft_set_elem_update_expr(ext, regs, pkt);
nft_set_elem_update_expr(ext, regs, pkt);
}
}
static const struct nla_policy nft_lookup_policy[NFTA_LOOKUP_MAX + 1] = {


@ -81,7 +81,6 @@ static bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
u32 idx, off;
nft_bitmap_location(set, key, &idx, &off);
*ext = NULL;
return nft_bitmap_active(priv->bitmap, idx, off, genmask);
}


@ -218,27 +218,26 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
/* Detect overlaps as we descend the tree. Set the flag in these cases:
*
* a1. |__ _ _? >|__ _ _ (insert start after existing start)
* a2. _ _ __>| ?_ _ __| (insert end before existing end)
* a3. _ _ ___| ?_ _ _>| (insert end after existing end)
* a4. >|__ _ _ _ _ __| (insert start before existing end)
* a1. _ _ __>| ?_ _ __| (insert end before existing end)
* a2. _ _ ___| ?_ _ _>| (insert end after existing end)
* a3. _ _ ___? >|_ _ __| (insert start before existing end)
*
* and clear it later on, as we eventually reach the points indicated by
* '?' above, in the cases described below. We'll always meet these
* later, locally, due to tree ordering, and overlaps for the intervals
* that are the closest together are always evaluated last.
*
* b1. |__ _ _! >|__ _ _ (insert start after existing end)
* b2. _ _ __>| !_ _ __| (insert end before existing start)
* b3. !_____>| (insert end after existing start)
* b1. _ _ __>| !_ _ __| (insert end before existing start)
* b2. _ _ ___| !_ _ _>| (insert end after existing start)
* b3. _ _ ___! >|_ _ __| (insert start after existing end)
*
* Case a4. resolves to b1.:
* Case a3. resolves to b3.:
* - if the inserted start element is the leftmost, because the '0'
* element in the tree serves as end element
* - otherwise, if an existing end is found. Note that end elements are
* always inserted after corresponding start elements.
*
* For a new, rightmost pair of elements, we'll hit cases b1. and b3.,
* For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
* in that order.
*
* The flag is also cleared in two special cases:
@ -262,9 +261,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
p = &parent->rb_left;
if (nft_rbtree_interval_start(new)) {
overlap = nft_rbtree_interval_start(rbe) &&
nft_set_elem_active(&rbe->ext,
genmask);
if (nft_rbtree_interval_end(rbe) &&
nft_set_elem_active(&rbe->ext, genmask))
overlap = false;
} else {
overlap = nft_rbtree_interval_end(rbe) &&
nft_set_elem_active(&rbe->ext,


@ -346,6 +346,9 @@ static int idletimer_tg_checkentry_v1(const struct xt_tgchk_param *par)
pr_debug("checkentry targinfo%s\n", info->label);
if (info->send_nl_msg)
return -EOPNOTSUPP;
ret = idletimer_tg_helper((struct idletimer_tg_info *)info);
if(ret < 0)
{


@ -906,20 +906,21 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
node = NULL;
if (addr->sq_node == QRTR_NODE_BCAST) {
enqueue_fn = qrtr_bcast_enqueue;
if (addr->sq_port != QRTR_PORT_CTRL) {
if (addr->sq_port != QRTR_PORT_CTRL &&
qrtr_local_nid != QRTR_NODE_BCAST) {
release_sock(sk);
return -ENOTCONN;
}
enqueue_fn = qrtr_bcast_enqueue;
} else if (addr->sq_node == ipc->us.sq_node) {
enqueue_fn = qrtr_local_enqueue;
} else {
enqueue_fn = qrtr_node_enqueue;
node = qrtr_node_lookup(addr->sq_node);
if (!node) {
release_sock(sk);
return -ECONNRESET;
}
enqueue_fn = qrtr_node_enqueue;
}
plen = (len + 3) & ~3;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 Oracle. All rights reserved.
* Copyright (c) 2006, 2020 Oracle and/or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -162,12 +162,12 @@ static void rds_message_purge(struct rds_message *rm)
if (rm->rdma.op_active)
rds_rdma_free_op(&rm->rdma);
if (rm->rdma.op_rdma_mr)
rds_mr_put(rm->rdma.op_rdma_mr);
kref_put(&rm->rdma.op_rdma_mr->r_kref, __rds_put_mr_final);
if (rm->atomic.op_active)
rds_atomic_free_op(&rm->atomic);
if (rm->atomic.op_rdma_mr)
rds_mr_put(rm->atomic.op_rdma_mr);
kref_put(&rm->atomic.op_rdma_mr->r_kref, __rds_put_mr_final);
}
void rds_message_put(struct rds_message *rm)
@ -308,26 +308,20 @@ out:
/*
* RDS ops use this to grab SG entries from the rm's sg pool.
*/
struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
int *ret)
struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents)
{
struct scatterlist *sg_first = (struct scatterlist *) &rm[1];
struct scatterlist *sg_ret;
if (WARN_ON(!ret))
return NULL;
if (nents <= 0) {
pr_warn("rds: alloc sgs failed! nents <= 0\n");
*ret = -EINVAL;
return NULL;
return ERR_PTR(-EINVAL);
}
if (rm->m_used_sgs + nents > rm->m_total_sgs) {
pr_warn("rds: alloc sgs failed! total %d used %d nents %d\n",
rm->m_total_sgs, rm->m_used_sgs, nents);
*ret = -ENOMEM;
return NULL;
return ERR_PTR(-ENOMEM);
}
sg_ret = &sg_first[rm->m_used_sgs];
@ -343,7 +337,6 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
unsigned int i;
int num_sgs = DIV_ROUND_UP(total_len, PAGE_SIZE);
int extra_bytes = num_sgs * sizeof(struct scatterlist);
int ret;
rm = rds_message_alloc(extra_bytes, GFP_NOWAIT);
if (!rm)
@ -352,10 +345,10 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
set_bit(RDS_MSG_PAGEVEC, &rm->m_flags);
rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len);
rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
if (!rm->data.op_sg) {
rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
if (IS_ERR(rm->data.op_sg)) {
rds_message_put(rm);
return ERR_PTR(ret);
return ERR_CAST(rm->data.op_sg);
}
for (i = 0; i < rm->data.op_nents; ++i) {

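The API change running through these rds hunks replaces the int *ret out-parameter with the kernel's ERR_PTR convention: the error code travels inside the returned pointer, callers test it with IS_ERR() and recover it with PTR_ERR(), or re-wrap it for a different pointer type with ERR_CAST() as rds_message_map_pages() now does. The idiom in isolation (allocator body invented):

#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static struct scatterlist *alloc_sgs(int nents)
{
        struct scatterlist *sg;

        if (nents <= 0)
                return ERR_PTR(-EINVAL);        /* errno encoded in the pointer */

        sg = kcalloc(nents, sizeof(*sg), GFP_KERNEL);
        if (!sg)
                return ERR_PTR(-ENOMEM);
        return sg;
}

static int caller(void)
{
        struct scatterlist *sg = alloc_sgs(4);

        if (IS_ERR(sg))                 /* true only for ERR_PTR() values */
                return PTR_ERR(sg);     /* recovers -EINVAL or -ENOMEM */
        /* ... use sg ... */
        kfree(sg);
        return 0;
}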

@ -1,5 +1,5 @@
/*
* Copyright (c) 2007, 2017 Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2007, 2020 Oracle and/or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -84,7 +84,7 @@ static struct rds_mr *rds_mr_tree_walk(struct rb_root *root, u64 key,
if (insert) {
rb_link_node(&insert->r_rb_node, parent, p);
rb_insert_color(&insert->r_rb_node, root);
refcount_inc(&insert->r_refcount);
kref_get(&insert->r_kref);
}
return NULL;
}
@ -99,10 +99,7 @@ static void rds_destroy_mr(struct rds_mr *mr)
unsigned long flags;
rdsdebug("RDS: destroy mr key is %x refcnt %u\n",
mr->r_key, refcount_read(&mr->r_refcount));
if (test_and_set_bit(RDS_MR_DEAD, &mr->r_state))
return;
mr->r_key, kref_read(&mr->r_kref));
spin_lock_irqsave(&rs->rs_rdma_lock, flags);
if (!RB_EMPTY_NODE(&mr->r_rb_node))
@ -115,8 +112,10 @@ static void rds_destroy_mr(struct rds_mr *mr)
mr->r_trans->free_mr(trans_private, mr->r_invalidate);
}
void __rds_put_mr_final(struct rds_mr *mr)
void __rds_put_mr_final(struct kref *kref)
{
struct rds_mr *mr = container_of(kref, struct rds_mr, r_kref);
rds_destroy_mr(mr);
kfree(mr);
}
@ -140,8 +139,7 @@ void rds_rdma_drop_keys(struct rds_sock *rs)
rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys);
RB_CLEAR_NODE(&mr->r_rb_node);
spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
rds_destroy_mr(mr);
rds_mr_put(mr);
kref_put(&mr->r_kref, __rds_put_mr_final);
spin_lock_irqsave(&rs->rs_rdma_lock, flags);
}
spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
@ -242,7 +240,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
goto out;
}
refcount_set(&mr->r_refcount, 1);
kref_init(&mr->r_kref);
RB_CLEAR_NODE(&mr->r_rb_node);
mr->r_trans = rs->rs_transport;
mr->r_sock = rs;
@ -343,7 +341,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
rdsdebug("RDS: get_mr key is %x\n", mr->r_key);
if (mr_ret) {
refcount_inc(&mr->r_refcount);
kref_get(&mr->r_kref);
*mr_ret = mr;
}
@ -351,7 +349,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
out:
kfree(pages);
if (mr)
rds_mr_put(mr);
kref_put(&mr->r_kref, __rds_put_mr_final);
return ret;
}
@ -434,13 +432,7 @@ int rds_free_mr(struct rds_sock *rs, char __user *optval, int optlen)
if (!mr)
return -EINVAL;
/*
* call rds_destroy_mr() ourselves so that we're sure it's done by the time
* we return. If we let rds_mr_put() do it it might not happen until
* someone else drops their ref.
*/
rds_destroy_mr(mr);
rds_mr_put(mr);
kref_put(&mr->r_kref, __rds_put_mr_final);
return 0;
}
@ -464,6 +456,14 @@ void rds_rdma_unuse(struct rds_sock *rs, u32 r_key, int force)
return;
}
/* Get a reference so that the MR won't go away before calling
* sync_mr() below.
*/
kref_get(&mr->r_kref);
/* If it is going to be freed, remove it from the tree now so
* that no other thread can find it and free it.
*/
if (mr->r_use_once || force) {
rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys);
RB_CLEAR_NODE(&mr->r_rb_node);
@ -477,12 +477,13 @@ void rds_rdma_unuse(struct rds_sock *rs, u32 r_key, int force)
if (mr->r_trans->sync_mr)
mr->r_trans->sync_mr(mr->r_trans_private, DMA_FROM_DEVICE);
/* Release the reference held above. */
kref_put(&mr->r_kref, __rds_put_mr_final);
/* If the MR was marked as invalidate, this will
* trigger an async flush. */
if (zot_me) {
rds_destroy_mr(mr);
rds_mr_put(mr);
}
if (zot_me)
kref_put(&mr->r_kref, __rds_put_mr_final);
}
void rds_rdma_free_op(struct rm_rdma_op *ro)
@ -490,7 +491,7 @@ void rds_rdma_free_op(struct rm_rdma_op *ro)
unsigned int i;
if (ro->op_odp_mr) {
rds_mr_put(ro->op_odp_mr);
kref_put(&ro->op_odp_mr->r_kref, __rds_put_mr_final);
} else {
for (i = 0; i < ro->op_nents; i++) {
struct page *page = sg_page(&ro->op_sg[i]);
@ -664,9 +665,11 @@ int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
op->op_odp_mr = NULL;
WARN_ON(!nr_pages);
op->op_sg = rds_message_alloc_sgs(rm, nr_pages, &ret);
if (!op->op_sg)
op->op_sg = rds_message_alloc_sgs(rm, nr_pages);
if (IS_ERR(op->op_sg)) {
ret = PTR_ERR(op->op_sg);
goto out_pages;
}
if (op->op_notify || op->op_recverr) {
/* We allocate an uninitialized notifier here, because
@ -730,7 +733,7 @@ int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
goto out_pages;
}
RB_CLEAR_NODE(&local_odp_mr->r_rb_node);
refcount_set(&local_odp_mr->r_refcount, 1);
kref_init(&local_odp_mr->r_kref);
local_odp_mr->r_trans = rs->rs_transport;
local_odp_mr->r_sock = rs;
local_odp_mr->r_trans_private =
@ -827,7 +830,7 @@ int rds_cmsg_rdma_dest(struct rds_sock *rs, struct rds_message *rm,
if (!mr)
err = -EINVAL; /* invalid r_key */
else
refcount_inc(&mr->r_refcount);
kref_get(&mr->r_kref);
spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
if (mr) {
@ -905,9 +908,11 @@ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT);
rm->atomic.op_active = 1;
rm->atomic.op_recverr = rs->rs_recverr;
rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1, &ret);
if (!rm->atomic.op_sg)
rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1);
if (IS_ERR(rm->atomic.op_sg)) {
ret = PTR_ERR(rm->atomic.op_sg);
goto err;
}
/* verify 8 byte-aligned */
if (args->local_addr & 0x7) {


@ -291,7 +291,7 @@ struct rds_incoming {
struct rds_mr {
struct rb_node r_rb_node;
refcount_t r_refcount;
struct kref r_kref;
u32 r_key;
/* A copy of the creation flags */
@ -299,19 +299,11 @@ struct rds_mr {
unsigned int r_invalidate:1;
unsigned int r_write:1;
/* This is for RDS_MR_DEAD.
* It would be nice & consistent to make this part of the above
* bit field here, but we need to use test_and_set_bit.
*/
unsigned long r_state;
struct rds_sock *r_sock; /* back pointer to the socket that owns us */
struct rds_transport *r_trans;
void *r_trans_private;
};
/* Flags for mr->r_state */
#define RDS_MR_DEAD 0
static inline rds_rdma_cookie_t rds_rdma_make_cookie(u32 r_key, u32 offset)
{
return r_key | (((u64) offset) << 32);
@ -852,8 +844,7 @@ rds_conn_connecting(struct rds_connection *conn)
/* message.c */
struct rds_message *rds_message_alloc(unsigned int nents, gfp_t gfp);
struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
int *ret);
struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents);
int rds_message_copy_from_user(struct rds_message *rm, struct iov_iter *from,
bool zcopy);
struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned int total_len);
@ -946,12 +937,7 @@ void rds_atomic_send_complete(struct rds_message *rm, int wc_status);
int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
struct cmsghdr *cmsg);
void __rds_put_mr_final(struct rds_mr *mr);
static inline void rds_mr_put(struct rds_mr *mr)
{
if (refcount_dec_and_test(&mr->r_refcount))
__rds_put_mr_final(mr);
}
void __rds_put_mr_final(struct kref *kref);
static inline bool rds_destroy_pending(struct rds_connection *conn)
{

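Every rds change above is one mechanical conversion: the hand-rolled refcount_t plus the rds_mr_put() wrapper become the generic kref API, which fixes the lifetime rules in one place. kref_init() starts the count at one, kref_get() takes a reference, and the kref_put() that drops it to zero invokes the release callback. The pattern in isolation (struct name invented):

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct obj {
        struct kref ref;
        /* ... payload ... */
};

static void obj_release(struct kref *kref)
{
        /* recover the object from its embedded kref, then free it */
        struct obj *o = container_of(kref, struct obj, ref);

        kfree(o);
}

static struct obj *obj_alloc(void)
{
        struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

        if (o)
                kref_init(&o->ref);     /* count starts at 1 */
        return o;
}

/* sharing and dropping mirror the rds calls exactly:
 *      kref_get(&o->ref);
 *      kref_put(&o->ref, obj_release);   last put frees the object
 */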

@ -1274,9 +1274,11 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
/* Attach data to the rm */
if (payload_len) {
rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
if (!rm->data.op_sg)
rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
if (IS_ERR(rm->data.op_sg)) {
ret = PTR_ERR(rm->data.op_sg);
goto out;
}
ret = rds_message_copy_from_user(rm, &msg->msg_iter, zcopy);
if (ret)
goto out;


@ -165,15 +165,6 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
goto error;
}
/* we want to set the don't fragment bit */
opt = IPV6_PMTUDISC_DO;
ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
(char *) &opt, sizeof(opt));
if (ret < 0) {
_debug("setsockopt failed");
goto error;
}
/* Fall through and set IPv4 options too otherwise we don't get
* errors from IPv4 packets sent through the IPv6 socket.
*/


@ -474,42 +474,22 @@ send_fragmentable:
skb->tstamp = ktime_get_real();
switch (conn->params.local->srx.transport.family) {
case AF_INET6:
case AF_INET:
opt = IP_PMTUDISC_DONT;
ret = kernel_setsockopt(conn->params.local->socket,
SOL_IP, IP_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
if (ret == 0) {
ret = kernel_sendmsg(conn->params.local->socket, &msg,
iov, 2, len);
conn->params.peer->last_tx_at = ktime_get_seconds();
kernel_setsockopt(conn->params.local->socket,
SOL_IP, IP_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
ret = kernel_sendmsg(conn->params.local->socket, &msg,
iov, 2, len);
conn->params.peer->last_tx_at = ktime_get_seconds();
opt = IP_PMTUDISC_DO;
kernel_setsockopt(conn->params.local->socket, SOL_IP,
IP_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
}
opt = IP_PMTUDISC_DO;
kernel_setsockopt(conn->params.local->socket,
SOL_IP, IP_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
break;
#ifdef CONFIG_AF_RXRPC_IPV6
case AF_INET6:
opt = IPV6_PMTUDISC_DONT;
ret = kernel_setsockopt(conn->params.local->socket,
SOL_IPV6, IPV6_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
if (ret == 0) {
ret = kernel_sendmsg(conn->params.local->socket, &msg,
iov, 2, len);
conn->params.peer->last_tx_at = ktime_get_seconds();
opt = IPV6_PMTUDISC_DO;
kernel_setsockopt(conn->params.local->socket,
SOL_IPV6, IPV6_MTU_DISCOVER,
(char *)&opt, sizeof(opt));
}
break;
#endif
default:
BUG();
}


@ -1667,6 +1667,7 @@ int tcf_classify_ingress(struct sk_buff *skb,
skb_ext_del(skb, TC_SKB_EXT);
tp = rcu_dereference_bh(fchain->filter_chain);
last_executed_chain = fchain->index;
}
ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode,


@ -1065,7 +1065,7 @@ static void tipc_link_update_cwin(struct tipc_link *l, int released,
/* Enter fast recovery */
if (unlikely(retransmitted)) {
l->ssthresh = max_t(u16, l->window / 2, 300);
l->window = l->ssthresh;
l->window = min_t(u16, l->ssthresh, l->window);
return;
}
/* Enter slow start */


@ -56,9 +56,9 @@ enum {
TLS_NUM_PROTS,
};
static struct proto *saved_tcpv6_prot;
static const struct proto *saved_tcpv6_prot;
static DEFINE_MUTEX(tcpv6_prot_mutex);
static struct proto *saved_tcpv4_prot;
static const struct proto *saved_tcpv4_prot;
static DEFINE_MUTEX(tcpv4_prot_mutex);
static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
static struct proto_ops tls_sw_proto_ops;

View File

@ -644,10 +644,8 @@ const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
[NL80211_ATTR_HE_CAPABILITY] = { .type = NLA_BINARY,
.len = NL80211_HE_MAX_CAPABILITY_LEN },
[NL80211_ATTR_FTM_RESPONDER] = {
.type = NLA_NESTED,
.validation_data = nl80211_ftm_responder_policy,
},
[NL80211_ATTR_FTM_RESPONDER] =
NLA_POLICY_NESTED(nl80211_ftm_responder_policy),
[NL80211_ATTR_TIMEOUT] = NLA_POLICY_MIN(NLA_U32, 1),
[NL80211_ATTR_PEER_MEASUREMENTS] =
NLA_POLICY_NESTED(nl80211_pmsr_attr_policy),

View File

@ -343,7 +343,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
unsigned int chunks, chunks_per_page;
u64 addr = mr->addr, size = mr->len;
int size_chk, err;
int err;
if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
/* Strictly speaking we could support this, if:
@ -382,8 +382,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
return -EINVAL;
}
size_chk = chunk_size - headroom - XDP_PACKET_HEADROOM;
if (size_chk < 0)
if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
return -EINVAL;
umem->address = (unsigned long)addr;

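Two points about the rewritten check: with XDP_PACKET_HEADROOM = 256 and, say, chunk_size = 2048, it now rejects any user-supplied headroom of 1792 or more, which would leave no room for packet data. It also closes the old signed/unsigned hole: size_chk was an int computed from u32 operands, so a bogus headroom of 0xffffffff made chunk_size - headroom - XDP_PACKET_HEADROOM wrap to 1793 (2048 - 0xffffffff is 2049 mod 2^32, minus 256), the size_chk < 0 test never fired, and the absurd headroom was accepted. Keeping the comparison entirely unsigned makes the bound explicit.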

@ -131,8 +131,9 @@ static void __xsk_rcv_memcpy(struct xdp_umem *umem, u64 addr, void *from_buf,
u64 page_start = addr & ~(PAGE_SIZE - 1);
u64 first_len = PAGE_SIZE - (addr - page_start);
memcpy(to_buf, from_buf, first_len + metalen);
memcpy(next_pg_addr, from_buf + first_len, len - first_len);
memcpy(to_buf, from_buf, first_len);
memcpy(next_pg_addr, from_buf + first_len,
len + metalen - first_len);
return;
}

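A worked instance of this fix: with PAGE_SIZE = 4096 and a copy starting at page offset 4000, first_len = 96 bytes remain in the first page; take metalen = 32 and len = 200. The old code memcpy'd first_len + metalen = 128 bytes into that 96-byte tail (a 32-byte write past the page boundary) and only len - first_len = 104 bytes into the next page. The fixed version copies exactly the 96 bytes that fit, then the remaining len + metalen - first_len = 136 bytes starting at the next page, so all len + metalen = 232 bytes land in bounds.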

@ -591,6 +591,8 @@ int do_struct_ops(int argc, char **argv)
err = cmd_select(cmds, argc, argv, do_help);
btf__free(btf_vmlinux);
if (!IS_ERR(btf_vmlinux))
btf__free(btf_vmlinux);
return err;
}

View File

@ -178,6 +178,8 @@ struct bpf_capabilities {
__u32 array_mmap:1;
/* BTF_FUNC_GLOBAL is supported */
__u32 btf_func_global:1;
/* kernel support for expected_attach_type in BPF_PROG_LOAD */
__u32 exp_attach_type:1;
};
enum reloc_type {
@ -194,6 +196,22 @@ struct reloc_desc {
int sym_off;
};
struct bpf_sec_def;
typedef struct bpf_link *(*attach_fn_t)(const struct bpf_sec_def *sec,
struct bpf_program *prog);
struct bpf_sec_def {
const char *sec;
size_t len;
enum bpf_prog_type prog_type;
enum bpf_attach_type expected_attach_type;
bool is_exp_attach_type_optional;
bool is_attachable;
bool is_attach_btf;
attach_fn_t attach_fn;
};
/*
* bpf_prog should be a better name but it has been used in
* linux/filter.h.
@ -204,6 +222,7 @@ struct bpf_program {
char *name;
int prog_ifindex;
char *section_name;
const struct bpf_sec_def *sec_def;
/* section_name with / replaced by _; makes recursive pinning
* in bpf_object__pin_programs easier
*/
@ -3315,6 +3334,37 @@ static int bpf_object__probe_array_mmap(struct bpf_object *obj)
return 0;
}
static int
bpf_object__probe_exp_attach_type(struct bpf_object *obj)
{
struct bpf_load_program_attr attr;
struct bpf_insn insns[] = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
};
int fd;
memset(&attr, 0, sizeof(attr));
/* use any valid combination of program type and (optional)
* non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
* to see if kernel supports expected_attach_type field for
* BPF_PROG_LOAD command
*/
attr.prog_type = BPF_PROG_TYPE_CGROUP_SOCK;
attr.expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE;
attr.insns = insns;
attr.insns_cnt = ARRAY_SIZE(insns);
attr.license = "GPL";
fd = bpf_load_program_xattr(&attr, NULL, 0);
if (fd >= 0) {
obj->caps.exp_attach_type = 1;
close(fd);
return 1;
}
return 0;
}
static int
bpf_object__probe_caps(struct bpf_object *obj)
{
@ -3325,6 +3375,7 @@ bpf_object__probe_caps(struct bpf_object *obj)
bpf_object__probe_btf_func_global,
bpf_object__probe_btf_datasec,
bpf_object__probe_array_mmap,
bpf_object__probe_exp_attach_type,
};
int i, ret;
@ -4861,7 +4912,12 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
load_attr.prog_type = prog->type;
load_attr.expected_attach_type = prog->expected_attach_type;
/* old kernels might not support specifying expected_attach_type */
if (!prog->caps->exp_attach_type && prog->sec_def &&
prog->sec_def->is_exp_attach_type_optional)
load_attr.expected_attach_type = 0;
else
load_attr.expected_attach_type = prog->expected_attach_type;
if (prog->caps->name)
load_attr.name = prog->name;
load_attr.insns = insns;
@ -5062,6 +5118,8 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
return 0;
}
static const struct bpf_sec_def *find_sec_def(const char *sec_name);
static struct bpf_object *
__bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
const struct bpf_object_open_opts *opts)
@ -5117,24 +5175,17 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
bpf_object__elf_finish(obj);
bpf_object__for_each_program(prog, obj) {
enum bpf_prog_type prog_type;
enum bpf_attach_type attach_type;
if (prog->type != BPF_PROG_TYPE_UNSPEC)
continue;
err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
&attach_type);
if (err == -ESRCH)
prog->sec_def = find_sec_def(prog->section_name);
if (!prog->sec_def)
/* couldn't guess, but user might manually specify */
continue;
if (err)
goto out;
bpf_program__set_type(prog, prog_type);
bpf_program__set_expected_attach_type(prog, attach_type);
if (prog_type == BPF_PROG_TYPE_TRACING ||
prog_type == BPF_PROG_TYPE_EXT)
bpf_program__set_type(prog, prog->sec_def->prog_type);
bpf_program__set_expected_attach_type(prog,
prog->sec_def->expected_attach_type);
if (prog->sec_def->prog_type == BPF_PROG_TYPE_TRACING ||
prog->sec_def->prog_type == BPF_PROG_TYPE_EXT)
prog->attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
}
@ -6223,23 +6274,32 @@ void bpf_program__set_expected_attach_type(struct bpf_program *prog,
prog->expected_attach_type = type;
}
#define BPF_PROG_SEC_IMPL(string, ptype, eatype, is_attachable, btf, atype) \
{ string, sizeof(string) - 1, ptype, eatype, is_attachable, btf, atype }
#define BPF_PROG_SEC_IMPL(string, ptype, eatype, eatype_optional, \
attachable, attach_btf) \
{ \
.sec = string, \
.len = sizeof(string) - 1, \
.prog_type = ptype, \
.expected_attach_type = eatype, \
.is_exp_attach_type_optional = eatype_optional, \
.is_attachable = attachable, \
.is_attach_btf = attach_btf, \
}
/* Programs that can NOT be attached. */
#define BPF_PROG_SEC(string, ptype) BPF_PROG_SEC_IMPL(string, ptype, 0, 0, 0, 0)
/* Programs that can be attached. */
#define BPF_APROG_SEC(string, ptype, atype) \
BPF_PROG_SEC_IMPL(string, ptype, 0, 1, 0, atype)
BPF_PROG_SEC_IMPL(string, ptype, atype, true, 1, 0)
/* Programs that must specify expected attach type at load time. */
#define BPF_EAPROG_SEC(string, ptype, eatype) \
BPF_PROG_SEC_IMPL(string, ptype, eatype, 1, 0, eatype)
BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 1, 0)
/* Programs that use BTF to identify attach point */
#define BPF_PROG_BTF(string, ptype, eatype) \
BPF_PROG_SEC_IMPL(string, ptype, eatype, 0, 1, 0)
BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 0, 1)
/* Programs that can be attached but attach type can't be identified by section
* name. Kept for backward compatibility.
@ -6253,11 +6313,6 @@ void bpf_program__set_expected_attach_type(struct bpf_program *prog,
__VA_ARGS__ \
}
struct bpf_sec_def;
typedef struct bpf_link *(*attach_fn_t)(const struct bpf_sec_def *sec,
struct bpf_program *prog);
static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
struct bpf_program *prog);
static struct bpf_link *attach_tp(const struct bpf_sec_def *sec,
@ -6269,17 +6324,6 @@ static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
struct bpf_program *prog);
struct bpf_sec_def {
const char *sec;
size_t len;
enum bpf_prog_type prog_type;
enum bpf_attach_type expected_attach_type;
bool is_attachable;
bool is_attach_btf;
enum bpf_attach_type attach_type;
attach_fn_t attach_fn;
};
static const struct bpf_sec_def section_defs[] = {
BPF_PROG_SEC("socket", BPF_PROG_TYPE_SOCKET_FILTER),
BPF_PROG_SEC("sk_reuseport", BPF_PROG_TYPE_SK_REUSEPORT),
@ -6713,7 +6757,7 @@ int libbpf_attach_type_by_name(const char *name,
continue;
if (!section_defs[i].is_attachable)
return -EINVAL;
*attach_type = section_defs[i].attach_type;
*attach_type = section_defs[i].expected_attach_type;
return 0;
}
pr_debug("failed to guess attach type based on ELF section name '%s'\n", name);
@ -7542,7 +7586,6 @@ static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
struct bpf_link *
bpf_program__attach_cgroup(struct bpf_program *prog, int cgroup_fd)
{
const struct bpf_sec_def *sec_def;
enum bpf_attach_type attach_type;
char errmsg[STRERR_BUFSIZE];
struct bpf_link *link;
@ -7561,11 +7604,6 @@ bpf_program__attach_cgroup(struct bpf_program *prog, int cgroup_fd)
link->detach = &bpf_link__detach_fd;
attach_type = bpf_program__get_expected_attach_type(prog);
if (!attach_type) {
sec_def = find_sec_def(bpf_program__title(prog, false));
if (sec_def)
attach_type = sec_def->attach_type;
}
link_fd = bpf_link_create(prog_fd, cgroup_fd, attach_type, NULL);
if (link_fd < 0) {
link_fd = -errno;

View File

@ -458,7 +458,7 @@ struct xdp_link_info {
struct bpf_xdp_set_link_opts {
size_t sz;
__u32 old_fd;
int old_fd;
};
#define bpf_xdp_set_link_opts__last_field old_fd

View File

@ -142,7 +142,7 @@ static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd,
struct ifinfomsg ifinfo;
char attrbuf[64];
} req;
__u32 nl_pid;
__u32 nl_pid = 0;
sock = libbpf_netlink_open(&nl_pid);
if (sock < 0)
@ -288,7 +288,7 @@ int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
{
struct xdp_id_md xdp_id = {};
int sock, ret;
__u32 nl_pid;
__u32 nl_pid = 0;
__u32 mask;
if (flags & ~XDP_FLAGS_MASK || !info_size)
@ -321,7 +321,7 @@ int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags)
{
if (info->attach_mode != XDP_ATTACHED_MULTI)
if (info->attach_mode != XDP_ATTACHED_MULTI && !flags)
return info->prog_id;
if (flags & XDP_FLAGS_DRV_MODE)
return info->drv_prog_id;
