Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
 "Mostly these are fixes for fallout due to merge window changes, as
  well as cures for problems that have been with us for a much longer
  period of time"

 1) Johannes Berg noticed two major deficiencies in our genetlink
    registration.  Some genetlink protocols were passing in constant
    counts for their ops array rather than something like
    ARRAY_SIZE(ops) or similar.  Also, some genetlink protocols were
    using fixed IDs for their multicast groups.

    We have to retain these fixed IDs to keep existing userland tools
    working, but reserve them so that other multicast groups used by
    other protocols can not possibly conflict.

    In dealing with these two problems, we actually now use less state
    management for genetlink operations and multicast groups.
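
    To illustrate the resulting registration pattern, here is a rough
    sketch using hypothetical "foo" names (not any of the families
    actually converted in this pull); the genetlink core sizes the
    arrays and assigns the multicast group IDs itself:

    #include <linux/init.h>
    #include <net/genetlink.h>

    enum { FOO_CMD_NOOP, __FOO_CMD_MAX };
    enum { FOO_ATTR_UNSPEC, __FOO_ATTR_MAX };
    #define FOO_ATTR_MAX (__FOO_ATTR_MAX - 1)

    static int foo_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)
    {
            return 0;       /* placeholder handler */
    }

    static const struct genl_ops foo_genl_ops[] = {
            { .cmd = FOO_CMD_NOOP, .doit = foo_nl_cmd_noop, },
    };

    static const struct genl_multicast_group foo_mcgrps[] = {
            { .name = "foo_mc_group", },
    };

    static struct genl_family foo_genl_family = {
            .id      = GENL_ID_GENERATE,
            .name    = "foo",
            .version = 1,
            .maxattr = FOO_ATTR_MAX,
    };

    static int __init foo_genl_init(void)
    {
            /* Register the family, its ops and its multicast groups in
             * one call; groups are then referenced by their index in
             * foo_mcgrps, e.g.
             * genlmsg_multicast(&foo_genl_family, skb, 0, 0, GFP_KERNEL).
             */
            return genl_register_family_with_ops_groups(&foo_genl_family,
                                                        foo_genl_ops,
                                                        foo_mcgrps);
    }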

 2) When configuring interface hardware timestamping, fix several
    drivers that simply do not validate that the hwtstamp_config value
    is one the driver actually supports.  From Ben Hutchings.
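
    A minimal sketch of that validation, with hypothetical driver names
    and a deliberately shortened Rx filter list (real drivers map or
    reject many more HWTSTAMP_FILTER_* values):

    #include <linux/netdevice.h>
    #include <linux/net_tstamp.h>
    #include <linux/uaccess.h>

    struct foo_priv {
            bool hwts_tx_en;
            bool hwts_rx_en;
    };

    static int foo_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
    {
            struct foo_priv *priv = netdev_priv(dev);
            struct hwtstamp_config cfg;

            if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
                    return -EFAULT;

            /* reserved for future extensions */
            if (cfg.flags)
                    return -EINVAL;

            /* Reject Tx modes the hardware cannot honour rather than
             * silently accepting them. */
            if (cfg.tx_type != HWTSTAMP_TX_OFF &&
                cfg.tx_type != HWTSTAMP_TX_ON)
                    return -ERANGE;

            switch (cfg.rx_filter) {
            case HWTSTAMP_FILTER_NONE:
            case HWTSTAMP_FILTER_ALL:
                    break;
            default:
                    return -ERANGE;
            }

            /* Commit the configuration only after it has been fully
             * validated. */
            priv->hwts_tx_en = cfg.tx_type == HWTSTAMP_TX_ON;
            priv->hwts_rx_en = cfg.rx_filter != HWTSTAMP_FILTER_NONE;

            return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ?
                    -EFAULT : 0;
    }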

 3) Invalid memory references in mwifiex driver, from Amitkumar Karwar.

 4) In dev_forward_skb(), set the skb->protocol in the right order
    relative to skb_scrub_packet().  From Alexei Starovoitov.

 5) Bridge erroneously fails to use the proper wrapper functions to make
    calls to netdev_ops->ndo_vlan_rx_{add,kill}_vid.  Fix from Toshiaki
    Makita.

 6) When detaching a bridge port, make sure to flush all VLAN IDs to
    prevent them from leaking, also from Toshiaki Makita.

 7) Put in a compromise for TCP Small Queues so that devices with deep
    queues that delay TX reclaim non-trivially don't suffer such a
    performance decrease.  One particularly problematic area is 802.11
    AMPDU in wireless.  From Eric Dumazet.

 8) Fix crashes in tcp_fastopen_cache_get(); we can see NULL socket dsts
    here.  Fix from Eric Dumazet, reported by Dave Jones.

 9) Fix use after free in ipv6 SIT driver, from Willem de Bruijn.

10) When computing mergeable buffer sizes, virtio-net fails to take the
    virtio-net header into account.  From Michael Dalton.

11) Fix seqlock deadlock in ip4_datagram_connect() wrt. statistics
    bumping; this one has been with us for a while.  From Eric Dumazet.

12) Fix NULL deref in the new TIPC fragmentation handling, from Erik
    Hugne.

13) The 6lowpan bit used for traffic classification was wrong, from
    Jukka Rissanen.

14) macvlan has the same issue as normal vlans did wrt. propagating LRO
    disabling down to the real device; fix it the same way.  From Michal
    Kubecek.

15) CPSW driver needs to soft reset all slaves during suspend, from
    Daniel Mack.

16) Fix small frame pacing in FQ packet scheduler, from Eric Dumazet.

17) The xen-netfront RX buffer refill timer isn't properly scheduled on
    partial RX allocation success, from Ma JieYue.

18) When ipv6 ping protocol support was added, the AF_INET6 protocol
    initialization cleanup path on failure was borked a little.  Fix
    from Vlad Yasevich.

19) If a socket disconnects during a read/recvmsg/recvfrom/etc that
    blocks, we can do the wrong thing with the msg_name we write back to
    userspace.  From Hannes Frederic Sowa.  There is another fix in the
    works from Hannes which will prevent future problems of this nature.

20) Fix route leak in VTI tunnel transmit, from Fan Du.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (106 commits)
  genetlink: make multicast groups const, prevent abuse
  genetlink: pass family to functions using groups
  genetlink: add and use genl_set_err()
  genetlink: remove family pointer from genl_multicast_group
  genetlink: remove genl_unregister_mc_group()
  hsr: don't call genl_unregister_mc_group()
  quota/genetlink: use proper genetlink multicast APIs
  drop_monitor/genetlink: use proper genetlink multicast APIs
  genetlink: only pass array to genl_register_family_with_ops()
  tcp: don't update snd_nxt, when a socket is switched from repair mode
  atm: idt77252: fix dev refcnt leak
  xfrm: Release dst if this dst is improper for vti tunnel
  netlink: fix documentation typo in netlink_set_err()
  be2net: Delete secondary unicast MAC addresses during be_close
  be2net: Fix unconditional enabling of Rx interface options
  net, virtio_net: replace the magic value
  ping: prevent NULL pointer dereference on write to msg_name
  bnx2x: Prevent "timeout waiting for state X"
  bnx2x: prevent CFC attention
  bnx2x: Prevent panic during DMAE timeout
  ...
Commit 1ee2dcc224 by Linus Torvalds, 2013-11-19 15:50:47 -08:00
129 changed files with 1445 additions and 1440 deletions


@ -577,9 +577,6 @@ tcp_limit_output_bytes - INTEGER
typical pfifo_fast qdiscs.
tcp_limit_output_bytes limits the number of bytes on qdisc
or device to reduce artificial RTT/cwnd and reduce bufferbloat.
Note: For GSO/TSO enabled flows, we try to have at least two
packets in flight. Reducing tcp_limit_output_bytes might also
reduce the size of individual GSO packet (64KB being the max)
Default: 131072
tcp_challenge_ack_limit - INTEGER


@ -2468,7 +2468,7 @@ S: Maintained
F: drivers/media/dvb-frontends/cxd2820r*
CXGB3 ETHERNET DRIVER (CXGB3)
M: Divy Le Ray <divy@chelsio.com>
M: Santosh Raspatur <santosh@chelsio.com>
L: netdev@vger.kernel.org
W: http://www.chelsio.com
S: Supported
@ -8960,8 +8960,8 @@ USB PEGASUS DRIVER
M: Petko Manolov <petkan@nucleusys.com>
L: linux-usb@vger.kernel.org
L: netdev@vger.kernel.org
T: git git://git.code.sf.net/p/pegasus2/git
W: http://pegasus2.sourceforge.net/
T: git git://github.com/petkan/pegasus.git
W: https://github.com/petkan/pegasus
S: Maintained
F: drivers/net/usb/pegasus.*
@ -8982,8 +8982,8 @@ USB RTL8150 DRIVER
M: Petko Manolov <petkan@nucleusys.com>
L: linux-usb@vger.kernel.org
L: netdev@vger.kernel.org
T: git git://git.code.sf.net/p/pegasus2/git
W: http://pegasus2.sourceforge.net/
T: git git://github.com/petkan/rtl8150.git
W: https://github.com/petkan/rtl8150
S: Maintained
F: drivers/net/usb/rtl8150.c


@ -78,15 +78,17 @@ enum {
#define ACPI_GENL_VERSION 0x01
#define ACPI_GENL_MCAST_GROUP_NAME "acpi_mc_group"
static const struct genl_multicast_group acpi_event_mcgrps[] = {
{ .name = ACPI_GENL_MCAST_GROUP_NAME, },
};
static struct genl_family acpi_event_genl_family = {
.id = GENL_ID_GENERATE,
.name = ACPI_GENL_FAMILY_NAME,
.version = ACPI_GENL_VERSION,
.maxattr = ACPI_GENL_ATTR_MAX,
};
static struct genl_multicast_group acpi_event_mcgrp = {
.name = ACPI_GENL_MCAST_GROUP_NAME,
.mcgrps = acpi_event_mcgrps,
.n_mcgrps = ARRAY_SIZE(acpi_event_mcgrps),
};
int acpi_bus_generate_netlink_event(const char *device_class,
@ -141,7 +143,7 @@ int acpi_bus_generate_netlink_event(const char *device_class,
return result;
}
genlmsg_multicast(skb, 0, acpi_event_mcgrp.id, GFP_ATOMIC);
genlmsg_multicast(&acpi_event_genl_family, skb, 0, 0, GFP_ATOMIC);
return 0;
}
@ -149,18 +151,7 @@ EXPORT_SYMBOL(acpi_bus_generate_netlink_event);
static int acpi_event_genetlink_init(void)
{
int result;
result = genl_register_family(&acpi_event_genl_family);
if (result)
return result;
result = genl_register_mc_group(&acpi_event_genl_family,
&acpi_event_mcgrp);
if (result)
genl_unregister_family(&acpi_event_genl_family);
return result;
return genl_register_family(&acpi_event_genl_family);
}
#else


@ -3511,7 +3511,7 @@ static int init_card(struct atm_dev *dev)
tmp = dev_get_by_name(&init_net, tname); /* jhs: was "tmp = dev_get(tname);" */
if (tmp) {
memcpy(card->atmdev->esi, tmp->dev_addr, 6);
dev_put(tmp);
printk("%s: ESI %pM\n", card->name, card->atmdev->esi);
}
/*


@ -32,11 +32,23 @@
#include <linux/atomic.h>
#include <linux/pid_namespace.h>
#include <asm/unaligned.h>
#include <linux/cn_proc.h>
#define CN_PROC_MSG_SIZE (sizeof(struct cn_msg) + sizeof(struct proc_event))
/*
* Size of a cn_msg followed by a proc_event structure. Since the
* sizeof struct cn_msg is a multiple of 4 bytes, but not 8 bytes, we
* add one 4-byte word to the size here, and then start the actual
* cn_msg structure 4 bytes into the stack buffer. The result is that
* the immediately following proc_event structure is aligned to 8 bytes.
*/
#define CN_PROC_MSG_SIZE (sizeof(struct cn_msg) + sizeof(struct proc_event) + 4)
/* See comment above; we test our assumption about sizeof struct cn_msg here. */
static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
{
BUILD_BUG_ON(sizeof(struct cn_msg) != 20);
return (struct cn_msg *)(buffer + 4);
}
static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
@ -56,19 +68,19 @@ void proc_fork_connector(struct task_struct *task)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct timespec ts;
struct task_struct *parent;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_FORK;
rcu_read_lock();
parent = rcu_dereference(task->real_parent);
@ -91,17 +103,17 @@ void proc_exec_connector(struct task_struct *task)
struct cn_msg *msg;
struct proc_event *ev;
struct timespec ts;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_EXEC;
ev->event_data.exec.process_pid = task->pid;
ev->event_data.exec.process_tgid = task->tgid;
@ -117,14 +129,14 @@ void proc_id_connector(struct task_struct *task, int which_id)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct timespec ts;
const struct cred *cred;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
ev->what = which_id;
@ -145,7 +157,7 @@ void proc_id_connector(struct task_struct *task, int which_id)
rcu_read_unlock();
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
@ -159,17 +171,17 @@ void proc_sid_connector(struct task_struct *task)
struct cn_msg *msg;
struct proc_event *ev;
struct timespec ts;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_SID;
ev->event_data.sid.process_pid = task->pid;
ev->event_data.sid.process_tgid = task->tgid;
@ -186,17 +198,17 @@ void proc_ptrace_connector(struct task_struct *task, int ptrace_id)
struct cn_msg *msg;
struct proc_event *ev;
struct timespec ts;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_PTRACE;
ev->event_data.ptrace.process_pid = task->pid;
ev->event_data.ptrace.process_tgid = task->tgid;
@ -221,17 +233,17 @@ void proc_comm_connector(struct task_struct *task)
struct cn_msg *msg;
struct proc_event *ev;
struct timespec ts;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_COMM;
ev->event_data.comm.process_pid = task->pid;
ev->event_data.comm.process_tgid = task->tgid;
@ -248,18 +260,18 @@ void proc_coredump_connector(struct task_struct *task)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct timespec ts;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_COREDUMP;
ev->event_data.coredump.process_pid = task->pid;
ev->event_data.coredump.process_tgid = task->tgid;
@ -275,18 +287,18 @@ void proc_exit_connector(struct task_struct *task)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct timespec ts;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->what = PROC_EVENT_EXIT;
ev->event_data.exit.process_pid = task->pid;
ev->event_data.exit.process_tgid = task->tgid;
@ -312,18 +324,18 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
{
struct cn_msg *msg;
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE];
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct timespec ts;
if (atomic_read(&proc_event_num_listeners) < 1)
return;
msg = (struct cn_msg *)buffer;
msg = buffer_to_cn_msg(buffer);
ev = (struct proc_event *)msg->data;
memset(&ev->event_data, 0, sizeof(ev->event_data));
msg->seq = rcvd_seq;
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
ev->timestamp_ns = timespec_to_ns(&ts);
ev->cpu = -1;
ev->what = PROC_EVENT_NONE;
ev->event_data.ack.err = err;


@ -1083,8 +1083,10 @@ isdnloop_start(isdnloop_card *card, isdnloop_sdef *sdefp)
spin_unlock_irqrestore(&card->isdnloop_lock, flags);
return -ENOMEM;
}
for (i = 0; i < 3; i++)
strcpy(card->s0num[i], sdef.num[i]);
for (i = 0; i < 3; i++) {
strlcpy(card->s0num[i], sdef.num[i],
sizeof(card->s0num[0]));
}
break;
case ISDN_PTYPE_1TR6:
if (isdnloop_fake(card, "DRV1.04TC-1TR6-CAPI-CNS-BASIS-29.11.95",
@ -1097,7 +1099,7 @@ isdnloop_start(isdnloop_card *card, isdnloop_sdef *sdefp)
spin_unlock_irqrestore(&card->isdnloop_lock, flags);
return -ENOMEM;
}
strcpy(card->s0num[0], sdef.num[0]);
strlcpy(card->s0num[0], sdef.num[0], sizeof(card->s0num[0]));
card->s0num[1][0] = '\0';
card->s0num[2][0] = '\0';
break;


@ -524,8 +524,9 @@ static ssize_t bonding_store_arp_interval(struct device *d,
goto out;
}
if (bond->params.mode == BOND_MODE_ALB ||
bond->params.mode == BOND_MODE_TLB) {
pr_info("%s: ARP monitoring cannot be used with ALB/TLB. Only MII monitoring is supported on %s.\n",
bond->params.mode == BOND_MODE_TLB ||
bond->params.mode == BOND_MODE_8023AD) {
pr_info("%s: ARP monitoring cannot be used with ALB/TLB/802.3ad. Only MII monitoring is supported on %s.\n",
bond->dev->name, bond->dev->name);
ret = -EINVAL;
goto out;
@ -603,15 +604,14 @@ static ssize_t bonding_store_arp_targets(struct device *d,
return restart_syscall();
targets = bond->params.arp_targets;
newtarget = in_aton(buf + 1);
if (!in4_pton(buf + 1, -1, (u8 *)&newtarget, -1, NULL) ||
IS_IP_TARGET_UNUSABLE_ADDRESS(newtarget)) {
pr_err("%s: invalid ARP target %pI4 specified for addition\n",
bond->dev->name, &newtarget);
goto out;
}
/* look for adds */
if (buf[0] == '+') {
if ((newtarget == 0) || (newtarget == htonl(INADDR_BROADCAST))) {
pr_err("%s: invalid ARP target %pI4 specified for addition\n",
bond->dev->name, &newtarget);
goto out;
}
if (bond_get_targets_ip(targets, newtarget) != -1) { /* dup */
pr_err("%s: ARP target %pI4 is already present\n",
bond->dev->name, &newtarget);
@ -634,12 +634,6 @@ static ssize_t bonding_store_arp_targets(struct device *d,
targets[ind] = newtarget;
write_unlock_bh(&bond->lock);
} else if (buf[0] == '-') {
if ((newtarget == 0) || (newtarget == htonl(INADDR_BROADCAST))) {
pr_err("%s: invalid ARP target %pI4 specified for removal\n",
bond->dev->name, &newtarget);
goto out;
}
ind = bond_get_targets_ip(targets, newtarget);
if (ind == -1) {
pr_err("%s: unable to remove nonexistent ARP target %pI4.\n",
@ -701,6 +695,8 @@ static ssize_t bonding_store_downdelay(struct device *d,
int new_value, ret = count;
struct bonding *bond = to_bond(d);
if (!rtnl_trylock())
return restart_syscall();
if (!(bond->params.miimon)) {
pr_err("%s: Unable to set down delay as MII monitoring is disabled\n",
bond->dev->name);
@ -734,6 +730,7 @@ static ssize_t bonding_store_downdelay(struct device *d,
}
out:
rtnl_unlock();
return ret;
}
static DEVICE_ATTR(downdelay, S_IRUGO | S_IWUSR,
@ -756,6 +753,8 @@ static ssize_t bonding_store_updelay(struct device *d,
int new_value, ret = count;
struct bonding *bond = to_bond(d);
if (!rtnl_trylock())
return restart_syscall();
if (!(bond->params.miimon)) {
pr_err("%s: Unable to set up delay as MII monitoring is disabled\n",
bond->dev->name);
@ -789,6 +788,7 @@ static ssize_t bonding_store_updelay(struct device *d,
}
out:
rtnl_unlock();
return ret;
}
static DEVICE_ATTR(updelay, S_IRUGO | S_IWUSR,


@ -63,6 +63,9 @@
(((mode) == BOND_MODE_TLB) || \
((mode) == BOND_MODE_ALB))
#define IS_IP_TARGET_UNUSABLE_ADDRESS(a) \
((htonl(INADDR_BROADCAST) == a) || \
ipv4_is_zeronet(a))
/*
* Less bad way to call ioctl from within the kernel; this needs to be
* done some other way to get the call out of interrupt context.


@ -1388,6 +1388,9 @@ static int alx_resume(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct alx_priv *alx = pci_get_drvdata(pdev);
struct alx_hw *hw = &alx->hw;
alx_reset_phy(hw);
if (!netif_running(alx->dev))
return 0;


@ -1376,7 +1376,6 @@ enum {
BNX2X_SP_RTNL_RX_MODE,
BNX2X_SP_RTNL_HYPERVISOR_VLAN,
BNX2X_SP_RTNL_TX_STOP,
BNX2X_SP_RTNL_TX_RESUME,
};
struct bnx2x_prev_path_list {


@ -2959,6 +2959,10 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
bp->port.pmf = 0;
/* clear pending work in rtnl task */
bp->sp_rtnl_state = 0;
smp_mb();
/* Free SKBs, SGEs, TPA pool and driver internals */
bnx2x_free_skbs(bp);
if (CNIC_LOADED(bp))


@ -778,11 +778,6 @@ void bnx2x_dcbx_set_params(struct bnx2x *bp, u32 state)
/* ets may affect cmng configuration: reinit it in hw */
bnx2x_set_local_cmng(bp);
set_bit(BNX2X_SP_RTNL_TX_RESUME, &bp->sp_rtnl_state);
schedule_delayed_work(&bp->sp_rtnl_task, 0);
return;
case BNX2X_DCBX_STATE_TX_RELEASED:
DP(BNX2X_MSG_DCB, "BNX2X_DCBX_STATE_TX_RELEASED\n");


@ -577,7 +577,9 @@ void bnx2x_write_dmae(struct bnx2x *bp, dma_addr_t dma_addr, u32 dst_addr,
rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));
if (rc) {
BNX2X_ERR("DMAE returned failure %d\n", rc);
#ifdef BNX2X_STOP_ON_ERROR
bnx2x_panic();
#endif
}
}
@ -614,7 +616,9 @@ void bnx2x_read_dmae(struct bnx2x *bp, u32 src_addr, u32 len32)
rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));
if (rc) {
BNX2X_ERR("DMAE returned failure %d\n", rc);
#ifdef BNX2X_STOP_ON_ERROR
bnx2x_panic();
#endif
}
}
@ -5231,18 +5235,18 @@ static void bnx2x_eq_int(struct bnx2x *bp)
case EVENT_RING_OPCODE_STOP_TRAFFIC:
DP(BNX2X_MSG_SP | BNX2X_MSG_DCB, "got STOP TRAFFIC\n");
bnx2x_dcbx_set_params(bp, BNX2X_DCBX_STATE_TX_PAUSED);
if (f_obj->complete_cmd(bp, f_obj,
BNX2X_F_CMD_TX_STOP))
break;
bnx2x_dcbx_set_params(bp, BNX2X_DCBX_STATE_TX_PAUSED);
goto next_spqe;
case EVENT_RING_OPCODE_START_TRAFFIC:
DP(BNX2X_MSG_SP | BNX2X_MSG_DCB, "got START TRAFFIC\n");
bnx2x_dcbx_set_params(bp, BNX2X_DCBX_STATE_TX_RELEASED);
if (f_obj->complete_cmd(bp, f_obj,
BNX2X_F_CMD_TX_START))
break;
bnx2x_dcbx_set_params(bp, BNX2X_DCBX_STATE_TX_RELEASED);
goto next_spqe;
case EVENT_RING_OPCODE_FUNCTION_UPDATE:
@ -9352,6 +9356,10 @@ static int bnx2x_process_kill(struct bnx2x *bp, bool global)
bnx2x_process_kill_chip_reset(bp, global);
barrier();
/* clear errors in PGB */
if (!CHIP_IS_E1x(bp))
REG_WR(bp, PGLUE_B_REG_LATCHED_ERRORS_CLR, 0x7f);
/* Recover after reset: */
/* MCP */
if (global && bnx2x_reset_mcp_comp(bp, val))
@ -9706,11 +9714,10 @@ sp_rtnl_not_reset:
&bp->sp_rtnl_state))
bnx2x_pf_set_vfs_vlan(bp);
if (test_and_clear_bit(BNX2X_SP_RTNL_TX_STOP, &bp->sp_rtnl_state))
if (test_and_clear_bit(BNX2X_SP_RTNL_TX_STOP, &bp->sp_rtnl_state)) {
bnx2x_dcbx_stop_hw_tx(bp);
if (test_and_clear_bit(BNX2X_SP_RTNL_TX_RESUME, &bp->sp_rtnl_state))
bnx2x_dcbx_resume_hw_tx(bp);
}
/* work which needs rtnl lock not-taken (as it takes the lock itself and
* can be called from other contexts as well)


@ -2864,6 +2864,17 @@
#define PGLUE_B_REG_INTERNAL_PFID_ENABLE_TARGET_READ 0x9430
#define PGLUE_B_REG_INTERNAL_PFID_ENABLE_TARGET_WRITE 0x9434
#define PGLUE_B_REG_INTERNAL_VFID_ENABLE 0x9438
/* [W 7] Writing 1 to each bit in this register clears a corresponding error
* details register and enables logging new error details. Bit 0 - clears
* INCORRECT_RCV_DETAILS; Bit 1 - clears RX_ERR_DETAILS; Bit 2 - clears
* TX_ERR_WR_ADD_31_0 TX_ERR_WR_ADD_63_32 TX_ERR_WR_DETAILS
* TX_ERR_WR_DETAILS2 TX_ERR_RD_ADD_31_0 TX_ERR_RD_ADD_63_32
* TX_ERR_RD_DETAILS TX_ERR_RD_DETAILS2 TX_ERR_WR_DETAILS_ICPL; Bit 3 -
* clears VF_LENGTH_VIOLATION_DETAILS. Bit 4 - clears
* VF_GRC_SPACE_VIOLATION_DETAILS. Bit 5 - clears RX_TCPL_ERR_DETAILS. Bit 6
* - clears TCPL_IN_TWO_RCBS_DETAILS. */
#define PGLUE_B_REG_LATCHED_ERRORS_CLR 0x943c
/* [R 9] Interrupt register #0 read */
#define PGLUE_B_REG_PGLUE_B_INT_STS 0x9298
/* [RC 9] Interrupt register #0 read clear */


@ -152,7 +152,7 @@ static int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping)
if (bp->old_bulletin.valid_bitmap & 1 << CHANNEL_DOWN) {
DP(BNX2X_MSG_IOV, "detecting channel down. Aborting message\n");
*done = PFVF_STATUS_SUCCESS;
return 0;
return -EINVAL;
}
/* Write message address */


@ -13618,16 +13618,9 @@ static int tg3_hwtstamp_ioctl(struct net_device *dev,
if (stmpconf.flags)
return -EINVAL;
switch (stmpconf.tx_type) {
case HWTSTAMP_TX_ON:
tg3_flag_set(tp, TX_TSTAMP_EN);
break;
case HWTSTAMP_TX_OFF:
tg3_flag_clear(tp, TX_TSTAMP_EN);
break;
default:
if (stmpconf.tx_type != HWTSTAMP_TX_ON &&
stmpconf.tx_type != HWTSTAMP_TX_OFF)
return -ERANGE;
}
switch (stmpconf.rx_filter) {
case HWTSTAMP_FILTER_NONE:
@ -13689,6 +13682,11 @@ static int tg3_hwtstamp_ioctl(struct net_device *dev,
tw32(TG3_RX_PTP_CTL,
tp->rxptpctl | TG3_RX_PTP_CTL_HWTS_INTERLOCK);
if (stmpconf.tx_type == HWTSTAMP_TX_ON)
tg3_flag_set(tp, TX_TSTAMP_EN);
else
tg3_flag_clear(tp, TX_TSTAMP_EN);
return copy_to_user(ifr->ifr_data, &stmpconf, sizeof(stmpconf)) ?
-EFAULT : 0;
}


@ -1758,7 +1758,7 @@ err:
/* Uses sycnhronous mcc */
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
u32 num, bool untagged, bool promiscuous)
u32 num, bool promiscuous)
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_vlan_config *req;
@ -1778,7 +1778,7 @@ int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
req->interface_id = if_id;
req->promiscuous = promiscuous;
req->untagged = untagged;
req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
req->num_vlan = num;
if (!promiscuous) {
memcpy(req->normal_vlan, vtag_array,
@ -1847,7 +1847,19 @@ int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
memcpy(req->mcast_mac[i++].byte, ha->addr, ETH_ALEN);
}
if ((req->if_flags_mask & cpu_to_le32(be_if_cap_flags(adapter))) !=
req->if_flags_mask) {
dev_warn(&adapter->pdev->dev,
"Cannot set rx filter flags 0x%x\n",
req->if_flags_mask);
dev_warn(&adapter->pdev->dev,
"Interface is capable of 0x%x flags only\n",
be_if_cap_flags(adapter));
}
req->if_flags_mask &= cpu_to_le32(be_if_cap_flags(adapter));
status = be_mcc_notify_wait(adapter);
err:
spin_unlock_bh(&adapter->mcc_lock);
return status;


@ -1984,7 +1984,7 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter, char *fw_ver,
char *fw_on_flash);
int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *, int num);
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
u32 num, bool untagged, bool promiscuous);
u32 num, bool promiscuous);
int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 status);
int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc);
int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc);


@ -1079,7 +1079,7 @@ static int be_vid_config(struct be_adapter *adapter)
vids[num++] = cpu_to_le16(i);
status = be_cmd_vlan_config(adapter, adapter->if_handle,
vids, num, 1, 0);
vids, num, 0);
if (status) {
/* Set to VLAN promisc mode as setting VLAN filter failed */
@ -2676,6 +2676,11 @@ static int be_close(struct net_device *netdev)
be_rx_qs_destroy(adapter);
for (i = 1; i < (adapter->uc_macs + 1); i++)
be_cmd_pmac_del(adapter, adapter->if_handle,
adapter->pmac_id[i], 0);
adapter->uc_macs = 0;
for_all_evt_queues(adapter, eqo, i) {
if (msix_enabled(adapter))
synchronize_irq(be_msix_vec_get(adapter, eqo));


@ -386,7 +386,14 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
*/
bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr,
FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
bdp->cbd_bufaddr = 0;
fep->tx_skbuff[index] = NULL;
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "Tx DMA memory map failed\n");
return NETDEV_TX_OK;
}
/* Send it on its way. Tell FEC it's ready, interrupt when done,
* it's the last BD of the frame, and to put the CRC on the end.
*/
@ -861,6 +868,7 @@ fec_enet_rx(struct net_device *ndev, int budget)
struct bufdesc_ex *ebdp = NULL;
bool vlan_packet_rcvd = false;
u16 vlan_tag;
int index = 0;
#ifdef CONFIG_M532x
flush_cache_all();
@ -916,10 +924,15 @@ fec_enet_rx(struct net_device *ndev, int budget)
ndev->stats.rx_packets++;
pkt_len = bdp->cbd_datlen;
ndev->stats.rx_bytes += pkt_len;
data = (__u8*)__va(bdp->cbd_bufaddr);
dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
FEC_ENET_TX_FRSIZE, DMA_FROM_DEVICE);
if (fep->bufdesc_ex)
index = (struct bufdesc_ex *)bdp -
(struct bufdesc_ex *)fep->rx_bd_base;
else
index = bdp - fep->rx_bd_base;
data = fep->rx_skbuff[index]->data;
dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(data, pkt_len);
@ -999,8 +1012,8 @@ fec_enet_rx(struct net_device *ndev, int budget)
napi_gro_receive(&fep->napi, skb);
}
bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, data,
FEC_ENET_TX_FRSIZE, DMA_FROM_DEVICE);
dma_sync_single_for_device(&fep->pdev->dev, bdp->cbd_bufaddr,
FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
rx_processing_done:
/* Clear the status flags for this buffer */
status &= ~BD_ENET_RX_STATS;
@ -1719,6 +1732,12 @@ static int fec_enet_alloc_buffers(struct net_device *ndev)
bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, skb->data,
FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
fec_enet_free_buffers(ndev);
if (net_ratelimit())
netdev_err(ndev, "Rx DMA memory map failed\n");
return -ENOMEM;
}
bdp->cbd_sc = BD_ENET_RX_EMPTY;
if (fep->bufdesc_ex) {


@ -3482,10 +3482,10 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
* specified. Matching the kind of event packet is not supported, with the
* exception of "all V2 events regardless of level 2 or 4".
**/
static int e1000e_config_hwtstamp(struct e1000_adapter *adapter)
static int e1000e_config_hwtstamp(struct e1000_adapter *adapter,
struct hwtstamp_config *config)
{
struct e1000_hw *hw = &adapter->hw;
struct hwtstamp_config *config = &adapter->hwtstamp_config;
u32 tsync_tx_ctl = E1000_TSYNCTXCTL_ENABLED;
u32 tsync_rx_ctl = E1000_TSYNCRXCTL_ENABLED;
u32 rxmtrl = 0;
@ -3586,6 +3586,8 @@ static int e1000e_config_hwtstamp(struct e1000_adapter *adapter)
return -ERANGE;
}
adapter->hwtstamp_config = *config;
/* enable/disable Tx h/w time stamping */
regval = er32(TSYNCTXCTL);
regval &= ~E1000_TSYNCTXCTL_ENABLED;
@ -3874,7 +3876,7 @@ void e1000e_reset(struct e1000_adapter *adapter)
e1000e_reset_adaptive(hw);
/* initialize systim and reset the ns time counter */
e1000e_config_hwtstamp(adapter);
e1000e_config_hwtstamp(adapter, &adapter->hwtstamp_config);
/* Set EEE advertisement as appropriate */
if (adapter->flags2 & FLAG2_HAS_EEE) {
@ -5797,14 +5799,10 @@ static int e1000e_hwtstamp_ioctl(struct net_device *netdev, struct ifreq *ifr)
if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
return -EFAULT;
adapter->hwtstamp_config = config;
ret_val = e1000e_config_hwtstamp(adapter);
ret_val = e1000e_config_hwtstamp(adapter, &config);
if (ret_val)
return ret_val;
config = adapter->hwtstamp_config;
switch (config.rx_filter) {
case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:


@ -2890,7 +2890,8 @@ static int mv643xx_eth_probe(struct platform_device *pdev)
PHY_INTERFACE_MODE_GMII);
if (!mp->phy)
err = -ENODEV;
phy_addr_set(mp, mp->phy->addr);
else
phy_addr_set(mp, mp->phy->addr);
} else if (pd->phy_addr != MV643XX_ETH_PHY_NONE) {
mp->phy = phy_scan(mp, pd->phy_addr);


@ -245,16 +245,8 @@ static int hwtstamp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
/* Get ieee1588's dev information */
pdev = adapter->ptp_pdev;
switch (cfg.tx_type) {
case HWTSTAMP_TX_OFF:
adapter->hwts_tx_en = 0;
break;
case HWTSTAMP_TX_ON:
adapter->hwts_tx_en = 1;
break;
default:
if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON)
return -ERANGE;
}
switch (cfg.rx_filter) {
case HWTSTAMP_FILTER_NONE:
@ -284,6 +276,8 @@ static int hwtstamp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
return -ERANGE;
}
adapter->hwts_tx_en = cfg.tx_type == HWTSTAMP_TX_ON;
/* Clear out any old time stamps. */
pch_ch_event_write(pdev, TX_SNAPSHOT_LOCKED | RX_SNAPSHOT_LOCKED);


@ -435,16 +435,9 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
if (config.flags)
return -EINVAL;
switch (config.tx_type) {
case HWTSTAMP_TX_OFF:
priv->hwts_tx_en = 0;
break;
case HWTSTAMP_TX_ON:
priv->hwts_tx_en = 1;
break;
default:
if (config.tx_type != HWTSTAMP_TX_OFF &&
config.tx_type != HWTSTAMP_TX_ON)
return -ERANGE;
}
if (priv->adv_ts) {
switch (config.rx_filter) {
@ -576,6 +569,7 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
}
}
priv->hwts_rx_en = ((config.rx_filter == HWTSTAMP_FILTER_NONE) ? 0 : 1);
priv->hwts_tx_en = config.tx_type == HWTSTAMP_TX_ON;
if (!priv->hwts_tx_en && !priv->hwts_rx_en)
priv->hw->ptp->config_hw_tstamping(priv->ioaddr, 0);


@ -967,14 +967,19 @@ static inline void cpsw_add_dual_emac_def_ale_entries(
priv->host_port, ALE_VLAN, slave->port_vlan);
}
static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
static void soft_reset_slave(struct cpsw_slave *slave)
{
char name[32];
snprintf(name, sizeof(name), "slave-%d", slave->slave_num);
soft_reset(name, &slave->sliver->soft_reset);
}
static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
{
u32 slave_port;
sprintf(name, "slave-%d", slave->slave_num);
soft_reset(name, &slave->sliver->soft_reset);
soft_reset_slave(slave);
/* setup priority mapping */
__raw_writel(RX_PRIORITY_MAPPING, &slave->sliver->rx_pri_map);
@ -1323,6 +1328,10 @@ static int cpsw_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
struct cpts *cpts = priv->cpts;
struct hwtstamp_config cfg;
if (priv->version != CPSW_VERSION_1 &&
priv->version != CPSW_VERSION_2)
return -EOPNOTSUPP;
if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
return -EFAULT;
@ -1330,16 +1339,8 @@ static int cpsw_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
if (cfg.flags)
return -EINVAL;
switch (cfg.tx_type) {
case HWTSTAMP_TX_OFF:
cpts->tx_enable = 0;
break;
case HWTSTAMP_TX_ON:
cpts->tx_enable = 1;
break;
default:
if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON)
return -ERANGE;
}
switch (cfg.rx_filter) {
case HWTSTAMP_FILTER_NONE:
@ -1366,6 +1367,8 @@ static int cpsw_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
return -ERANGE;
}
cpts->tx_enable = cfg.tx_type == HWTSTAMP_TX_ON;
switch (priv->version) {
case CPSW_VERSION_1:
cpsw_hwtstamp_v1(priv);
@ -1374,7 +1377,7 @@ static int cpsw_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
cpsw_hwtstamp_v2(priv);
break;
default:
return -ENOTSUPP;
WARN_ON(1);
}
return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
@ -2173,8 +2176,9 @@ static int cpsw_suspend(struct device *dev)
if (netif_running(ndev))
cpsw_ndo_stop(ndev);
soft_reset("sliver 0", &priv->slaves[0].sliver->soft_reset);
soft_reset("sliver 1", &priv->slaves[1].sliver->soft_reset);
for_each_slave(priv, soft_reset_slave);
pm_runtime_put_sync(&pdev->dev);
/* Select sleep pin state */


@ -389,16 +389,8 @@ static int hwtstamp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
ch = PORT2CHANNEL(port);
regs = (struct ixp46x_ts_regs __iomem *) IXP4XX_TIMESYNC_BASE_VIRT;
switch (cfg.tx_type) {
case HWTSTAMP_TX_OFF:
port->hwts_tx_en = 0;
break;
case HWTSTAMP_TX_ON:
port->hwts_tx_en = 1;
break;
default:
if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON)
return -ERANGE;
}
switch (cfg.rx_filter) {
case HWTSTAMP_FILTER_NONE:
@ -416,6 +408,8 @@ static int hwtstamp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
return -ERANGE;
}
port->hwts_tx_en = cfg.tx_type == HWTSTAMP_TX_ON;
/* Clear out any old time stamps. */
__raw_writel(TX_SNAPSHOT_LOCKED | RX_SNAPSHOT_LOCKED,
&regs->channel[ch].ch_event);


@ -628,6 +628,7 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
const struct iovec *iv, unsigned long total_len,
size_t count, int noblock)
{
int good_linear = SKB_MAX_HEAD(NET_IP_ALIGN);
struct sk_buff *skb;
struct macvlan_dev *vlan;
unsigned long len = total_len;
@ -670,6 +671,8 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
if (m && m->msg_control && sock_flag(&q->sk, SOCK_ZEROCOPY)) {
copylen = vnet_hdr.hdr_len ? vnet_hdr.hdr_len : GOODCOPY_LEN;
if (copylen > good_linear)
copylen = good_linear;
linear = copylen;
if (iov_pages(iv, vnet_hdr_len + copylen, count)
<= MAX_SKB_FRAGS)
@ -678,7 +681,10 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
if (!zerocopy) {
copylen = len;
linear = vnet_hdr.hdr_len;
if (vnet_hdr.hdr_len > good_linear)
linear = good_linear;
else
linear = vnet_hdr.hdr_len;
}
skb = macvtap_alloc_skb(&q->sk, NET_IP_ALIGN, copylen,


@ -2650,7 +2650,7 @@ static int team_nl_cmd_port_list_get(struct sk_buff *skb,
return err;
}
static struct genl_ops team_nl_ops[] = {
static const struct genl_ops team_nl_ops[] = {
{
.cmd = TEAM_CMD_NOOP,
.doit = team_nl_cmd_noop,
@ -2676,15 +2676,15 @@ static struct genl_ops team_nl_ops[] = {
},
};
static struct genl_multicast_group team_change_event_mcgrp = {
.name = TEAM_GENL_CHANGE_EVENT_MC_GRP_NAME,
static const struct genl_multicast_group team_nl_mcgrps[] = {
{ .name = TEAM_GENL_CHANGE_EVENT_MC_GRP_NAME, },
};
static int team_nl_send_multicast(struct sk_buff *skb,
struct team *team, u32 portid)
{
return genlmsg_multicast_netns(dev_net(team->dev), skb, 0,
team_change_event_mcgrp.id, GFP_KERNEL);
return genlmsg_multicast_netns(&team_nl_family, dev_net(team->dev),
skb, 0, 0, GFP_KERNEL);
}
static int team_nl_send_event_options_get(struct team *team,
@ -2703,23 +2703,8 @@ static int team_nl_send_event_port_get(struct team *team,
static int team_nl_init(void)
{
int err;
err = genl_register_family_with_ops(&team_nl_family, team_nl_ops,
ARRAY_SIZE(team_nl_ops));
if (err)
return err;
err = genl_register_mc_group(&team_nl_family, &team_change_event_mcgrp);
if (err)
goto err_change_event_grp_reg;
return 0;
err_change_event_grp_reg:
genl_unregister_family(&team_nl_family);
return err;
return genl_register_family_with_ops_groups(&team_nl_family, team_nl_ops,
team_nl_mcgrps);
}
static void team_nl_fini(void)


@ -981,6 +981,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
struct sk_buff *skb;
size_t len = total_len, align = NET_SKB_PAD, linear;
struct virtio_net_hdr gso = { 0 };
int good_linear;
int offset = 0;
int copylen;
bool zerocopy = false;
@ -1021,12 +1022,16 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
return -EINVAL;
}
good_linear = SKB_MAX_HEAD(align);
if (msg_control) {
/* There are 256 bytes to be copied in skb, so there is
* enough room for skb expand head in case it is used.
* The rest of the buffer is mapped from userspace.
*/
copylen = gso.hdr_len ? gso.hdr_len : GOODCOPY_LEN;
if (copylen > good_linear)
copylen = good_linear;
linear = copylen;
if (iov_pages(iv, offset + copylen, count) <= MAX_SKB_FRAGS)
zerocopy = true;
@ -1034,7 +1039,10 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
if (!zerocopy) {
copylen = len;
linear = gso.hdr_len;
if (gso.hdr_len > good_linear)
linear = good_linear;
else
linear = gso.hdr_len;
}
skb = tun_alloc_skb(tfile, align, copylen, linear, noblock);


@ -66,7 +66,7 @@ static void cdc_ncm_tx_timeout_start(struct cdc_ncm_ctx *ctx);
static enum hrtimer_restart cdc_ncm_tx_timer_cb(struct hrtimer *hr_timer);
static struct usb_driver cdc_ncm_driver;
static u8 cdc_ncm_setup(struct usbnet *dev)
static int cdc_ncm_setup(struct usbnet *dev)
{
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
struct usb_cdc_ncm_ntb_parameters ncm_parm;


@ -204,9 +204,6 @@ static void intr_complete (struct urb *urb)
break;
}
if (!netif_running (dev->net))
return;
status = usb_submit_urb (urb, GFP_ATOMIC);
if (status != 0)
netif_err(dev, timer, dev->net,


@ -36,7 +36,10 @@ module_param(csum, bool, 0444);
module_param(gso, bool, 0444);
/* FIXME: MTU in config. */
#define MAX_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
#define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
#define MERGE_BUFFER_LEN (ALIGN(GOOD_PACKET_LEN + \
sizeof(struct virtio_net_hdr_mrg_rxbuf), \
L1_CACHE_BYTES))
#define GOOD_COPY_LEN 128
#define VIRTNET_DRIVER_VERSION "1.0.0"
@ -314,10 +317,10 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
head_skb->dev->stats.rx_length_errors++;
return -EINVAL;
}
if (unlikely(len > MAX_PACKET_LEN)) {
if (unlikely(len > MERGE_BUFFER_LEN)) {
pr_debug("%s: rx error: merge buffer too long\n",
head_skb->dev->name);
len = MAX_PACKET_LEN;
len = MERGE_BUFFER_LEN;
}
if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) {
struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC);
@ -336,18 +339,17 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
if (curr_skb != head_skb) {
head_skb->data_len += len;
head_skb->len += len;
head_skb->truesize += MAX_PACKET_LEN;
head_skb->truesize += MERGE_BUFFER_LEN;
}
page = virt_to_head_page(buf);
offset = buf - (char *)page_address(page);
if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) {
put_page(page);
skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1,
len, MAX_PACKET_LEN);
len, MERGE_BUFFER_LEN);
} else {
skb_add_rx_frag(curr_skb, num_skb_frags, page,
offset, len,
MAX_PACKET_LEN);
offset, len, MERGE_BUFFER_LEN);
}
--rq->num;
}
@ -383,7 +385,7 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len)
struct page *page = virt_to_head_page(buf);
skb = page_to_skb(rq, page,
(char *)buf - (char *)page_address(page),
len, MAX_PACKET_LEN);
len, MERGE_BUFFER_LEN);
if (unlikely(!skb)) {
dev->stats.rx_dropped++;
put_page(page);
@ -471,11 +473,11 @@ static int add_recvbuf_small(struct receive_queue *rq, gfp_t gfp)
struct skb_vnet_hdr *hdr;
int err;
skb = __netdev_alloc_skb_ip_align(vi->dev, MAX_PACKET_LEN, gfp);
skb = __netdev_alloc_skb_ip_align(vi->dev, GOOD_PACKET_LEN, gfp);
if (unlikely(!skb))
return -ENOMEM;
skb_put(skb, MAX_PACKET_LEN);
skb_put(skb, GOOD_PACKET_LEN);
hdr = skb_vnet_hdr(skb);
sg_set_buf(rq->sg, &hdr->hdr, sizeof hdr->hdr);
@ -542,20 +544,20 @@ static int add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
int err;
if (gfp & __GFP_WAIT) {
if (skb_page_frag_refill(MAX_PACKET_LEN, &vi->alloc_frag,
if (skb_page_frag_refill(MERGE_BUFFER_LEN, &vi->alloc_frag,
gfp)) {
buf = (char *)page_address(vi->alloc_frag.page) +
vi->alloc_frag.offset;
get_page(vi->alloc_frag.page);
vi->alloc_frag.offset += MAX_PACKET_LEN;
vi->alloc_frag.offset += MERGE_BUFFER_LEN;
}
} else {
buf = netdev_alloc_frag(MAX_PACKET_LEN);
buf = netdev_alloc_frag(MERGE_BUFFER_LEN);
}
if (!buf)
return -ENOMEM;
sg_init_one(rq->sg, buf, MAX_PACKET_LEN);
sg_init_one(rq->sg, buf, MERGE_BUFFER_LEN);
err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, buf, gfp);
if (err < 0)
put_page(virt_to_head_page(buf));
@ -1619,8 +1621,8 @@ static int virtnet_probe(struct virtio_device *vdev)
if (err)
goto free_stats;
netif_set_real_num_tx_queues(dev, 1);
netif_set_real_num_rx_queues(dev, 1);
netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
err = register_netdev(dev);
if (err) {


@ -187,17 +187,17 @@ static void ar9003_hw_init_mode_regs(struct ath_hw *ah)
INIT_INI_ARRAY(&ah->iniCckfirJapan2484,
ar9485_1_1_baseband_core_txfir_coeff_japan_2484);
/* Load PCIE SERDES settings from INI */
/* Awake Setting */
INIT_INI_ARRAY(&ah->iniPcieSerdes,
ar9485_1_1_pcie_phy_clkreq_disable_L1);
/* Sleep Setting */
INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower,
ar9485_1_1_pcie_phy_clkreq_disable_L1);
if (ah->config.no_pll_pwrsave) {
INIT_INI_ARRAY(&ah->iniPcieSerdes,
ar9485_1_1_pcie_phy_clkreq_disable_L1);
INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower,
ar9485_1_1_pcie_phy_clkreq_disable_L1);
} else {
INIT_INI_ARRAY(&ah->iniPcieSerdes,
ar9485_1_1_pll_on_cdr_on_clkreq_disable_L1);
INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower,
ar9485_1_1_pll_on_cdr_on_clkreq_disable_L1);
}
} else if (AR_SREV_9462_21(ah)) {
INIT_INI_ARRAY(&ah->iniMac[ATH_INI_CORE],
ar9462_2p1_mac_core);


@ -32,13 +32,6 @@ static const u32 ar9485_1_1_mac_postamble[][5] = {
{0x00008318, 0x00003e80, 0x00007d00, 0x00006880, 0x00003440},
};
static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_disable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x18012e5e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
static const u32 ar9485Common_wo_xlna_rx_gain_1_1[][2] = {
/* Addr allmodes */
{0x00009e00, 0x037216a0},
@ -1101,20 +1094,6 @@ static const u32 ar9485_common_rx_gain_1_1[][2] = {
{0x0000a1fc, 0x00000296},
};
static const u32 ar9485_1_1_pcie_phy_pll_on_clkreq_enable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x18052e5e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
static const u32 ar9485_1_1_pcie_phy_clkreq_enable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x18053e5e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
static const u32 ar9485_1_1_soc_preamble[][2] = {
/* Addr allmodes */
{0x00004014, 0xba280400},
@ -1173,13 +1152,6 @@ static const u32 ar9485_1_1_baseband_postamble[][5] = {
{0x0000be18, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
};
static const u32 ar9485_1_1_pcie_phy_clkreq_disable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x18013e5e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
static const u32 ar9485_1_1_radio_postamble[][2] = {
/* Addr allmodes */
{0x0001609c, 0x0b283f31},
@ -1358,4 +1330,18 @@ static const u32 ar9485_1_1_baseband_core_txfir_coeff_japan_2484[][2] = {
{0x0000a3a0, 0xca9228ee},
};
static const u32 ar9485_1_1_pcie_phy_clkreq_disable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x18013e5e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
static const u32 ar9485_1_1_pll_on_cdr_on_clkreq_disable_L1[][2] = {
/* Addr allmodes */
{0x00018c00, 0x1801265e},
{0x00018c04, 0x000801d8},
{0x00018c08, 0x0000080c},
};
#endif /* INITVALS_9485_H */


@ -632,15 +632,16 @@ void ath_ant_comb_scan(struct ath_softc *sc, struct ath_rx_status *rs);
/* Main driver core */
/********************/
#define ATH9K_PCI_CUS198 0x0001
#define ATH9K_PCI_CUS230 0x0002
#define ATH9K_PCI_CUS217 0x0004
#define ATH9K_PCI_CUS252 0x0008
#define ATH9K_PCI_WOW 0x0010
#define ATH9K_PCI_BT_ANT_DIV 0x0020
#define ATH9K_PCI_D3_L1_WAR 0x0040
#define ATH9K_PCI_AR9565_1ANT 0x0080
#define ATH9K_PCI_AR9565_2ANT 0x0100
#define ATH9K_PCI_CUS198 0x0001
#define ATH9K_PCI_CUS230 0x0002
#define ATH9K_PCI_CUS217 0x0004
#define ATH9K_PCI_CUS252 0x0008
#define ATH9K_PCI_WOW 0x0010
#define ATH9K_PCI_BT_ANT_DIV 0x0020
#define ATH9K_PCI_D3_L1_WAR 0x0040
#define ATH9K_PCI_AR9565_1ANT 0x0080
#define ATH9K_PCI_AR9565_2ANT 0x0100
#define ATH9K_PCI_NO_PLL_PWRSAVE 0x0200
/*
* Default cache line size, in bytes.


@ -44,14 +44,20 @@ static ssize_t read_file_dfs(struct file *file, char __user *user_buf,
if (buf == NULL)
return -ENOMEM;
if (sc->dfs_detector)
dfs_pool_stats = sc->dfs_detector->get_stats(sc->dfs_detector);
len += scnprintf(buf + len, size - len, "DFS support for "
"macVersion = 0x%x, macRev = 0x%x: %s\n",
hw_ver->macVersion, hw_ver->macRev,
(sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_DFS) ?
"enabled" : "disabled");
if (!sc->dfs_detector) {
len += scnprintf(buf + len, size - len,
"DFS detector not enabled\n");
goto exit;
}
dfs_pool_stats = sc->dfs_detector->get_stats(sc->dfs_detector);
len += scnprintf(buf + len, size - len, "Pulse detector statistics:\n");
ATH9K_DFS_STAT("pulse events reported ", pulses_total);
ATH9K_DFS_STAT("invalid pulse events ", pulses_no_dfs);
@ -76,6 +82,7 @@ static ssize_t read_file_dfs(struct file *file, char __user *user_buf,
ATH9K_DFS_POOL_STAT("Seqs. alloc error ", pseq_alloc_error);
ATH9K_DFS_POOL_STAT("Seqs. in use ", pseq_used);
exit:
if (len > size)
len = size;


@ -316,6 +316,7 @@ struct ath9k_ops_config {
u32 ant_ctrl_comm2g_switch_enable;
bool xatten_margin_cfg;
bool alt_mingainidx;
bool no_pll_pwrsave;
};
enum ath9k_int {


@ -609,6 +609,11 @@ static void ath9k_init_platform(struct ath_softc *sc)
ah->config.pcie_waen = 0x0040473b;
ath_info(common, "Enable WAR for ASPM D3/L1\n");
}
if (sc->driver_data & ATH9K_PCI_NO_PLL_PWRSAVE) {
ah->config.no_pll_pwrsave = true;
ath_info(common, "Disable PLL PowerSave\n");
}
}
static void ath9k_eeprom_request_cb(const struct firmware *eeprom_blob,
@ -863,8 +868,8 @@ static const struct ieee80211_iface_combination if_comb[] = {
.max_interfaces = 1,
.num_different_channels = 1,
.beacon_int_infra_match = true,
.radar_detect_widths = BIT(NL80211_CHAN_NO_HT) |
BIT(NL80211_CHAN_HT20),
.radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) |
BIT(NL80211_CHAN_WIDTH_20),
}
};


@ -195,6 +195,93 @@ static DEFINE_PCI_DEVICE_TABLE(ath_pci_id_table) = {
0x3219),
.driver_data = ATH9K_PCI_BT_ANT_DIV },
/* AR9485 cards with PLL power-save disabled by default. */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x2C97),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x2100),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x1C56, /* ASKEY */
0x4001),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x11AD, /* LITEON */
0x6627),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x11AD, /* LITEON */
0x6628),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_FOXCONN,
0xE04E),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_FOXCONN,
0xE04F),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x144F, /* ASKEY */
0x7197),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x1B9A, /* XAVI */
0x2000),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x1B9A, /* XAVI */
0x2001),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x1186),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x1F86),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x1195),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_AZWAVE,
0x1F95),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x1B9A, /* XAVI */
0x1C00),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
0x1B9A, /* XAVI */
0x1C01),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
0x0032,
PCI_VENDOR_ID_ASUSTEK,
0x850D),
.driver_data = ATH9K_PCI_NO_PLL_PWRSAVE },
{ PCI_VDEVICE(ATHEROS, 0x0032) }, /* PCI-E AR9485 */
{ PCI_VDEVICE(ATHEROS, 0x0033) }, /* PCI-E AR9580 */


@ -126,7 +126,7 @@ static ssize_t write_file_dump(struct file *file,
if (begin == NULL)
break;
if (kstrtoul(begin, 0, (unsigned long *)(arg + i)) != 0)
if (kstrtou32(begin, 0, &arg[i]) != 0)
break;
}


@ -1286,7 +1286,8 @@ int wcn36xx_smd_send_beacon(struct wcn36xx *wcn, struct ieee80211_vif *vif,
} else {
wcn36xx_err("Beacon is to big: beacon size=%d\n",
msg_body.beacon_length);
return -ENOMEM;
ret = -ENOMEM;
goto out;
}
memcpy(msg_body.bssid, vif->addr, ETH_ALEN);
@ -1327,7 +1328,8 @@ int wcn36xx_smd_update_proberesp_tmpl(struct wcn36xx *wcn,
if (skb->len > BEACON_TEMPLATE_SIZE) {
wcn36xx_warn("probe response template is too big: %d\n",
skb->len);
return -E2BIG;
ret = -E2BIG;
goto out;
}
msg.probe_resp_template_len = skb->len;
@ -1606,7 +1608,8 @@ int wcn36xx_smd_keep_alive_req(struct wcn36xx *wcn,
/* TODO: it also support ARP response type */
} else {
wcn36xx_warn("unknow keep alive packet type %d\n", packet_type);
return -EINVAL;
ret = -EINVAL;
goto out;
}
PREPARE_HAL_BUF(wcn->hal_buf, msg_body);


@ -913,7 +913,10 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
char *p2;
struct debug_data *d = f->private_data;
pdata = kmalloc(cnt, GFP_KERNEL);
if (cnt == 0)
return 0;
pdata = kmalloc(cnt + 1, GFP_KERNEL);
if (pdata == NULL)
return 0;
@ -922,6 +925,7 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
kfree(pdata);
return 0;
}
pdata[cnt] = '\0';
p0 = pdata;
for (i = 0; i < num_of_items; i++) {


@ -902,6 +902,7 @@ static int if_cs_probe(struct pcmcia_device *p_dev)
if (card->model == MODEL_UNKNOWN) {
pr_err("unsupported manf_id 0x%04x / card_id 0x%04x\n",
p_dev->manf_id, p_dev->card_id);
ret = -ENODEV;
goto out2;
}


@ -2097,7 +2097,7 @@ out:
}
/* Generic Netlink operations array */
static struct genl_ops hwsim_ops[] = {
static const struct genl_ops hwsim_ops[] = {
{
.cmd = HWSIM_CMD_REGISTER,
.policy = hwsim_genl_policy,
@ -2148,8 +2148,7 @@ static int hwsim_init_netlink(void)
printk(KERN_INFO "mac80211_hwsim: initializing netlink\n");
rc = genl_register_family_with_ops(&hwsim_genl_family,
hwsim_ops, ARRAY_SIZE(hwsim_ops));
rc = genl_register_family_with_ops(&hwsim_genl_family, hwsim_ops);
if (rc)
goto failure;


@ -1020,8 +1020,8 @@ struct mwifiex_power_group {
} __packed;
struct mwifiex_types_power_group {
u16 type;
u16 length;
__le16 type;
__le16 length;
} __packed;
struct host_cmd_ds_txpwr_cfg {


@ -82,7 +82,7 @@ mwifiex_update_autoindex_ies(struct mwifiex_private *priv,
struct mwifiex_ie_list *ie_list)
{
u16 travel_len, index, mask;
s16 input_len;
s16 input_len, tlv_len;
struct mwifiex_ie *ie;
u8 *tmp;
@ -91,11 +91,13 @@ mwifiex_update_autoindex_ies(struct mwifiex_private *priv,
ie_list->len = 0;
while (input_len > 0) {
while (input_len >= sizeof(struct mwifiex_ie_types_header)) {
ie = (struct mwifiex_ie *)(((u8 *)ie_list) + travel_len);
input_len -= le16_to_cpu(ie->ie_length) + MWIFIEX_IE_HDR_SIZE;
travel_len += le16_to_cpu(ie->ie_length) + MWIFIEX_IE_HDR_SIZE;
tlv_len = le16_to_cpu(ie->ie_length);
travel_len += tlv_len + MWIFIEX_IE_HDR_SIZE;
if (input_len < tlv_len + MWIFIEX_IE_HDR_SIZE)
return -1;
index = le16_to_cpu(ie->ie_index);
mask = le16_to_cpu(ie->mgmt_subtype_mask);
@ -132,6 +134,7 @@ mwifiex_update_autoindex_ies(struct mwifiex_private *priv,
le16_add_cpu(&ie_list->len,
le16_to_cpu(priv->mgmt_ie[index].ie_length) +
MWIFIEX_IE_HDR_SIZE);
input_len -= tlv_len + MWIFIEX_IE_HDR_SIZE;
}
if (GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_UAP)


@ -1029,7 +1029,10 @@ static int mwifiex_decode_rx_packet(struct mwifiex_adapter *adapter,
struct sk_buff *skb, u32 upld_typ)
{
u8 *cmd_buf;
__le16 *curr_ptr = (__le16 *)skb->data;
u16 pkt_len = le16_to_cpu(*curr_ptr);
skb_trim(skb, pkt_len);
skb_pull(skb, INTF_HEADER_LEN);
switch (upld_typ) {


@ -239,14 +239,14 @@ static int mwifiex_cmd_tx_power_cfg(struct host_cmd_ds_command *cmd,
memmove(cmd_txp_cfg, txp,
sizeof(struct host_cmd_ds_txpwr_cfg) +
sizeof(struct mwifiex_types_power_group) +
pg_tlv->length);
le16_to_cpu(pg_tlv->length));
pg_tlv = (struct mwifiex_types_power_group *) ((u8 *)
cmd_txp_cfg +
sizeof(struct host_cmd_ds_txpwr_cfg));
cmd->size = cpu_to_le16(le16_to_cpu(cmd->size) +
sizeof(struct mwifiex_types_power_group) +
pg_tlv->length);
le16_to_cpu(pg_tlv->length));
} else {
memmove(cmd_txp_cfg, txp, sizeof(*txp));
}


@ -274,17 +274,20 @@ static int mwifiex_ret_tx_rate_cfg(struct mwifiex_private *priv,
struct host_cmd_ds_tx_rate_cfg *rate_cfg = &resp->params.tx_rate_cfg;
struct mwifiex_rate_scope *rate_scope;
struct mwifiex_ie_types_header *head;
u16 tlv, tlv_buf_len;
u16 tlv, tlv_buf_len, tlv_buf_left;
u8 *tlv_buf;
u32 i;
tlv_buf = ((u8 *)rate_cfg) +
sizeof(struct host_cmd_ds_tx_rate_cfg);
tlv_buf_len = le16_to_cpu(*(__le16 *) (tlv_buf + sizeof(u16)));
tlv_buf = ((u8 *)rate_cfg) + sizeof(struct host_cmd_ds_tx_rate_cfg);
tlv_buf_left = le16_to_cpu(resp->size) - S_DS_GEN - sizeof(*rate_cfg);
while (tlv_buf && tlv_buf_len > 0) {
tlv = (*tlv_buf);
tlv = tlv | (*(tlv_buf + 1) << 8);
while (tlv_buf_left >= sizeof(*head)) {
head = (struct mwifiex_ie_types_header *)tlv_buf;
tlv = le16_to_cpu(head->type);
tlv_buf_len = le16_to_cpu(head->len);
if (tlv_buf_left < (sizeof(*head) + tlv_buf_len))
break;
switch (tlv) {
case TLV_TYPE_RATE_SCOPE:
@ -304,9 +307,8 @@ static int mwifiex_ret_tx_rate_cfg(struct mwifiex_private *priv,
/* Add RATE_DROP tlv here */
}
head = (struct mwifiex_ie_types_header *) tlv_buf;
tlv_buf += le16_to_cpu(head->len) + sizeof(*head);
tlv_buf_len -= le16_to_cpu(head->len);
tlv_buf += (sizeof(*head) + tlv_buf_len);
tlv_buf_left -= (sizeof(*head) + tlv_buf_len);
}
priv->is_data_rate_auto = mwifiex_is_rate_auto(priv);
@ -340,13 +342,17 @@ static int mwifiex_get_power_level(struct mwifiex_private *priv, void *data_buf)
((u8 *) data_buf + sizeof(struct host_cmd_ds_txpwr_cfg));
pg = (struct mwifiex_power_group *)
((u8 *) pg_tlv_hdr + sizeof(struct mwifiex_types_power_group));
length = pg_tlv_hdr->length;
if (length > 0) {
max_power = pg->power_max;
min_power = pg->power_min;
length -= sizeof(struct mwifiex_power_group);
}
while (length) {
length = le16_to_cpu(pg_tlv_hdr->length);
/* At least one structure required to update power */
if (length < sizeof(struct mwifiex_power_group))
return 0;
max_power = pg->power_max;
min_power = pg->power_min;
length -= sizeof(struct mwifiex_power_group);
while (length >= sizeof(struct mwifiex_power_group)) {
pg++;
if (max_power < pg->power_max)
max_power = pg->power_max;
@ -356,10 +362,8 @@ static int mwifiex_get_power_level(struct mwifiex_private *priv, void *data_buf)
length -= sizeof(struct mwifiex_power_group);
}
if (pg_tlv_hdr->length > 0) {
priv->min_tx_power_level = (u8) min_power;
priv->max_tx_power_level = (u8) max_power;
}
priv->min_tx_power_level = (u8) min_power;
priv->max_tx_power_level = (u8) max_power;
return 0;
}
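
The mwifiex hunks above converge on one defensive pattern: never trust a TLV length taken from a firmware response, keep walking only while a full header still fits, and bail out as soon as the advertised length would overrun the remaining buffer. Below is a minimal userspace sketch of that walk; the 2-byte type / 2-byte little-endian length layout is illustrative and not the driver's actual structures.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define TLV_HDR_LEN 4	/* 2-byte type + 2-byte length, both little-endian */

static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

/* Walk only while a full header fits, and stop as soon as the advertised
 * length would run past the end of the buffer. */
static void walk_tlvs(const uint8_t *buf, size_t left)
{
	while (left >= TLV_HDR_LEN) {
		uint16_t type = get_le16(buf);
		uint16_t len = get_le16(buf + 2);

		if (left < (size_t)TLV_HDR_LEN + len)
			break;		/* truncated TLV: stop parsing */
		printf("type=%u len=%u\n", type, len);
		buf += TLV_HDR_LEN + len;
		left -= TLV_HDR_LEN + len;
	}
}

int main(void)
{
	/* Two well-formed TLVs followed by one whose length overruns. */
	const uint8_t buf[] = { 1, 0, 2, 0, 0xaa, 0xbb,
				2, 0, 1, 0, 0xcc,
				3, 0, 50, 0 };

	walk_tlvs(buf, sizeof(buf));
	return 0;
}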


@ -638,8 +638,9 @@ int mwifiex_set_tx_power(struct mwifiex_private *priv,
txp_cfg->mode = cpu_to_le32(1);
pg_tlv = (struct mwifiex_types_power_group *)
(buf + sizeof(struct host_cmd_ds_txpwr_cfg));
pg_tlv->type = TLV_TYPE_POWER_GROUP;
pg_tlv->length = 4 * sizeof(struct mwifiex_power_group);
pg_tlv->type = cpu_to_le16(TLV_TYPE_POWER_GROUP);
pg_tlv->length =
cpu_to_le16(4 * sizeof(struct mwifiex_power_group));
pg = (struct mwifiex_power_group *)
(buf + sizeof(struct host_cmd_ds_txpwr_cfg)
+ sizeof(struct mwifiex_types_power_group));


@ -97,6 +97,7 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
struct mwifiex_txinfo *tx_info;
int hdr_chop;
struct timeval tv;
struct ethhdr *p_ethhdr;
u8 rfc1042_eth_hdr[ETH_ALEN] = { 0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00 };
uap_rx_pd = (struct uap_rxpd *)(skb->data);
@ -112,14 +113,36 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
}
if (!memcmp(&rx_pkt_hdr->rfc1042_hdr,
rfc1042_eth_hdr, sizeof(rfc1042_eth_hdr)))
rfc1042_eth_hdr, sizeof(rfc1042_eth_hdr))) {
/* Replace the 803 header and rfc1042 header (llc/snap) with
* an Ethernet II header, keep the src/dst and snap_type
* (ethertype).
*
* The firmware only passes up SNAP frames converting all RX
* data from 802.11 to 802.2/LLC/SNAP frames.
*
* To create the Ethernet II, just move the src, dst address
* right before the snap_type.
*/
p_ethhdr = (struct ethhdr *)
((u8 *)(&rx_pkt_hdr->eth803_hdr)
+ sizeof(rx_pkt_hdr->eth803_hdr)
+ sizeof(rx_pkt_hdr->rfc1042_hdr)
- sizeof(rx_pkt_hdr->eth803_hdr.h_dest)
- sizeof(rx_pkt_hdr->eth803_hdr.h_source)
- sizeof(rx_pkt_hdr->rfc1042_hdr.snap_type));
memcpy(p_ethhdr->h_source, rx_pkt_hdr->eth803_hdr.h_source,
sizeof(p_ethhdr->h_source));
memcpy(p_ethhdr->h_dest, rx_pkt_hdr->eth803_hdr.h_dest,
sizeof(p_ethhdr->h_dest));
/* Chop off the rxpd + the excess memory from
* 802.2/llc/snap header that was removed.
*/
hdr_chop = (u8 *)eth_hdr - (u8 *)uap_rx_pd;
else
hdr_chop = (u8 *)p_ethhdr - (u8 *)uap_rx_pd;
} else {
/* Chop off the rxpd */
hdr_chop = (u8 *)&rx_pkt_hdr->eth803_hdr - (u8 *)uap_rx_pd;
}
/* Chop off the leading header bytes so the it points
* to the start of either the reconstructed EthII frame
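
The pointer arithmetic in the hunk above reduces to a fixed offset: the reconstructed Ethernet II header is laid down so that its last byte coincides with the end of the original 802.3 + LLC/SNAP headers, keeping only dst, src and snap_type. A small sketch of that arithmetic, assuming the usual 14-byte 802.3 header and 8-byte LLC/SNAP header with a 2-byte snap_type (the driver's real struct sizes may differ):

#include <stdio.h>

int main(void)
{
	/* Assumed sizes, for illustration only */
	int eth803_hdr = 14;	/* 6-byte dst + 6-byte src + 2-byte length */
	int rfc1042_hdr = 8;	/* 3-byte LLC + 3-byte OUI + 2-byte snap_type */
	int h_dest = 6, h_source = 6, snap_type = 2;

	/* Same combination of sizes as in the hunk above */
	int offset = eth803_hdr + rfc1042_hdr
		     - h_dest - h_source - snap_type;

	printf("Ethernet II header starts %d bytes past the 802.3 header\n",
	       offset);
	return 0;
}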


@ -722,6 +722,9 @@ int mwifiex_ret_wmm_get_status(struct mwifiex_private *priv,
tlv_hdr = (struct mwifiex_ie_types_data *) curr;
tlv_len = le16_to_cpu(tlv_hdr->header.len);
if (resp_len < tlv_len + sizeof(tlv_hdr->header))
break;
switch (le16_to_cpu(tlv_hdr->header.type)) {
case TLV_TYPE_WMMQSTATUS:
tlv_wmm_qstatus =


@ -811,6 +811,10 @@ static const struct net_device_ops islpci_netdev_ops = {
.ndo_validate_addr = eth_validate_addr,
};
static struct device_type wlan_type = {
.name = "wlan",
};
struct net_device *
islpci_setup(struct pci_dev *pdev)
{
@ -821,9 +825,8 @@ islpci_setup(struct pci_dev *pdev)
return ndev;
pci_set_drvdata(pdev, ndev);
#if defined(SET_NETDEV_DEV)
SET_NETDEV_DEV(ndev, &pdev->dev);
#endif
SET_NETDEV_DEVTYPE(ndev, &wlan_type);
/* setup the structure members */
ndev->base_addr = pci_resource_start(pdev, 0);


@ -2640,7 +2640,7 @@ static void rt2800_config_channel_rf53xx(struct rt2x00_dev *rt2x00dev,
if (rt2x00_rt(rt2x00dev, RT5392)) {
rt2800_rfcsr_read(rt2x00dev, 50, &rfcsr);
if (info->default_power1 > POWER_BOUND)
if (info->default_power2 > POWER_BOUND)
rt2x00_set_field8(&rfcsr, RFCSR50_TX, POWER_BOUND);
else
rt2x00_set_field8(&rfcsr, RFCSR50_TX,


@ -146,7 +146,7 @@ void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length);
* @local: frame is not from mac80211
*/
int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
bool local);
struct ieee80211_sta *sta, bool local);
/**
* rt2x00queue_update_beacon - Send new beacon from mac80211


@ -90,7 +90,7 @@ static int rt2x00mac_tx_rts_cts(struct rt2x00_dev *rt2x00dev,
frag_skb->data, data_length, tx_info,
(struct ieee80211_rts *)(skb->data));
retval = rt2x00queue_write_tx_frame(queue, skb, true);
retval = rt2x00queue_write_tx_frame(queue, skb, NULL, true);
if (retval) {
dev_kfree_skb_any(skb);
rt2x00_warn(rt2x00dev, "Failed to send RTS/CTS frame\n");
@ -151,7 +151,7 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
goto exit_fail;
}
if (unlikely(rt2x00queue_write_tx_frame(queue, skb, false)))
if (unlikely(rt2x00queue_write_tx_frame(queue, skb, control->sta, false)))
goto exit_fail;
/*


@ -635,7 +635,7 @@ static void rt2x00queue_bar_check(struct queue_entry *entry)
}
int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
bool local)
struct ieee80211_sta *sta, bool local)
{
struct ieee80211_tx_info *tx_info;
struct queue_entry *entry;
@ -649,7 +649,7 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
* after that we are free to use the skb->cb array
* for our information.
*/
rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, NULL);
rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, sta);
/*
* All information is retrieved from the skb->cb array,


@ -37,6 +37,7 @@
#include <linux/ip.h>
#include <linux/module.h>
#include <linux/udp.h>
/*
*NOTICE!!!: This file will be very big, we should
@ -1074,64 +1075,52 @@ u8 rtl_is_special_data(struct ieee80211_hw *hw, struct sk_buff *skb, u8 is_tx)
if (!ieee80211_is_data(fc))
return false;
ip = (const struct iphdr *)(skb->data + mac_hdr_len +
SNAP_SIZE + PROTOC_TYPE_SIZE);
ether_type = be16_to_cpup((__be16 *)
(skb->data + mac_hdr_len + SNAP_SIZE));
ip = (struct iphdr *)((u8 *) skb->data + mac_hdr_len +
SNAP_SIZE + PROTOC_TYPE_SIZE);
ether_type = *(u16 *) ((u8 *) skb->data + mac_hdr_len + SNAP_SIZE);
/* ether_type = ntohs(ether_type); */
switch (ether_type) {
case ETH_P_IP: {
struct udphdr *udp;
u16 src;
u16 dst;
if (ETH_P_IP == ether_type) {
if (IPPROTO_UDP == ip->protocol) {
struct udphdr *udp = (struct udphdr *)((u8 *) ip +
(ip->ihl << 2));
if (((((u8 *) udp)[1] == 68) &&
(((u8 *) udp)[3] == 67)) ||
((((u8 *) udp)[1] == 67) &&
(((u8 *) udp)[3] == 68))) {
/*
* 68 : UDP BOOTP client
* 67 : UDP BOOTP server
*/
RT_TRACE(rtlpriv, (COMP_SEND | COMP_RECV),
DBG_DMESG, "dhcp %s !!\n",
is_tx ? "Tx" : "Rx");
if (ip->protocol != IPPROTO_UDP)
return false;
udp = (struct udphdr *)((u8 *)ip + (ip->ihl << 2));
src = be16_to_cpu(udp->source);
dst = be16_to_cpu(udp->dest);
if (is_tx) {
rtlpriv->enter_ps = false;
schedule_work(&rtlpriv->
works.lps_change_work);
ppsc->last_delaylps_stamp_jiffies =
jiffies;
}
/* If this case involves port 68 (UDP BOOTP client) connecting
* with port 67 (UDP BOOTP server), then return true so that
* the lowest speed is used.
*/
if (!((src == 68 && dst == 67) || (src == 67 && dst == 68)))
return false;
return true;
}
}
} else if (ETH_P_ARP == ether_type) {
if (is_tx) {
rtlpriv->enter_ps = false;
schedule_work(&rtlpriv->works.lps_change_work);
ppsc->last_delaylps_stamp_jiffies = jiffies;
}
return true;
} else if (ETH_P_PAE == ether_type) {
RT_TRACE(rtlpriv, (COMP_SEND | COMP_RECV), DBG_DMESG,
"dhcp %s !!\n", is_tx ? "Tx" : "Rx");
break;
}
case ETH_P_ARP:
break;
case ETH_P_PAE:
RT_TRACE(rtlpriv, (COMP_SEND | COMP_RECV), DBG_DMESG,
"802.1X %s EAPOL pkt!!\n", is_tx ? "Tx" : "Rx");
if (is_tx) {
rtlpriv->enter_ps = false;
schedule_work(&rtlpriv->works.lps_change_work);
ppsc->last_delaylps_stamp_jiffies = jiffies;
}
return true;
} else if (ETH_P_IPV6 == ether_type) {
/* IPv6 */
return true;
break;
case ETH_P_IPV6:
/* TODO: Is this right? */
return false;
default:
return false;
}
return false;
if (is_tx) {
rtlpriv->enter_ps = false;
schedule_work(&rtlpriv->works.lps_change_work);
ppsc->last_delaylps_stamp_jiffies = jiffies;
}
return true;
}
EXPORT_SYMBOL_GPL(rtl_is_special_data);


@ -262,9 +262,9 @@ void read_efuse(struct ieee80211_hw *hw, u16 _offset, u16 _size_byte, u8 *pbuf)
sizeof(u8), GFP_ATOMIC);
if (!efuse_tbl)
return;
efuse_word = kmalloc(EFUSE_MAX_WORD_UNIT * sizeof(u16 *), GFP_ATOMIC);
efuse_word = kzalloc(EFUSE_MAX_WORD_UNIT * sizeof(u16 *), GFP_ATOMIC);
if (!efuse_word)
goto done;
goto out;
for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++) {
efuse_word[i] = kmalloc(efuse_max_section * sizeof(u16),
GFP_ATOMIC);
@ -378,6 +378,7 @@ done:
for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++)
kfree(efuse_word[i]);
kfree(efuse_word);
out:
kfree(efuse_tbl);
}


@ -349,7 +349,7 @@ bool rtl92cu_rx_query_desc(struct ieee80211_hw *hw,
p_drvinfo);
}
/*rx_status->qual = stats->signal; */
rx_status->signal = stats->rssi + 10;
rx_status->signal = stats->recvsignalpower + 10;
return true;
}


@ -525,7 +525,7 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats,
p_drvinfo);
}
/*rx_status->qual = stats->signal; */
rx_status->signal = stats->rssi + 10;
rx_status->signal = stats->recvsignalpower + 10;
return true;
}


@ -265,7 +265,7 @@ static void _rtl92s_get_txpower_writeval_byregulatory(struct ieee80211_hw *hw,
rtlefuse->pwrgroup_ht40
[RF90_PATH_A][chnl - 1]) {
pwrdiff_limit[i] =
rtlefuse->pwrgroup_ht20
rtlefuse->pwrgroup_ht40
[RF90_PATH_A][chnl - 1];
}
} else {


@ -329,7 +329,7 @@ bool rtl92se_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats,
}
/*rx_status->qual = stats->signal; */
rx_status->signal = stats->rssi + 10;
rx_status->signal = stats->recvsignalpower + 10;
return true;
}


@ -77,11 +77,7 @@
#define RTL_SLOT_TIME_9 9
#define RTL_SLOT_TIME_20 20
/*related with tcp/ip. */
/*if_ehther.h*/
#define ETH_P_PAE 0x888E /*Port Access Entity (IEEE 802.1X) */
#define ETH_P_IP 0x0800 /*Internet Protocol packet */
#define ETH_P_ARP 0x0806 /*Address Resolution packet */
/*related to tcp/ip. */
#define SNAP_SIZE 6
#define PROTOC_TYPE_SIZE 2


@ -277,12 +277,13 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
if (!page) {
kfree_skb(skb);
no_skb:
/* Any skbuffs queued for refill? Force them out. */
if (i != 0)
goto refill;
/* Could not allocate any skbuffs. Try again later. */
mod_timer(&np->rx_refill_timer,
jiffies + (HZ/10));
/* Any skbuffs queued for refill? Force them out. */
if (i != 0)
goto refill;
break;
}


@ -1512,7 +1512,8 @@ static int pmcraid_notify_aen(
}
result =
genlmsg_multicast(skb, 0, pmcraid_event_family.id, GFP_ATOMIC);
genlmsg_multicast(&pmcraid_event_family, skb, 0,
pmcraid_event_family.id, GFP_ATOMIC);
/* If there are no listeners, genlmsg_multicast may return non-zero
* value.


@ -1608,15 +1608,17 @@ exit:
EXPORT_SYMBOL_GPL(thermal_zone_get_zone_by_name);
#ifdef CONFIG_NET
static const struct genl_multicast_group thermal_event_mcgrps[] = {
{ .name = THERMAL_GENL_MCAST_GROUP_NAME, },
};
static struct genl_family thermal_event_genl_family = {
.id = GENL_ID_GENERATE,
.name = THERMAL_GENL_FAMILY_NAME,
.version = THERMAL_GENL_VERSION,
.maxattr = THERMAL_GENL_ATTR_MAX,
};
static struct genl_multicast_group thermal_event_mcgrp = {
.name = THERMAL_GENL_MCAST_GROUP_NAME,
.mcgrps = thermal_event_mcgrps,
.n_mcgrps = ARRAY_SIZE(thermal_event_mcgrps),
};
int thermal_generate_netlink_event(struct thermal_zone_device *tz,
@ -1677,7 +1679,8 @@ int thermal_generate_netlink_event(struct thermal_zone_device *tz,
return result;
}
result = genlmsg_multicast(skb, 0, thermal_event_mcgrp.id, GFP_ATOMIC);
result = genlmsg_multicast(&thermal_event_genl_family, skb, 0,
0, GFP_ATOMIC);
if (result)
dev_err(&tz->device, "Failed to send netlink event:%d", result);
@ -1687,17 +1690,7 @@ EXPORT_SYMBOL_GPL(thermal_generate_netlink_event);
static int genetlink_init(void)
{
int result;
result = genl_register_family(&thermal_event_genl_family);
if (result)
return result;
result = genl_register_mc_group(&thermal_event_genl_family,
&thermal_event_mcgrp);
if (result)
genl_unregister_family(&thermal_event_genl_family);
return result;
return genl_register_family(&thermal_event_genl_family);
}
static void genetlink_exit(void)


@ -74,14 +74,16 @@ static int user_cmd(struct sk_buff *skb, struct genl_info *info)
return 0;
}
static struct genl_ops dlm_nl_ops = {
.cmd = DLM_CMD_HELLO,
.doit = user_cmd,
static struct genl_ops dlm_nl_ops[] = {
{
.cmd = DLM_CMD_HELLO,
.doit = user_cmd,
},
};
int __init dlm_netlink_init(void)
{
return genl_register_family_with_ops(&family, &dlm_nl_ops, 1);
return genl_register_family_with_ops(&family, dlm_nl_ops);
}
void dlm_netlink_exit(void)


@ -9,13 +9,25 @@
#include <net/netlink.h>
#include <net/genetlink.h>
static const struct genl_multicast_group quota_mcgrps[] = {
{ .name = "events", },
};
/* Netlink family structure for quota */
static struct genl_family quota_genl_family = {
.id = GENL_ID_GENERATE,
/*
* Needed due to multicast group ID abuse - old code assumed
* the family ID was also a valid multicast group ID (which
* isn't true) and userspace might thus rely on it. Assign a
* static ID for this group to make dealing with that easier.
*/
.id = GENL_ID_VFS_DQUOT,
.hdrsize = 0,
.name = "VFS_DQUOT",
.version = 1,
.maxattr = QUOTA_NL_A_MAX,
.mcgrps = quota_mcgrps,
.n_mcgrps = ARRAY_SIZE(quota_mcgrps),
};
/**
@ -78,7 +90,7 @@ void quota_send_warning(struct kqid qid, dev_t dev,
goto attr_err_out;
genlmsg_end(skb, msg_head);
genlmsg_multicast(skb, 0, quota_genl_family.id, GFP_NOFS);
genlmsg_multicast(&quota_genl_family, skb, 0, 0, GFP_NOFS);
return;
attr_err_out:
printk(KERN_ERR "VFS: Not enough space to compose quota message!\n");


@ -273,49 +273,40 @@ static struct genl_family ZZZ_genl_family __read_mostly = {
* Magic: define multicast groups
* Magic: define multicast group registration helper
*/
#define ZZZ_genl_mcgrps CONCAT_(GENL_MAGIC_FAMILY, _genl_mcgrps)
static const struct genl_multicast_group ZZZ_genl_mcgrps[] = {
#undef GENL_mc_group
#define GENL_mc_group(group) { .name = #group, },
#include GENL_MAGIC_INCLUDE_FILE
};
enum CONCAT_(GENL_MAGIC_FAMILY, group_ids) {
#undef GENL_mc_group
#define GENL_mc_group(group) CONCAT_(GENL_MAGIC_FAMILY, _group_ ## group),
#include GENL_MAGIC_INCLUDE_FILE
};
#undef GENL_mc_group
#define GENL_mc_group(group) \
static struct genl_multicast_group \
CONCAT_(GENL_MAGIC_FAMILY, _mcg_ ## group) __read_mostly = { \
.name = #group, \
}; \
static int CONCAT_(GENL_MAGIC_FAMILY, _genl_multicast_ ## group)( \
struct sk_buff *skb, gfp_t flags) \
{ \
unsigned int group_id = \
CONCAT_(GENL_MAGIC_FAMILY, _mcg_ ## group).id; \
if (!group_id) \
return -EINVAL; \
return genlmsg_multicast(skb, 0, group_id, flags); \
CONCAT_(GENL_MAGIC_FAMILY, _group_ ## group); \
return genlmsg_multicast(&ZZZ_genl_family, skb, 0, \
group_id, flags); \
}
#include GENL_MAGIC_INCLUDE_FILE
int CONCAT_(GENL_MAGIC_FAMILY, _genl_register)(void)
{
int err = genl_register_family_with_ops(&ZZZ_genl_family,
ZZZ_genl_ops, ARRAY_SIZE(ZZZ_genl_ops));
if (err)
return err;
#undef GENL_mc_group
#define GENL_mc_group(group) \
err = genl_register_mc_group(&ZZZ_genl_family, \
&CONCAT_(GENL_MAGIC_FAMILY, _mcg_ ## group)); \
if (err) \
goto fail; \
else \
pr_info("%s: mcg %s: %u\n", #group, \
__stringify(GENL_MAGIC_FAMILY), \
CONCAT_(GENL_MAGIC_FAMILY, _mcg_ ## group).id);
#include GENL_MAGIC_INCLUDE_FILE
#undef GENL_mc_group
#define GENL_mc_group(group)
return 0;
fail:
genl_unregister_family(&ZZZ_genl_family);
return err;
int CONCAT_(GENL_MAGIC_FAMILY, _genl_register)(void)
{
return genl_register_family_with_ops_groups(&ZZZ_genl_family, \
ZZZ_genl_ops, \
ZZZ_genl_mcgrps);
}
void CONCAT_(GENL_MAGIC_FAMILY, _genl_unregister)(void)


@ -119,4 +119,21 @@ extern int macvlan_link_register(struct rtnl_link_ops *ops);
extern netdev_tx_t macvlan_start_xmit(struct sk_buff *skb,
struct net_device *dev);
#if IS_ENABLED(CONFIG_MACVLAN)
static inline struct net_device *
macvlan_dev_real_dev(const struct net_device *dev)
{
struct macvlan_dev *macvlan = netdev_priv(dev);
return macvlan->lowerdev;
}
#else
static inline struct net_device *
macvlan_dev_real_dev(const struct net_device *dev)
{
BUG();
return NULL;
}
#endif
#endif /* _LINUX_IF_MACVLAN_H */


@ -2263,24 +2263,6 @@ static inline void skb_postpull_rcsum(struct sk_buff *skb,
unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len);
/**
* pskb_trim_rcsum - trim received skb and update checksum
* @skb: buffer to trim
* @len: new length
*
* This is exactly the same as pskb_trim except that it ensures the
* checksum of received packets are still valid after the operation.
*/
static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
{
if (likely(len >= skb->len))
return 0;
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->ip_summed = CHECKSUM_NONE;
return __pskb_trim(skb, len);
}
#define skb_queue_walk(queue, skb) \
for (skb = (queue)->next; \
skb != (struct sk_buff *)(queue); \
@ -2378,6 +2360,27 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
__wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
__wsum csum);
/**
* pskb_trim_rcsum - trim received skb and update checksum
* @skb: buffer to trim
* @len: new length
*
* This is exactly the same as pskb_trim except that it ensures the
* checksum of received packets are still valid after the operation.
*/
static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
{
if (likely(len >= skb->len))
return 0;
if (skb->ip_summed == CHECKSUM_COMPLETE) {
__wsum adj = skb_checksum(skb, len, skb->len - len, 0);
skb->csum = csum_sub(skb->csum, adj);
}
return __pskb_trim(skb, len);
}
static inline void *skb_header_pointer(const struct sk_buff *skb, int offset,
int len, void *buffer)
{
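
The reworked pskb_trim_rcsum() above keeps CHECKSUM_COMPLETE usable by folding the checksum of the trimmed tail out of skb->csum instead of downgrading to CHECKSUM_NONE. The identity it relies on is that the ones'-complement sum of the whole buffer equals the ones'-complement sum of the kept part plus that of the tail, so subtracting the tail's sum recovers the kept part's sum. A small standalone demonstration of that arithmetic with a toy 16-bit checksum (not the kernel's __wsum helpers):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Toy 16-bit ones'-complement sum standing in for the kernel's csum math. */
static uint32_t csum16(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}

/* Ones'-complement subtraction: add the complement, then fold. */
static uint32_t csum16_sub(uint32_t a, uint32_t b)
{
	uint32_t r = a + (~b & 0xffff);

	while (r >> 16)
		r = (r & 0xffff) + (r >> 16);
	return r;
}

int main(void)
{
	uint8_t pkt[32];
	size_t keep = 20;
	uint32_t full, tail;

	memset(pkt, 0xab, sizeof(pkt));
	full = csum16(pkt, sizeof(pkt));
	tail = csum16(pkt + keep, sizeof(pkt) - keep);

	/* csum(kept part) == csum_sub(csum(whole), csum(trimmed tail)) */
	printf("kept=%04x derived=%04x\n",
	       csum16(pkt, keep), csum16_sub(full, tail));
	return 0;
}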


@ -10,16 +10,9 @@
/**
* struct genl_multicast_group - generic netlink multicast group
* @name: name of the multicast group, names are per-family
* @id: multicast group ID, assigned by the core, to use with
* genlmsg_multicast().
* @list: list entry for linking
* @family: pointer to family, need not be set before registering
*/
struct genl_multicast_group {
struct genl_family *family; /* private */
struct list_head list; /* private */
char name[GENL_NAMSIZ];
u32 id;
};
struct genl_ops;
@ -39,9 +32,12 @@ struct genl_info;
* @post_doit: called after an operation's doit callback, it may
* undo operations done by pre_doit, for example release locks
* @attrbuf: buffer to store parsed attributes
* @ops_list: list of all assigned operations
* @family_list: family list
* @mcast_groups: multicast groups list
* @mcgrps: multicast groups used by this family (private)
* @n_mcgrps: number of multicast groups (private)
* @mcgrp_offset: starting number of multicast group IDs in this family
* @ops: the operations supported by this family (private)
* @n_ops: number of operations supported by this family (private)
*/
struct genl_family {
unsigned int id;
@ -51,16 +47,19 @@ struct genl_family {
unsigned int maxattr;
bool netnsok;
bool parallel_ops;
int (*pre_doit)(struct genl_ops *ops,
int (*pre_doit)(const struct genl_ops *ops,
struct sk_buff *skb,
struct genl_info *info);
void (*post_doit)(struct genl_ops *ops,
void (*post_doit)(const struct genl_ops *ops,
struct sk_buff *skb,
struct genl_info *info);
struct nlattr ** attrbuf; /* private */
struct list_head ops_list; /* private */
const struct genl_ops * ops; /* private */
const struct genl_multicast_group *mcgrps; /* private */
unsigned int n_ops; /* private */
unsigned int n_mcgrps; /* private */
unsigned int mcgrp_offset; /* private */
struct list_head family_list; /* private */
struct list_head mcast_groups; /* private */
struct module *module;
};
@ -110,16 +109,15 @@ static inline void genl_info_net_set(struct genl_info *info, struct net *net)
* @ops_list: operations list
*/
struct genl_ops {
u8 cmd;
u8 internal_flags;
unsigned int flags;
const struct nla_policy *policy;
int (*doit)(struct sk_buff *skb,
struct genl_info *info);
int (*dumpit)(struct sk_buff *skb,
struct netlink_callback *cb);
int (*done)(struct netlink_callback *cb);
struct list_head ops_list;
u8 cmd;
u8 internal_flags;
u8 flags;
};
int __genl_register_family(struct genl_family *family);
@ -130,24 +128,53 @@ static inline int genl_register_family(struct genl_family *family)
return __genl_register_family(family);
}
int __genl_register_family_with_ops(struct genl_family *family,
struct genl_ops *ops, size_t n_ops);
static inline int genl_register_family_with_ops(struct genl_family *family,
struct genl_ops *ops, size_t n_ops)
/**
* genl_register_family_with_ops - register a generic netlink family with ops
* @family: generic netlink family
* @ops: operations to be registered
* @n_ops: number of elements to register
*
* Registers the specified family and operations from the specified table.
* Only one family may be registered with the same family name or identifier.
*
* The family id may equal GENL_ID_GENERATE causing an unique id to
* be automatically generated and assigned.
*
* Either a doit or dumpit callback must be specified for every registered
* operation or the function will fail. Only one operation structure per
* command identifier may be registered.
*
* See include/net/genetlink.h for more documenation on the operations
* structure.
*
* Return 0 on success or a negative error code.
*/
static inline int
_genl_register_family_with_ops_grps(struct genl_family *family,
const struct genl_ops *ops, size_t n_ops,
const struct genl_multicast_group *mcgrps,
size_t n_mcgrps)
{
family->module = THIS_MODULE;
return __genl_register_family_with_ops(family, ops, n_ops);
family->ops = ops;
family->n_ops = n_ops;
family->mcgrps = mcgrps;
family->n_mcgrps = n_mcgrps;
return __genl_register_family(family);
}
#define genl_register_family_with_ops(family, ops) \
_genl_register_family_with_ops_grps((family), \
(ops), ARRAY_SIZE(ops), \
NULL, 0)
#define genl_register_family_with_ops_groups(family, ops, grps) \
_genl_register_family_with_ops_grps((family), \
(ops), ARRAY_SIZE(ops), \
(grps), ARRAY_SIZE(grps))
int genl_unregister_family(struct genl_family *family);
int genl_register_ops(struct genl_family *, struct genl_ops *ops);
int genl_unregister_ops(struct genl_family *, struct genl_ops *ops);
int genl_register_mc_group(struct genl_family *family,
struct genl_multicast_group *grp);
void genl_unregister_mc_group(struct genl_family *family,
struct genl_multicast_group *grp);
void genl_notify(struct sk_buff *skb, struct net *net, u32 portid,
void genl_notify(struct genl_family *family,
struct sk_buff *skb, struct net *net, u32 portid,
u32 group, struct nlmsghdr *nlh, gfp_t flags);
void *genlmsg_put(struct sk_buff *skb, u32 portid, u32 seq,
@ -227,41 +254,54 @@ static inline void genlmsg_cancel(struct sk_buff *skb, void *hdr)
/**
* genlmsg_multicast_netns - multicast a netlink message to a specific netns
* @family: the generic netlink family
* @net: the net namespace
* @skb: netlink message as socket buffer
* @portid: own netlink portid to avoid sending to yourself
* @group: multicast group id
* @group: offset of multicast group in groups array
* @flags: allocation flags
*/
static inline int genlmsg_multicast_netns(struct net *net, struct sk_buff *skb,
static inline int genlmsg_multicast_netns(struct genl_family *family,
struct net *net, struct sk_buff *skb,
u32 portid, unsigned int group, gfp_t flags)
{
if (group >= family->n_mcgrps)
return -EINVAL;
group = family->mcgrp_offset + group;
return nlmsg_multicast(net->genl_sock, skb, portid, group, flags);
}
/**
* genlmsg_multicast - multicast a netlink message to the default netns
* @family: the generic netlink family
* @skb: netlink message as socket buffer
* @portid: own netlink portid to avoid sending to yourself
* @group: multicast group id
* @group: offset of multicast group in groups array
* @flags: allocation flags
*/
static inline int genlmsg_multicast(struct sk_buff *skb, u32 portid,
static inline int genlmsg_multicast(struct genl_family *family,
struct sk_buff *skb, u32 portid,
unsigned int group, gfp_t flags)
{
return genlmsg_multicast_netns(&init_net, skb, portid, group, flags);
if (group >= family->n_mcgrps)
return -EINVAL;
group = family->mcgrp_offset + group;
return genlmsg_multicast_netns(family, &init_net, skb,
portid, group, flags);
}
/**
* genlmsg_multicast_allns - multicast a netlink message to all net namespaces
* @family: the generic netlink family
* @skb: netlink message as socket buffer
* @portid: own netlink portid to avoid sending to yourself
* @group: multicast group id
* @group: offset of multicast group in groups array
* @flags: allocation flags
*
* This function must hold the RTNL or rcu_read_lock().
*/
int genlmsg_multicast_allns(struct sk_buff *skb, u32 portid,
int genlmsg_multicast_allns(struct genl_family *family,
struct sk_buff *skb, u32 portid,
unsigned int group, gfp_t flags);
/**
@ -332,5 +372,22 @@ static inline struct sk_buff *genlmsg_new(size_t payload, gfp_t flags)
return nlmsg_new(genlmsg_total_size(payload), flags);
}
/**
* genl_set_err - report error to genetlink broadcast listeners
* @family: the generic netlink family
* @net: the network namespace to report the error to
* @portid: the PORTID of a process that we want to skip (if any)
* @group: the broadcast group that will notice the error
* (this is the offset of the multicast group in the groups array)
* @code: error code, must be negative (as usual in kernelspace)
*
* This function returns the number of broadcast listeners that have set the
* NETLINK_RECV_NO_ENOBUFS socket option.
*/
static inline int genl_set_err(struct genl_family *family, struct net *net,
u32 portid, u32 group, int code)
{
return netlink_set_err(net->genl_sock, portid, group, code);
}
#endif /* __NET_GENERIC_NETLINK_H */
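
Taken together, the genetlink.h changes above mean a family now carries its ops and multicast groups inside the family struct itself, and multicast sends are addressed by the group's offset within that family rather than by a global group id. A hedged sketch of the resulting registration pattern, modeled on the converted call sites elsewhere in this diff; the family name, command number and callbacks here are hypothetical:

#include <linux/init.h>
#include <linux/module.h>
#include <net/genetlink.h>

static const struct genl_multicast_group example_mcgrps[] = {
	{ .name = "events", },
};

static int example_doit(struct sk_buff *skb, struct genl_info *info)
{
	return 0;	/* placeholder handler */
}

static const struct genl_ops example_ops[] = {
	{
		.cmd	= 1,		/* hypothetical command id */
		.doit	= example_doit,
	},
};

static struct genl_family example_family = {
	.id		= GENL_ID_GENERATE,
	.name		= "example",
	.version	= 1,
	.maxattr	= 0,
};

/* Multicast now takes the family plus the offset of the group within
 * example_mcgrps (0 here), not a global multicast group id. */
static int example_notify(struct sk_buff *skb)
{
	return genlmsg_multicast(&example_family, skb, 0, 0, GFP_KERNEL);
}

static int __init example_init(void)
{
	/* Ops and groups are attached by the helper macro; the separate
	 * genl_register_ops()/genl_register_mc_group() calls go away. */
	return genl_register_family_with_ops_groups(&example_family,
						    example_ops,
						    example_mcgrps);
}
module_init(example_init);

static void __exit example_exit(void)
{
	genl_unregister_family(&example_family);
}
module_exit(example_exit);

MODULE_LICENSE("GPL");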


@ -27,6 +27,7 @@ struct genlmsghdr {
*/
#define GENL_ID_GENERATE 0
#define GENL_ID_CTRL NLMSG_MIN_TYPE
#define GENL_ID_VFS_DQUOT (NLMSG_MIN_TYPE + 1)
/**************************************************************************
* Controller


@ -763,13 +763,14 @@ enum {
TCA_FQ_RATE_ENABLE, /* enable/disable rate limiting */
TCA_FQ_FLOW_DEFAULT_RATE,/* for sockets with unspecified sk_rate,
* use the following rate
*/
TCA_FQ_FLOW_DEFAULT_RATE,/* obsolete, do not use */
TCA_FQ_FLOW_MAX_RATE, /* per flow max rate */
TCA_FQ_BUCKETS_LOG, /* log2(number of buckets) */
TCA_FQ_FLOW_REFILL_DELAY, /* flow credit refill delay in usec */
__TCA_FQ_MAX
};


@ -673,17 +673,18 @@ err:
nlmsg_free(rep_skb);
}
static struct genl_ops taskstats_ops = {
.cmd = TASKSTATS_CMD_GET,
.doit = taskstats_user_cmd,
.policy = taskstats_cmd_get_policy,
.flags = GENL_ADMIN_PERM,
};
static struct genl_ops cgroupstats_ops = {
.cmd = CGROUPSTATS_CMD_GET,
.doit = cgroupstats_user_cmd,
.policy = cgroupstats_cmd_get_policy,
static const struct genl_ops taskstats_ops[] = {
{
.cmd = TASKSTATS_CMD_GET,
.doit = taskstats_user_cmd,
.policy = taskstats_cmd_get_policy,
.flags = GENL_ADMIN_PERM,
},
{
.cmd = CGROUPSTATS_CMD_GET,
.doit = cgroupstats_user_cmd,
.policy = cgroupstats_cmd_get_policy,
},
};
/* Needed early in initialization */
@ -702,26 +703,13 @@ static int __init taskstats_init(void)
{
int rc;
rc = genl_register_family(&family);
rc = genl_register_family_with_ops(&family, taskstats_ops);
if (rc)
return rc;
rc = genl_register_ops(&family, &taskstats_ops);
if (rc < 0)
goto err;
rc = genl_register_ops(&family, &cgroupstats_ops);
if (rc < 0)
goto err_cgroup_ops;
family_registered = 1;
pr_info("registered taskstats version %d\n", TASKSTATS_GENL_VERSION);
return 0;
err_cgroup_ops:
genl_unregister_ops(&family, &taskstats_ops);
err:
genl_unregister_family(&family);
return rc;
}
/*


@ -214,18 +214,22 @@ static DEFINE_TIMER(seed_timer, __prandom_timer, 0, 0);
static void __prandom_timer(unsigned long dontcare)
{
u32 entropy;
unsigned long expires;
get_random_bytes(&entropy, sizeof(entropy));
prandom_seed(entropy);
/* reseed every ~60 seconds, in [40 .. 80) interval with slack */
seed_timer.expires = jiffies + (40 * HZ + (prandom_u32() % (40 * HZ)));
expires = 40 + (prandom_u32() % 40);
seed_timer.expires = jiffies + msecs_to_jiffies(expires * MSEC_PER_SEC);
add_timer(&seed_timer);
}
static void prandom_start_seed_timer(void)
static void __init __prandom_start_seed_timer(void)
{
set_timer_slack(&seed_timer, HZ);
seed_timer.expires = jiffies + 40 * HZ;
seed_timer.expires = jiffies + msecs_to_jiffies(40 * MSEC_PER_SEC);
add_timer(&seed_timer);
}
@ -270,7 +274,7 @@ void prandom_reseed_late(void)
static int __init prandom_reseed(void)
{
__prandom_reseed(false);
prandom_start_seed_timer();
__prandom_start_seed_timer();
return 0;
}
late_initcall(prandom_reseed);


@ -172,6 +172,7 @@ void br_dev_delete(struct net_device *dev, struct list_head *head)
del_nbp(p);
}
br_vlan_flush(br);
del_timer_sync(&br->gc_timer);
br_sysfs_delbr(br->dev);


@ -34,7 +34,6 @@ static void __vlan_add_flags(struct net_port_vlans *v, u16 vid, u16 flags)
static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
{
const struct net_device_ops *ops;
struct net_bridge_port *p = NULL;
struct net_bridge *br;
struct net_device *dev;
@ -53,17 +52,15 @@ static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
br = v->parent.br;
dev = br->dev;
}
ops = dev->netdev_ops;
if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
if (p) {
/* Add VLAN to the device filter if it is supported.
* Stricly speaking, this is not necessary now, since
* devices are made promiscuous by the bridge, but if
* that ever changes this code will allow tagged
* traffic to enter the bridge.
*/
err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q),
vid);
err = vlan_vid_add(dev, htons(ETH_P_8021Q), vid);
if (err)
return err;
}
@ -82,8 +79,8 @@ static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
return 0;
out_filt:
if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
ops->ndo_vlan_rx_kill_vid(dev, htons(ETH_P_8021Q), vid);
if (p)
vlan_vid_del(dev, htons(ETH_P_8021Q), vid);
return err;
}
@ -95,13 +92,8 @@ static int __vlan_del(struct net_port_vlans *v, u16 vid)
__vlan_delete_pvid(v, vid);
clear_bit(vid, v->untagged_bitmap);
if (v->port_idx) {
struct net_device *dev = v->parent.port->dev;
const struct net_device_ops *ops = dev->netdev_ops;
if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
ops->ndo_vlan_rx_kill_vid(dev, htons(ETH_P_8021Q), vid);
}
if (v->port_idx)
vlan_vid_del(v->parent.port->dev, htons(ETH_P_8021Q), vid);
clear_bit(vid, v->vlan_bitmap);
v->num_vlans--;
@ -398,6 +390,7 @@ int nbp_vlan_delete(struct net_bridge_port *port, u16 vid)
void nbp_vlan_flush(struct net_bridge_port *port)
{
struct net_port_vlans *pv;
u16 vid;
ASSERT_RTNL();
@ -405,6 +398,9 @@ void nbp_vlan_flush(struct net_bridge_port *port)
if (!pv)
return;
for_each_set_bit(vid, pv->vlan_bitmap, VLAN_N_VID)
vlan_vid_del(port->dev, htons(ETH_P_8021Q), vid);
__vlan_flush(pv);
}


@ -131,6 +131,7 @@
#include <linux/static_key.h>
#include <linux/hashtable.h>
#include <linux/vmalloc.h>
#include <linux/if_macvlan.h>
#include "net-sysfs.h"
@ -1424,6 +1425,10 @@ void dev_disable_lro(struct net_device *dev)
if (is_vlan_dev(dev))
dev = vlan_dev_real_dev(dev);
/* the same for macvlan devices */
if (netif_is_macvlan(dev))
dev = macvlan_dev_real_dev(dev);
dev->wanted_features &= ~NETIF_F_LRO;
netdev_update_features(dev);
@ -1690,13 +1695,9 @@ int dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
kfree_skb(skb);
return NET_RX_DROP;
}
skb->protocol = eth_type_trans(skb, dev);
/* eth_type_trans() can set pkt_type.
* call skb_scrub_packet() after it to clear pkt_type _after_ calling
* eth_type_trans().
*/
skb_scrub_packet(skb, true);
skb->protocol = eth_type_trans(skb, dev);
return netif_rx(skb);
}


@ -106,6 +106,10 @@ static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
return skb;
}
static struct genl_multicast_group dropmon_mcgrps[] = {
{ .name = "events", },
};
static void send_dm_alert(struct work_struct *work)
{
struct sk_buff *skb;
@ -116,7 +120,8 @@ static void send_dm_alert(struct work_struct *work)
skb = reset_per_cpu_data(data);
if (skb)
genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
genlmsg_multicast(&net_drop_monitor_family, skb, 0,
0, GFP_KERNEL);
}
/*
@ -333,7 +338,7 @@ out:
return NOTIFY_DONE;
}
static struct genl_ops dropmon_ops[] = {
static const struct genl_ops dropmon_ops[] = {
{
.cmd = NET_DM_CMD_CONFIG,
.doit = net_dm_cmd_config,
@ -364,13 +369,13 @@ static int __init init_net_drop_monitor(void)
return -ENOSPC;
}
rc = genl_register_family_with_ops(&net_drop_monitor_family,
dropmon_ops,
ARRAY_SIZE(dropmon_ops));
rc = genl_register_family_with_ops_groups(&net_drop_monitor_family,
dropmon_ops, dropmon_mcgrps);
if (rc) {
pr_err("Could not create drop monitor netlink family\n");
return rc;
}
WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
rc = register_netdevice_notifier(&dropmon_net_notifier);
if (rc < 0) {


@ -90,8 +90,8 @@ static struct genl_family hsr_genl_family = {
.maxattr = HSR_A_MAX,
};
static struct genl_multicast_group hsr_network_genl_mcgrp = {
.name = "hsr-network",
static const struct genl_multicast_group hsr_mcgrps[] = {
{ .name = "hsr-network", },
};
@ -129,7 +129,7 @@ void hsr_nl_ringerror(struct hsr_priv *hsr_priv, unsigned char addr[ETH_ALEN],
goto nla_put_failure;
genlmsg_end(skb, msg_head);
genlmsg_multicast(skb, 0, hsr_network_genl_mcgrp.id, GFP_ATOMIC);
genlmsg_multicast(&hsr_genl_family, skb, 0, 0, GFP_ATOMIC);
return;
@ -163,7 +163,7 @@ void hsr_nl_nodedown(struct hsr_priv *hsr_priv, unsigned char addr[ETH_ALEN])
goto nla_put_failure;
genlmsg_end(skb, msg_head);
genlmsg_multicast(skb, 0, hsr_network_genl_mcgrp.id, GFP_ATOMIC);
genlmsg_multicast(&hsr_genl_family, skb, 0, 0, GFP_ATOMIC);
return;
@ -249,7 +249,7 @@ static int hsr_get_node_status(struct sk_buff *skb_in, struct genl_info *info)
&hsr_node_if2_age,
&hsr_node_if2_seq);
if (res < 0)
goto fail;
goto nla_put_failure;
res = nla_put(skb_out, HSR_A_NODE_ADDR, ETH_ALEN,
nla_data(info->attrs[HSR_A_NODE_ADDR]));
@ -306,15 +306,6 @@ fail:
return res;
}
static struct genl_ops hsr_ops_get_node_status = {
.cmd = HSR_C_GET_NODE_STATUS,
.flags = 0,
.policy = hsr_genl_policy,
.doit = hsr_get_node_status,
.dumpit = NULL,
};
/* Get a list of MacAddressA of all nodes known to this node (other than self).
*/
static int hsr_get_node_list(struct sk_buff *skb_in, struct genl_info *info)
@ -398,12 +389,21 @@ fail:
}
static struct genl_ops hsr_ops_get_node_list = {
.cmd = HSR_C_GET_NODE_LIST,
.flags = 0,
.policy = hsr_genl_policy,
.doit = hsr_get_node_list,
.dumpit = NULL,
static const struct genl_ops hsr_ops[] = {
{
.cmd = HSR_C_GET_NODE_STATUS,
.flags = 0,
.policy = hsr_genl_policy,
.doit = hsr_get_node_status,
.dumpit = NULL,
},
{
.cmd = HSR_C_GET_NODE_LIST,
.flags = 0,
.policy = hsr_genl_policy,
.doit = hsr_get_node_list,
.dumpit = NULL,
},
};
int __init hsr_netlink_init(void)
@ -414,30 +414,13 @@ int __init hsr_netlink_init(void)
if (rc)
goto fail_rtnl_link_register;
rc = genl_register_family(&hsr_genl_family);
rc = genl_register_family_with_ops_groups(&hsr_genl_family, hsr_ops,
hsr_mcgrps);
if (rc)
goto fail_genl_register_family;
rc = genl_register_ops(&hsr_genl_family, &hsr_ops_get_node_status);
if (rc)
goto fail_genl_register_ops;
rc = genl_register_ops(&hsr_genl_family, &hsr_ops_get_node_list);
if (rc)
goto fail_genl_register_ops_node_list;
rc = genl_register_mc_group(&hsr_genl_family, &hsr_network_genl_mcgrp);
if (rc)
goto fail_genl_register_mc_group;
return 0;
fail_genl_register_mc_group:
genl_unregister_ops(&hsr_genl_family, &hsr_ops_get_node_list);
fail_genl_register_ops_node_list:
genl_unregister_ops(&hsr_genl_family, &hsr_ops_get_node_status);
fail_genl_register_ops:
genl_unregister_family(&hsr_genl_family);
fail_genl_register_family:
rtnl_link_unregister(&hsr_link_ops);
fail_rtnl_link_register:
@ -447,10 +430,7 @@ fail_rtnl_link_register:
void __exit hsr_netlink_exit(void)
{
genl_unregister_mc_group(&hsr_genl_family, &hsr_network_genl_mcgrp);
genl_unregister_ops(&hsr_genl_family, &hsr_ops_get_node_status);
genl_unregister_family(&hsr_genl_family);
rtnl_link_unregister(&hsr_link_ops);
}


@ -956,7 +956,7 @@ lowpan_process_data(struct sk_buff *skb)
* Traffic class carried in-line
* ECN + DSCP (1 byte), Flow Label is elided
*/
case 1: /* 10b */
case 2: /* 10b */
if (lowpan_fetch_skb_u8(skb, &tmp))
goto drop;
@ -967,7 +967,7 @@ lowpan_process_data(struct sk_buff *skb)
* Flow Label carried in-line
* ECN + 2-bit Pad + Flow Label (3 bytes), DSCP is elided
*/
case 2: /* 01b */
case 1: /* 01b */
if (lowpan_fetch_skb_u8(skb, &tmp))
goto drop;
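
The fix above swaps two case labels that did not match the IPHC TF encoding in RFC 6282, where TF=10 carries only ECN + DSCP (one inline byte) and TF=01 carries ECN plus the flow label (three inline bytes) with DSCP elided. A tiny standalone sketch of how many inline traffic-class/flow-label bytes each TF value implies, written here from RFC 6282 for illustration rather than taken from the 6lowpan code:

#include <stdio.h>

/* Inline traffic class / flow label bytes for each IPHC TF value (RFC 6282). */
static int lowpan_tf_inline_len(unsigned int tf)
{
	switch (tf & 0x3) {
	case 0:			/* 00: ECN + DSCP + 4-bit pad + flow label */
		return 4;
	case 1:			/* 01: ECN + 2-bit pad + flow label, DSCP elided */
		return 3;
	case 2:			/* 10: ECN + DSCP, flow label elided */
		return 1;
	default:		/* 11: traffic class and flow label elided */
		return 0;
	}
}

int main(void)
{
	unsigned int tf;

	for (tf = 0; tf < 4; tf++)
		printf("TF=%u%u -> %d inline byte(s)\n",
		       (tf >> 1) & 1, tf & 1, lowpan_tf_inline_len(tf));
	return 0;
}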


@ -315,9 +315,8 @@ static int dgram_recvmsg(struct kiocb *iocb, struct sock *sk,
if (saddr) {
saddr->family = AF_IEEE802154;
saddr->addr = mac_cb(skb)->sa;
}
if (addr_len)
*addr_len = sizeof(*saddr);
}
if (flags & MSG_TRUNC)
copied = skb->len;


@ -47,7 +47,24 @@ struct sk_buff *ieee802154_nl_new_reply(struct genl_info *info,
int ieee802154_nl_reply(struct sk_buff *msg, struct genl_info *info);
extern struct genl_family nl802154_family;
int nl802154_mac_register(void);
int nl802154_phy_register(void);
/* genetlink ops/groups */
int ieee802154_list_phy(struct sk_buff *skb, struct genl_info *info);
int ieee802154_dump_phy(struct sk_buff *skb, struct netlink_callback *cb);
int ieee802154_add_iface(struct sk_buff *skb, struct genl_info *info);
int ieee802154_del_iface(struct sk_buff *skb, struct genl_info *info);
enum ieee802154_mcgrp_ids {
IEEE802154_COORD_MCGRP,
IEEE802154_BEACON_MCGRP,
};
int ieee802154_associate_req(struct sk_buff *skb, struct genl_info *info);
int ieee802154_associate_resp(struct sk_buff *skb, struct genl_info *info);
int ieee802154_disassociate_req(struct sk_buff *skb, struct genl_info *info);
int ieee802154_scan_req(struct sk_buff *skb, struct genl_info *info);
int ieee802154_start_req(struct sk_buff *skb, struct genl_info *info);
int ieee802154_list_iface(struct sk_buff *skb, struct genl_info *info);
int ieee802154_dump_iface(struct sk_buff *skb, struct netlink_callback *cb);
#endif


@ -70,7 +70,7 @@ int ieee802154_nl_mcast(struct sk_buff *msg, unsigned int group)
if (genlmsg_end(msg, hdr) < 0)
goto out;
return genlmsg_multicast(msg, 0, group, GFP_ATOMIC);
return genlmsg_multicast(&nl802154_family, msg, 0, group, GFP_ATOMIC);
out:
nlmsg_free(msg);
return -ENOBUFS;
@ -109,31 +109,36 @@ out:
return -ENOBUFS;
}
static const struct genl_ops ieee8021154_ops[] = {
/* see nl-phy.c */
IEEE802154_DUMP(IEEE802154_LIST_PHY, ieee802154_list_phy,
ieee802154_dump_phy),
IEEE802154_OP(IEEE802154_ADD_IFACE, ieee802154_add_iface),
IEEE802154_OP(IEEE802154_DEL_IFACE, ieee802154_del_iface),
/* see nl-mac.c */
IEEE802154_OP(IEEE802154_ASSOCIATE_REQ, ieee802154_associate_req),
IEEE802154_OP(IEEE802154_ASSOCIATE_RESP, ieee802154_associate_resp),
IEEE802154_OP(IEEE802154_DISASSOCIATE_REQ, ieee802154_disassociate_req),
IEEE802154_OP(IEEE802154_SCAN_REQ, ieee802154_scan_req),
IEEE802154_OP(IEEE802154_START_REQ, ieee802154_start_req),
IEEE802154_DUMP(IEEE802154_LIST_IFACE, ieee802154_list_iface,
ieee802154_dump_iface),
};
static const struct genl_multicast_group ieee802154_mcgrps[] = {
[IEEE802154_COORD_MCGRP] = { .name = IEEE802154_MCAST_COORD_NAME, },
[IEEE802154_BEACON_MCGRP] = { .name = IEEE802154_MCAST_BEACON_NAME, },
};
int __init ieee802154_nl_init(void)
{
int rc;
rc = genl_register_family(&nl802154_family);
if (rc)
goto fail;
rc = nl802154_mac_register();
if (rc)
goto fail;
rc = nl802154_phy_register();
if (rc)
goto fail;
return 0;
fail:
genl_unregister_family(&nl802154_family);
return rc;
return genl_register_family_with_ops_groups(&nl802154_family,
ieee8021154_ops,
ieee802154_mcgrps);
}
void __exit ieee802154_nl_exit(void)
{
genl_unregister_family(&nl802154_family);
}


@ -39,14 +39,6 @@
#include "ieee802154.h"
static struct genl_multicast_group ieee802154_coord_mcgrp = {
.name = IEEE802154_MCAST_COORD_NAME,
};
static struct genl_multicast_group ieee802154_beacon_mcgrp = {
.name = IEEE802154_MCAST_BEACON_NAME,
};
int ieee802154_nl_assoc_indic(struct net_device *dev,
struct ieee802154_addr *addr, u8 cap)
{
@ -72,7 +64,7 @@ int ieee802154_nl_assoc_indic(struct net_device *dev,
nla_put_u8(msg, IEEE802154_ATTR_CAPABILITY, cap))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -98,7 +90,7 @@ int ieee802154_nl_assoc_confirm(struct net_device *dev, u16 short_addr,
nla_put_u16(msg, IEEE802154_ATTR_SHORT_ADDR, short_addr) ||
nla_put_u8(msg, IEEE802154_ATTR_STATUS, status))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -133,7 +125,7 @@ int ieee802154_nl_disassoc_indic(struct net_device *dev,
}
if (nla_put_u8(msg, IEEE802154_ATTR_REASON, reason))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -157,7 +149,7 @@ int ieee802154_nl_disassoc_confirm(struct net_device *dev, u8 status)
dev->dev_addr) ||
nla_put_u8(msg, IEEE802154_ATTR_STATUS, status))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -183,7 +175,7 @@ int ieee802154_nl_beacon_indic(struct net_device *dev,
nla_put_u16(msg, IEEE802154_ATTR_COORD_SHORT_ADDR, coord_addr) ||
nla_put_u16(msg, IEEE802154_ATTR_COORD_PAN_ID, panid))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -214,7 +206,7 @@ int ieee802154_nl_scan_confirm(struct net_device *dev,
(edl &&
nla_put(msg, IEEE802154_ATTR_ED_LIST, 27, edl)))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -238,7 +230,7 @@ int ieee802154_nl_start_confirm(struct net_device *dev, u8 status)
dev->dev_addr) ||
nla_put_u8(msg, IEEE802154_ATTR_STATUS, status))
goto nla_put_failure;
return ieee802154_nl_mcast(msg, ieee802154_coord_mcgrp.id);
return ieee802154_nl_mcast(msg, IEEE802154_COORD_MCGRP);
nla_put_failure:
nlmsg_free(msg);
@ -309,8 +301,7 @@ static struct net_device *ieee802154_nl_get_dev(struct genl_info *info)
return dev;
}
static int ieee802154_associate_req(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_associate_req(struct sk_buff *skb, struct genl_info *info)
{
struct net_device *dev;
struct ieee802154_addr addr;
@ -357,8 +348,7 @@ out:
return ret;
}
static int ieee802154_associate_resp(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_associate_resp(struct sk_buff *skb, struct genl_info *info)
{
struct net_device *dev;
struct ieee802154_addr addr;
@ -390,8 +380,7 @@ out:
return ret;
}
static int ieee802154_disassociate_req(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_disassociate_req(struct sk_buff *skb, struct genl_info *info)
{
struct net_device *dev;
struct ieee802154_addr addr;
@ -433,7 +422,7 @@ out:
* PAN_coordinator, battery_life_extension = 0,
* coord_realignment = 0, security_enable = 0
*/
static int ieee802154_start_req(struct sk_buff *skb, struct genl_info *info)
int ieee802154_start_req(struct sk_buff *skb, struct genl_info *info)
{
struct net_device *dev;
struct ieee802154_addr addr;
@ -492,7 +481,7 @@ out:
return ret;
}
static int ieee802154_scan_req(struct sk_buff *skb, struct genl_info *info)
int ieee802154_scan_req(struct sk_buff *skb, struct genl_info *info)
{
struct net_device *dev;
int ret = -EOPNOTSUPP;
@ -530,8 +519,7 @@ out:
return ret;
}
static int ieee802154_list_iface(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_list_iface(struct sk_buff *skb, struct genl_info *info)
{
/* Request for interface name, index, type, IEEE address,
PAN Id, short address */
@ -565,8 +553,7 @@ out_dev:
}
static int ieee802154_dump_iface(struct sk_buff *skb,
struct netlink_callback *cb)
int ieee802154_dump_iface(struct sk_buff *skb, struct netlink_callback *cb)
{
struct net *net = sock_net(skb->sk);
struct net_device *dev;
@ -590,41 +577,3 @@ cont:
return skb->len;
}
static struct genl_ops ieee802154_coordinator_ops[] = {
IEEE802154_OP(IEEE802154_ASSOCIATE_REQ, ieee802154_associate_req),
IEEE802154_OP(IEEE802154_ASSOCIATE_RESP, ieee802154_associate_resp),
IEEE802154_OP(IEEE802154_DISASSOCIATE_REQ, ieee802154_disassociate_req),
IEEE802154_OP(IEEE802154_SCAN_REQ, ieee802154_scan_req),
IEEE802154_OP(IEEE802154_START_REQ, ieee802154_start_req),
IEEE802154_DUMP(IEEE802154_LIST_IFACE, ieee802154_list_iface,
ieee802154_dump_iface),
};
/*
* No need to unregister as family unregistration will do it.
*/
int nl802154_mac_register(void)
{
int i;
int rc;
rc = genl_register_mc_group(&nl802154_family,
&ieee802154_coord_mcgrp);
if (rc)
return rc;
rc = genl_register_mc_group(&nl802154_family,
&ieee802154_beacon_mcgrp);
if (rc)
return rc;
for (i = 0; i < ARRAY_SIZE(ieee802154_coordinator_ops); i++) {
rc = genl_register_ops(&nl802154_family,
&ieee802154_coordinator_ops[i]);
if (rc)
return rc;
}
return 0;
}


@ -77,8 +77,7 @@ out:
return -EMSGSIZE;
}
static int ieee802154_list_phy(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_list_phy(struct sk_buff *skb, struct genl_info *info)
{
/* Request for interface name, index, type, IEEE address,
PAN Id, short address */
@ -151,8 +150,7 @@ static int ieee802154_dump_phy_iter(struct wpan_phy *phy, void *_data)
return 0;
}
static int ieee802154_dump_phy(struct sk_buff *skb,
struct netlink_callback *cb)
int ieee802154_dump_phy(struct sk_buff *skb, struct netlink_callback *cb)
{
struct dump_phy_data data = {
.cb = cb,
@ -170,8 +168,7 @@ static int ieee802154_dump_phy(struct sk_buff *skb,
return skb->len;
}
static int ieee802154_add_iface(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_add_iface(struct sk_buff *skb, struct genl_info *info)
{
struct sk_buff *msg;
struct wpan_phy *phy;
@ -273,8 +270,7 @@ out_dev:
return rc;
}
static int ieee802154_del_iface(struct sk_buff *skb,
struct genl_info *info)
int ieee802154_del_iface(struct sk_buff *skb, struct genl_info *info)
{
struct sk_buff *msg;
struct wpan_phy *phy;
@ -356,28 +352,3 @@ out_dev:
return rc;
}
static struct genl_ops ieee802154_phy_ops[] = {
IEEE802154_DUMP(IEEE802154_LIST_PHY, ieee802154_list_phy,
ieee802154_dump_phy),
IEEE802154_OP(IEEE802154_ADD_IFACE, ieee802154_add_iface),
IEEE802154_OP(IEEE802154_DEL_IFACE, ieee802154_del_iface),
};
/*
* No need to unregister as family unregistration will do it.
*/
int nl802154_phy_register(void)
{
int i;
int rc;
for (i = 0; i < ARRAY_SIZE(ieee802154_phy_ops); i++) {
rc = genl_register_ops(&nl802154_family,
&ieee802154_phy_ops[i]);
if (rc)
return rc;
}
return 0;
}


@ -57,7 +57,7 @@ int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
if (err == -ENETUNREACH)
IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
goto out;
}


@ -454,6 +454,8 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
tstats->rx_bytes += skb->len;
u64_stats_update_end(&tstats->syncp);
skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(tunnel->dev)));
if (tunnel->dev->type == ARPHRD_ETHER) {
skb->protocol = eth_type_trans(skb, tunnel->dev);
skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
@ -461,8 +463,6 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
skb->dev = tunnel->dev;
}
skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(tunnel->dev)));
gro_cells_receive(&tunnel->gro_cells, skb);
return 0;


@ -126,6 +126,7 @@ static netdev_tx_t vti_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
if (!rt->dst.xfrm ||
rt->dst.xfrm->props.mode != XFRM_MODE_TUNNEL) {
dev->stats.tx_carrier_errors++;
ip_rt_put(rt);
goto tx_error_icmp;
}
tdev = rt->dst.dev;


@ -830,8 +830,6 @@ int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
{
struct inet_sock *isk = inet_sk(sk);
int family = sk->sk_family;
struct sockaddr_in *sin;
struct sockaddr_in6 *sin6;
struct sk_buff *skb;
int copied, err;
@ -841,13 +839,6 @@ int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
if (flags & MSG_OOB)
goto out;
if (addr_len) {
if (family == AF_INET)
*addr_len = sizeof(*sin);
else if (family == AF_INET6 && addr_len)
*addr_len = sizeof(*sin6);
}
if (flags & MSG_ERRQUEUE) {
if (family == AF_INET) {
return ip_recv_error(sk, msg, len);
@ -877,11 +868,15 @@ int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
/* Copy the address and add cmsg data. */
if (family == AF_INET) {
sin = (struct sockaddr_in *) msg->msg_name;
sin->sin_family = AF_INET;
sin->sin_port = 0 /* skb->h.uh->source */;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;
if (sin) {
sin->sin_family = AF_INET;
sin->sin_port = 0 /* skb->h.uh->source */;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
*addr_len = sizeof(*sin);
}
if (isk->cmsg_flags)
ip_cmsg_recv(msg, skb);
@ -890,17 +885,21 @@ int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
} else if (family == AF_INET6) {
struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6hdr *ip6 = ipv6_hdr(skb);
sin6 = (struct sockaddr_in6 *) msg->msg_name;
sin6->sin6_family = AF_INET6;
sin6->sin6_port = 0;
sin6->sin6_addr = ip6->saddr;
struct sockaddr_in6 *sin6 =
(struct sockaddr_in6 *)msg->msg_name;
sin6->sin6_flowinfo = 0;
if (np->sndflow)
sin6->sin6_flowinfo = ip6_flowinfo(ip6);
sin6->sin6_scope_id = ipv6_iface_scope_id(&sin6->sin6_addr,
IP6CB(skb)->iif);
if (sin6) {
sin6->sin6_family = AF_INET6;
sin6->sin6_port = 0;
sin6->sin6_addr = ip6->saddr;
sin6->sin6_flowinfo = 0;
if (np->sndflow)
sin6->sin6_flowinfo = ip6_flowinfo(ip6);
sin6->sin6_scope_id =
ipv6_iface_scope_id(&sin6->sin6_addr,
IP6CB(skb)->iif);
*addr_len = sizeof(*sin6);
}
if (inet6_sk(sk)->rxopt.all)
pingv6_ops.ip6_datagram_recv_ctl(sk, msg, skb);


@ -696,9 +696,6 @@ static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
if (flags & MSG_OOB)
goto out;
if (addr_len)
*addr_len = sizeof(*sin);
if (flags & MSG_ERRQUEUE) {
err = ip_recv_error(sk, msg, len);
goto out;
@ -726,6 +723,7 @@ static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
sin->sin_port = 0;
memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
*addr_len = sizeof(*sin);
}
if (inet->cmsg_flags)
ip_cmsg_recv(msg, skb);


@ -808,12 +808,6 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now,
xmit_size_goal = min_t(u32, gso_size,
sk->sk_gso_max_size - 1 - hlen);
/* TSQ : try to have at least two segments in flight
* (one in NIC TX ring, another in Qdisc)
*/
xmit_size_goal = min_t(u32, xmit_size_goal,
sysctl_tcp_limit_output_bytes >> 1);
xmit_size_goal = tcp_bound_to_half_wnd(tp, xmit_size_goal);
/* We try hard to avoid divides here */


@ -663,10 +663,13 @@ void tcp_fastopen_cache_get(struct sock *sk, u16 *mss,
void tcp_fastopen_cache_set(struct sock *sk, u16 mss,
struct tcp_fastopen_cookie *cookie, bool syn_lost)
{
struct dst_entry *dst = __sk_dst_get(sk);
struct tcp_metrics_block *tm;
if (!dst)
return;
rcu_read_lock();
tm = tcp_get_metrics(sk, __sk_dst_get(sk), true);
tm = tcp_get_metrics(sk, dst, true);
if (tm) {
struct tcp_fastopen_metrics *tfom = &tm->tcpm_fastopen;
@ -988,7 +991,7 @@ static int tcp_metrics_nl_cmd_del(struct sk_buff *skb, struct genl_info *info)
return 0;
}
static struct genl_ops tcp_metrics_nl_ops[] = {
static const struct genl_ops tcp_metrics_nl_ops[] = {
{
.cmd = TCP_METRICS_CMD_GET,
.doit = tcp_metrics_nl_cmd_get,
@ -1079,8 +1082,7 @@ void __init tcp_metrics_init(void)
if (ret < 0)
goto cleanup;
ret = genl_register_family_with_ops(&tcp_metrics_nl_family,
tcp_metrics_nl_ops,
ARRAY_SIZE(tcp_metrics_nl_ops));
tcp_metrics_nl_ops);
if (ret < 0)
goto cleanup_subsys;
return;


@ -1875,8 +1875,12 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
* - better RTT estimation and ACK scheduling
* - faster recovery
* - high rates
* Alas, some drivers / subsystems require a fair amount
* of queued bytes to ensure line rate.
* One example is wifi aggregation (802.11 AMPDU)
*/
limit = max(skb->truesize, sk->sk_pacing_rate >> 10);
limit = max_t(unsigned int, sysctl_tcp_limit_output_bytes,
sk->sk_pacing_rate >> 10);
if (atomic_read(&sk->sk_wmem_alloc) > limit) {
set_bit(TSQ_THROTTLED, &tp->tsq_flags);
@ -3093,7 +3097,6 @@ void tcp_send_window_probe(struct sock *sk)
{
if (sk->sk_state == TCP_ESTABLISHED) {
tcp_sk(sk)->snd_wl1 = tcp_sk(sk)->rcv_nxt - 1;
tcp_sk(sk)->snd_nxt = tcp_sk(sk)->write_seq;
tcp_xmit_probe_skb(sk, 0);
}
}
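
The new tcp_write_xmit() limit above is the pacing rate shifted right by 10, roughly one millisecond worth of data at the current rate, with sysctl_tcp_limit_output_bytes acting as a floor so that devices which need deeper queues (802.11 AMPDU being the example cited) are not starved. A quick userspace illustration with assumed values; the sysctl default used below is an assumption, not taken from this diff:

#include <stdio.h>

int main(void)
{
	/* Assumed values, for illustration only */
	unsigned long long pacing_rate = 125000000ULL;	/* ~1 Gbit/s, in bytes/s */
	unsigned long long sysctl_floor = 131072ULL;	/* assumed tcp_limit_output_bytes */
	unsigned long long limit = pacing_rate >> 10;	/* ~1 ms of data at that rate */

	if (limit < sysctl_floor)
		limit = sysctl_floor;
	printf("TSQ limit: %llu bytes\n", limit);
	return 0;
}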


@ -1235,12 +1235,6 @@ int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
int is_udplite = IS_UDPLITE(sk);
bool slow;
/*
* Check any passed addresses
*/
if (addr_len)
*addr_len = sizeof(*sin);
if (flags & MSG_ERRQUEUE)
return ip_recv_error(sk, msg, len);
@ -1302,6 +1296,7 @@ try_again:
sin->sin_port = udp_hdr(skb)->source;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
*addr_len = sizeof(*sin);
}
if (inet->cmsg_flags)
ip_cmsg_recv(msg, skb);


@ -1996,23 +1996,6 @@ static void addrconf_add_mroute(struct net_device *dev)
ip6_route_add(&cfg);
}
#if IS_ENABLED(CONFIG_IPV6_SIT)
static void sit_route_add(struct net_device *dev)
{
struct fib6_config cfg = {
.fc_table = RT6_TABLE_MAIN,
.fc_metric = IP6_RT_PRIO_ADDRCONF,
.fc_ifindex = dev->ifindex,
.fc_dst_len = 96,
.fc_flags = RTF_UP | RTF_NONEXTHOP,
.fc_nlinfo.nl_net = dev_net(dev),
};
/* prefix length - 96 bits "::d.d.d.d" */
ip6_route_add(&cfg);
}
#endif
static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
{
struct inet6_dev *idev;
@ -2542,7 +2525,8 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
struct in6_addr addr;
struct net_device *dev;
struct net *net = dev_net(idev->dev);
int scope;
int scope, plen;
u32 pflags = 0;
ASSERT_RTNL();
@ -2552,12 +2536,16 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
if (idev->dev->flags&IFF_POINTOPOINT) {
addr.s6_addr32[0] = htonl(0xfe800000);
scope = IFA_LINK;
plen = 64;
} else {
scope = IPV6_ADDR_COMPATv4;
plen = 96;
pflags |= RTF_NONEXTHOP;
}
if (addr.s6_addr32[3]) {
add_addr(idev, &addr, 128, scope);
add_addr(idev, &addr, plen, scope);
addrconf_prefix_route(&addr, plen, idev->dev, 0, pflags);
return;
}
@ -2569,7 +2557,6 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
int flag = scope;
for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) {
int plen;
addr.s6_addr32[3] = ifa->ifa_local;
@ -2580,12 +2567,10 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
continue;
flag |= IFA_HOST;
}
if (idev->dev->flags&IFF_POINTOPOINT)
plen = 64;
else
plen = 96;
add_addr(idev, &addr, plen, flag);
addrconf_prefix_route(&addr, plen, idev->dev, 0,
pflags);
}
}
}
@ -2711,7 +2696,6 @@ static void addrconf_sit_config(struct net_device *dev)
struct in6_addr addr;
ipv6_addr_set(&addr, htonl(0xFE800000), 0, 0, 0);
addrconf_prefix_route(&addr, 64, dev, 0, 0);
if (!ipv6_generate_eui64(addr.s6_addr + 8, dev))
addrconf_add_linklocal(idev, &addr);
return;
@ -2721,8 +2705,6 @@ static void addrconf_sit_config(struct net_device *dev)
if (dev->flags&IFF_POINTOPOINT)
addrconf_add_mroute(dev);
else
sit_route_add(dev);
}
#endif
@ -2740,8 +2722,6 @@ static void addrconf_gre_config(struct net_device *dev)
}
ipv6_addr_set(&addr, htonl(0xFE800000), 0, 0, 0);
addrconf_prefix_route(&addr, 64, dev, 0, 0);
if (!ipv6_generate_eui64(addr.s6_addr + 8, dev))
addrconf_add_linklocal(idev, &addr);
}


@ -972,10 +972,10 @@ out:
#ifdef CONFIG_SYSCTL
sysctl_fail:
ipv6_packet_cleanup();
pingv6_exit();
#endif
pingv6_fail:
pingv6_exit();
ipv6_packet_cleanup();
ipv6_packet_fail:
tcpv6_exit();
tcpv6_fail:


@ -1642,6 +1642,15 @@ static int ip6_tnl_changelink(struct net_device *dev, struct nlattr *tb[],
return ip6_tnl_update(t, &p);
}
static void ip6_tnl_dellink(struct net_device *dev, struct list_head *head)
{
struct net *net = dev_net(dev);
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
if (dev != ip6n->fb_tnl_dev)
unregister_netdevice_queue(dev, head);
}
static size_t ip6_tnl_get_size(const struct net_device *dev)
{
return
@ -1706,6 +1715,7 @@ static struct rtnl_link_ops ip6_link_ops __read_mostly = {
.validate = ip6_tnl_validate,
.newlink = ip6_tnl_newlink,
.changelink = ip6_tnl_changelink,
.dellink = ip6_tnl_dellink,
.get_size = ip6_tnl_get_size,
.fill_info = ip6_tnl_fill_info,
};
@ -1722,9 +1732,9 @@ static struct xfrm6_tunnel ip6ip6_handler __read_mostly = {
.priority = 1,
};
static void __net_exit ip6_tnl_destroy_tunnels(struct ip6_tnl_net *ip6n)
static void __net_exit ip6_tnl_destroy_tunnels(struct net *net)
{
struct net *net = dev_net(ip6n->fb_tnl_dev);
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
struct net_device *dev, *aux;
int h;
struct ip6_tnl *t;
@ -1792,10 +1802,8 @@ err_alloc_dev:
static void __net_exit ip6_tnl_exit_net(struct net *net)
{
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
rtnl_lock();
ip6_tnl_destroy_tunnels(ip6n);
ip6_tnl_destroy_tunnels(net);
rtnl_unlock();
}

Some files were not shown because too many files have changed in this diff.