syzbot
| ID | Workflow | Result | Correct | Bug | Created | Started | Finished | Revision | Error |
|---|---|---|---|---|---|---|---|---|---|
| 287999a8-5bc9-49df-af36-32a336b10fec | assessment-kcsan | Benign: ✅ Confident: ✅ | ❓ | KCSAN: data-race in wg_socket_send_skb_to_peer / wg_socket_send_skb_to_peer (8) | 2026/02/05 10:25 | 2026/02/05 10:25 | 2026/02/05 10:28 | 9b618abc0cd923b36c7a5ebc58a21174112d8f70 | |
The data race occurs on the `tx_bytes` field of the `struct wg_peer` in the WireGuard network driver. This field is a 64-bit counter used to track the total number of bytes transmitted to a specific peer.
The race happens in the `wg_socket_send_skb_to_peer` function in `drivers/net/wireguard/socket.c`:
```c
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
{
	size_t skb_len = skb->len;
	int ret = -EAFNOSUPPORT;

	read_lock_bh(&peer->endpoint_lock);
	if (peer->endpoint.addr.sa_family == AF_INET)
		ret = send4(peer->device, skb, &peer->endpoint, ds,
			    &peer->endpoint_cache);
	else if (peer->endpoint.addr.sa_family == AF_INET6)
		ret = send6(peer->device, skb, &peer->endpoint, ds,
			    &peer->endpoint_cache);
	else
		dev_kfree_skb(skb);
	if (likely(!ret))
		peer->tx_bytes += skb_len;	/* racy read-modify-write under a read lock */
	read_unlock_bh(&peer->endpoint_lock);
	return ret;
}
```
The `endpoint_lock` is a reader-writer lock (`rwlock_t`), and it is acquired as a reader in this function. This allows multiple threads (e.g., multiple worker threads sending packets to the same peer) to enter the critical section concurrently. Since `tx_bytes` is a plain `u64` and the increment operation `+=` is not atomic, concurrent updates from multiple CPUs can lead to lost updates (i.e., some packet sizes might not be added to the total).
However, this data race is considered benign for the following reasons:
1. **Purpose of the field**: The `tx_bytes` field is used exclusively for statistics. It is exported to userspace via Netlink in `drivers/net/wireguard/netlink.c` but is not used for any internal protocol logic, such as rekeying, rate limiting, or flow control.
2. **Memory Safety**: On a 64-bit architecture (like the x86_64 system in the report), naturally aligned 64-bit loads and stores are performed as single instructions and cannot be torn. Since `tx_bytes` is 64-bit aligned within `struct wg_peer`, there is no risk of a torn read or write producing a corrupt or invalid value being read. A lost update only results in a slightly inaccurate counter.
3. **System Stability**: The race does not involve any pointers, complex data structures, or state variables that affect the driver's control flow or stability.
In summary, while the race can cause the transmission statistics to be slightly lower than the actual value under high concurrency, it does not affect the correctness or safety of the WireGuard protocol or the Linux kernel.

```
==================================================================
BUG: KCSAN: data-race in wg_socket_send_skb_to_peer / wg_socket_send_skb_to_peer

read-write to 0xffff88811ad03d48 of 8 bytes by task 3427 on cpu 0:
 wg_socket_send_skb_to_peer+0xe8/0x130 drivers/net/wireguard/socket.c:183
 wg_packet_create_data_done drivers/net/wireguard/send.c:251 [inline]
 wg_packet_tx_worker+0x12d/0x330 drivers/net/wireguard/send.c:276
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0x4cd/0x9d0 kernel/workqueue.c:3340
 worker_thread+0x6bc/0x8b0 kernel/workqueue.c:3421
 kthread+0x488/0x510 kernel/kthread.c:463
 ret_from_fork+0x148/0x280 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

read-write to 0xffff88811ad03d48 of 8 bytes by task 37 on cpu 1:
 wg_socket_send_skb_to_peer+0xe8/0x130 drivers/net/wireguard/socket.c:183
 wg_socket_send_buffer_to_peer+0xf1/0x120 drivers/net/wireguard/socket.c:200
 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:40 [inline]
 wg_packet_handshake_send_worker+0x10d/0x160 drivers/net/wireguard/send.c:51
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0x4cd/0x9d0 kernel/workqueue.c:3340
 worker_thread+0x6bc/0x8b0 kernel/workqueue.c:3421
 kthread+0x488/0x510 kernel/kthread.c:463
 ret_from_fork+0x148/0x280 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

value changed: 0x000000000000014c -> 0x000000000000016c

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 37 Comm: kworker/u8:2 Not tainted syzkaller #0 PREEMPT(voluntary)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: wg-kex-wg2 wg_packet_handshake_send_worker
==================================================================
```
| Seq | Timestamp | Type | Name | Duration |
|---|---|---|---|---|
| 0/0 | 2026/02/05 10:25 | flow | assessment-kcsan | 2m |
Results:map[Benign:true Confident:true Explanation:The data race occurs on the `tx_bytes` field of the `struct wg_peer` in the WireGuard network driver. This field is a 64-bit counter used to track the total number of bytes transmitted to a specific peer.
The race happens in the `wg_socket_send_skb_to_peer` function in `drivers/net/wireguard/socket.c`:
```c
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
{
...
read_lock_bh(&peer->endpoint_lock);
...
if (likely(!ret))
peer->tx_bytes += skb_len;
read_unlock_bh(&peer->endpoint_lock);
...
}
```
The `endpoint_lock` is a reader-writer lock (`rwlock_t`), and it is acquired as a reader in this function. This allows multiple threads (e.g., multiple worker threads sending packets to the same peer) to enter the critical section concurrently. Since `tx_bytes` is a plain `u64` and the increment operation `+=` is not atomic, concurrent updates from multiple CPUs can lead to lost updates (i.e., some packet sizes might not be added to the total).
However, this data race is considered benign for the following reasons:
1. **Purpose of the field**: The `tx_bytes` field is used exclusively for statistics. It is exported to userspace via Netlink in `drivers/net/wireguard/netlink.c` but is not used for any internal protocol logic, such as rekeying, rate limiting, or flow control.
2. **Memory Safety**: On a 64-bit architecture (like the x86_64 system in the report), 64-bit aligned memory accesses are atomic. Since `tx_bytes` is 64-bit aligned within the `wg_peer` struct, there is no risk of "torn" reads or writes that could lead to memory corruption or invalid values being read. A lost update only results in a slightly inaccurate counter.
3. **System Stability**: The race does not involve any pointers, complex data structures, or state variables that affect the driver's control flow or stability.
In summary, while the race can cause the transmission statistics to be slightly lower than the actual value under high concurrency, it does not affect the correctness or safety of the WireGuard protocol or the Linux kernel.] |
| 1/1 | 2026/02/05 10:25 | action | kernel-checkouter | 0m |
Results:map[KernelSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/63171dab347e5d4f68ee8577dabeb18223ef97da] |
| 2/1 | 2026/02/05 10:25 | action | kernel-builder | 0m |
Results:map[KernelObj:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/5ae1eaacd38079e60c5b9c490fdc76c9bd52cfef] |
| 3/1 | 2026/02/05 10:25 | action | codesearch-prepare | 0m |
Results:map[Index:codesearch-index] |
| 4/1 | 2026/02/05 10:25 | agent | expert |
2mModel:gemini-3-flash-preview Results: map[Benign:true Confident:true] Instruction: You are an experienced Linux kernel developer tasked with determining if the given kernel data race is benign or not. The data race report is from KCSAN tool. It contains 2 stack traces of the memory accesses that constitute a data race. A "benign" data races are on a simple int/bool variable or similar field, and the accesses are not supposed to be protected by any mutual exclusion primitives. Common examples of such "benign" data races are accesses to various flags fields, statistics counters, and similar. A "benign" data race does not lead to memory corruption/crash with a conservative compiler that compiles memory accesses to primitive types effectively as atomic. A non-benign (or "harmful" data race) can lead to corruption/crash even with a conservative compiler that compiles memory accesses to primitive types effectively as atomic. A common example of a "harmful" data races is race on a complex container (list/hashmap/etc), where accesses are supposed to be protected by a mutual exclusion primitive. Also consider races that happen at the same time with the given one. If there is no synchronization in between, other memory accesses in the involved threads race with each other if they access the same memory. For example, if both threads execute: some_struct->have_elements = true; list_add(new_node, &some_struct->list_head); the race on some_struct->have_elements may appear benign, however it also implies there is a race on some_struct->list_head which is not benign, since the list is not thread-safe. Take into account that on 32-bit systems 64-bit memory accesses may be split into two accesses, and thus even with a conservative compiler may not be fully atomic. However, such races may still be benign depending on how writes are done, and how read data is used. In the final reply explain why you think the given data race is benign or is harmful. 
Use the provided tools to confirm any assumptions, variables/fields being accessed, etc. In particular, don't make assumptions about the kernel source code, use codesearch tools to read the actual source code. Prefer calling several tools at the same time to save round-trips. Use set-results tool to provide results of the analysis. It must be called exactly once before the final reply. Ignore results of this tool. Prompt: The data race report is: ================================================================== BUG: KCSAN: data-race in wg_socket_send_skb_to_peer / wg_socket_send_skb_to_peer read-write to 0xffff88811ad03d48 of 8 bytes by task 3427 on cpu 0: wg_socket_send_skb_to_peer+0xe8/0x130 drivers/net/wireguard/socket.c:183 wg_packet_create_data_done drivers/net/wireguard/send.c:251 [inline] wg_packet_tx_worker+0x12d/0x330 drivers/net/wireguard/send.c:276 process_one_work kernel/workqueue.c:3257 [inline] process_scheduled_works+0x4cd/0x9d0 kernel/workqueue.c:3340 worker_thread+0x6bc/0x8b0 kernel/workqueue.c:3421 kthread+0x488/0x510 kernel/kthread.c:463 ret_from_fork+0x148/0x280 arch/x86/kernel/process.c:158 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246 read-write to 0xffff88811ad03d48 of 8 bytes by task 37 on cpu 1: wg_socket_send_skb_to_peer+0xe8/0x130 drivers/net/wireguard/socket.c:183 wg_socket_send_buffer_to_peer+0xf1/0x120 drivers/net/wireguard/socket.c:200 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:40 [inline] wg_packet_handshake_send_worker+0x10d/0x160 drivers/net/wireguard/send.c:51 process_one_work kernel/workqueue.c:3257 [inline] process_scheduled_works+0x4cd/0x9d0 kernel/workqueue.c:3340 worker_thread+0x6bc/0x8b0 kernel/workqueue.c:3421 kthread+0x488/0x510 kernel/kthread.c:463 ret_from_fork+0x148/0x280 arch/x86/kernel/process.c:158 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246 value changed: 0x000000000000014c -> 0x000000000000016c Reported by Kernel Concurrency Sanitizer on: CPU: 1 UID: 0 PID: 37 Comm: 
kworker/u8:2 Not tainted syzkaller #0 PREEMPT(voluntary) Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Workqueue: wg-kex-wg2 wg_packet_handshake_send_worker ================================================================== Reply: The data race occurs on the `tx_bytes` field of the `struct wg_peer` in the WireGuard network driver. This field is a 64-bit counter used to track the total number of bytes transmitted to a specific peer.
The race happens in the `wg_socket_send_skb_to_peer` function in `drivers/net/wireguard/socket.c`:
```c
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
{
...
read_lock_bh(&peer->endpoint_lock);
...
if (likely(!ret))
peer->tx_bytes += skb_len;
read_unlock_bh(&peer->endpoint_lock);
...
}
```
The `endpoint_lock` is a reader-writer lock (`rwlock_t`), and it is acquired as a reader in this function. This allows multiple threads (e.g., multiple worker threads sending packets to the same peer) to enter the critical section concurrently. Since `tx_bytes` is a plain `u64` and the increment operation `+=` is not atomic, concurrent updates from multiple CPUs can lead to lost updates (i.e., some packet sizes might not be added to the total).
However, this data race is considered benign for the following reasons:
1. **Purpose of the field**: The `tx_bytes` field is used exclusively for statistics. It is exported to userspace via Netlink in `drivers/net/wireguard/netlink.c` but is not used for any internal protocol logic, such as rekeying, rate limiting, or flow control.
2. **Memory Safety**: On a 64-bit architecture (like the x86_64 system in the report), 64-bit aligned memory accesses are atomic. Since `tx_bytes` is 64-bit aligned within the `wg_peer` struct, there is no risk of "torn" reads or writes that could lead to memory corruption or invalid values being read. A lost update only results in a slightly inaccurate counter.
3. **System Stability**: The race does not involve any pointers, complex data structures, or state variables that affect the driver's control flow or stability.
In summary, while the race can cause the transmission statistics to be slightly lower than the actual value under high concurrency, it does not affect the correctness or safety of the WireGuard protocol or the Linux kernel. |
| 5/2 | 2026/02/05 10:25 | llm | expert | 0m |
Model:gemini-3-flash-preview Tokens: input: 3888 output: 25 thoughts: 101 Thoughts: **Analyzing a Data Race in WireGuard Socket Handling** Okay, so I've got a data race flagged in `wg_socket_send_skb_to_peer`, specifically at `drivers/net/wireguard/socket.c:183`. The report indicates that there's concurrent read-write access to an 8-byte region. Hmm, that's not ideal. The interesting part is the values involved; they shifted from `0x14c` to `0x16c`. Given the context, these are likely small integers, my gut says counters of some sort. This certainly warrants a closer look. I need to dive into the code at `drivers/net/wireguard/socket.c:183` and understand the read and write operations happening there. Time to see what's really going on with those access patterns. |
| 6/2 | 2026/02/05 10:25 | tool | read-file | 0m |
Args:map[File:drivers/net/wireguard/socket.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "device.h"
#include "peer.h"
#include "socket.h"
#include "queueing.h"
#include "messages.h"
#include <linux/ctype.h>
#include <linux/net.h>
#include <linux/if_vlan.h>
#include <linux/if_ether.h>
#include <linux/inetdevice.h>
#include <net/udp_tunnel.h>
#include <net/ipv6.h>
static int send4(struct wg_device *wg, struct sk_buff *skb,
struct endpoint *endpoint, u8 ds, struct dst_cache *cache)
{
struct flowi4 fl = {
.saddr = endpoint->src4.s_addr,
.daddr = endpoint->addr4.sin_addr.s_addr,
.fl4_dport = endpoint->addr4.sin_port,
.flowi4_mark = wg->fwmark,
.flowi4_proto = IPPROTO_UDP
};
struct rtable *rt = NULL;
struct sock *sock;
int ret = 0;
skb_mark_not_on_list(skb);
skb->dev = wg->dev;
skb->mark = wg->fwmark;
rcu_read_lock_bh();
sock = rcu_dereference_bh(wg->sock4);
if (unlikely(!sock)) {
ret = -ENONET;
goto err;
}
fl.fl4_sport = inet_sk(sock)->inet_sport;
if (cache)
rt = dst_cache_get_ip4(cache, &fl.saddr);
if (!rt) {
security_sk_classify_flow(sock, flowi4_to_flowi_common(&fl));
if (unlikely(!inet_confirm_addr(sock_net(sock), NULL, 0,
fl.saddr, RT_SCOPE_HOST))) {
endpoint->src4.s_addr = 0;
endpoint->src_if4 = 0;
fl.saddr = 0;
if (cache)
dst_cache_reset(cache);
}
rt = ip_route_output_flow(sock_net(sock), &fl, sock);
if (unlikely(endpoint->src_if4 && ((IS_ERR(rt) &&
PTR_ERR(rt) == -EINVAL) || (!IS_ERR(rt) &&
rt->dst.dev->ifindex != endpoint->src_if4)))) {
endpoint->src4.s_addr = 0;
endpoint->src_if4 = 0;
fl.saddr = 0;
if (cache)
dst_cache_reset(cache);
if (!IS_ERR(rt))
ip_rt_put(rt);
rt = ip_route_output_flow(sock_net(sock), &fl, sock);
}
if (IS_ERR(rt)) {
ret = PTR_ERR(rt);
net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
wg->dev->name, &endpoint->addr, ret);
goto err;
}
if (cache)
dst_cache_set_ip4(cache, &rt->dst, fl.saddr);
}
skb->ignore_df = 1;
udp_tunnel_xmit_skb(rt, sock, skb, fl.saddr, fl.daddr, ds,
ip4_dst_hoplimit(&rt->dst), 0, fl.fl4_sport,
fl.fl4_dport, false, false, 0);
goto out;
err:
kfree_skb(skb);
out:
rcu_read_unlock_bh();
return ret;
}
static int send6(struct wg_device *wg, struct sk_buff *skb,
struct endpoint *endpoint, u8 ds, struct dst_cache *cache)
{
#if IS_ENABLED(CONFIG_IPV6)
struct flowi6 fl = {
.saddr = endpoint->src6,
.daddr = endpoint->addr6.sin6_addr,
.fl6_dport = endpoint->addr6.sin6_port,
.flowi6_mark = wg->fwmark,
.flowi6_oif = endpoint->addr6.sin6_scope_id,
.flowi6_proto = IPPROTO_UDP
/* TODO: addr->sin6_flowinfo */
};
struct dst_entry *dst = NULL;
struct sock *sock;
int ret = 0;
skb_mark_not_on_list(skb);
skb->dev = wg->dev;
skb->mark = wg->fwmark;
rcu_read_lock_bh();
sock = rcu_dereference_bh(wg->sock6);
if (unlikely(!sock)) {
ret = -ENONET;
goto err;
}
fl.fl6_sport = inet_sk(sock)->inet_sport;
if (cache)
dst = dst_cache_get_ip6(cache, &fl.saddr);
if (!dst) {
security_sk_classify_flow(sock, flowi6_to_flowi_common(&fl));
if (unlikely(!ipv6_addr_any(&fl.saddr) &&
!ipv6_chk_addr(sock_net(sock), &fl.saddr, NULL, 0))) {
endpoint->src6 = fl.saddr = in6addr_any;
if (cache)
dst_cache_reset(cache);
}
dst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(sock), sock, &fl,
NULL);
if (IS_ERR(dst)) {
ret = PTR_ERR(dst);
net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
wg->dev->name, &endpoint->addr, ret);
goto err;
}
if (cache)
dst_cache_set_ip6(cache, dst, &fl.saddr);
}
skb->ignore_df = 1;
udp_tunnel6_xmit_skb(dst, sock, skb, skb->dev, &fl.saddr, &fl.daddr, ds,
ip6_dst_hoplimit(dst), 0, fl.fl6_sport,
fl.fl6_dport, false, 0);
goto out;
err:
kfree_skb(skb);
out:
rcu_read_unlock_bh();
return ret;
#else
kfree_skb(skb);
return -EAFNOSUPPORT;
#endif
}
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
{
size_t skb_len = skb->len;
int ret = -EAFNOSUPPORT;
read_lock_bh(&peer->endpoint_lock);
if (peer->endpoint.addr.sa_family == AF_INET)
ret = send4(peer->device, skb, &peer->endpoint, ds,
&peer->endpoint_cache);
else if (peer->endpoint.addr.sa_family == AF_INET6)
ret = send6(peer->device, skb, &peer->endpoint, ds,
&peer->endpoint_cache);
else
dev_kfree_skb(skb);
if (likely(!ret))
peer->tx_bytes += skb_len;
read_unlock_bh(&peer->endpoint_lock);
return ret;
}
int wg_socket_send_buffer_to_peer(struct wg_peer *peer, void *buffer,
size_t len, u8 ds)
{
struct sk_buff *skb = alloc_skb(len + SKB_HEADER_LEN, GFP_ATOMIC);
if (unlikely(!skb))
return -ENOMEM;
skb_reserve(skb, SKB_HEADER_LEN);
skb_set_inner_network_header(skb, 0);
skb_put_data(skb, buffer, len);
return wg_socket_send_skb_to_peer(peer, skb, ds);
}
int wg_socket_send_buffer_as_reply_to_skb(struct wg_device *wg,
struct sk_buff *in_skb, void *buffer,
size_t len)
{
int ret = 0;
struct sk_buff *skb;
struct endpoint endpoint;
if (unlikely(!in_skb))
return -EINVAL;
ret = wg_socket_endpoint_from_skb(&endpoint, in_skb);
if (unlikely(ret < 0))
return ret;
skb = alloc_skb(len + SKB_HEADER_LEN, GFP_ATOMIC);
if (unlikely(!skb))
return -ENOMEM;
skb_reserve(skb, SKB_HEADER_LEN);
skb_set_inner_network_header(skb, 0);
skb_put_data(skb, buffer, len);
if (endpoint.addr.sa_family == AF_INET)
ret = send4(wg, skb, &endpoint, 0, NULL);
else if (endpoint.addr.sa_family == AF_INET6)
ret = send6(wg, skb, &endpoint, 0, NULL);
/* No other possibilities if the endpoint is valid, which it is,
* as we checked above.
*/
return ret;
}
int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
const struct sk_buff *skb)
{
memset(endpoint, 0, sizeof(*endpoint));
if (skb->protocol == htons(ETH_P_IP)) {
endpoint->addr4.sin_family = AF_INET;
endpoint->addr4.sin_port = udp_hdr(skb)->source;
endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
endpoint->src4.s_addr = ip_hdr(skb)->daddr;
endpoint->src_if4 = skb->skb_iif;
} else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
endpoint->addr6.sin6_family = AF_INET6;
endpoint->addr6.sin6_port = udp_hdr(skb)->source;
endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
endpoint->addr6.sin6_scope_id = ipv6_iface_scope_id(
&ipv6_hdr(skb)->saddr, skb->skb_iif);
endpoint->src6 = ipv6_hdr(skb)->daddr;
} else {
return -EINVAL;
}
return 0;
}
static bool endpoint_eq(const struct endpoint *a, const struct endpoint *b)
{
return (a->addr.sa_family == AF_INET && b->addr.sa_family == AF_INET &&
a->addr4.sin_port == b->addr4.sin_port &&
a->addr4.sin_addr.s_addr == b->addr4.sin_addr.s_addr &&
a->src4.s_addr == b->src4.s_addr && a->src_if4 == b->src_if4) ||
(a->addr.sa_family == AF_INET6 &&
b->addr.sa_family == AF_INET6 &&
a->addr6.sin6_port == b->addr6.sin6_port &&
ipv6_addr_equal(&a->addr6.sin6_addr, &b->addr6.sin6_addr) &&
a->addr6.sin6_scope_id == b->addr6.sin6_scope_id &&
ipv6_addr_equal(&a->src6, &b->src6)) ||
unlikely(!a->addr.sa_family && !b->addr.sa_family);
}
void wg_socket_set_peer_endpoint(struct wg_peer *peer,
const struct endpoint *endpoint)
{
/* First we check unlocked, in order to optimize, since it's pretty rare
* that an endpoint will change. If we happen to be mid-write, and two
* CPUs wind up writing the same thing or something slightly different,
* it doesn't really matter much either.
*/
if (endpoint_eq(endpoint, &peer->endpoint))
return;
write_lock_bh(&peer->endpoint_lock);
if (endpoint->addr.sa_family == AF_INET) {
peer->endpoint.addr4 = endpoint->addr4;
peer->endpoint.src4 = endpoint->src4;
peer->endpoint.src_if4 = endpoint->src_if4;
} else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
peer->endpoint.addr6 = endpoint->addr6;
peer->endpoint.src6 = endpoint->src6;
} else {
goto out;
}
dst_cache_reset(&peer->endpoint_cache);
out:
write_unlock_bh(&peer->endpoint_lock);
}
void wg_socket_set_peer_endpoint_from_skb(struct wg_peer *peer,
const struct sk_buff *skb)
{
struct endpoint endpoint;
if (!wg_socket_endpoint_from_skb(&endpoint, skb))
wg_socket_set_peer_endpoint(peer, &endpoint);
}
void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer)
{
write_lock_bh(&peer->endpoint_lock);
memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6));
dst_cache_reset_now(&peer->endpoint_cache);
write_unlock_bh(&peer->endpoint_lock);
}
static int wg_receive(struct sock *sk, struct sk_buff *skb)
{
struct wg_device *wg;
if (unlikely(!sk))
goto err;
wg = sk->sk_user_data;
if (unlikely(!wg))
goto err;
skb_mark_not_on_list(skb);
wg_packet_receive(wg, skb);
return 0;
err:
kfree_skb(skb);
return 0;
}
static void sock_free(struct sock *sock)
{
if (unlikely(!sock))
return;
sk_clear_memalloc(sock);
udp_tunnel_sock_release(sock->sk_socket);
}
static void set_sock_opts(struct socket *sock)
{
sock->sk->sk_allocation = GFP_ATOMIC;
sock->sk->sk_sndbuf = INT_MAX;
sk_set_memalloc(sock->sk);
}
int wg_socket_init(struct wg_device *wg, u16 port)
{
struct net *net;
int ret;
struct udp_tunnel_sock_cfg cfg = {
.sk_user_data = wg,
.encap_type = 1,
.encap_rcv = wg_receive
};
struct socket *new4 = NULL, *new6 = NULL;
struct udp_port_cfg port4 = {
.family = AF_INET,
.local_ip.s_addr = htonl(INADDR_ANY),
.local_udp_port = htons(port),
.use_udp_checksums = true
};
#if IS_ENABLED(CONFIG_IPV6)
int retries = 0;
struct udp_port_cfg port6 = {
.family = AF_INET6,
.local_ip6 = IN6ADDR_ANY_INIT,
.use_udp6_tx_checksums = true,
.use_udp6_rx_checksums = true,
.ipv6_v6only = true
};
#endif
rcu_read_lock();
net = rcu_dereference(wg->creating_net);
net = net ? maybe_get_net(net) : NULL;
rcu_read_unlock();
if (unlikely(!net))
return -ENONET;
#if IS_ENABLED(CONFIG_IPV6)
retry:
#endif
ret = udp_sock_create(net, &port4, &new4);
if (ret < 0) {
pr_err("%s: Could not create IPv4 socket\n", wg->dev->name);
goto out;
}
set_sock_opts(new4);
setup_udp_tunnel_sock(net, new4, &cfg);
#if IS_ENABLED(CONFIG_IPV6)
if (ipv6_mod_enabled()) {
port6.local_udp_port = inet_sk(new4->sk)->inet_sport;
ret = udp_sock_create(net, &port6, &new6);
if (ret < 0) {
udp_tunnel_sock_release(new4);
if (ret == -EADDRINUSE && !port && retries++ < 100)
goto retry;
pr_err("%s: Could not create IPv6 socket\n",
wg->dev->name);
goto out;
}
set_sock_opts(new6);
setup_udp_tunnel_sock(net, new6, &cfg);
}
#endif
wg_socket_reinit(wg, new4->sk, new6 ? new6->sk : NULL);
ret = 0;
out:
put_net(net);
return ret;
}
void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
struct sock *new6)
{
struct sock *old4, *old6;
mutex_lock(&wg->socket_update_lock);
old4 = rcu_dereference_protected(wg->sock4,
lockdep_is_held(&wg->socket_update_lock));
old6 = rcu_dereference_protected(wg->sock6,
lockdep_is_held(&wg->socket_update_lock));
rcu_assign_pointer(wg->sock4, new4);
rcu_assign_pointer(wg->sock6, new6);
if (new4)
wg->incoming_port = ntohs(inet_sk(new4)->inet_sport);
mutex_unlock(&wg->socket_update_lock);
synchronize_net();
sock_free(old4);
sock_free(old6);
}
] |
| 7/2 | 2026/02/05 10:25 | llm | expert |
0mModel:gemini-3-flash-preview Tokens: input: 8972 output: 21 thoughts: 468 Thoughts: **Analyzing a Potential Race Condition in WireGuard's Packet Transmission** Okay, so I'm looking at this function `wg_socket_send_skb_to_peer`. The report flags a potential issue, specifically at line 183 of the code, which is the `else` statement. I'm making a mental note that line numbers in the report can be slightly off compared to my `read-file` output. In my version, line 183 is indeed the `else` that handles unsupported address families, and line 186 is the `peer->tx_bytes += skb_len;` assignment. The report states the potential issue lies at `drivers/net/wireguard/socket.c:183`. My primary concern is that a race condition might exist, and I want to understand if the increment of `peer->tx_bytes` is protected in the presence of a potential error path. If the address family is something other than IPv4 or IPv6, the `dev_kfree_skb(skb)` call is made. The question is whether the `tx_bytes` increment occurs *before* this free, or if it is guarded. Since I'm looking at `tx_bytes`, the next step is to examine the `wg_peer` structure to see how this variable is defined and how it relates to the locking mechanism used within this function. I need to figure out if there's a race with `tx_bytes`, if it is subject to a race with the `dev_kfree_skb(skb)` call in the `else` block. |
| 8/2 | 2026/02/05 10:25 | tool | codesearch-struct-layout |
0mArgs:map[Name:wg_peer] Results: map[Fields:[map[Name:device OffsetBits:0 SizeBits:64] map[Name:tx_queue OffsetBits:64 SizeBits:384] map[Name:rx_queue OffsetBits:448 SizeBits:384] map[Name:staged_packet_queue OffsetBits:832 SizeBits:192] map[Name:serial_work_cpu OffsetBits:1024 SizeBits:32] map[Name:is_dead OffsetBits:1056 SizeBits:8] map[Name:keypairs OffsetBits:1088 SizeBits:256] map[Name:endpoint OffsetBits:1344 SizeBits:352] map[Name:endpoint_cache OffsetBits:1728 SizeBits:128] map[Name:endpoint_lock OffsetBits:1856 SizeBits:64] map[Name:handshake OffsetBits:1920 SizeBits:2688] map[Name:last_sent_handshake OffsetBits:4608 SizeBits:64] map[Name:transmit_handshake_work OffsetBits:4672 SizeBits:256] map[Name:clear_peer_work OffsetBits:4928 SizeBits:256] map[Name:transmit_packet_work OffsetBits:5184 SizeBits:256] map[Name:latest_cookie OffsetBits:5440 SizeBits:1216] map[Name:pubkey_hash OffsetBits:6656 SizeBits:128] map[Name:rx_bytes OffsetBits:6784 SizeBits:64] map[Name:tx_bytes OffsetBits:6848 SizeBits:64] map[Name:timer_retransmit_handshake OffsetBits:6912 SizeBits:320] map[Name:timer_send_keepalive OffsetBits:7232 SizeBits:320] map[Name:timer_new_handshake OffsetBits:7552 SizeBits:320] map[Name:timer_zero_key_material OffsetBits:7872 SizeBits:320] map[Name:timer_persistent_keepalive OffsetBits:8192 SizeBits:320] map[Name:timer_handshake_attempts OffsetBits:8512 SizeBits:32] map[Name:persistent_keepalive_interval OffsetBits:8544 SizeBits:16] map[Name:timer_need_another_keepalive OffsetBits:8560 SizeBits:8] map[Name:sent_lastminute_handshake OffsetBits:8568 SizeBits:8] map[Name:walltime_last_handshake OffsetBits:8576 SizeBits:128] map[Name:refcount OffsetBits:8704 SizeBits:32] map[Name:rcu OffsetBits:8768 SizeBits:128] map[Name:peer_list OffsetBits:8896 SizeBits:128] map[Name:allowedips_list OffsetBits:9024 SizeBits:128] map[Name:napi OffsetBits:9152 SizeBits:3968] map[Name:internal_id OffsetBits:13120 SizeBits:64]]] |
| 9/2 | 2026/02/05 10:25 | llm | expert |
0mModel:gemini-3-flash-preview Tokens: input: 10653 output: 25 thoughts: 441 Thoughts: **Analyzing a Potential Race Condition in Wireguard's Statistics** Okay, so I'm looking at this report and immediately my attention is drawn to `tx_bytes`. It's located at offset 856 bytes from a known address, `0xffff88811ad03d48`. This means the `peer` structure must start at `0xffff88811ad039f0` (calculated from the provided offset). My initial thought is, based on the `peer` pointer, and the `tx_bytes` variable, the update to `tx_bytes` could be vulnerable to concurrent updates. The code in question is `peer->tx_bytes += skb_len;`. I see it's a 64-bit counter, which is relevant given this is a 64-bit x86 system as suggested by the report referencing `arch/x86/kernel/process.c` and `arch/x86/entry/entry_64.S`. They're protecting this counter with a `read_lock_bh(&peer->endpoint_lock)`. However, the use of `read_lock` specifically, rather than a write lock, means multiple threads *can* still access and update `tx_bytes` concurrently. This is a common pattern in the kernel, but it requires careful consideration. Now, my experience tells me that while a 64-bit addition on x86-64 *appears* atomic (it's often a single instruction), it's *not* guaranteed atomic without a `lock` prefix. The compiler will likely generate something like: `mov rax, [peer->tx_bytes]`, `add rax, skb_len`, and `mov [peer->tx_bytes], rax`. If two threads execute this concurrently, the second update could overwrite the first, causing a data race and lost updates. This is a classic race condition scenario that I have seen many times before. The key question now is: does it matter? Is `tx_bytes` used in any critical path or are we just talking about lost statistics? Since it's described as Wireguard statistics, it *might* be benign if it's only used for display. However, I need to understand the implications of potentially incorrect statistics. 
I need to figure out the uses of this variable throughout the code. I'll need to search for all occurrences of `tx_bytes` to ascertain this. |
| 10/2 | 2026/02/05 10:25 | tool | codesearch-find-references | 0m |
Args:map[Name:wg_peer::tx_bytes] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:get_peer SourceFile:drivers/net/wireguard/netlink.c SourceLine:112] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:wg_socket_send_skb_to_peer SourceFile:drivers/net/wireguard/socket.c SourceLine:183]] TruncatedOutput:false] |
| 11/2 | 2026/02/05 10:25 | llm | expert |
0mModel:gemini-3-flash-preview Tokens: input: 11777 output: 26 thoughts: 36 Thoughts: Okay, here's my interpretation of those thoughts, summarized from my perspective as someone familiar with code search and kernel internals: **Deeper Dive Required: `codesearch-find-references` Isn't Enough** Huh, that's odd. `codesearch-find-references` only turned up *two* usages? That seems incredibly low, especially given the commonality of the kernel code in question. I would have expected a more comprehensive list of references, given how central a function like that usually is. Looks like I need to dig deeper. Let me see... I'll check `drivers/net/wireguard/netlink.c` line 112. That's likely where the core implementation for the WireGuard netlink interface is going to be. Time to see what's actually happening at that location. |
| 12/2 | 2026/02/05 10:25 | tool | read-file | 0m |
Args:map[File:drivers/net/wireguard/netlink.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "netlink.h"
#include "device.h"
#include "peer.h"
#include "socket.h"
#include "queueing.h"
#include "messages.h"
#include "generated/netlink.h"
#include <uapi/linux/wireguard.h>
#include <linux/if.h>
#include <net/genetlink.h>
#include <net/sock.h>
#include <crypto/utils.h>
static struct genl_family genl_family;
static struct wg_device *lookup_interface(struct nlattr **attrs,
struct sk_buff *skb)
{
struct net_device *dev = NULL;
if (!attrs[WGDEVICE_A_IFINDEX] == !attrs[WGDEVICE_A_IFNAME])
return ERR_PTR(-EBADR);
if (attrs[WGDEVICE_A_IFINDEX])
dev = dev_get_by_index(sock_net(skb->sk),
nla_get_u32(attrs[WGDEVICE_A_IFINDEX]));
else if (attrs[WGDEVICE_A_IFNAME])
dev = dev_get_by_name(sock_net(skb->sk),
nla_data(attrs[WGDEVICE_A_IFNAME]));
if (!dev)
return ERR_PTR(-ENODEV);
if (!dev->rtnl_link_ops || !dev->rtnl_link_ops->kind ||
strcmp(dev->rtnl_link_ops->kind, KBUILD_MODNAME)) {
dev_put(dev);
return ERR_PTR(-EOPNOTSUPP);
}
return netdev_priv(dev);
}
static int get_allowedips(struct sk_buff *skb, const u8 *ip, u8 cidr,
int family)
{
struct nlattr *allowedip_nest;
allowedip_nest = nla_nest_start(skb, 0);
if (!allowedip_nest)
return -EMSGSIZE;
if (nla_put_u8(skb, WGALLOWEDIP_A_CIDR_MASK, cidr) ||
nla_put_u16(skb, WGALLOWEDIP_A_FAMILY, family) ||
nla_put(skb, WGALLOWEDIP_A_IPADDR, family == AF_INET6 ?
sizeof(struct in6_addr) : sizeof(struct in_addr), ip)) {
nla_nest_cancel(skb, allowedip_nest);
return -EMSGSIZE;
}
nla_nest_end(skb, allowedip_nest);
return 0;
}
struct dump_ctx {
struct wg_device *wg;
struct wg_peer *next_peer;
u64 allowedips_seq;
struct allowedips_node *next_allowedip;
};
#define DUMP_CTX(cb) ((struct dump_ctx *)(cb)->args)
static int
get_peer(struct wg_peer *peer, struct sk_buff *skb, struct dump_ctx *ctx)
{
struct nlattr *allowedips_nest, *peer_nest = nla_nest_start(skb, 0);
struct allowedips_node *allowedips_node = ctx->next_allowedip;
bool fail;
if (!peer_nest)
return -EMSGSIZE;
down_read(&peer->handshake.lock);
fail = nla_put(skb, WGPEER_A_PUBLIC_KEY, NOISE_PUBLIC_KEY_LEN,
peer->handshake.remote_static);
up_read(&peer->handshake.lock);
if (fail)
goto err;
if (!allowedips_node) {
const struct __kernel_timespec last_handshake = {
.tv_sec = peer->walltime_last_handshake.tv_sec,
.tv_nsec = peer->walltime_last_handshake.tv_nsec
};
down_read(&peer->handshake.lock);
fail = nla_put(skb, WGPEER_A_PRESHARED_KEY,
NOISE_SYMMETRIC_KEY_LEN,
peer->handshake.preshared_key);
up_read(&peer->handshake.lock);
if (fail)
goto err;
if (nla_put(skb, WGPEER_A_LAST_HANDSHAKE_TIME,
sizeof(last_handshake), &last_handshake) ||
nla_put_u16(skb, WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL,
peer->persistent_keepalive_interval) ||
nla_put_u64_64bit(skb, WGPEER_A_TX_BYTES, peer->tx_bytes,
WGPEER_A_UNSPEC) ||
nla_put_u64_64bit(skb, WGPEER_A_RX_BYTES, peer->rx_bytes,
WGPEER_A_UNSPEC) ||
nla_put_u32(skb, WGPEER_A_PROTOCOL_VERSION, 1))
goto err;
read_lock_bh(&peer->endpoint_lock);
if (peer->endpoint.addr.sa_family == AF_INET)
fail = nla_put(skb, WGPEER_A_ENDPOINT,
sizeof(peer->endpoint.addr4),
&peer->endpoint.addr4);
else if (peer->endpoint.addr.sa_family == AF_INET6)
fail = nla_put(skb, WGPEER_A_ENDPOINT,
sizeof(peer->endpoint.addr6),
&peer->endpoint.addr6);
read_unlock_bh(&peer->endpoint_lock);
if (fail)
goto err;
allowedips_node =
list_first_entry_or_null(&peer->allowedips_list,
struct allowedips_node, peer_list);
}
if (!allowedips_node)
goto no_allowedips;
if (!ctx->allowedips_seq)
ctx->allowedips_seq = ctx->wg->peer_allowedips.seq;
else if (ctx->allowedips_seq != ctx->wg->peer_allowedips.seq)
goto no_allowedips;
allowedips_nest = nla_nest_start(skb, WGPEER_A_ALLOWEDIPS);
if (!allowedips_nest)
goto err;
list_for_each_entry_from(allowedips_node, &peer->allowedips_list,
peer_list) {
u8 cidr, ip[16] __aligned(__alignof(u64));
int family;
family = wg_allowedips_read_node(allowedips_node, ip, &cidr);
if (get_allowedips(skb, ip, cidr, family)) {
nla_nest_end(skb, allowedips_nest);
nla_nest_end(skb, peer_nest);
ctx->next_allowedip = allowedips_node;
return -EMSGSIZE;
}
}
nla_nest_end(skb, allowedips_nest);
no_allowedips:
nla_nest_end(skb, peer_nest);
ctx->next_allowedip = NULL;
ctx->allowedips_seq = 0;
return 0;
err:
nla_nest_cancel(skb, peer_nest);
return -EMSGSIZE;
}
int wg_get_device_start(struct netlink_callback *cb)
{
struct wg_device *wg;
wg = lookup_interface(genl_info_dump(cb)->attrs, cb->skb);
if (IS_ERR(wg))
return PTR_ERR(wg);
DUMP_CTX(cb)->wg = wg;
return 0;
}
int wg_get_device_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
{
struct wg_peer *peer, *next_peer_cursor;
struct dump_ctx *ctx = DUMP_CTX(cb);
struct wg_device *wg = ctx->wg;
struct nlattr *peers_nest;
int ret = -EMSGSIZE;
bool done = true;
void *hdr;
rtnl_lock();
mutex_lock(&wg->device_update_lock);
cb->seq = wg->device_update_gen;
next_peer_cursor = ctx->next_peer;
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&genl_family, NLM_F_MULTI, WG_CMD_GET_DEVICE);
if (!hdr)
goto out;
genl_dump_check_consistent(cb, hdr);
if (!ctx->next_peer) {
if (nla_put_u16(skb, WGDEVICE_A_LISTEN_PORT,
wg->incoming_port) ||
nla_put_u32(skb, WGDEVICE_A_FWMARK, wg->fwmark) ||
nla_put_u32(skb, WGDEVICE_A_IFINDEX, wg->dev->ifindex) ||
nla_put_string(skb, WGDEVICE_A_IFNAME, wg->dev->name))
goto out;
down_read(&wg->static_identity.lock);
if (wg->static_identity.has_identity) {
if (nla_put(skb, WGDEVICE_A_PRIVATE_KEY,
NOISE_PUBLIC_KEY_LEN,
wg->static_identity.static_private) ||
nla_put(skb, WGDEVICE_A_PUBLIC_KEY,
NOISE_PUBLIC_KEY_LEN,
wg->static_identity.static_public)) {
up_read(&wg->static_identity.lock);
goto out;
}
}
up_read(&wg->static_identity.lock);
}
peers_nest = nla_nest_start(skb, WGDEVICE_A_PEERS);
if (!peers_nest)
goto out;
ret = 0;
lockdep_assert_held(&wg->device_update_lock);
/* If the last cursor was removed in peer_remove or peer_remove_all, then
* we just treat this the same as there being no more peers left. The
* reason is that seq_nr should indicate to userspace that this isn't a
* coherent dump anyway, so they'll try again.
*/
if (list_empty(&wg->peer_list) ||
(ctx->next_peer && ctx->next_peer->is_dead)) {
nla_nest_cancel(skb, peers_nest);
goto out;
}
peer = list_prepare_entry(ctx->next_peer, &wg->peer_list, peer_list);
list_for_each_entry_continue(peer, &wg->peer_list, peer_list) {
if (get_peer(peer, skb, ctx)) {
done = false;
break;
}
next_peer_cursor = peer;
}
nla_nest_end(skb, peers_nest);
out:
if (!ret && !done && next_peer_cursor)
wg_peer_get(next_peer_cursor);
wg_peer_put(ctx->next_peer);
mutex_unlock(&wg->device_update_lock);
rtnl_unlock();
if (ret) {
genlmsg_cancel(skb, hdr);
return ret;
}
genlmsg_end(skb, hdr);
if (done) {
ctx->next_peer = NULL;
return 0;
}
ctx->next_peer = next_peer_cursor;
return skb->len;
/* At this point, we can't really deal ourselves with safely zeroing out
* the private key material after usage. This will need an additional API
* in the kernel for marking skbs as zero_on_free.
*/
}
int wg_get_device_done(struct netlink_callback *cb)
{
struct dump_ctx *ctx = DUMP_CTX(cb);
if (ctx->wg)
dev_put(ctx->wg->dev);
wg_peer_put(ctx->next_peer);
return 0;
}
static int set_port(struct wg_device *wg, u16 port)
{
struct wg_peer *peer;
if (wg->incoming_port == port)
return 0;
list_for_each_entry(peer, &wg->peer_list, peer_list)
wg_socket_clear_peer_endpoint_src(peer);
if (!netif_running(wg->dev)) {
wg->incoming_port = port;
return 0;
}
return wg_socket_init(wg, port);
}
static int set_allowedip(struct wg_peer *peer, struct nlattr **attrs)
{
int ret = -EINVAL;
u32 flags = 0;
u16 family;
u8 cidr;
if (!attrs[WGALLOWEDIP_A_FAMILY] || !attrs[WGALLOWEDIP_A_IPADDR] ||
!attrs[WGALLOWEDIP_A_CIDR_MASK])
return ret;
family = nla_get_u16(attrs[WGALLOWEDIP_A_FAMILY]);
cidr = nla_get_u8(attrs[WGALLOWEDIP_A_CIDR_MASK]);
if (attrs[WGALLOWEDIP_A_FLAGS])
flags = nla_get_u32(attrs[WGALLOWEDIP_A_FLAGS]);
if (family == AF_INET && cidr <= 32 &&
nla_len(attrs[WGALLOWEDIP_A_IPADDR]) == sizeof(struct in_addr)) {
if (flags & WGALLOWEDIP_F_REMOVE_ME)
ret = wg_allowedips_remove_v4(&peer->device->peer_allowedips,
nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr,
peer, &peer->device->device_update_lock);
else
ret = wg_allowedips_insert_v4(&peer->device->peer_allowedips,
nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr,
peer, &peer->device->device_update_lock);
} else if (family == AF_INET6 && cidr <= 128 &&
nla_len(attrs[WGALLOWEDIP_A_IPADDR]) == sizeof(struct in6_addr)) {
if (flags & WGALLOWEDIP_F_REMOVE_ME)
ret = wg_allowedips_remove_v6(&peer->device->peer_allowedips,
nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr,
peer, &peer->device->device_update_lock);
else
ret = wg_allowedips_insert_v6(&peer->device->peer_allowedips,
nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr,
peer, &peer->device->device_update_lock);
}
return ret;
}
static int set_peer(struct wg_device *wg, struct nlattr **attrs)
{
u8 *public_key = NULL, *preshared_key = NULL;
struct wg_peer *peer = NULL;
u32 flags = 0;
int ret;
ret = -EINVAL;
if (attrs[WGPEER_A_PUBLIC_KEY] &&
nla_len(attrs[WGPEER_A_PUBLIC_KEY]) == NOISE_PUBLIC_KEY_LEN)
public_key = nla_data(attrs[WGPEER_A_PUBLIC_KEY]);
else
goto out;
if (attrs[WGPEER_A_PRESHARED_KEY] &&
nla_len(attrs[WGPEER_A_PRESHARED_KEY]) == NOISE_SYMMETRIC_KEY_LEN)
preshared_key = nla_data(attrs[WGPEER_A_PRESHARED_KEY]);
if (attrs[WGPEER_A_FLAGS])
flags = nla_get_u32(attrs[WGPEER_A_FLAGS]);
ret = -EPFNOSUPPORT;
if (attrs[WGPEER_A_PROTOCOL_VERSION]) {
if (nla_get_u32(attrs[WGPEER_A_PROTOCOL_VERSION]) != 1)
goto out;
}
peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable,
nla_data(attrs[WGPEER_A_PUBLIC_KEY]));
ret = 0;
if (!peer) { /* Peer doesn't exist yet. Add a new one. */
if (flags & (WGPEER_F_REMOVE_ME | WGPEER_F_UPDATE_ONLY))
goto out;
/* The peer is new, so there aren't allowed IPs to remove. */
flags &= ~WGPEER_F_REPLACE_ALLOWEDIPS;
down_read(&wg->static_identity.lock);
if (wg->static_identity.has_identity &&
!memcmp(nla_data(attrs[WGPEER_A_PUBLIC_KEY]),
wg->static_identity.static_public,
NOISE_PUBLIC_KEY_LEN)) {
/* We silently ignore peers that have the same public
* key as the device. The reason we do it silently is
* that we'd like for people to be able to reuse the
* same set of API calls across peers.
*/
up_read(&wg->static_identity.lock);
ret = 0;
goto out;
}
up_read(&wg->static_identity.lock);
peer = wg_peer_create(wg, public_key, preshared_key);
if (IS_ERR(peer)) {
ret = PTR_ERR(peer);
peer = NULL;
goto out;
}
/* Take additional reference, as though we've just been
* looked up.
*/
wg_peer_get(peer);
}
if (flags & WGPEER_F_REMOVE_ME) {
wg_peer_remove(peer);
goto out;
}
if (preshared_key) {
down_write(&peer->handshake.lock);
memcpy(&peer->handshake.preshared_key, preshared_key,
NOISE_SYMMETRIC_KEY_LEN);
up_write(&peer->handshake.lock);
}
if (attrs[WGPEER_A_ENDPOINT]) {
struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
struct endpoint endpoint = { { { 0 } } };
if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET) {
endpoint.addr4 = *(struct sockaddr_in *)addr;
wg_socket_set_peer_endpoint(peer, &endpoint);
} else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6) {
endpoint.addr6 = *(struct sockaddr_in6 *)addr;
wg_socket_set_peer_endpoint(peer, &endpoint);
}
}
if (flags & WGPEER_F_REPLACE_ALLOWEDIPS)
wg_allowedips_remove_by_peer(&wg->peer_allowedips, peer,
&wg->device_update_lock);
if (attrs[WGPEER_A_ALLOWEDIPS]) {
struct nlattr *attr, *allowedip[WGALLOWEDIP_A_MAX + 1];
int rem;
nla_for_each_nested(attr, attrs[WGPEER_A_ALLOWEDIPS], rem) {
ret = nla_parse_nested(allowedip, WGALLOWEDIP_A_MAX,
attr, NULL, NULL);
if (ret < 0)
goto out;
ret = set_allowedip(peer, allowedip);
if (ret < 0)
goto out;
}
}
if (attrs[WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL]) {
const u16 persistent_keepalive_interval = nla_get_u16(
attrs[WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL]);
const bool send_keepalive =
!peer->persistent_keepalive_interval &&
persistent_keepalive_interval &&
netif_running(wg->dev);
peer->persistent_keepalive_interval = persistent_keepalive_interval;
if (send_keepalive)
wg_packet_send_keepalive(peer);
}
if (netif_running(wg->dev))
wg_packet_send_staged_packets(peer);
out:
wg_peer_put(peer);
if (attrs[WGPEER_A_PRESHARED_KEY])
memzero_explicit(nla_data(attrs[WGPEER_A_PRESHARED_KEY]),
nla_len(attrs[WGPEER_A_PRESHARED_KEY]));
return ret;
}
int wg_set_device_doit(struct sk_buff *skb, struct genl_info *info)
{
struct wg_device *wg = lookup_interface(info->attrs, skb);
u32 flags = 0;
int ret;
if (IS_ERR(wg)) {
ret = PTR_ERR(wg);
goto out_nodev;
}
rtnl_lock();
mutex_lock(&wg->device_update_lock);
if (info->attrs[WGDEVICE_A_FLAGS])
flags = nla_get_u32(info->attrs[WGDEVICE_A_FLAGS]);
if (info->attrs[WGDEVICE_A_LISTEN_PORT] || info->attrs[WGDEVICE_A_FWMARK]) {
struct net *net;
rcu_read_lock();
net = rcu_dereference(wg->creating_net);
ret = !net || !ns_capable(net->user_ns, CAP_NET_ADMIN) ? -EPERM : 0;
rcu_read_unlock();
if (ret)
goto out;
}
++wg->device_update_gen;
if (info->attrs[WGDEVICE_A_FWMARK]) {
struct wg_peer *peer;
wg->fwmark = nla_get_u32(info->attrs[WGDEVICE_A_FWMARK]);
list_for_each_entry(peer, &wg->peer_list, peer_list)
wg_socket_clear_peer_endpoint_src(peer);
}
if (info->attrs[WGDEVICE_A_LISTEN_PORT]) {
ret = set_port(wg,
nla_get_u16(info->attrs[WGDEVICE_A_LISTEN_PORT]));
if (ret)
goto out;
}
if (flags & WGDEVICE_F_REPLACE_PEERS)
wg_peer_remove_all(wg);
if (info->attrs[WGDEVICE_A_PRIVATE_KEY] &&
nla_len(info->attrs[WGDEVICE_A_PRIVATE_KEY]) ==
NOISE_PUBLIC_KEY_LEN) {
u8 *private_key = nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]);
u8 public_key[NOISE_PUBLIC_KEY_LEN];
struct wg_peer *peer, *temp;
bool send_staged_packets;
if (!crypto_memneq(wg->static_identity.static_private,
private_key, NOISE_PUBLIC_KEY_LEN))
goto skip_set_private_key;
/* We remove before setting, to prevent race, which means doing
* two 25519-genpub ops.
*/
if (curve25519_generate_public(public_key, private_key)) {
peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable,
public_key);
if (peer) {
wg_peer_put(peer);
wg_peer_remove(peer);
}
}
down_write(&wg->static_identity.lock);
send_staged_packets = !wg->static_identity.has_identity && netif_running(wg->dev);
wg_noise_set_static_identity_private_key(&wg->static_identity, private_key);
send_staged_packets = send_staged_packets && wg->static_identity.has_identity;
wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
wg_noise_precompute_static_static(peer);
wg_noise_expire_current_peer_keypairs(peer);
if (send_staged_packets)
wg_packet_send_staged_packets(peer);
}
up_write(&wg->static_identity.lock);
}
skip_set_private_key:
if (info->attrs[WGDEVICE_A_PEERS]) {
struct nlattr *attr, *peer[WGPEER_A_MAX + 1];
int rem;
nla_for_each_nested(attr, info->attrs[WGDEVICE_A_PEERS], rem) {
ret = nla_parse_nested(peer, WGPEER_A_MAX, attr,
NULL, NULL);
if (ret < 0)
goto out;
ret = set_peer(wg, peer);
if (ret < 0)
goto out;
}
}
ret = 0;
out:
mutex_unlock(&wg->device_update_lock);
rtnl_unlock();
dev_put(wg->dev);
out_nodev:
if (info->attrs[WGDEVICE_A_PRIVATE_KEY])
memzero_explicit(nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]),
nla_len(info->attrs[WGDEVICE_A_PRIVATE_KEY]));
return ret;
}
static struct genl_family genl_family __ro_after_init = {
.split_ops = wireguard_nl_ops,
.n_split_ops = ARRAY_SIZE(wireguard_nl_ops),
.name = WG_GENL_NAME,
.version = WG_GENL_VERSION,
.module = THIS_MODULE,
.netnsok = true
};
int __init wg_genetlink_init(void)
{
BUILD_BUG_ON(WG_KEY_LEN != NOISE_PUBLIC_KEY_LEN);
BUILD_BUG_ON(WG_KEY_LEN != NOISE_SYMMETRIC_KEY_LEN);
return genl_register_family(&genl_family);
}
void __exit wg_genetlink_uninit(void)
{
genl_unregister_family(&genl_family);
}
] |
| 13/2 | 2026/02/05 10:25 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 18882 output: 17 thoughts: 596 Thoughts: **Analyzing a Potential Race Condition in WireGuard Peer Statistics** Okay, so I'm looking at this section in `get_peer`, specifically the export of `tx_bytes` via netlink for statistics, and immediately I'm thinking about that potential race condition, especially with the `peer->tx_bytes += skb_len;` increment in `wg_socket_send_skb_to_peer`. Statistically, losing an update here and there *should* be fine for a stats counter, but this is WireGuard, known for its meticulous design. My initial thought is to check if other fields are involved in this race. The report shows the value changing by 0x20 bytes, which corresponds to the length of the `skb`. `tx_bytes` is a 64-bit value, so I need to assess whether the write is atomic. On x86_64, 64-bit aligned writes *are* atomic, but is `tx_bytes` aligned? Let's see... the `codesearch-struct-layout` output confirms that `tx_bytes` is at offset 856 bytes, which is a multiple of 8, so it *is* 64-bit aligned provided the struct itself is 64-bit aligned. Given the structure of `wg_peer`, with all its 64-bit fields and pointers, I'm confident it is. So, if it's just a statistics counter with the occasional lost update, it's generally not a huge deal. But, again, this is WireGuard, so I wonder if an `atomic64_t` was overlooked. This per-peer counter is accessed by multiple CPUs simultaneously as packets are sent. So, the question remains: is the lack of atomicity here a deliberate decision? It's common in network drivers to use non-atomic increments for per-CPU statistics, but this is different, as it's per-peer and potentially multi-CPU. Alright, let's look at `rx_bytes` to see if it shares the same potential issue. That's at offset 848 bytes, also 64-bit aligned. I should check how and where it is being updated. |
| 14/2 | 2026/02/05 10:25 | tool | grepper |
Args:map[Expression:rx_bytes] Results: map[Output:
Full output is too long, showing 500 out of 8912 lines.
Use more precise expression if possible.
[Documentation/ABI/testing/sysfs-class-net-statistics=13=Description:
--
Documentation/ABI/testing/sysfs-class-net-statistics-16-
Documentation/ABI/testing/sysfs-class-net-statistics:17:What: /sys/class/net/<iface>/statistics/rx_bytes
Documentation/ABI/testing/sysfs-class-net-statistics-18-Date: April 2005
--
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst=179=successfully transmitted from the interface.
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst-180-
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst:181:"rx_bytes" is equivalent to aOctetsReceivedOK defined in IEEE 802.3-2012,
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst-182-Section 5.2.2.1.14. This statistic is the count of data and pad bytes
--
Documentation/networking/device_drivers/ethernet/intel/ice.rst=95=https://kernel.org/pub/software/network/ethtool/
Documentation/networking/device_drivers/ethernet/intel/ice.rst-96-
Documentation/networking/device_drivers/ethernet/intel/ice.rst:97:NOTE: The rx_bytes value of ethtool does not match the rx_bytes value of
Documentation/networking/device_drivers/ethernet/intel/ice.rst-98-Netdev, due to the 4-byte CRC being stripped by the device. The difference
Documentation/networking/device_drivers/ethernet/intel/ice.rst:99:between the two rx_bytes values will be 4 x the number of Rx packets. For
Documentation/networking/device_drivers/ethernet/intel/ice.rst-100-example, if Rx packets are 10 and Netdev (software statistics) displays
Documentation/networking/device_drivers/ethernet/intel/ice.rst:101:rx_bytes as "X", then ethtool (hardware statistics) will display rx_bytes as
Documentation/networking/device_drivers/ethernet/intel/ice.rst-102-"X+40" (4 bytes CRC x 10 packets).
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst=743=Counters on the NIC port that is connected to a eSwitch.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-870-
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst:871: * - `rx_bytes`
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-872- - Representor only: bytes received, that were handled by the hypervisor.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst=894=like flow control, FEC and more.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-912-
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst:913: * - `rx_bytes_phy`
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-914- - The number of bytes received on the physical port, including Ethernet
--
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst=263=Following device statistics are available through the ``ethtool -S`` interface:
--
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst-296-
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst:297: * - dev_rx_bytes
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst-298- - 3
--
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst=144=Driver port specific::
--
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst-148- rx_packets: 5
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst:149: rx_bytes: 414
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst-150- tx_tso: 0
--
arch/m68k/emu/nfeth.c=90=static inline void recv_packet(struct net_device *dev)
--
arch/m68k/emu/nfeth.c-123- dev->stats.rx_packets++;
arch/m68k/emu/nfeth.c:124: dev->stats.rx_bytes += pktlen;
arch/m68k/emu/nfeth.c-125-
--
arch/s390/appldata/appldata_net_sum.c=29=struct appldata_net_sum_data {
--
arch/s390/appldata/appldata_net_sum.c-43- u64 tx_packets; /* total packets transmitted */
arch/s390/appldata/appldata_net_sum.c:44: u64 rx_bytes; /* total bytes received */
arch/s390/appldata/appldata_net_sum.c-45- u64 tx_bytes; /* total bytes transmitted */
--
arch/s390/appldata/appldata_net_sum.c=59=static void appldata_get_net_sum_data(void *data)
--
arch/s390/appldata/appldata_net_sum.c-63- struct net_device *dev;
arch/s390/appldata/appldata_net_sum.c:64: unsigned long rx_packets, tx_packets, rx_bytes, tx_bytes, rx_errors,
arch/s390/appldata/appldata_net_sum.c-65- tx_errors, rx_dropped, tx_dropped, collisions;
--
arch/s390/appldata/appldata_net_sum.c-72- tx_packets = 0;
arch/s390/appldata/appldata_net_sum.c:73: rx_bytes = 0;
arch/s390/appldata/appldata_net_sum.c-74- tx_bytes = 0;
--
arch/s390/appldata/appldata_net_sum.c-88- tx_packets += stats->tx_packets;
arch/s390/appldata/appldata_net_sum.c:89: rx_bytes += stats->rx_bytes;
arch/s390/appldata/appldata_net_sum.c-90- tx_bytes += stats->tx_bytes;
--
arch/s390/appldata/appldata_net_sum.c-102- net_data->tx_packets = tx_packets;
arch/s390/appldata/appldata_net_sum.c:103: net_data->rx_bytes = rx_bytes;
arch/s390/appldata/appldata_net_sum.c-104- net_data->tx_bytes = tx_bytes;
--
arch/s390/include/asm/pci.h=53=struct zpci_fmb_fmt1 {
arch/s390/include/asm/pci.h:54: u64 rx_bytes;
arch/s390/include/asm/pci.h-55- u64 rx_packets;
--
arch/s390/pci/pci_debug.c=92=static int pci_perf_show(struct seq_file *m, void *v)
--
arch/s390/pci/pci_debug.c-122- pci_fmb_show(m, pci_fmt1_names, ARRAY_SIZE(pci_fmt1_names),
arch/s390/pci/pci_debug.c:123: &zdev->fmb->fmt1.rx_bytes);
arch/s390/pci/pci_debug.c-124- break;
--
arch/um/drivers/vector_kern.c=832=static int vector_legacy_rx(struct vector_private *vp)
--
arch/um/drivers/vector_kern.c-888- skb->protocol = eth_type_trans(skb, skb->dev);
arch/um/drivers/vector_kern.c:889: vp->dev->stats.rx_bytes += skb->len;
arch/um/drivers/vector_kern.c-890- vp->dev->stats.rx_packets++;
--
arch/um/drivers/vector_kern.c=950=static int vector_mmsg_rx(struct vector_private *vp, int budget)
--
arch/um/drivers/vector_kern.c-1016- */
arch/um/drivers/vector_kern.c:1017: vp->dev->stats.rx_bytes += skb->len;
arch/um/drivers/vector_kern.c-1018- vp->dev->stats.rx_packets++;
--
arch/xtensa/platforms/iss/network.c=272=static int iss_net_rx(struct net_device *dev)
--
arch/xtensa/platforms/iss/network.c-306- spin_lock_bh(&lp->lock);
arch/xtensa/platforms/iss/network.c:307: lp->stats.rx_bytes += skb->len;
arch/xtensa/platforms/iss/network.c-308- lp->stats.rx_packets++;
--
drivers/firewire/net.c=478=static int fwnet_finish_incoming_packet(struct net_device *net,
--
drivers/firewire/net.c-543- net->stats.rx_packets++;
drivers/firewire/net.c:544: net->stats.rx_bytes += len;
drivers/firewire/net.c-545- }
--
drivers/hid/intel-ish-hid/ipc/ipc.c=647=irqreturn_t ish_irq_handler(int irq, void *dev_id)
--
drivers/hid/intel-ish-hid/ipc/ipc.c-687- ++dev->ipc_rx_cnt;
drivers/hid/intel-ish-hid/ipc/ipc.c:688: dev->ipc_rx_bytes_cnt += IPC_HEADER_GET_LENGTH(doorbell_val);
drivers/hid/intel-ish-hid/ipc/ipc.c-689-
--
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h=155=struct ishtp_device {
--
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h-257- unsigned int ipc_rx_cnt;
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h:258: unsigned long long ipc_rx_bytes_cnt;
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h-259- unsigned int ipc_tx_cnt;
--
drivers/hsi/clients/ssi_protocol.c=608=static void ssip_pn_rx(struct sk_buff *skb)
--
drivers/hsi/clients/ssi_protocol.c-625- dev->stats.rx_packets++;
drivers/hsi/clients/ssi_protocol.c:626: dev->stats.rx_bytes += skb->len;
drivers/hsi/clients/ssi_protocol.c-627-
--
drivers/i2c/busses/i2c-bcm-iproc.c=176=struct bcm_iproc_i2c_dev {
--
drivers/i2c/busses/i2c-bcm-iproc.c-201- /* bytes that have been read */
drivers/i2c/busses/i2c-bcm-iproc.c:202: unsigned int rx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-203- unsigned int thld_bytes;
--
drivers/i2c/busses/i2c-bcm-iproc.c=367=static void bcm_iproc_i2c_slave_read(struct bcm_iproc_i2c_dev *iproc_i2c)
--
drivers/i2c/busses/i2c-bcm-iproc.c-369- u8 rx_data, rx_status;
drivers/i2c/busses/i2c-bcm-iproc.c:370: u32 rx_bytes = 0;
drivers/i2c/busses/i2c-bcm-iproc.c-371- u32 val;
drivers/i2c/busses/i2c-bcm-iproc.c-372-
drivers/i2c/busses/i2c-bcm-iproc.c:373: while (rx_bytes < MAX_SLAVE_RX_PER_INT) {
drivers/i2c/busses/i2c-bcm-iproc.c-374- val = iproc_i2c_rd_reg(iproc_i2c, S_RX_OFFSET);
--
drivers/i2c/busses/i2c-bcm-iproc.c-404-
drivers/i2c/busses/i2c-bcm-iproc.c:405: rx_bytes++;
drivers/i2c/busses/i2c-bcm-iproc.c-406- }
--
drivers/i2c/busses/i2c-bcm-iproc.c=549=static void bcm_iproc_i2c_read_valid_bytes(struct bcm_iproc_i2c_dev *iproc_i2c)
--
drivers/i2c/busses/i2c-bcm-iproc.c-554- /* Read valid data from RX FIFO */
drivers/i2c/busses/i2c-bcm-iproc.c:555: while (iproc_i2c->rx_bytes < msg->len) {
drivers/i2c/busses/i2c-bcm-iproc.c-556- val = iproc_i2c_rd_reg(iproc_i2c, M_RX_OFFSET);
--
drivers/i2c/busses/i2c-bcm-iproc.c-561-
drivers/i2c/busses/i2c-bcm-iproc.c:562: msg->buf[iproc_i2c->rx_bytes] =
drivers/i2c/busses/i2c-bcm-iproc.c-563- (val >> M_RX_DATA_SHIFT) & M_RX_DATA_MASK;
drivers/i2c/busses/i2c-bcm-iproc.c:564: iproc_i2c->rx_bytes++;
drivers/i2c/busses/i2c-bcm-iproc.c-565- }
--
drivers/i2c/busses/i2c-bcm-iproc.c=609=static void bcm_iproc_i2c_read(struct bcm_iproc_i2c_dev *iproc_i2c)
--
drivers/i2c/busses/i2c-bcm-iproc.c-614- bcm_iproc_i2c_read_valid_bytes(iproc_i2c);
drivers/i2c/busses/i2c-bcm-iproc.c:615: bytes_left = msg->len - iproc_i2c->rx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-616- if (bytes_left == 0) {
--
drivers/i2c/busses/i2c-bcm-iproc.c=829=static int bcm_iproc_i2c_xfer_internal(struct bcm_iproc_i2c_dev *iproc_i2c,
--
drivers/i2c/busses/i2c-bcm-iproc.c-918-
drivers/i2c/busses/i2c-bcm-iproc.c:919: iproc_i2c->rx_bytes = 0;
drivers/i2c/busses/i2c-bcm-iproc.c-920- if (msg->len > M_RX_FIFO_MAX_THLD_VALUE)
--
drivers/i2c/busses/i2c-npcm7xx.c=525=struct npcm_i2c_data {
--
drivers/i2c/busses/i2c-npcm7xx.c-528- u8 txf_sts_tx_bytes;
drivers/i2c/busses/i2c-npcm7xx.c:529: u8 rxf_sts_rx_bytes;
drivers/i2c/busses/i2c-npcm7xx.c-530- u8 rxf_ctl_last_pec;
--
drivers/i2c/busses/i2c-npcm7xx.c=533=static const struct npcm_i2c_data npxm7xx_i2c_data = {
--
drivers/i2c/busses/i2c-npcm7xx.c-536- .txf_sts_tx_bytes = GENMASK(4, 0),
drivers/i2c/busses/i2c-npcm7xx.c:537: .rxf_sts_rx_bytes = GENMASK(4, 0),
drivers/i2c/busses/i2c-npcm7xx.c-538- .rxf_ctl_last_pec = BIT(5),
--
drivers/i2c/busses/i2c-npcm7xx.c=541=static const struct npcm_i2c_data npxm8xx_i2c_data = {
--
drivers/i2c/busses/i2c-npcm7xx.c-544- .txf_sts_tx_bytes = GENMASK(5, 0),
drivers/i2c/busses/i2c-npcm7xx.c:545: .rxf_sts_rx_bytes = GENMASK(5, 0),
drivers/i2c/busses/i2c-npcm7xx.c-546- .rxf_ctl_last_pec = BIT(7),
--
drivers/i2c/busses/i2c-npcm7xx.c=728=static inline bool npcm_i2c_rx_fifo_full(struct npcm_i2c *bus)
--
drivers/i2c/busses/i2c-npcm7xx.c-733- /* check if RX FIFO is not empty: */
drivers/i2c/busses/i2c-npcm7xx.c:734: if ((rx_fifo_sts & bus->data->rxf_sts_rx_bytes) == 0)
drivers/i2c/busses/i2c-npcm7xx.c-735- return false;
--
drivers/i2c/busses/i2c-npcm7xx.c=1015=static u8 npcm_i2c_fifo_usage(struct npcm_i2c *bus)
--
drivers/i2c/busses/i2c-npcm7xx.c-1020- if (bus->operation == I2C_READ_OPER)
drivers/i2c/busses/i2c-npcm7xx.c:1021: return (bus->data->rxf_sts_rx_bytes &
drivers/i2c/busses/i2c-npcm7xx.c-1022- ioread8(bus->reg + NPCM_I2CRXF_STS));
--
drivers/i2c/busses/i2c-qup.c=158=MODULE_PARM_DESC(scl_freq, "SCL frequency override");
--
drivers/i2c/busses/i2c-qup.c-182- * send_last_word: for tx FIFO, last word send is pending in current block
drivers/i2c/busses/i2c-qup.c:183: * rx_bytes_read: if all the bytes have been read from rx FIFO.
drivers/i2c/busses/i2c-qup.c-184- * rx_tags_fetched: all the rx tag bytes have been fetched from rx fifo word
--
drivers/i2c/busses/i2c-qup.c=189=struct qup_i2c_block {
--
drivers/i2c/busses/i2c-qup.c-208- bool rx_tags_fetched;
drivers/i2c/busses/i2c-qup.c:209: bool rx_bytes_read;
drivers/i2c/busses/i2c-qup.c-210- bool is_tx_blk_mode;
--
drivers/i2c/busses/i2c-qup.c=288=static irqreturn_t qup_i2c_interrupt(int irq, void *dev)
--
drivers/i2c/busses/i2c-qup.c-362- if (qup->msg->flags & I2C_M_RD) {
drivers/i2c/busses/i2c-qup.c:363: if (!blk->rx_bytes_read)
drivers/i2c/busses/i2c-qup.c-364- return IRQ_HANDLED;
--
drivers/i2c/busses/i2c-qup.c=946=static void qup_i2c_read_rx_fifo_v1(struct qup_i2c_dev *qup)
--
drivers/i2c/busses/i2c-qup.c-965- if (qup->pos == msg->len)
drivers/i2c/busses/i2c-qup.c:966: blk->rx_bytes_read = true;
drivers/i2c/busses/i2c-qup.c-967-}
--
drivers/i2c/busses/i2c-qup.c=1018=static void qup_i2c_clear_blk_v1(struct qup_i2c_block *blk)
--
drivers/i2c/busses/i2c-qup.c-1021- blk->fifo_available = 0;
drivers/i2c/busses/i2c-qup.c:1022: blk->rx_bytes_read = false;
drivers/i2c/busses/i2c-qup.c-1023-}
--
drivers/i2c/busses/i2c-qup.c=1205=static void qup_i2c_clear_blk_v2(struct qup_i2c_block *blk)
--
drivers/i2c/busses/i2c-qup.c-1213- blk->rx_tags_fetched = false;
drivers/i2c/busses/i2c-qup.c:1214: blk->rx_bytes_read = false;
drivers/i2c/busses/i2c-qup.c-1215- blk->rx_fifo_data = 0;
--
drivers/i2c/busses/i2c-qup.c=1245=static void qup_i2c_recv_tags(struct qup_i2c_dev *qup)
--
drivers/i2c/busses/i2c-qup.c-1260- * 2. Read the data from RX FIFO. When all the data bytes have been read then
drivers/i2c/busses/i2c-qup.c:1261: * set rx_bytes_read to true.
drivers/i2c/busses/i2c-qup.c-1262- */
drivers/i2c/busses/i2c-qup.c=1263=static void qup_i2c_read_rx_fifo_v2(struct qup_i2c_dev *qup)
--
drivers/i2c/busses/i2c-qup.c-1273- if (!blk->cur_blk_len)
drivers/i2c/busses/i2c-qup.c:1274: blk->rx_bytes_read = true;
drivers/i2c/busses/i2c-qup.c-1275-}
--
drivers/infiniband/hw/bnxt_re/hw_counters.c=53=static const struct rdma_stat_desc bnxt_re_stat_descs[] = {
drivers/infiniband/hw/bnxt_re/hw_counters.c-54- [BNXT_RE_RX_PKTS].name = "rx_pkts",
drivers/infiniband/hw/bnxt_re/hw_counters.c:55: [BNXT_RE_RX_BYTES].name = "rx_bytes",
drivers/infiniband/hw/bnxt_re/hw_counters.c-56- [BNXT_RE_TX_PKTS].name = "tx_pkts",
--
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h=611=struct efa_admin_basic_stats {
--
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h-615-
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h:616: u64 rx_bytes;
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h-617-
--
drivers/infiniband/hw/efa/efa_com_cmd.c=765=int efa_com_get_stats(struct efa_com_dev *edev,
--
drivers/infiniband/hw/efa/efa_com_cmd.c-801- result->basic_stats.tx_pkts = bs->tx_pkts;
drivers/infiniband/hw/efa/efa_com_cmd.c:802: result->basic_stats.rx_bytes = bs->rx_bytes;
drivers/infiniband/hw/efa/efa_com_cmd.c-803- result->basic_stats.rx_pkts = bs->rx_pkts;
--
drivers/infiniband/hw/efa/efa_com_cmd.h=257=struct efa_com_basic_stats {
--
drivers/infiniband/hw/efa/efa_com_cmd.h-259- u64 tx_pkts;
drivers/infiniband/hw/efa/efa_com_cmd.h:260: u64 rx_bytes;
drivers/infiniband/hw/efa/efa_com_cmd.h-261- u64 rx_pkts;
--
drivers/infiniband/hw/efa/efa_verbs.c=29=struct efa_user_mmap_entry {
--
drivers/infiniband/hw/efa/efa_verbs.c-51- op(EFA_TX_PKTS, "tx_pkts") \
drivers/infiniband/hw/efa/efa_verbs.c:52: op(EFA_RX_BYTES, "rx_bytes") \
drivers/infiniband/hw/efa/efa_verbs.c-53- op(EFA_RX_PKTS, "rx_pkts") \
--
drivers/infiniband/hw/efa/efa_verbs.c=2227=static int efa_fill_port_stats(struct efa_dev *dev, struct rdma_hw_stats *stats,
--
drivers/infiniband/hw/efa/efa_verbs.c-2248- stats->value[EFA_TX_PKTS] = bs->tx_pkts;
drivers/infiniband/hw/efa/efa_verbs.c:2249: stats->value[EFA_RX_BYTES] = bs->rx_bytes;
drivers/infiniband/hw/efa/efa_verbs.c-2250- stats->value[EFA_RX_PKTS] = bs->rx_pkts;
--
drivers/infiniband/hw/erdma/erdma_hw.h=463=struct erdma_cmdq_query_stats_resp {
--
drivers/infiniband/hw/erdma/erdma_hw.h-472- u64 rx_packets_cnt;
drivers/infiniband/hw/erdma/erdma_hw.h:473: u64 rx_bytes_cnt;
drivers/infiniband/hw/erdma/erdma_hw.h-474- u64 rx_drop_packets_cnt;
--
drivers/infiniband/hw/erdma/erdma_verbs.c=2072=static const struct rdma_stat_desc erdma_descs[] = {
--
drivers/infiniband/hw/erdma/erdma_verbs.c-2079- [ERDMA_STATS_RX_PACKETS_CNT].name = "rx_packets_cnt",
drivers/infiniband/hw/erdma/erdma_verbs.c:2080: [ERDMA_STATS_RX_BYTES_CNT].name = "rx_bytes_cnt",
drivers/infiniband/hw/erdma/erdma_verbs.c-2081- [ERDMA_STATS_RX_DISABLE_DROP_CNT].name = "rx_disable_drop_cnt",
--
drivers/infiniband/hw/hfi1/vnic_main.c=37=static void hfi1_vnic_update_stats(struct hfi1_vnic_vport_info *vinfo,
--
drivers/infiniband/hw/hfi1/vnic_main.c-70- stats->netstats.rx_packets += qnstats->rx_packets;
drivers/infiniband/hw/hfi1/vnic_main.c:71: stats->netstats.rx_bytes += qnstats->rx_bytes;
drivers/infiniband/hw/hfi1/vnic_main.c-72- }
--
drivers/infiniband/hw/hfi1/vnic_main.c-92- netdev->stats.rx_packets = stats->netstats.rx_packets;
drivers/infiniband/hw/hfi1/vnic_main.c:93: netdev->stats.rx_bytes = stats->netstats.rx_bytes;
drivers/infiniband/hw/hfi1/vnic_main.c-94- netdev->stats.rx_fifo_errors = stats->netstats.rx_fifo_errors;
--
drivers/infiniband/hw/hfi1/vnic_main.c=152=static void hfi1_vnic_update_rx_counters(struct hfi1_vnic_vport_info *vinfo,
--
drivers/infiniband/hw/hfi1/vnic_main.c-160- stats->netstats.rx_packets++;
drivers/infiniband/hw/hfi1/vnic_main.c:161: stats->netstats.rx_bytes += skb->len + ETH_FCS_LEN;
drivers/infiniband/hw/hfi1/vnic_main.c-162-
--
drivers/infiniband/hw/mana/counters.c=8=static const struct rdma_stat_desc mana_ib_port_stats_desc[] = {
--
drivers/infiniband/hw/mana/counters.c-36- [MANA_IB_TX_BYTES].name = "tx_bytes",
drivers/infiniband/hw/mana/counters.c:37: [MANA_IB_RX_BYTES].name = "rx_bytes",
drivers/infiniband/hw/mana/counters.c-38- [MANA_IB_RX_SEND_REQ].name = "rx_send_requests",
--
drivers/infiniband/hw/mana/counters.c=100=static int mana_ib_get_hw_port_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
--
drivers/infiniband/hw/mana/counters.c-161- stats->value[MANA_IB_TX_BYTES] = resp.tx_bytes;
drivers/infiniband/hw/mana/counters.c:162: stats->value[MANA_IB_RX_BYTES] = resp.rx_bytes;
drivers/infiniband/hw/mana/counters.c-163- stats->value[MANA_IB_RX_SEND_REQ] = resp.rx_send_req;
--
drivers/infiniband/hw/mana/mana_ib.h=501=struct mana_rnic_query_vf_cntrs_resp {
--
drivers/infiniband/hw/mana/mana_ib.h-530- u64 tx_bytes;
drivers/infiniband/hw/mana/mana_ib.h:531: u64 rx_bytes;
drivers/infiniband/hw/mana/mana_ib.h-532- u64 rx_send_req;
--
drivers/infiniband/hw/mlx4/mad.c=892=static void edit_counter(struct mlx4_counter *cnt, void *counters,
--
drivers/infiniband/hw/mlx4/mad.c-903- ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
drivers/infiniband/hw/mlx4/mad.c:904: (be64_to_cpu(cnt->rx_bytes) >> 2));
drivers/infiniband/hw/mlx4/mad.c-905- ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
--
drivers/infiniband/hw/mlx4/mad.c-918- pma_cnt_ext->port_rcv_data =
drivers/infiniband/hw/mlx4/mad.c:919: cpu_to_be64(be64_to_cpu(cnt->rx_bytes) >> 2);
drivers/infiniband/hw/mlx4/mad.c-920- pma_cnt_ext->port_xmit_packets = cnt->tx_frames;
--
drivers/infiniband/hw/mlx5/counters.c=155=static const struct mlx5_ib_counter packets_op_cnts[] = {
--
drivers/infiniband/hw/mlx5/counters.c-158- INIT_OP_COUNTER(rdma_rx_packets, RDMA_RX_PACKETS),
drivers/infiniband/hw/mlx5/counters.c:159: INIT_OP_COUNTER(rdma_rx_bytes, RDMA_RX_BYTES),
drivers/infiniband/hw/mlx5/counters.c-160-};
--
drivers/infiniband/hw/mlx5/ib_virt.c=120=int mlx5_ib_get_vf_stats(struct ib_device *device, int vf,
--
drivers/infiniband/hw/mlx5/ib_virt.c-141- stats->tx_packets = MLX5_GET64_PR(query_vport_counter_out, out, transmitted_ib_unicast.packets);
drivers/infiniband/hw/mlx5/ib_virt.c:142: stats->rx_bytes = MLX5_GET64_PR(query_vport_counter_out, out, received_ib_unicast.octets);
drivers/infiniband/hw/mlx5/ib_virt.c-143- stats->tx_bytes = MLX5_GET64_PR(query_vport_counter_out, out, transmitted_ib_unicast.octets);
--
drivers/infiniband/ulp/ipoib/ipoib_cm.c=560=void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
--
drivers/infiniband/ulp/ipoib/ipoib_cm.c-671- ++dev->stats.rx_packets;
drivers/infiniband/ulp/ipoib/ipoib_cm.c:672: dev->stats.rx_bytes += skb->len;
drivers/infiniband/ulp/ipoib/ipoib_cm.c-673-
--
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c=48=static const struct ipoib_stats ipoib_gstrings_stats[] = {
--
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c-50- IPOIB_NETDEV_STAT(tx_packets),
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c:51: IPOIB_NETDEV_STAT(rx_bytes),
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c-52- IPOIB_NETDEV_STAT(tx_bytes),
--
drivers/infiniband/ulp/ipoib/ipoib_ib.c=174=static void ipoib_ib_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
--
drivers/infiniband/ulp/ipoib/ipoib_ib.c-258- ++dev->stats.rx_packets;
drivers/infiniband/ulp/ipoib/ipoib_ib.c:259: dev->stats.rx_bytes += skb->len;
drivers/infiniband/ulp/ipoib/ipoib_ib.c-260- if (skb->pkt_type == PACKET_MULTICAST)
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h=266=struct opa_veswport_mactable {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-282- * @tx_bytes: transmit bytes
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h:283: * @rx_bytes: receive bytes
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-284- * @tx_unicast: unicast packets transmitted
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h=310=struct opa_veswport_summary_counters {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-319- __be64 tx_bytes;
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h:320: __be64 rx_bytes;
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-321-
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c=69=static struct vnic_stats vnic_gstrings_stats[] = {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c-72- {"tx_packets", VNIC_STAT(netstats.tx_packets)},
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c:73: {"rx_bytes", VNIC_STAT(netstats.rx_bytes)},
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c-74- {"tx_bytes", VNIC_STAT(netstats.tx_bytes)},
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c=84=void opa_vnic_get_summary_counters(struct opa_vnic_adapter *adapter,
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c-104- cntrs->tx_bytes = cpu_to_be64(vstats.netstats.tx_bytes);
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c:105: cntrs->rx_bytes = cpu_to_be64(vstats.netstats.rx_bytes);
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c-106-
--
drivers/mailbox/riscv-sbi-mpxy-mbox.c=237=static int mpxy_send_message_with_resp(u32 channel_id, u32 msg_id,
--
drivers/mailbox/riscv-sbi-mpxy-mbox.c-242- struct mpxy_local *mpxy = this_cpu_ptr(&mpxy_local);
drivers/mailbox/riscv-sbi-mpxy-mbox.c:243: unsigned long rx_bytes;
drivers/mailbox/riscv-sbi-mpxy-mbox.c-244- struct sbiret sret;
--
drivers/mailbox/riscv-sbi-mpxy-mbox.c-259- if (rx && !sret.error) {
drivers/mailbox/riscv-sbi-mpxy-mbox.c:260: rx_bytes = sret.value;
drivers/mailbox/riscv-sbi-mpxy-mbox.c:261: if (rx_bytes > max_rx_len) {
drivers/mailbox/riscv-sbi-mpxy-mbox.c-262- put_cpu();
--
drivers/mailbox/riscv-sbi-mpxy-mbox.c-265-
drivers/mailbox/riscv-sbi-mpxy-mbox.c:266: memcpy(rx, mpxy->shmem, rx_bytes);
drivers/mailbox/riscv-sbi-mpxy-mbox.c-267- if (rx_len)
drivers/mailbox/riscv-sbi-mpxy-mbox.c:268: *rx_len = rx_bytes;
drivers/mailbox/riscv-sbi-mpxy-mbox.c-269- }
--
drivers/media/dvb-core/dvb_net.c=645=static void dvb_net_ule_check_crc(struct dvb_net_ule_handle *h,
--
drivers/media/dvb-core/dvb_net.c-754- h->dev->stats.rx_packets++;
drivers/media/dvb-core/dvb_net.c:755: h->dev->stats.rx_bytes += h->priv->ule_skb->len;
drivers/media/dvb-core/dvb_net.c-756- netif_rx(h->priv->ule_skb);
--
drivers/media/dvb-core/dvb_net.c=892=static void dvb_net_sec(struct net_device *dev,
--
drivers/media/dvb-core/dvb_net.c-979- stats->rx_packets++;
drivers/media/dvb-core/dvb_net.c:980: stats->rx_bytes+=skb->len;
drivers/media/dvb-core/dvb_net.c-981- netif_rx(skb);
--
drivers/media/pci/cobalt/cobalt-i2c.c=107=static int cobalt_tx_bytes(struct cobalt_i2c_regs __iomem *regs,
--
drivers/media/pci/cobalt/cobalt-i2c.c-164- */
drivers/media/pci/cobalt/cobalt-i2c.c:165:static int cobalt_rx_bytes(struct cobalt_i2c_regs __iomem *regs,
drivers/media/pci/cobalt/cobalt-i2c.c-166- struct i2c_adapter *adap, bool start, bool stop,
--
drivers/media/pci/cobalt/cobalt-i2c.c=230=static int cobalt_xfer(struct i2c_adapter *adap,
--
drivers/media/pci/cobalt/cobalt-i2c.c-265- /* read bytes into buffer */
drivers/media/pci/cobalt/cobalt-i2c.c:266: ret = cobalt_rx_bytes(regs, adap, false, stop,
drivers/media/pci/cobalt/cobalt-i2c.c-267- pmsg->buf, pmsg->len);
--
drivers/media/rc/ite-cir.c=224=static irqreturn_t ite_cir_isr(int irq, void *data)
--
drivers/media/rc/ite-cir.c-228- u8 rx_buf[ITE_RX_FIFO_LEN];
drivers/media/rc/ite-cir.c:229: int rx_bytes;
drivers/media/rc/ite-cir.c-230- int iflags;
--
drivers/media/rc/ite-cir.c-246- /* read the FIFO bytes */
drivers/media/rc/ite-cir.c:247: rx_bytes = dev->params->get_rx_bytes(dev, rx_buf,
drivers/media/rc/ite-cir.c-248- ITE_RX_FIFO_LEN);
drivers/media/rc/ite-cir.c-249-
drivers/media/rc/ite-cir.c:250: dev_dbg(&dev->rdev->dev, "interrupt %d RX bytes\n", rx_bytes);
drivers/media/rc/ite-cir.c-251-
drivers/media/rc/ite-cir.c:252: if (rx_bytes > 0) {
drivers/media/rc/ite-cir.c-253- /* drop the spinlock, since the ir-core layer
--
drivers/media/rc/ite-cir.c-258- /* decode the data we've just received */
drivers/media/rc/ite-cir.c:259: ite_decode_bytes(dev, rx_buf, rx_bytes);
drivers/media/rc/ite-cir.c-260-
--
drivers/media/rc/ite-cir.c=517=static void it87_set_carrier_params(struct ite_dev *dev, bool high_freq,
--
drivers/media/rc/ite-cir.c-544- * held */
drivers/media/rc/ite-cir.c:545:static int it87_get_rx_bytes(struct ite_dev *dev, u8 * buf, int buf_size)
drivers/media/rc/ite-cir.c-546-{
--
drivers/media/rc/ite-cir.c=707=static void it8708_set_carrier_params(struct ite_dev *dev, bool high_freq,
--
drivers/media/rc/ite-cir.c-747- * held */
drivers/media/rc/ite-cir.c:748:static int it8708_get_rx_bytes(struct ite_dev *dev, u8 * buf, int buf_size)
drivers/media/rc/ite-cir.c-749-{
--
drivers/media/rc/ite-cir.c=982=static void it8709_set_carrier_params(struct ite_dev *dev, bool high_freq,
--
drivers/media/rc/ite-cir.c-1016- * held */
drivers/media/rc/ite-cir.c:1017:static int it8709_get_rx_bytes(struct ite_dev *dev, u8 * buf, int buf_size)
drivers/media/rc/ite-cir.c-1018-{
--
drivers/media/rc/ite-cir.c=1207=static const struct ite_dev_params ite_dev_descs[] = {
--
drivers/media/rc/ite-cir.c-1217- .disable_rx = it87_idle_rx,
drivers/media/rc/ite-cir.c:1218: .get_rx_bytes = it87_get_rx_bytes,
drivers/media/rc/ite-cir.c-1219- .enable_tx_interrupt = it87_enable_tx_interrupt,
--
drivers/media/rc/ite-cir.c-1236- .disable_rx = it87_idle_rx,
drivers/media/rc/ite-cir.c:1237: .get_rx_bytes = it87_get_rx_bytes,
drivers/media/rc/ite-cir.c-1238- .enable_tx_interrupt = it87_enable_tx_interrupt,
--
drivers/media/rc/ite-cir.c-1255- .disable_rx = it8708_idle_rx,
drivers/media/rc/ite-cir.c:1256: .get_rx_bytes = it8708_get_rx_bytes,
drivers/media/rc/ite-cir.c-1257- .enable_tx_interrupt = it8708_enable_tx_interrupt,
--
drivers/media/rc/ite-cir.c-1275- .disable_rx = it8709_idle_rx,
drivers/media/rc/ite-cir.c:1276: .get_rx_bytes = it8709_get_rx_bytes,
drivers/media/rc/ite-cir.c-1277- .enable_tx_interrupt = it8709_enable_tx_interrupt,
--
drivers/media/rc/ite-cir.h=24=struct ite_dev_params {
--
drivers/media/rc/ite-cir.h-50- /* read bytes from RX FIFO; return read count */
drivers/media/rc/ite-cir.h:51: int (*get_rx_bytes) (struct ite_dev *dev, u8 *buf, int buf_size);
drivers/media/rc/ite-cir.h-52-
--
drivers/message/fusion/mptlan.c=816=mpt_lan_receive_skb(struct net_device *dev, struct sk_buff *skb)
--
drivers/message/fusion/mptlan.c-825-
drivers/message/fusion/mptlan.c:826: dev->stats.rx_bytes += skb->len;
drivers/message/fusion/mptlan.c-827- dev->stats.rx_packets++;
--
drivers/misc/sgi-xp/xpnet.c=150=xpnet_receive(short partid, int channel, struct xpnet_message *msg)
]
] |
| 15/2 | 2026/02/05 10:25 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 33142 output: 25 thoughts: 43 Thoughts: Okay, here's my interpretation of those thoughts, tailored for an expert audience: **Focusing In on WireGuard Rx Byte Updates** Alright, that `grepper` output is a bit of a firehose – too much noise to be immediately useful. I need to zero in on the relevant aspects, which in this case means focusing on WireGuard. Specifically, I'm interested in how receive byte counts are updated. So, I'm going to start digging into the `drivers/net/wireguard/receive.c` file. My hunch is that this is where the core logic for receiving packets and updating `rx_bytes` within the WireGuard driver resides. That's the most logical place to begin my investigation to understand how the system reports received traffic. |
| 16/2 | 2026/02/05 10:26 | tool | read-file |
Args:map[File:drivers/net/wireguard/receive.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "queueing.h"
#include "device.h"
#include "peer.h"
#include "timers.h"
#include "messages.h"
#include "cookie.h"
#include "socket.h"
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/udp.h>
#include <net/ip_tunnels.h>
/* Must be called with bh disabled. */
static void update_rx_stats(struct wg_peer *peer, size_t len)
{
dev_sw_netstats_rx_add(peer->device->dev, len);
peer->rx_bytes += len;
}
#define SKB_TYPE_LE32(skb) (((struct message_header *)(skb)->data)->type)
static size_t validate_header_len(struct sk_buff *skb)
{
if (unlikely(skb->len < sizeof(struct message_header)))
return 0;
if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_DATA) &&
skb->len >= MESSAGE_MINIMUM_LENGTH)
return sizeof(struct message_data);
if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION) &&
skb->len == sizeof(struct message_handshake_initiation))
return sizeof(struct message_handshake_initiation);
if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE) &&
skb->len == sizeof(struct message_handshake_response))
return sizeof(struct message_handshake_response);
if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE) &&
skb->len == sizeof(struct message_handshake_cookie))
return sizeof(struct message_handshake_cookie);
return 0;
}
static int prepare_skb_header(struct sk_buff *skb, struct wg_device *wg)
{
size_t data_offset, data_len, header_len;
struct udphdr *udp;
if (unlikely(!wg_check_packet_protocol(skb) ||
skb_transport_header(skb) < skb->head ||
(skb_transport_header(skb) + sizeof(struct udphdr)) >
skb_tail_pointer(skb)))
return -EINVAL; /* Bogus IP header */
udp = udp_hdr(skb);
data_offset = (u8 *)udp - skb->data;
if (unlikely(data_offset > U16_MAX ||
data_offset + sizeof(struct udphdr) > skb->len))
/* Packet has offset at impossible location or isn't big enough
* to have UDP fields.
*/
return -EINVAL;
data_len = ntohs(udp->len);
if (unlikely(data_len < sizeof(struct udphdr) ||
data_len > skb->len - data_offset))
/* UDP packet is reporting too small of a size or lying about
* its size.
*/
return -EINVAL;
data_len -= sizeof(struct udphdr);
data_offset = (u8 *)udp + sizeof(struct udphdr) - skb->data;
if (unlikely(!pskb_may_pull(skb,
data_offset + sizeof(struct message_header)) ||
pskb_trim(skb, data_len + data_offset) < 0))
return -EINVAL;
skb_pull(skb, data_offset);
if (unlikely(skb->len != data_len))
/* Final len does not agree with calculated len */
return -EINVAL;
header_len = validate_header_len(skb);
if (unlikely(!header_len))
return -EINVAL;
__skb_push(skb, data_offset);
if (unlikely(!pskb_may_pull(skb, data_offset + header_len)))
return -EINVAL;
__skb_pull(skb, data_offset);
return 0;
}
static void wg_receive_handshake_packet(struct wg_device *wg,
struct sk_buff *skb)
{
enum cookie_mac_state mac_state;
struct wg_peer *peer = NULL;
/* This is global, so that our load calculation applies to the whole
* system. We don't care about races with it at all.
*/
static u64 last_under_load;
bool packet_needs_cookie;
bool under_load;
if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE)) {
net_dbg_skb_ratelimited("%s: Receiving cookie response from %pISpfsc\n",
wg->dev->name, skb);
wg_cookie_message_consume(
(struct message_handshake_cookie *)skb->data, wg);
return;
}
under_load = atomic_read(&wg->handshake_queue_len) >=
MAX_QUEUED_INCOMING_HANDSHAKES / 8;
if (under_load) {
last_under_load = ktime_get_coarse_boottime_ns();
} else if (last_under_load) {
under_load = !wg_birthdate_has_expired(last_under_load, 1);
if (!under_load)
last_under_load = 0;
}
mac_state = wg_cookie_validate_packet(&wg->cookie_checker, skb,
under_load);
if ((under_load && mac_state == VALID_MAC_WITH_COOKIE) ||
(!under_load && mac_state == VALID_MAC_BUT_NO_COOKIE)) {
packet_needs_cookie = false;
} else if (under_load && mac_state == VALID_MAC_BUT_NO_COOKIE) {
packet_needs_cookie = true;
} else {
net_dbg_skb_ratelimited("%s: Invalid MAC of handshake, dropping packet from %pISpfsc\n",
wg->dev->name, skb);
return;
}
switch (SKB_TYPE_LE32(skb)) {
case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION): {
struct message_handshake_initiation *message =
(struct message_handshake_initiation *)skb->data;
if (packet_needs_cookie) {
wg_packet_send_handshake_cookie(wg, skb,
message->sender_index);
return;
}
peer = wg_noise_handshake_consume_initiation(message, wg);
if (unlikely(!peer)) {
net_dbg_skb_ratelimited("%s: Invalid handshake initiation from %pISpfsc\n",
wg->dev->name, skb);
return;
}
wg_socket_set_peer_endpoint_from_skb(peer, skb);
net_dbg_ratelimited("%s: Receiving handshake initiation from peer %llu (%pISpfsc)\n",
wg->dev->name, peer->internal_id,
&peer->endpoint.addr);
wg_packet_send_handshake_response(peer);
break;
}
case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE): {
struct message_handshake_response *message =
(struct message_handshake_response *)skb->data;
if (packet_needs_cookie) {
wg_packet_send_handshake_cookie(wg, skb,
message->sender_index);
return;
}
peer = wg_noise_handshake_consume_response(message, wg);
if (unlikely(!peer)) {
net_dbg_skb_ratelimited("%s: Invalid handshake response from %pISpfsc\n",
wg->dev->name, skb);
return;
}
wg_socket_set_peer_endpoint_from_skb(peer, skb);
net_dbg_ratelimited("%s: Receiving handshake response from peer %llu (%pISpfsc)\n",
wg->dev->name, peer->internal_id,
&peer->endpoint.addr);
if (wg_noise_handshake_begin_session(&peer->handshake,
&peer->keypairs)) {
wg_timers_session_derived(peer);
wg_timers_handshake_complete(peer);
/* Calling this function will either send any existing
* packets in the queue and not send a keepalive, which
* is the best case, Or, if there's nothing in the
* queue, it will send a keepalive, in order to give
* immediate confirmation of the session.
*/
wg_packet_send_keepalive(peer);
}
break;
}
}
if (unlikely(!peer)) {
WARN(1, "Somehow a wrong type of packet wound up in the handshake queue!\n");
return;
}
local_bh_disable();
update_rx_stats(peer, skb->len);
local_bh_enable();
wg_timers_any_authenticated_packet_received(peer);
wg_timers_any_authenticated_packet_traversal(peer);
wg_peer_put(peer);
}
void wg_packet_handshake_receive_worker(struct work_struct *work)
{
struct crypt_queue *queue = container_of(work, struct multicore_worker, work)->ptr;
struct wg_device *wg = container_of(queue, struct wg_device, handshake_queue);
struct sk_buff *skb;
while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
wg_receive_handshake_packet(wg, skb);
dev_kfree_skb(skb);
atomic_dec(&wg->handshake_queue_len);
cond_resched();
}
}
static void keep_key_fresh(struct wg_peer *peer)
{
struct noise_keypair *keypair;
bool send;
if (peer->sent_lastminute_handshake)
return;
rcu_read_lock_bh();
keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
send = keypair && READ_ONCE(keypair->sending.is_valid) &&
keypair->i_am_the_initiator &&
wg_birthdate_has_expired(keypair->sending.birthdate,
REJECT_AFTER_TIME - KEEPALIVE_TIMEOUT - REKEY_TIMEOUT);
rcu_read_unlock_bh();
if (unlikely(send)) {
peer->sent_lastminute_handshake = true;
wg_packet_send_queued_handshake_initiation(peer, false);
}
}
static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
{
struct scatterlist sg[MAX_SKB_FRAGS + 8];
struct sk_buff *trailer;
unsigned int offset;
int num_frags;
if (unlikely(!keypair))
return false;
if (unlikely(!READ_ONCE(keypair->receiving.is_valid) ||
wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) ||
READ_ONCE(keypair->receiving_counter.counter) >= REJECT_AFTER_MESSAGES)) {
WRITE_ONCE(keypair->receiving.is_valid, false);
return false;
}
PACKET_CB(skb)->nonce =
le64_to_cpu(((struct message_data *)skb->data)->counter);
/* We ensure that the network header is part of the packet before we
* call skb_cow_data, so that there's no chance that data is removed
* from the skb, so that later we can extract the original endpoint.
*/
offset = -skb_network_offset(skb);
skb_push(skb, offset);
num_frags = skb_cow_data(skb, 0, &trailer);
offset += sizeof(struct message_data);
skb_pull(skb, offset);
if (unlikely(num_frags < 0 || num_frags > ARRAY_SIZE(sg)))
return false;
sg_init_table(sg, num_frags);
if (skb_to_sgvec(skb, sg, 0, skb->len) <= 0)
return false;
if (!chacha20poly1305_decrypt_sg_inplace(sg, skb->len, NULL, 0,
PACKET_CB(skb)->nonce,
keypair->receiving.key))
return false;
/* Another ugly situation of pushing and pulling the header so as to
* keep endpoint information intact.
*/
skb_push(skb, offset);
if (pskb_trim(skb, skb->len - noise_encrypted_len(0)))
return false;
skb_pull(skb, offset);
return true;
}
/* This is RFC6479, a replay detection bitmap algorithm that avoids bitshifts */
static bool counter_validate(struct noise_replay_counter *counter, u64 their_counter)
{
unsigned long index, index_current, top, i;
bool ret = false;
spin_lock_bh(&counter->lock);
if (unlikely(counter->counter >= REJECT_AFTER_MESSAGES + 1 ||
their_counter >= REJECT_AFTER_MESSAGES))
goto out;
++their_counter;
if (unlikely((COUNTER_WINDOW_SIZE + their_counter) <
counter->counter))
goto out;
index = their_counter >> ilog2(BITS_PER_LONG);
if (likely(their_counter > counter->counter)) {
index_current = counter->counter >> ilog2(BITS_PER_LONG);
top = min_t(unsigned long, index - index_current,
COUNTER_BITS_TOTAL / BITS_PER_LONG);
for (i = 1; i <= top; ++i)
counter->backtrack[(i + index_current) &
((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
WRITE_ONCE(counter->counter, their_counter);
}
index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
ret = !test_and_set_bit(their_counter & (BITS_PER_LONG - 1),
&counter->backtrack[index]);
out:
spin_unlock_bh(&counter->lock);
return ret;
}
#include "selftest/counter.c"
static void wg_packet_consume_data_done(struct wg_peer *peer,
struct sk_buff *skb,
struct endpoint *endpoint)
{
struct net_device *dev = peer->device->dev;
unsigned int len, len_before_trim;
struct wg_peer *routed_peer;
wg_socket_set_peer_endpoint(peer, endpoint);
if (unlikely(wg_noise_received_with_keypair(&peer->keypairs,
PACKET_CB(skb)->keypair))) {
wg_timers_handshake_complete(peer);
wg_packet_send_staged_packets(peer);
}
keep_key_fresh(peer);
wg_timers_any_authenticated_packet_received(peer);
wg_timers_any_authenticated_packet_traversal(peer);
/* A packet with length 0 is a keepalive packet */
if (unlikely(!skb->len)) {
update_rx_stats(peer, message_data_len(0));
net_dbg_ratelimited("%s: Receiving keepalive packet from peer %llu (%pISpfsc)\n",
dev->name, peer->internal_id,
&peer->endpoint.addr);
goto packet_processed;
}
wg_timers_data_received(peer);
if (unlikely(skb_network_header(skb) < skb->head))
goto dishonest_packet_size;
if (unlikely(!(pskb_network_may_pull(skb, sizeof(struct iphdr)) &&
(ip_hdr(skb)->version == 4 ||
(ip_hdr(skb)->version == 6 &&
pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))))))
goto dishonest_packet_type;
skb->dev = dev;
/* We've already verified the Poly1305 auth tag, which means this packet
* was not modified in transit. We can therefore tell the networking
* stack that all checksums of every layer of encapsulation have already
* been checked "by the hardware" and therefore is unnecessary to check
* again in software.
*/
skb->ip_summed = CHECKSUM_UNNECESSARY;
skb->csum_level = ~0; /* All levels */
skb->protocol = ip_tunnel_parse_protocol(skb);
if (skb->protocol == htons(ETH_P_IP)) {
len = ntohs(ip_hdr(skb)->tot_len);
if (unlikely(len < sizeof(struct iphdr)))
goto dishonest_packet_size;
INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ip_hdr(skb)->tos);
} else if (skb->protocol == htons(ETH_P_IPV6)) {
len = ntohs(ipv6_hdr(skb)->payload_len) +
sizeof(struct ipv6hdr);
INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ipv6_get_dsfield(ipv6_hdr(skb)));
} else {
goto dishonest_packet_type;
}
if (unlikely(len > skb->len))
goto dishonest_packet_size;
len_before_trim = skb->len;
if (unlikely(pskb_trim(skb, len)))
goto packet_processed;
routed_peer = wg_allowedips_lookup_src(&peer->device->peer_allowedips,
skb);
wg_peer_put(routed_peer); /* We don't need the extra reference. */
if (unlikely(routed_peer != peer))
goto dishonest_packet_peer;
napi_gro_receive(&peer->napi, skb);
update_rx_stats(peer, message_data_len(len_before_trim));
return;
dishonest_packet_peer:
net_dbg_skb_ratelimited("%s: Packet has unallowed src IP (%pISc) from peer %llu (%pISpfsc)\n",
dev->name, skb, peer->internal_id,
&peer->endpoint.addr);
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_frame_errors);
goto packet_processed;
dishonest_packet_type:
net_dbg_ratelimited("%s: Packet is neither ipv4 nor ipv6 from peer %llu (%pISpfsc)\n",
dev->name, peer->internal_id, &peer->endpoint.addr);
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_frame_errors);
goto packet_processed;
dishonest_packet_size:
net_dbg_ratelimited("%s: Packet has incorrect size from peer %llu (%pISpfsc)\n",
dev->name, peer->internal_id, &peer->endpoint.addr);
DEV_STATS_INC(dev, rx_errors);
DEV_STATS_INC(dev, rx_length_errors);
goto packet_processed;
packet_processed:
dev_kfree_skb(skb);
}
int wg_packet_rx_poll(struct napi_struct *napi, int budget)
{
struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
struct noise_keypair *keypair;
struct endpoint endpoint;
enum packet_state state;
struct sk_buff *skb;
int work_done = 0;
bool free;
if (unlikely(budget <= 0))
return 0;
while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL &&
(state = atomic_read_acquire(&PACKET_CB(skb)->state)) !=
PACKET_STATE_UNCRYPTED) {
wg_prev_queue_drop_peeked(&peer->rx_queue);
keypair = PACKET_CB(skb)->keypair;
free = true;
if (unlikely(state != PACKET_STATE_CRYPTED))
goto next;
if (unlikely(!counter_validate(&keypair->receiving_counter,
PACKET_CB(skb)->nonce))) {
net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
peer->device->dev->name,
PACKET_CB(skb)->nonce,
READ_ONCE(keypair->receiving_counter.counter));
goto next;
}
if (unlikely(wg_socket_endpoint_from_skb(&endpoint, skb)))
goto next;
wg_reset_packet(skb, false);
wg_packet_consume_data_done(peer, skb, &endpoint);
free = false;
next:
wg_noise_keypair_put(keypair, false);
wg_peer_put(peer);
if (unlikely(free))
dev_kfree_skb(skb);
if (++work_done >= budget)
break;
}
if (work_done < budget)
napi_complete_done(napi, work_done);
return work_done;
}
void wg_packet_decrypt_worker(struct work_struct *work)
{
struct crypt_queue *queue = container_of(work, struct multicore_worker,
work)->ptr;
struct sk_buff *skb;
while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
enum packet_state state =
likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
wg_queue_enqueue_per_peer_rx(skb, state);
if (need_resched())
cond_resched();
}
}
static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
{
__le32 idx = ((struct message_data *)skb->data)->key_idx;
struct wg_peer *peer = NULL;
int ret;
rcu_read_lock_bh();
PACKET_CB(skb)->keypair =
(struct noise_keypair *)wg_index_hashtable_lookup(
wg->index_hashtable, INDEX_HASHTABLE_KEYPAIR, idx,
&peer);
if (unlikely(!wg_noise_keypair_get(PACKET_CB(skb)->keypair)))
goto err_keypair;
if (unlikely(READ_ONCE(peer->is_dead)))
goto err;
ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
wg->packet_crypt_wq);
if (unlikely(ret == -EPIPE))
wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
if (likely(!ret || ret == -EPIPE)) {
rcu_read_unlock_bh();
return;
}
err:
wg_noise_keypair_put(PACKET_CB(skb)->keypair, false);
err_keypair:
rcu_read_unlock_bh();
wg_peer_put(peer);
dev_kfree_skb(skb);
}
void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
{
if (unlikely(prepare_skb_header(skb, wg) < 0))
goto err;
switch (SKB_TYPE_LE32(skb)) {
case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
int cpu, ret = -EBUSY;
if (unlikely(!rng_is_initialized()))
goto drop;
if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
}
} else
ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
if (ret) {
drop:
net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
wg->dev->name, skb);
goto err;
}
atomic_inc(&wg->handshake_queue_len);
cpu = wg_cpumask_next_online(&wg->handshake_queue.last_cpu);
/* Queues up a call to packet_process_queued_handshake_packets(skb): */
queue_work_on(cpu, wg->handshake_receive_wq,
&per_cpu_ptr(wg->handshake_queue.worker, cpu)->work);
break;
}
case cpu_to_le32(MESSAGE_DATA):
PACKET_CB(skb)->ds = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
wg_packet_consume_data(wg, skb);
break;
default:
WARN(1, "Non-exhaustive parsing of packet header lead to unknown packet type!\n");
goto err;
}
return;
err:
dev_kfree_skb(skb);
}
] |
| 17/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 40125 output: 25 thoughts: 1060 Thoughts: **Analyzing Potential Data Races in WireGuard Peer Statistics** Okay, so I've been digging into the WireGuard code, specifically in `drivers/net/wireguard/receive.c`. I've noticed something that's got me thinking about potential data races in how the network statistics are handled, specifically the `rx_bytes` counter. The `update_rx_stats` function (called from several places: `wg_receive_handshake_packet` and `wg_packet_consume_data_done`) increments `peer->rx_bytes`. This looks straightforward enough: `peer->rx_bytes += len;`. But where it gets interesting is *where* these functions are called from. `wg_receive_handshake_packet` comes from `wg_packet_handshake_receive_worker`, which runs on a workqueue, and, importantly, a *multicore* worker. Multiple CPUs could, potentially, be handling handshake packets for the *same* peer concurrently. `wg_packet_consume_data_done` is called from `wg_packet_rx_poll`, which is a NAPI poll function, so it is serialized per peer. Since `wg_receive_handshake_packet` is run on a workqueue and might be called by multiple CPUs simultaneously, and there's no explicit locking around the `peer->rx_bytes` increment, I'm pretty sure we've got a data race here. That `local_bh_disable` doesn't protect against other CPUs. Now, that's just `rx_bytes`. Let's look at `tx_bytes`. `wg_socket_send_skb_to_peer` is responsible for incrementing that, and it's called from both `wg_packet_tx_worker` and `wg_packet_handshake_send_worker`, both of which are also multicore workers. So `tx_bytes` also appears to be susceptible to a data race. The question is, is this intentional? In many kernel subsystems, this kind of relaxed statistics update is common – where a lost update is considered acceptable. However, I know on 32-bit systems, this could cause real problems (torn reads/writes of 64-bit values). 
Even on 64-bit systems, we're losing updates, which, while probably not security critical, is still... sloppy. Wait, WireGuard is known to be "secure". I doubt these statistics really matter for security, though. I should check if Jason Donenfeld (the author) has any comments on this. I should also check if there are other variables in `wg_peer` being updated under similar circumstances. The `endpoint_lock` is a `rwlock_t`, and the section is only protected as a reader: `read_lock_bh` and `read_unlock_bh` surround the `peer->tx_bytes += skb_len;` assignment, which means *other* threads are likely in this region concurrently. If it were just a statistical counter, I'd probably write it off as benign, but KCSAN is reporting it as "read-write to ... of 8 bytes". If it were `atomic64_t`, KCSAN wouldn't be complaining. I'm going to scan through `drivers/net/wireguard/peer.c` to see how `tx_bytes` and `rx_bytes` are initialized and if there are other similar potential races elsewhere in the WireGuard code. |
| 18/2 | 2026/02/05 10:26 | tool | read-file |
Args:map[File:drivers/net/wireguard/peer.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "peer.h"
#include "device.h"
#include "queueing.h"
#include "timers.h"
#include "peerlookup.h"
#include "noise.h"
#include <linux/kref.h>
#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
static struct kmem_cache *peer_cache;
static atomic64_t peer_counter = ATOMIC64_INIT(0);
struct wg_peer *wg_peer_create(struct wg_device *wg,
const u8 public_key[NOISE_PUBLIC_KEY_LEN],
const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN])
{
struct wg_peer *peer;
int ret = -ENOMEM;
lockdep_assert_held(&wg->device_update_lock);
if (wg->num_peers >= MAX_PEERS_PER_DEVICE)
return ERR_PTR(ret);
peer = kmem_cache_zalloc(peer_cache, GFP_KERNEL);
if (unlikely(!peer))
return ERR_PTR(ret);
if (unlikely(dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)))
goto err;
peer->device = wg;
wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
public_key, preshared_key, peer);
peer->internal_id = atomic64_inc_return(&peer_counter);
peer->serial_work_cpu = nr_cpumask_bits;
wg_cookie_init(&peer->latest_cookie);
wg_timers_init(peer);
wg_cookie_checker_precompute_peer_keys(peer);
spin_lock_init(&peer->keypairs.keypair_update_lock);
INIT_WORK(&peer->transmit_handshake_work, wg_packet_handshake_send_worker);
INIT_WORK(&peer->transmit_packet_work, wg_packet_tx_worker);
wg_prev_queue_init(&peer->tx_queue);
wg_prev_queue_init(&peer->rx_queue);
rwlock_init(&peer->endpoint_lock);
kref_init(&peer->refcount);
skb_queue_head_init(&peer->staged_packet_queue);
wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
set_bit(NAPI_STATE_NO_BUSY_POLL, &peer->napi.state);
netif_napi_add(wg->dev, &peer->napi, wg_packet_rx_poll);
napi_enable(&peer->napi);
list_add_tail(&peer->peer_list, &wg->peer_list);
INIT_LIST_HEAD(&peer->allowedips_list);
wg_pubkey_hashtable_add(wg->peer_hashtable, peer);
++wg->num_peers;
pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
return peer;
err:
kmem_cache_free(peer_cache, peer);
return ERR_PTR(ret);
}
struct wg_peer *wg_peer_get_maybe_zero(struct wg_peer *peer)
{
RCU_LOCKDEP_WARN(!rcu_read_lock_bh_held(),
"Taking peer reference without holding the RCU read lock");
if (unlikely(!peer || !kref_get_unless_zero(&peer->refcount)))
return NULL;
return peer;
}
static void peer_make_dead(struct wg_peer *peer)
{
/* Remove from configuration-time lookup structures. */
list_del_init(&peer->peer_list);
wg_allowedips_remove_by_peer(&peer->device->peer_allowedips, peer,
&peer->device->device_update_lock);
wg_pubkey_hashtable_remove(peer->device->peer_hashtable, peer);
/* Mark as dead, so that we don't allow jumping contexts after. */
WRITE_ONCE(peer->is_dead, true);
/* The caller must now synchronize_net() for this to take effect. */
}
static void peer_remove_after_dead(struct wg_peer *peer)
{
WARN_ON(!peer->is_dead);
/* No more keypairs can be created for this peer, since is_dead protects
* add_new_keypair, so we can now destroy existing ones.
*/
wg_noise_keypairs_clear(&peer->keypairs);
/* Destroy all ongoing timers that were in-flight at the beginning of
* this function.
*/
wg_timers_stop(peer);
/* The transition between packet encryption/decryption queues isn't
* guarded by is_dead, but each reference's life is strictly bounded by
* two generations: once for parallel crypto and once for serial
* ingestion, so we can simply flush twice, and be sure that we no
* longer have references inside these queues.
*/
/* a) For encrypt/decrypt. */
flush_workqueue(peer->device->packet_crypt_wq);
/* b.1) For send (but not receive, since that's napi). */
flush_workqueue(peer->device->packet_crypt_wq);
/* b.2.1) For receive (but not send, since that's wq). */
napi_disable(&peer->napi);
/* b.2.2) It's now safe to remove the napi struct, which must be done
* here from process context.
*/
netif_napi_del(&peer->napi);
/* Ensure any workstructs we own (like transmit_handshake_work or
* clear_peer_work) no longer are in use.
*/
flush_workqueue(peer->device->handshake_send_wq);
/* After the above flushes, a peer might still be active in a few
* different contexts: 1) from xmit(), before hitting is_dead and
* returning, 2) from wg_packet_consume_data(), before hitting is_dead
* and returning, 3) from wg_receive_handshake_packet() after a point
* where it has processed an incoming handshake packet, but where
* all calls to pass it off to timers fails because of is_dead. We won't
* have new references in (1) eventually, because we're removed from
* allowedips; we won't have new references in (2) eventually, because
* wg_index_hashtable_lookup will always return NULL, since we removed
* all existing keypairs and no more can be created; we won't have new
* references in (3) eventually, because we're removed from the pubkey
* hash table, which allows for a maximum of one handshake response,
* via the still-uncleared index hashtable entry, but not more than one,
* and in wg_cookie_message_consume, the lookup eventually gets a peer
* with a refcount of zero, so no new reference is taken.
*/
--peer->device->num_peers;
wg_peer_put(peer);
}
/* We have a separate "remove" function make sure that all active places where
* a peer is currently operating will eventually come to an end and not pass
* their reference onto another context.
*/
void wg_peer_remove(struct wg_peer *peer)
{
if (unlikely(!peer))
return;
lockdep_assert_held(&peer->device->device_update_lock);
peer_make_dead(peer);
synchronize_net();
peer_remove_after_dead(peer);
}
void wg_peer_remove_all(struct wg_device *wg)
{
struct wg_peer *peer, *temp;
LIST_HEAD(dead_peers);
lockdep_assert_held(&wg->device_update_lock);
/* Avoid having to traverse individually for each one. */
wg_allowedips_free(&wg->peer_allowedips, &wg->device_update_lock);
list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
peer_make_dead(peer);
list_add_tail(&peer->peer_list, &dead_peers);
}
synchronize_net();
list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
peer_remove_after_dead(peer);
}
static void rcu_release(struct rcu_head *rcu)
{
struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);
dst_cache_destroy(&peer->endpoint_cache);
WARN_ON(wg_prev_queue_peek(&peer->tx_queue) || wg_prev_queue_peek(&peer->rx_queue));
/* The final zeroing takes care of clearing any remaining handshake key
* material and other potentially sensitive information.
*/
memzero_explicit(peer, sizeof(*peer));
kmem_cache_free(peer_cache, peer);
}
static void kref_release(struct kref *refcount)
{
struct wg_peer *peer = container_of(refcount, struct wg_peer, refcount);
pr_debug("%s: Peer %llu (%pISpfsc) destroyed\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr);
/* Remove ourself from dynamic runtime lookup structures, now that the
* last reference is gone.
*/
wg_index_hashtable_remove(peer->device->index_hashtable,
&peer->handshake.entry);
/* Remove any lingering packets that didn't have a chance to be
* transmitted.
*/
wg_packet_purge_staged_packets(peer);
/* Free the memory used. */
call_rcu(&peer->rcu, rcu_release);
}
void wg_peer_put(struct wg_peer *peer)
{
if (unlikely(!peer))
return;
kref_put(&peer->refcount, kref_release);
}
int __init wg_peer_init(void)
{
peer_cache = KMEM_CACHE(wg_peer, 0);
return peer_cache ? 0 : -ENOMEM;
}
void wg_peer_uninit(void)
{
kmem_cache_destroy(peer_cache);
}
] |
| 19/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 44630 output: 25 thoughts: 337 Thoughts: **Analyzing WireGuard's Byte Counters** Okay, so I'm looking at these `tx_bytes` and `rx_bytes` counters within the `wg_peer` struct. They're just plain `u64` fields, not `atomic64_t`. Now, since this is WireGuard, which is generally well-regarded for its code quality, this piques my interest. If these are purely for statistics, and we're on a 64-bit architecture – which we almost certainly are – then the implications are: updates aren't atomic, meaning we could potentially lose updates, but reads and writes themselves *are* atomic, so we won't see torn values. Hold on, let me make sure: On x86_64, 64-bit reads and writes are atomic if aligned, and they *are* aligned in this structure. Yet, KCSAN is flagging a potential data race. Kernel developers usually use `data_race()`, `READ_ONCE`, `WRITE_ONCE`, or even `atomic_t` to suppress such warnings if the race is truly harmless. So, the fact that they *didn't* might be an oversight... but then again, WireGuard's code is often cited as a reference for good kernel programming practices. There must be a reason. Could it be that atomics are just more expensive in this context? For simple statistics, maybe they prioritized performance. But that's a hypothesis. I have to verify if these counters are used for more than just simple statistics. So, I checked the references. From what I see, `tx_bytes` is written to in `wg_socket_send_skb_to_peer` and read in `get_peer` within `netlink.c`, which seems to imply it's *only* used for the netlink statistics. But hold on, am I missing something? I should cover my bases and check the `peer.h` header file just to be absolutely sure. |
| 20/2 | 2026/02/05 10:26 | tool | read-file |
Args:map[File:drivers/net/wireguard/peer.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_PEER_H
#define _WG_PEER_H
#include "device.h"
#include "noise.h"
#include "cookie.h"
#include <linux/types.h>
#include <linux/netfilter.h>
#include <linux/spinlock.h>
#include <linux/kref.h>
#include <net/dst_cache.h>
struct wg_device;
struct endpoint {
union {
struct sockaddr_inet addr; /* Large enough for both address families */
struct sockaddr_in addr4;
struct sockaddr_in6 addr6;
};
union {
struct {
struct in_addr src4;
/* Essentially the same as addr6->scope_id */
int src_if4;
};
struct in6_addr src6;
};
};
struct wg_peer {
struct wg_device *device;
struct prev_queue tx_queue, rx_queue;
struct sk_buff_head staged_packet_queue;
int serial_work_cpu;
bool is_dead;
struct noise_keypairs keypairs;
struct endpoint endpoint;
struct dst_cache endpoint_cache;
rwlock_t endpoint_lock;
struct noise_handshake handshake;
atomic64_t last_sent_handshake;
struct work_struct transmit_handshake_work, clear_peer_work, transmit_packet_work;
struct cookie latest_cookie;
struct hlist_node pubkey_hash;
u64 rx_bytes, tx_bytes;
struct timer_list timer_retransmit_handshake, timer_send_keepalive;
struct timer_list timer_new_handshake, timer_zero_key_material;
struct timer_list timer_persistent_keepalive;
unsigned int timer_handshake_attempts;
u16 persistent_keepalive_interval;
bool timer_need_another_keepalive;
bool sent_lastminute_handshake;
struct timespec64 walltime_last_handshake;
struct kref refcount;
struct rcu_head rcu;
struct list_head peer_list;
struct list_head allowedips_list;
struct napi_struct napi;
u64 internal_id;
};
struct wg_peer *wg_peer_create(struct wg_device *wg,
const u8 public_key[NOISE_PUBLIC_KEY_LEN],
const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]);
struct wg_peer *__must_check wg_peer_get_maybe_zero(struct wg_peer *peer);
static inline struct wg_peer *wg_peer_get(struct wg_peer *peer)
{
kref_get(&peer->refcount);
return peer;
}
void wg_peer_put(struct wg_peer *peer);
void wg_peer_remove(struct wg_peer *peer);
void wg_peer_remove_all(struct wg_device *wg);
int wg_peer_init(void);
void wg_peer_uninit(void);
#endif /* _WG_PEER_H */
] |
| 21/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 46268 output: 25 thoughts: 557 Thoughts: **Analyzing a Potential Race Condition** Okay, so I'm looking at `u64 rx_bytes, tx_bytes;` and the code that updates `tx_bytes`. Wait, this is a 64-bit system, so `u64` is 8 bytes, right? The key issue is: `peer->tx_bytes += skb_len;`. This isn't atomic, and that's the problem. Let me break down the race: If two threads try to update `tx_bytes` concurrently, we can have a lost update situation. Thread 1 reads `peer->tx_bytes` (say, it's 100), then Thread 2 also reads it (also 100). Then Thread 1 calculates and writes `100 + 50 = 150`, then Thread 2 does `100 + 60 = 160`. The final result is 160, but it should be 210. Classic lost update scenario. In the grand scheme of things, this kind of statistic update is often considered benign. Most people, especially kernel maintainers, would prefer to use `atomic64_t` or per-cpu counters here to avoid KCSAN reports and get accurate statistics. It's a question of code quality and avoiding spurious warnings, more than correctness in many cases. But I need to think this through completely. Is there any *other* consequence? If `tx_bytes` were used for something like rate limiting, this would be a significant bug. However, it seems like it's *only* used for Netlink export, which makes it less serious. Actually, there's one more piece of the puzzle to consider. The KCSAN report confirms a 64-bit write: `read-write to 0xffff88811ad03d48 of 8 bytes`, and the value changed: `0x000000000000014c -> 0x000000000000016c`. So, can this *not* be benign? What if `skb_len` is huge and we end up with an overflow? But no, it's using `skb->len`, which is the actual packet length, and therefore bounded. What if `peer` is being freed and we get a use-after-free? But the stack traces look good. It's called from workers that should hold a reference to `peer`.
Specifically, `wg_packet_tx_worker` in `send.c` runs from a workqueue, and each queued packet holds a reference on the `wg_peer` (dropped via `wg_peer_put` in the worker), so the peer should still be valid when the counter is updated. This should make a use-after-free unlikely here. I will recheck this. |
| 22/2 | 2026/02/05 10:26 | tool | read-file |
Args:map[File:drivers/net/wireguard/send.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "queueing.h"
#include "timers.h"
#include "device.h"
#include "peer.h"
#include "socket.h"
#include "messages.h"
#include "cookie.h"
#include <linux/uio.h>
#include <linux/inetdevice.h>
#include <linux/socket.h>
#include <net/ip_tunnels.h>
#include <net/udp.h>
#include <net/sock.h>
static void wg_packet_send_handshake_initiation(struct wg_peer *peer)
{
struct message_handshake_initiation packet;
if (!wg_birthdate_has_expired(atomic64_read(&peer->last_sent_handshake),
REKEY_TIMEOUT))
return; /* This function is rate limited. */
atomic64_set(&peer->last_sent_handshake, ktime_get_coarse_boottime_ns());
net_dbg_ratelimited("%s: Sending handshake initiation to peer %llu (%pISpfsc)\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr);
if (wg_noise_handshake_create_initiation(&packet, &peer->handshake)) {
wg_cookie_add_mac_to_packet(&packet, sizeof(packet), peer);
wg_timers_any_authenticated_packet_traversal(peer);
wg_timers_any_authenticated_packet_sent(peer);
atomic64_set(&peer->last_sent_handshake,
ktime_get_coarse_boottime_ns());
wg_socket_send_buffer_to_peer(peer, &packet, sizeof(packet),
HANDSHAKE_DSCP);
wg_timers_handshake_initiated(peer);
}
}
void wg_packet_handshake_send_worker(struct work_struct *work)
{
struct wg_peer *peer = container_of(work, struct wg_peer,
transmit_handshake_work);
wg_packet_send_handshake_initiation(peer);
wg_peer_put(peer);
}
void wg_packet_send_queued_handshake_initiation(struct wg_peer *peer,
bool is_retry)
{
if (!is_retry)
peer->timer_handshake_attempts = 0;
rcu_read_lock_bh();
/* We check last_sent_handshake here in addition to the actual function
* we're queueing up, so that we don't queue things if not strictly
* necessary:
*/
if (!wg_birthdate_has_expired(atomic64_read(&peer->last_sent_handshake),
REKEY_TIMEOUT) ||
unlikely(READ_ONCE(peer->is_dead)))
goto out;
wg_peer_get(peer);
/* Queues up calling packet_send_queued_handshakes(peer), where we do a
* peer_put(peer) after:
*/
if (!queue_work(peer->device->handshake_send_wq,
&peer->transmit_handshake_work))
/* If the work was already queued, we want to drop the
* extra reference:
*/
wg_peer_put(peer);
out:
rcu_read_unlock_bh();
}
void wg_packet_send_handshake_response(struct wg_peer *peer)
{
struct message_handshake_response packet;
atomic64_set(&peer->last_sent_handshake, ktime_get_coarse_boottime_ns());
net_dbg_ratelimited("%s: Sending handshake response to peer %llu (%pISpfsc)\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr);
if (wg_noise_handshake_create_response(&packet, &peer->handshake)) {
wg_cookie_add_mac_to_packet(&packet, sizeof(packet), peer);
if (wg_noise_handshake_begin_session(&peer->handshake,
&peer->keypairs)) {
wg_timers_session_derived(peer);
wg_timers_any_authenticated_packet_traversal(peer);
wg_timers_any_authenticated_packet_sent(peer);
atomic64_set(&peer->last_sent_handshake,
ktime_get_coarse_boottime_ns());
wg_socket_send_buffer_to_peer(peer, &packet,
sizeof(packet),
HANDSHAKE_DSCP);
}
}
}
void wg_packet_send_handshake_cookie(struct wg_device *wg,
struct sk_buff *initiating_skb,
__le32 sender_index)
{
struct message_handshake_cookie packet;
net_dbg_skb_ratelimited("%s: Sending cookie response for denied handshake message for %pISpfsc\n",
wg->dev->name, initiating_skb);
wg_cookie_message_create(&packet, initiating_skb, sender_index,
&wg->cookie_checker);
wg_socket_send_buffer_as_reply_to_skb(wg, initiating_skb, &packet,
sizeof(packet));
}
static void keep_key_fresh(struct wg_peer *peer)
{
struct noise_keypair *keypair;
bool send;
rcu_read_lock_bh();
keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
send = keypair && READ_ONCE(keypair->sending.is_valid) &&
(atomic64_read(&keypair->sending_counter) > REKEY_AFTER_MESSAGES ||
(keypair->i_am_the_initiator &&
wg_birthdate_has_expired(keypair->sending.birthdate, REKEY_AFTER_TIME)));
rcu_read_unlock_bh();
if (unlikely(send))
wg_packet_send_queued_handshake_initiation(peer, false);
}
static unsigned int calculate_skb_padding(struct sk_buff *skb)
{
unsigned int padded_size, last_unit = skb->len;
if (unlikely(!PACKET_CB(skb)->mtu))
return ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE) - last_unit;
/* We do this modulo business with the MTU, just in case the networking
* layer gives us a packet that's bigger than the MTU. In that case, we
* wouldn't want the final subtraction to overflow in the case of the
* padded_size being clamped. Fortunately, that's very rarely the case,
* so we optimize for that not happening.
*/
if (unlikely(last_unit > PACKET_CB(skb)->mtu))
last_unit %= PACKET_CB(skb)->mtu;
padded_size = min(PACKET_CB(skb)->mtu,
ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE));
return padded_size - last_unit;
}
static bool encrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
{
unsigned int padding_len, plaintext_len, trailer_len;
struct scatterlist sg[MAX_SKB_FRAGS + 8];
struct message_data *header;
struct sk_buff *trailer;
int num_frags;
/* Force hash calculation before encryption so that flow analysis is
* consistent over the inner packet.
*/
skb_get_hash(skb);
/* Calculate lengths. */
padding_len = calculate_skb_padding(skb);
trailer_len = padding_len + noise_encrypted_len(0);
plaintext_len = skb->len + padding_len;
/* Expand data section to have room for padding and auth tag. */
num_frags = skb_cow_data(skb, trailer_len, &trailer);
if (unlikely(num_frags < 0 || num_frags > ARRAY_SIZE(sg)))
return false;
/* Set the padding to zeros, and make sure it and the auth tag are part
* of the skb.
*/
memset(skb_tail_pointer(trailer), 0, padding_len);
/* Expand head section to have room for our header and the network
* stack's headers.
*/
if (unlikely(skb_cow_head(skb, DATA_PACKET_HEAD_ROOM) < 0))
return false;
/* Finalize checksum calculation for the inner packet, if required. */
if (unlikely(skb->ip_summed == CHECKSUM_PARTIAL &&
skb_checksum_help(skb)))
return false;
/* Only after checksumming can we safely add on the padding at the end
* and the header.
*/
skb_set_inner_network_header(skb, 0);
header = (struct message_data *)skb_push(skb, sizeof(*header));
header->header.type = cpu_to_le32(MESSAGE_DATA);
header->key_idx = keypair->remote_index;
header->counter = cpu_to_le64(PACKET_CB(skb)->nonce);
pskb_put(skb, trailer, trailer_len);
/* Now we can encrypt the scattergather segments */
sg_init_table(sg, num_frags);
if (skb_to_sgvec(skb, sg, sizeof(struct message_data),
noise_encrypted_len(plaintext_len)) <= 0)
return false;
return chacha20poly1305_encrypt_sg_inplace(sg, plaintext_len, NULL, 0,
PACKET_CB(skb)->nonce,
keypair->sending.key);
}
void wg_packet_send_keepalive(struct wg_peer *peer)
{
struct sk_buff *skb;
if (skb_queue_empty_lockless(&peer->staged_packet_queue)) {
skb = alloc_skb(DATA_PACKET_HEAD_ROOM + MESSAGE_MINIMUM_LENGTH,
GFP_ATOMIC);
if (unlikely(!skb))
return;
skb_reserve(skb, DATA_PACKET_HEAD_ROOM);
skb->dev = peer->device->dev;
PACKET_CB(skb)->mtu = skb->dev->mtu;
skb_queue_tail(&peer->staged_packet_queue, skb);
net_dbg_ratelimited("%s: Sending keepalive packet to peer %llu (%pISpfsc)\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr);
}
wg_packet_send_staged_packets(peer);
}
static void wg_packet_create_data_done(struct wg_peer *peer, struct sk_buff *first)
{
struct sk_buff *skb, *next;
bool is_keepalive, data_sent = false;
wg_timers_any_authenticated_packet_traversal(peer);
wg_timers_any_authenticated_packet_sent(peer);
skb_list_walk_safe(first, skb, next) {
is_keepalive = skb->len == message_data_len(0);
if (likely(!wg_socket_send_skb_to_peer(peer, skb,
PACKET_CB(skb)->ds) && !is_keepalive))
data_sent = true;
}
if (likely(data_sent))
wg_timers_data_sent(peer);
keep_key_fresh(peer);
}
void wg_packet_tx_worker(struct work_struct *work)
{
struct wg_peer *peer = container_of(work, struct wg_peer, transmit_packet_work);
struct noise_keypair *keypair;
enum packet_state state;
struct sk_buff *first;
while ((first = wg_prev_queue_peek(&peer->tx_queue)) != NULL &&
(state = atomic_read_acquire(&PACKET_CB(first)->state)) !=
PACKET_STATE_UNCRYPTED) {
wg_prev_queue_drop_peeked(&peer->tx_queue);
keypair = PACKET_CB(first)->keypair;
if (likely(state == PACKET_STATE_CRYPTED))
wg_packet_create_data_done(peer, first);
else
kfree_skb_list(first);
wg_noise_keypair_put(keypair, false);
wg_peer_put(peer);
if (need_resched())
cond_resched();
}
}
void wg_packet_encrypt_worker(struct work_struct *work)
{
struct crypt_queue *queue = container_of(work, struct multicore_worker,
work)->ptr;
struct sk_buff *first, *skb, *next;
while ((first = ptr_ring_consume_bh(&queue->ring)) != NULL) {
enum packet_state state = PACKET_STATE_CRYPTED;
skb_list_walk_safe(first, skb, next) {
if (likely(encrypt_packet(skb,
PACKET_CB(first)->keypair))) {
wg_reset_packet(skb, true);
} else {
state = PACKET_STATE_DEAD;
break;
}
}
wg_queue_enqueue_per_peer_tx(first, state);
if (need_resched())
cond_resched();
}
}
static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
{
struct wg_device *wg = peer->device;
int ret = -EINVAL;
rcu_read_lock_bh();
if (unlikely(READ_ONCE(peer->is_dead)))
goto err;
ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
wg->packet_crypt_wq);
if (unlikely(ret == -EPIPE))
wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
err:
rcu_read_unlock_bh();
if (likely(!ret || ret == -EPIPE))
return;
wg_noise_keypair_put(PACKET_CB(first)->keypair, false);
wg_peer_put(peer);
kfree_skb_list(first);
}
void wg_packet_purge_staged_packets(struct wg_peer *peer)
{
spin_lock_bh(&peer->staged_packet_queue.lock);
DEV_STATS_ADD(peer->device->dev, tx_dropped,
peer->staged_packet_queue.qlen);
__skb_queue_purge(&peer->staged_packet_queue);
spin_unlock_bh(&peer->staged_packet_queue.lock);
}
void wg_packet_send_staged_packets(struct wg_peer *peer)
{
struct noise_keypair *keypair;
struct sk_buff_head packets;
struct sk_buff *skb;
/* Steal the current queue into our local one. */
__skb_queue_head_init(&packets);
spin_lock_bh(&peer->staged_packet_queue.lock);
skb_queue_splice_init(&peer->staged_packet_queue, &packets);
spin_unlock_bh(&peer->staged_packet_queue.lock);
if (unlikely(skb_queue_empty(&packets)))
return;
/* First we make sure we have a valid reference to a valid key. */
rcu_read_lock_bh();
keypair = wg_noise_keypair_get(
rcu_dereference_bh(peer->keypairs.current_keypair));
rcu_read_unlock_bh();
if (unlikely(!keypair))
goto out_nokey;
if (unlikely(!READ_ONCE(keypair->sending.is_valid)))
goto out_nokey;
if (unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
REJECT_AFTER_TIME)))
goto out_invalid;
/* After we know we have a somewhat valid key, we now try to assign
* nonces to all of the packets in the queue. If we can't assign nonces
* for all of them, we just consider it a failure and wait for the next
* handshake.
*/
skb_queue_walk(&packets, skb) {
/* 0 for no outer TOS: no leak. TODO: at some later point, we
* might consider using flowi->tos as outer instead.
*/
PACKET_CB(skb)->ds = ip_tunnel_ecn_encap(0, ip_hdr(skb), skb);
PACKET_CB(skb)->nonce =
atomic64_inc_return(&keypair->sending_counter) - 1;
if (unlikely(PACKET_CB(skb)->nonce >= REJECT_AFTER_MESSAGES))
goto out_invalid;
}
packets.prev->next = NULL;
wg_peer_get(keypair->entry.peer);
PACKET_CB(packets.next)->keypair = keypair;
wg_packet_create_data(peer, packets.next);
return;
out_invalid:
WRITE_ONCE(keypair->sending.is_valid, false);
out_nokey:
wg_noise_keypair_put(keypair, false);
/* We orphan the packets if we're waiting on a handshake, so that they
* don't block a socket's pool.
*/
skb_queue_walk(&packets, skb)
skb_orphan(skb);
/* Then we put them back on the top of the queue. We're not too
* concerned about accidentally getting things a little out of order if
* packets are being added really fast, because this queue is for before
* packets can even be sent and it's small anyway.
*/
spin_lock_bh(&peer->staged_packet_queue.lock);
skb_queue_splice(&packets, &peer->staged_packet_queue);
spin_unlock_bh(&peer->staged_packet_queue.lock);
/* If we're exiting because there's something wrong with the key, it
* means we should initiate a new handshake.
*/
wg_packet_send_queued_handshake_initiation(peer, false);
}
] |
| 23/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 52197 output: 25 thoughts: 1345 Thoughts: **Analyzing the Data Race in WireGuard's TX Byte Counter** Okay, so I'm looking at this code and immediately see a potential data race. Specifically, the `tx_bytes` update within `wg_socket_send_skb_to_peer`. It's straightforward: `peer->tx_bytes += skb_len;` is called from `wg_packet_create_data_done`, which itself is triggered by `wg_packet_tx_worker`. The problem is, `wg_packet_tx_worker` is a worker function, meaning multiple threads (or a worker and the handshake thread) could potentially be trying to update that `tx_bytes` counter concurrently. Now, as an expert in this area, my first thought is, is this going to cause actual problems? Because it's just a statistics counter, and we're on a 64-bit system, memory corruption or torn values are unlikely. It's technically a race, so the statistics could be slightly inaccurate but it's probably benign as far as system stability goes. It's also only used for Netlink, so it's not critical for kernel functionality. Wait, I should confirm that `tx_bytes` is only used for Netlink. I've confirmed that already. Okay, so is there anything else that's being raced? Looking at the code, `endpoint_lock` protects access to the `endpoint` itself via a reader lock, so that part is fine. `send4` and `send6` also read from the `peer->endpoint`. Another thread that calls `wg_socket_set_peer_endpoint` writes to the `endpoint` via a write lock, so the endpoint access is protected. So the only unprotected access I see is `peer->tx_bytes += skb_len;`. Wait, what about `peer->rx_bytes += len;` in `update_rx_stats`? That one's also unprotected. Is there any reason this would be harmful? If someone's using these counters for billing, then they might be slightly off. But I know kernel statistics aren't usually guaranteed to be 100% accurate unless you're using atomics.
It's a bit surprising that a modern driver like WireGuard isn't using `atomic64_t` for these. Let me see how other drivers handle similar situations. Often drivers use `u64_stats_t` and `u64_stats_update_begin/end`, but that's typically for per-CPU stats. For per-peer stats, `atomic64_t` would be more appropriate for accuracy. Wait, I see `dev_sw_netstats_rx_add(peer->device->dev, len);` in `update_rx_stats`. Ah, this is the device-wide statistic, which *is* per-CPU and uses the proper macros. But the `peer->rx_bytes += len;` is per-peer and just a raw increment. I'd like to check WireGuard's source history or mailing list to see if this has been discussed. Unfortunately, I don't have access to that. So, I have to assume it's benign because it's only statistics. However, I should double-check that there aren’t other fields being raced. The report says "read-write to ... of 8 bytes." If it were a struct, it would be more bytes or multiple reports. Wait, the value changed: `0x14c -> 0x16c`. `0x14c` is 332. `0x16c` is 364. The difference is 32. That looks like a plausible packet or header size. Back to `wg_socket_send_skb_to_peer`: `peer->tx_bytes += skb_len;`. Yep, it’s definitely `tx_bytes`. Is it possible that `skb_len` is racing too? No, that's a local variable. So the race is purely on the counter. I know data races on statistics counters are often considered benign in the Linux kernel but KCSAN is enabled to find them. If the maintainer hasn't fixed it, they might consider it benign. Wireguard is usually very proactive about fixing things. Wait, I see in `drivers/net/wireguard/peer.h`: `u64 rx_bytes, tx_bytes;` and `drivers/net/wireguard/socket.c`: `if (likely(!ret)) peer->tx_bytes += skb_len;`. Let me see if I can check the latest kernel code. I don't have that specific version, but from what |
| 24/2 | 2026/02/05 10:26 | tool | read-file |
Args:map[File:drivers/net/wireguard/timers.c] Results: map[Contents:// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "timers.h"
#include "device.h"
#include "peer.h"
#include "queueing.h"
#include "socket.h"
/*
* - Timer for retransmitting the handshake if we don't hear back after
* `REKEY_TIMEOUT + jitter` ms.
*
* - Timer for sending empty packet if we have received a packet but after have
* not sent one for `KEEPALIVE_TIMEOUT` ms.
*
* - Timer for initiating new handshake if we have sent a packet but after have
* not received one (even empty) for `(KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) +
* jitter` ms.
*
* - Timer for zeroing out all ephemeral keys after `(REJECT_AFTER_TIME * 3)` ms
* if no new keys have been received.
*
* - Timer for, if enabled, sending an empty authenticated packet every user-
* specified seconds.
*/
static inline void mod_peer_timer(struct wg_peer *peer,
struct timer_list *timer,
unsigned long expires)
{
rcu_read_lock_bh();
if (likely(netif_running(peer->device->dev) &&
!READ_ONCE(peer->is_dead)))
mod_timer(timer, expires);
rcu_read_unlock_bh();
}
static void wg_expired_retransmit_handshake(struct timer_list *timer)
{
struct wg_peer *peer = timer_container_of(peer, timer,
timer_retransmit_handshake);
if (peer->timer_handshake_attempts > MAX_TIMER_HANDSHAKES) {
pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d attempts, giving up\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr, (int)MAX_TIMER_HANDSHAKES + 2);
timer_delete(&peer->timer_send_keepalive);
/* We drop all packets without a keypair and don't try again,
* if we try unsuccessfully for too long to make a handshake.
*/
wg_packet_purge_staged_packets(peer);
/* We set a timer for destroying any residue that might be left
* of a partial exchange.
*/
if (!timer_pending(&peer->timer_zero_key_material))
mod_peer_timer(peer, &peer->timer_zero_key_material,
jiffies + REJECT_AFTER_TIME * 3 * HZ);
} else {
++peer->timer_handshake_attempts;
pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d seconds, retrying (try %d)\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr, (int)REKEY_TIMEOUT,
peer->timer_handshake_attempts + 1);
/* We clear the endpoint address src address, in case this is
* the cause of trouble.
*/
wg_socket_clear_peer_endpoint_src(peer);
wg_packet_send_queued_handshake_initiation(peer, true);
}
}
static void wg_expired_send_keepalive(struct timer_list *timer)
{
struct wg_peer *peer = timer_container_of(peer, timer,
timer_send_keepalive);
wg_packet_send_keepalive(peer);
if (peer->timer_need_another_keepalive) {
peer->timer_need_another_keepalive = false;
mod_peer_timer(peer, &peer->timer_send_keepalive,
jiffies + KEEPALIVE_TIMEOUT * HZ);
}
}
static void wg_expired_new_handshake(struct timer_list *timer)
{
struct wg_peer *peer = timer_container_of(peer, timer,
timer_new_handshake);
pr_debug("%s: Retrying handshake with peer %llu (%pISpfsc) because we stopped hearing back after %d seconds\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr, (int)(KEEPALIVE_TIMEOUT + REKEY_TIMEOUT));
/* We clear the endpoint address src address, in case this is the cause
* of trouble.
*/
wg_socket_clear_peer_endpoint_src(peer);
wg_packet_send_queued_handshake_initiation(peer, false);
}
static void wg_expired_zero_key_material(struct timer_list *timer)
{
struct wg_peer *peer = timer_container_of(peer, timer,
timer_zero_key_material);
rcu_read_lock_bh();
if (!READ_ONCE(peer->is_dead)) {
wg_peer_get(peer);
if (!queue_work(peer->device->handshake_send_wq,
&peer->clear_peer_work))
/* If the work was already on the queue, we want to drop
* the extra reference.
*/
wg_peer_put(peer);
}
rcu_read_unlock_bh();
}
static void wg_queued_expired_zero_key_material(struct work_struct *work)
{
struct wg_peer *peer = container_of(work, struct wg_peer,
clear_peer_work);
pr_debug("%s: Zeroing out all keys for peer %llu (%pISpfsc), since we haven't received a new one in %d seconds\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr, (int)REJECT_AFTER_TIME * 3);
wg_noise_handshake_clear(&peer->handshake);
wg_noise_keypairs_clear(&peer->keypairs);
wg_peer_put(peer);
}
static void wg_expired_send_persistent_keepalive(struct timer_list *timer)
{
struct wg_peer *peer = timer_container_of(peer, timer,
timer_persistent_keepalive);
if (likely(peer->persistent_keepalive_interval))
wg_packet_send_keepalive(peer);
}
/* Should be called after an authenticated data packet is sent. */
void wg_timers_data_sent(struct wg_peer *peer)
{
if (!timer_pending(&peer->timer_new_handshake))
mod_peer_timer(peer, &peer->timer_new_handshake,
jiffies + (KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) * HZ +
get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
}
/* Should be called after an authenticated data packet is received. */
void wg_timers_data_received(struct wg_peer *peer)
{
if (likely(netif_running(peer->device->dev))) {
if (!timer_pending(&peer->timer_send_keepalive))
mod_peer_timer(peer, &peer->timer_send_keepalive,
jiffies + KEEPALIVE_TIMEOUT * HZ);
else
peer->timer_need_another_keepalive = true;
}
}
/* Should be called after any type of authenticated packet is sent, whether
* keepalive, data, or handshake.
*/
void wg_timers_any_authenticated_packet_sent(struct wg_peer *peer)
{
timer_delete(&peer->timer_send_keepalive);
}
/* Should be called after any type of authenticated packet is received, whether
* keepalive, data, or handshake.
*/
void wg_timers_any_authenticated_packet_received(struct wg_peer *peer)
{
timer_delete(&peer->timer_new_handshake);
}
/* Should be called after a handshake initiation message is sent. */
void wg_timers_handshake_initiated(struct wg_peer *peer)
{
mod_peer_timer(peer, &peer->timer_retransmit_handshake,
jiffies + REKEY_TIMEOUT * HZ +
get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
}
/* Should be called after a handshake response message is received and processed
* or when getting key confirmation via the first data message.
*/
void wg_timers_handshake_complete(struct wg_peer *peer)
{
timer_delete(&peer->timer_retransmit_handshake);
peer->timer_handshake_attempts = 0;
peer->sent_lastminute_handshake = false;
ktime_get_real_ts64(&peer->walltime_last_handshake);
}
/* Should be called after an ephemeral key is created, which is before sending a
* handshake response or after receiving a handshake response.
*/
void wg_timers_session_derived(struct wg_peer *peer)
{
mod_peer_timer(peer, &peer->timer_zero_key_material,
jiffies + REJECT_AFTER_TIME * 3 * HZ);
}
/* Should be called before a packet with authentication, whether
 * keepalive, data, or handshake is sent, or after one is received.
 */
void wg_timers_any_authenticated_packet_traversal(struct wg_peer *peer)
{
if (peer->persistent_keepalive_interval)
mod_peer_timer(peer, &peer->timer_persistent_keepalive,
jiffies + peer->persistent_keepalive_interval * HZ);
}
void wg_timers_init(struct wg_peer *peer)
{
timer_setup(&peer->timer_retransmit_handshake,
wg_expired_retransmit_handshake, 0);
timer_setup(&peer->timer_send_keepalive, wg_expired_send_keepalive, 0);
timer_setup(&peer->timer_new_handshake, wg_expired_new_handshake, 0);
timer_setup(&peer->timer_zero_key_material,
wg_expired_zero_key_material, 0);
timer_setup(&peer->timer_persistent_keepalive,
wg_expired_send_persistent_keepalive, 0);
INIT_WORK(&peer->clear_peer_work, wg_queued_expired_zero_key_material);
peer->timer_handshake_attempts = 0;
peer->sent_lastminute_handshake = false;
peer->timer_need_another_keepalive = false;
}
void wg_timers_stop(struct wg_peer *peer)
{
timer_delete_sync(&peer->timer_retransmit_handshake);
timer_delete_sync(&peer->timer_send_keepalive);
timer_delete_sync(&peer->timer_new_handshake);
timer_delete_sync(&peer->timer_zero_key_material);
timer_delete_sync(&peer->timer_persistent_keepalive);
flush_work(&peer->clear_peer_work);
}
] |
| 25/2 | 2026/02/05 10:26 | llm | expert |
Model: gemini-3-flash-preview Tokens: input: 57363 output: 17 thoughts: 380 Thoughts: **Analyzing Potential Race Conditions in WireGuard Statistics**

Okay, so I'm looking at how `walltime_last_handshake` gets updated in the WireGuard code. It looks like `wg_timers_handshake_complete` calls `ktime_get_real_ts64` to update this timestamp, and importantly, it does so *without* any locks. Now, the initial thought is that multiple CPUs could potentially be completing handshakes for the same peer simultaneously. While unlikely given the index lookup and partial serialization, it's a possibility I need to consider.

Wait a minute... `wg_noise_handshake_consume_response` removes the handshake from the hashtable index, so *only* one thread should actually begin the session. Okay, that's one race avoided. But here's where it gets interesting. `wg_timers_handshake_complete` is *also* called from `wg_packet_consume_data_done`. And specifically, the call happens if `wg_noise_received_with_keypair` returns true. That function determines if the first packet with the new keypair has been received. This scenario is where the race condition is possible, because multiple packets could arrive simultaneously, and each CPU would call `wg_noise_received_with_keypair`. So, it seems like `walltime_last_handshake` is, in fact, potentially subject to a race condition.

However, after looking at the code, I see that `walltime_last_handshake` is *just* for statistics, specifically for exporting the time of the last handshake via netlink. This is good. I can probably accept that this is a benign race condition, as it's not critical for the core functionality. The fact that the author has left multiple races like this in place for statistics fields suggests that the risk is deemed acceptable.

I want to check another statistics field. I'm going to run `grepper` to see if `tx_bytes` is used in any place other than the statistics code in the WireGuard driver. 
This will give me a better overview of whether this is likely a pattern, or if it is isolated. |
| 26/2 | 2026/02/05 10:26 | tool | grepper |
Args: map[Expression:tx_bytes] Results: map[Output:
Full output is too long, showing 500 out of 8690 lines.
Use more precise expression if possible.
[Documentation/ABI/testing/sysfs-class-net-statistics=129=Description:
--
Documentation/ABI/testing/sysfs-class-net-statistics-134-
Documentation/ABI/testing/sysfs-class-net-statistics:135:What: /sys/class/net/<iface>/statistics/tx_bytes
Documentation/ABI/testing/sysfs-class-net-statistics-136-Date: April 2005
--
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst=175=received.
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst-176-
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst:177:"tx_bytes" is equivalent to aOctetsTransmittedOK defined in IEEE 802.3-2012,
Documentation/networking/device_drivers/ethernet/altera/altera_tse.rst-178-Section 5.2.2.1.8. This statistic is the count of data and pad bytes
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst=743=Counters on the NIC port that is connected to a eSwitch.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-879-
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst:880: * - `tx_bytes`
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-881- - Representor only: bytes transmitted, that were handled by the hypervisor.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst=894=like flow control, FEC and more.
--
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-917-
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst:918: * - `tx_bytes_phy`
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst-919- - The number of bytes transmitted on the physical port.
--
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst=263=Following device statistics are available through the ``ethtool -S`` interface:
--
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst-343-
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst:344: * - dev_tx_bytes
Documentation/networking/device_drivers/ethernet/netronome/nfp.rst-345- - 12
--
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst=144=Driver port specific::
--
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst-146- tx_packets: 12
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst:147: tx_bytes: 964
Documentation/networking/device_drivers/ethernet/pensando/ionic.rst-148- rx_packets: 5
--
arch/m68k/emu/nfeth.c=144=static int nfeth_xmit(struct sk_buff *skb, struct net_device *dev)
--
arch/m68k/emu/nfeth.c-163- dev->stats.tx_packets++;
arch/m68k/emu/nfeth.c:164: dev->stats.tx_bytes += len;
arch/m68k/emu/nfeth.c-165-
--
arch/s390/appldata/appldata_net_sum.c=29=struct appldata_net_sum_data {
--
arch/s390/appldata/appldata_net_sum.c-44- u64 rx_bytes; /* total bytes received */
arch/s390/appldata/appldata_net_sum.c:45: u64 tx_bytes; /* total bytes transmitted */
arch/s390/appldata/appldata_net_sum.c-46- u64 rx_errors; /* bad packets received */
--
arch/s390/appldata/appldata_net_sum.c=59=static void appldata_get_net_sum_data(void *data)
--
arch/s390/appldata/appldata_net_sum.c-63- struct net_device *dev;
arch/s390/appldata/appldata_net_sum.c:64: unsigned long rx_packets, tx_packets, rx_bytes, tx_bytes, rx_errors,
arch/s390/appldata/appldata_net_sum.c-65- tx_errors, rx_dropped, tx_dropped, collisions;
--
arch/s390/appldata/appldata_net_sum.c-73- rx_bytes = 0;
arch/s390/appldata/appldata_net_sum.c:74: tx_bytes = 0;
arch/s390/appldata/appldata_net_sum.c-75- rx_errors = 0;
--
arch/s390/appldata/appldata_net_sum.c-89- rx_bytes += stats->rx_bytes;
arch/s390/appldata/appldata_net_sum.c:90: tx_bytes += stats->tx_bytes;
arch/s390/appldata/appldata_net_sum.c-91- rx_errors += stats->rx_errors;
--
arch/s390/appldata/appldata_net_sum.c-103- net_data->rx_bytes = rx_bytes;
arch/s390/appldata/appldata_net_sum.c:104: net_data->tx_bytes = tx_bytes;
arch/s390/appldata/appldata_net_sum.c-105- net_data->rx_errors = rx_errors;
--
arch/s390/include/asm/pci.h=53=struct zpci_fmb_fmt1 {
--
arch/s390/include/asm/pci.h-55- u64 rx_packets;
arch/s390/include/asm/pci.h:56: u64 tx_bytes;
arch/s390/include/asm/pci.h-57- u64 tx_packets;
--
arch/s390/include/asm/pci.h=65=struct zpci_fmb_fmt3 {
arch/s390/include/asm/pci.h:66: u64 tx_bytes;
arch/s390/include/asm/pci.h-67-};
--
arch/s390/pci/pci_debug.c=92=static int pci_perf_show(struct seq_file *m, void *v)
--
arch/s390/pci/pci_debug.c-130- pci_fmb_show(m, pci_fmt3_names, ARRAY_SIZE(pci_fmt3_names),
arch/s390/pci/pci_debug.c:131: &zdev->fmb->fmt3.tx_bytes);
arch/s390/pci/pci_debug.c-132- break;
--
arch/um/drivers/vector_kern.c=370=static int consume_vector_skbs(struct vector_queue *qi, int count)
--
arch/um/drivers/vector_kern.c-384- }
arch/um/drivers/vector_kern.c:385: qi->dev->stats.tx_bytes += bytes_compl;
arch/um/drivers/vector_kern.c-386- qi->dev->stats.tx_packets += count;
--
arch/um/drivers/vector_kern.c=906=static int writev_tx(struct vector_private *vp, struct sk_buff *skb)
--
arch/um/drivers/vector_kern.c-929- if (pkt_len > 0) {
arch/um/drivers/vector_kern.c:930: vp->dev->stats.tx_bytes += skb->len;
arch/um/drivers/vector_kern.c-931- vp->dev->stats.tx_packets++;
--
arch/xtensa/platforms/iss/network.c=384=static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
--
arch/xtensa/platforms/iss/network.c-395- lp->stats.tx_packets++;
arch/xtensa/platforms/iss/network.c:396: lp->stats.tx_bytes += skb->len;
arch/xtensa/platforms/iss/network.c-397- spin_unlock_bh(&lp->lock);
--
drivers/firewire/net.c=821=static void fwnet_transmit_packet_done(struct fwnet_packet_task *ptask)
--
drivers/firewire/net.c-838- dev->netdev->stats.tx_packets++;
drivers/firewire/net.c:839: dev->netdev->stats.tx_bytes += skb->len;
drivers/firewire/net.c-840- }
--
drivers/hid/intel-ish-hid/ipc/ipc.c=270=static int write_ipc_from_queue(struct ishtp_device *dev)
--
drivers/hid/intel-ish-hid/ipc/ipc.c-346- ++dev->ipc_tx_cnt;
drivers/hid/intel-ish-hid/ipc/ipc.c:347: dev->ipc_tx_bytes_cnt += IPC_HEADER_GET_LENGTH(doorbell_val);
drivers/hid/intel-ish-hid/ipc/ipc.c-348-
--
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h=155=struct ishtp_device {
--
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h-259- unsigned int ipc_tx_cnt;
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h:260: unsigned long long ipc_tx_bytes_cnt;
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h-261-
--
drivers/hsi/clients/ssi_protocol.c=970=static netdev_tx_t ssip_pn_xmit(struct sk_buff *skb, struct net_device *dev)
--
drivers/hsi/clients/ssi_protocol.c-1026- dev->stats.tx_packets++;
drivers/hsi/clients/ssi_protocol.c:1027: dev->stats.tx_bytes += skb->len;
drivers/hsi/clients/ssi_protocol.c-1028-
--
drivers/i2c/busses/i2c-bcm-iproc.c=176=struct bcm_iproc_i2c_dev {
--
drivers/i2c/busses/i2c-bcm-iproc.c-199- /* bytes that have been transferred */
drivers/i2c/busses/i2c-bcm-iproc.c:200: unsigned int tx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-201- /* bytes that have been read */
--
drivers/i2c/busses/i2c-bcm-iproc.c=568=static void bcm_iproc_i2c_send(struct bcm_iproc_i2c_dev *iproc_i2c)
--
drivers/i2c/busses/i2c-bcm-iproc.c-570- struct i2c_msg *msg = iproc_i2c->msg;
drivers/i2c/busses/i2c-bcm-iproc.c:571: unsigned int tx_bytes = msg->len - iproc_i2c->tx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-572- unsigned int i;
--
drivers/i2c/busses/i2c-bcm-iproc.c-575- /* can only fill up to the FIFO size */
drivers/i2c/busses/i2c-bcm-iproc.c:576: tx_bytes = min_t(unsigned int, tx_bytes, M_TX_RX_FIFO_SIZE);
drivers/i2c/busses/i2c-bcm-iproc.c:577: for (i = 0; i < tx_bytes; i++) {
drivers/i2c/busses/i2c-bcm-iproc.c-578- /* start from where we left over */
drivers/i2c/busses/i2c-bcm-iproc.c:579: unsigned int idx = iproc_i2c->tx_bytes + i;
drivers/i2c/busses/i2c-bcm-iproc.c-580-
--
drivers/i2c/busses/i2c-bcm-iproc.c-605- /* update number of transferred bytes */
drivers/i2c/busses/i2c-bcm-iproc.c:606: iproc_i2c->tx_bytes += tx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-607-}
--
drivers/i2c/busses/i2c-bcm-iproc.c=829=static int bcm_iproc_i2c_xfer_internal(struct bcm_iproc_i2c_dev *iproc_i2c,
--
drivers/i2c/busses/i2c-bcm-iproc.c-834- u32 val, tmp, val_intr_en;
drivers/i2c/busses/i2c-bcm-iproc.c:835: unsigned int tx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-836- struct i2c_msg *msg = &msgs[0];
--
drivers/i2c/busses/i2c-bcm-iproc.c-855- */
drivers/i2c/busses/i2c-bcm-iproc.c:856: tx_bytes = min_t(unsigned int, msg->len, M_TX_RX_FIFO_SIZE - 1);
drivers/i2c/busses/i2c-bcm-iproc.c-857- if (!(msg->flags & I2C_M_RD)) {
drivers/i2c/busses/i2c-bcm-iproc.c:858: for (i = 0; i < tx_bytes; i++) {
drivers/i2c/busses/i2c-bcm-iproc.c-859- val = msg->buf[i];
--
drivers/i2c/busses/i2c-bcm-iproc.c-866- }
drivers/i2c/busses/i2c-bcm-iproc.c:867: iproc_i2c->tx_bytes = tx_bytes;
drivers/i2c/busses/i2c-bcm-iproc.c-868- }
--
drivers/i2c/busses/i2c-bcm-iproc.c-903- if (!process_call && !(msg->flags & I2C_M_RD) &&
drivers/i2c/busses/i2c-bcm-iproc.c:904: msg->len > iproc_i2c->tx_bytes)
drivers/i2c/busses/i2c-bcm-iproc.c-905- val_intr_en |= BIT(IE_M_TX_UNDERRUN_SHIFT);
--
drivers/i2c/busses/i2c-npcm7xx.c=525=struct npcm_i2c_data {
--
drivers/i2c/busses/i2c-npcm7xx.c-527- u32 segctl_init_val;
drivers/i2c/busses/i2c-npcm7xx.c:528: u8 txf_sts_tx_bytes;
drivers/i2c/busses/i2c-npcm7xx.c-529- u8 rxf_sts_rx_bytes;
--
drivers/i2c/busses/i2c-npcm7xx.c=533=static const struct npcm_i2c_data npxm7xx_i2c_data = {
--
drivers/i2c/busses/i2c-npcm7xx.c-535- .segctl_init_val = 0x0333F000,
drivers/i2c/busses/i2c-npcm7xx.c:536: .txf_sts_tx_bytes = GENMASK(4, 0),
drivers/i2c/busses/i2c-npcm7xx.c-537- .rxf_sts_rx_bytes = GENMASK(4, 0),
--
drivers/i2c/busses/i2c-npcm7xx.c=541=static const struct npcm_i2c_data npxm8xx_i2c_data = {
--
drivers/i2c/busses/i2c-npcm7xx.c-543- .segctl_init_val = 0x9333F000,
drivers/i2c/busses/i2c-npcm7xx.c:544: .txf_sts_tx_bytes = GENMASK(5, 0),
drivers/i2c/busses/i2c-npcm7xx.c-545- .rxf_sts_rx_bytes = GENMASK(5, 0),
--
drivers/i2c/busses/i2c-npcm7xx.c=715=static inline bool npcm_i2c_tx_fifo_empty(struct npcm_i2c *bus)
--
drivers/i2c/busses/i2c-npcm7xx.c-720- /* check if TX FIFO is not empty */
drivers/i2c/busses/i2c-npcm7xx.c:721: if ((tx_fifo_sts & bus->data->txf_sts_tx_bytes) == 0)
drivers/i2c/busses/i2c-npcm7xx.c-722- return false;
--
drivers/i2c/busses/i2c-npcm7xx.c=1015=static u8 npcm_i2c_fifo_usage(struct npcm_i2c *bus)
--
drivers/i2c/busses/i2c-npcm7xx.c-1017- if (bus->operation == I2C_WRITE_OPER)
drivers/i2c/busses/i2c-npcm7xx.c:1018: return (bus->data->txf_sts_tx_bytes &
drivers/i2c/busses/i2c-npcm7xx.c-1019- ioread8(bus->reg + NPCM_I2CTXF_STS));
--
drivers/i2c/busses/i2c-npcm7xx.c=1264=static void npcm_i2c_slave_wr_buf_sync(struct npcm_i2c *bus)
--
drivers/i2c/busses/i2c-npcm7xx.c-1267-
drivers/i2c/busses/i2c-npcm7xx.c:1268: left_in_fifo = bus->data->txf_sts_tx_bytes &
drivers/i2c/busses/i2c-npcm7xx.c-1269- ioread8(bus->reg + NPCM_I2CTXF_STS);
--
drivers/infiniband/hw/bnxt_re/hw_counters.c=53=static const struct rdma_stat_desc bnxt_re_stat_descs[] = {
--
drivers/infiniband/hw/bnxt_re/hw_counters.c-56- [BNXT_RE_TX_PKTS].name = "tx_pkts",
drivers/infiniband/hw/bnxt_re/hw_counters.c:57: [BNXT_RE_TX_BYTES].name = "tx_bytes",
drivers/infiniband/hw/bnxt_re/hw_counters.c-58- [BNXT_RE_RECOVERABLE_ERRORS].name = "recoverable_errors",
--
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h=611=struct efa_admin_basic_stats {
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h:612: u64 tx_bytes;
drivers/infiniband/hw/efa/efa_admin_cmds_defs.h-613-
--
drivers/infiniband/hw/efa/efa_com_cmd.c=765=int efa_com_get_stats(struct efa_com_dev *edev,
--
drivers/infiniband/hw/efa/efa_com_cmd.c-799- bs = &resp.u.basic_stats;
drivers/infiniband/hw/efa/efa_com_cmd.c:800: result->basic_stats.tx_bytes = bs->tx_bytes;
drivers/infiniband/hw/efa/efa_com_cmd.c-801- result->basic_stats.tx_pkts = bs->tx_pkts;
--
drivers/infiniband/hw/efa/efa_com_cmd.h=257=struct efa_com_basic_stats {
drivers/infiniband/hw/efa/efa_com_cmd.h:258: u64 tx_bytes;
drivers/infiniband/hw/efa/efa_com_cmd.h-259- u64 tx_pkts;
--
drivers/infiniband/hw/efa/efa_verbs.c=29=struct efa_user_mmap_entry {
--
drivers/infiniband/hw/efa/efa_verbs.c-49-#define EFA_DEFINE_PORT_STATS(op) \
drivers/infiniband/hw/efa/efa_verbs.c:50: op(EFA_TX_BYTES, "tx_bytes") \
drivers/infiniband/hw/efa/efa_verbs.c-51- op(EFA_TX_PKTS, "tx_pkts") \
--
drivers/infiniband/hw/efa/efa_verbs.c=2227=static int efa_fill_port_stats(struct efa_dev *dev, struct rdma_hw_stats *stats,
--
drivers/infiniband/hw/efa/efa_verbs.c-2246- bs = &result.basic_stats;
drivers/infiniband/hw/efa/efa_verbs.c:2247: stats->value[EFA_TX_BYTES] = bs->tx_bytes;
drivers/infiniband/hw/efa/efa_verbs.c-2248- stats->value[EFA_TX_PKTS] = bs->tx_pkts;
--
drivers/infiniband/hw/erdma/erdma_hw.h=463=struct erdma_cmdq_query_stats_resp {
--
drivers/infiniband/hw/erdma/erdma_hw.h-467- u64 tx_packets_cnt;
drivers/infiniband/hw/erdma/erdma_hw.h:468: u64 tx_bytes_cnt;
drivers/infiniband/hw/erdma/erdma_hw.h-469- u64 tx_drop_packets_cnt;
--
drivers/infiniband/hw/erdma/erdma_verbs.c=2072=static const struct rdma_stat_desc erdma_descs[] = {
--
drivers/infiniband/hw/erdma/erdma_verbs.c-2074- [ERDMA_STATS_TX_PACKETS_CNT].name = "tx_packets_cnt",
drivers/infiniband/hw/erdma/erdma_verbs.c:2075: [ERDMA_STATS_TX_BYTES_CNT].name = "tx_bytes_cnt",
drivers/infiniband/hw/erdma/erdma_verbs.c-2076- [ERDMA_STATS_TX_DISABLE_DROP_CNT].name = "tx_disable_drop_cnt",
--
drivers/infiniband/hw/hfi1/vnic_main.c=37=static void hfi1_vnic_update_stats(struct hfi1_vnic_vport_info *vinfo,
--
drivers/infiniband/hw/hfi1/vnic_main.c-54- stats->netstats.tx_packets += qnstats->tx_packets;
drivers/infiniband/hw/hfi1/vnic_main.c:55: stats->netstats.tx_bytes += qnstats->tx_bytes;
drivers/infiniband/hw/hfi1/vnic_main.c-56- }
--
drivers/infiniband/hw/hfi1/vnic_main.c-85- netdev->stats.tx_packets = stats->netstats.tx_packets;
drivers/infiniband/hw/hfi1/vnic_main.c:86: netdev->stats.tx_bytes = stats->netstats.tx_bytes;
drivers/infiniband/hw/hfi1/vnic_main.c-87- netdev->stats.tx_fifo_errors = stats->netstats.tx_fifo_errors;
--
drivers/infiniband/hw/hfi1/vnic_main.c=123=static void hfi1_vnic_update_tx_counters(struct hfi1_vnic_vport_info *vinfo,
--
drivers/infiniband/hw/hfi1/vnic_main.c-131- stats->netstats.tx_packets++;
drivers/infiniband/hw/hfi1/vnic_main.c:132: stats->netstats.tx_bytes += skb->len + ETH_FCS_LEN;
drivers/infiniband/hw/hfi1/vnic_main.c-133-
--
drivers/infiniband/hw/mana/counters.c=8=static const struct rdma_stat_desc mana_ib_port_stats_desc[] = {
--
drivers/infiniband/hw/mana/counters.c-35- [MANA_IB_DUP_RX_REQ].name = "dup_rx_requests",
drivers/infiniband/hw/mana/counters.c:36: [MANA_IB_TX_BYTES].name = "tx_bytes",
drivers/infiniband/hw/mana/counters.c-37- [MANA_IB_RX_BYTES].name = "rx_bytes",
--
drivers/infiniband/hw/mana/counters.c=100=static int mana_ib_get_hw_port_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
--
drivers/infiniband/hw/mana/counters.c-160- stats->value[MANA_IB_DUP_RX_REQ] = resp.dup_rx_req;
drivers/infiniband/hw/mana/counters.c:161: stats->value[MANA_IB_TX_BYTES] = resp.tx_bytes;
drivers/infiniband/hw/mana/counters.c-162- stats->value[MANA_IB_RX_BYTES] = resp.rx_bytes;
--
drivers/infiniband/hw/mana/mana_ib.h=501=struct mana_rnic_query_vf_cntrs_resp {
--
drivers/infiniband/hw/mana/mana_ib.h-529- u64 dup_rx_req;
drivers/infiniband/hw/mana/mana_ib.h:530: u64 tx_bytes;
drivers/infiniband/hw/mana/mana_ib.h-531- u64 rx_bytes;
--
drivers/infiniband/hw/mlx4/mad.c=892=static void edit_counter(struct mlx4_counter *cnt, void *counters,
--
drivers/infiniband/hw/mlx4/mad.c-901- ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
drivers/infiniband/hw/mlx4/mad.c:902: (be64_to_cpu(cnt->tx_bytes) >> 2));
drivers/infiniband/hw/mlx4/mad.c-903- ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
--
drivers/infiniband/hw/mlx4/mad.c-916- pma_cnt_ext->port_xmit_data =
drivers/infiniband/hw/mlx4/mad.c:917: cpu_to_be64(be64_to_cpu(cnt->tx_bytes) >> 2);
drivers/infiniband/hw/mlx4/mad.c-918- pma_cnt_ext->port_rcv_data =
--
drivers/infiniband/hw/mlx5/counters.c=155=static const struct mlx5_ib_counter packets_op_cnts[] = {
drivers/infiniband/hw/mlx5/counters.c-156- INIT_OP_COUNTER(rdma_tx_packets, RDMA_TX_PACKETS),
drivers/infiniband/hw/mlx5/counters.c:157: INIT_OP_COUNTER(rdma_tx_bytes, RDMA_TX_BYTES),
drivers/infiniband/hw/mlx5/counters.c-158- INIT_OP_COUNTER(rdma_rx_packets, RDMA_RX_PACKETS),
--
drivers/infiniband/hw/mlx5/ib_virt.c=120=int mlx5_ib_get_vf_stats(struct ib_device *device, int vf,
--
drivers/infiniband/hw/mlx5/ib_virt.c-142- stats->rx_bytes = MLX5_GET64_PR(query_vport_counter_out, out, received_ib_unicast.octets);
drivers/infiniband/hw/mlx5/ib_virt.c:143: stats->tx_bytes = MLX5_GET64_PR(query_vport_counter_out, out, transmitted_ib_unicast.octets);
drivers/infiniband/hw/mlx5/ib_virt.c-144- stats->multicast = MLX5_GET64_PR(query_vport_counter_out, out, received_ib_multicast.packets);
--
drivers/infiniband/ulp/ipoib/ipoib_cm.c=794=void ipoib_cm_handle_tx_wc(struct net_device *dev, struct ib_wc *wc)
--
drivers/infiniband/ulp/ipoib/ipoib_cm.c-816- ++dev->stats.tx_packets;
drivers/infiniband/ulp/ipoib/ipoib_cm.c:817: dev->stats.tx_bytes += tx_req->skb->len;
drivers/infiniband/ulp/ipoib/ipoib_cm.c-818-
--
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c=48=static const struct ipoib_stats ipoib_gstrings_stats[] = {
--
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c-51- IPOIB_NETDEV_STAT(rx_bytes),
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c:52: IPOIB_NETDEV_STAT(tx_bytes),
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c-53- IPOIB_NETDEV_STAT(tx_errors),
--
drivers/infiniband/ulp/ipoib/ipoib_ib.c=386=static void ipoib_ib_handle_tx_wc(struct net_device *dev, struct ib_wc *wc)
--
drivers/infiniband/ulp/ipoib/ipoib_ib.c-405- ++dev->stats.tx_packets;
drivers/infiniband/ulp/ipoib/ipoib_ib.c:406: dev->stats.tx_bytes += tx_req->skb->len;
drivers/infiniband/ulp/ipoib/ipoib_ib.c-407-
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h=266=struct opa_veswport_mactable {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-281- * @rx_packets: receive packets
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h:282: * @tx_bytes: transmit bytes
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-283- * @rx_bytes: receive bytes
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h=310=struct opa_veswport_summary_counters {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-318- __be64 rx_packets;
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h:319: __be64 tx_bytes;
drivers/infiniband/ulp/opa_vnic/opa_vnic_encap.h-320- __be64 rx_bytes;
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c=69=static struct vnic_stats vnic_gstrings_stats[] = {
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c-73- {"rx_bytes", VNIC_STAT(netstats.rx_bytes)},
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c:74: {"tx_bytes", VNIC_STAT(netstats.tx_bytes)},
drivers/infiniband/ulp/opa_vnic/opa_vnic_ethtool.c-75- {"rx_errors", VNIC_STAT(netstats.rx_errors)},
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c=84=void opa_vnic_get_summary_counters(struct opa_vnic_adapter *adapter,
--
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c-103- cntrs->rx_packets = cpu_to_be64(vstats.netstats.rx_packets);
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c:104: cntrs->tx_bytes = cpu_to_be64(vstats.netstats.tx_bytes);
drivers/infiniband/ulp/opa_vnic/opa_vnic_vema_iface.c-105- cntrs->rx_bytes = cpu_to_be64(vstats.netstats.rx_bytes);
--
drivers/media/pci/cobalt/cobalt-i2c.c=82=cobalt_i2c_regs(struct cobalt *cobalt, unsigned idx)
--
drivers/media/pci/cobalt/cobalt-i2c.c-106- */
drivers/media/pci/cobalt/cobalt-i2c.c:107:static int cobalt_tx_bytes(struct cobalt_i2c_regs __iomem *regs,
drivers/media/pci/cobalt/cobalt-i2c.c-108- struct i2c_adapter *adap, bool start, bool stop,
--
drivers/media/pci/cobalt/cobalt-i2c.c=222=static int cobalt_stop(struct cobalt_i2c_regs __iomem *regs,
--
drivers/media/pci/cobalt/cobalt-i2c.c-226-
drivers/media/pci/cobalt/cobalt-i2c.c:227: return cobalt_tx_bytes(regs, adap, true, true, &data, 1);
drivers/media/pci/cobalt/cobalt-i2c.c-228-}
--
drivers/media/pci/cobalt/cobalt-i2c.c=230=static int cobalt_xfer(struct i2c_adapter *adap,
--
drivers/media/pci/cobalt/cobalt-i2c.c-253- for (j = 0; j < adap->retries; j++) {
drivers/media/pci/cobalt/cobalt-i2c.c:254: ret = cobalt_tx_bytes(regs, adap, true, false,
drivers/media/pci/cobalt/cobalt-i2c.c-255- &addr, 1);
--
drivers/media/pci/cobalt/cobalt-i2c.c-271- /* write bytes from buffer */
drivers/media/pci/cobalt/cobalt-i2c.c:272: ret = cobalt_tx_bytes(regs, adap, false, stop,
drivers/media/pci/cobalt/cobalt-i2c.c-273- pmsg->buf, pmsg->len);
--
drivers/message/fusion/mptlan.c=566=mpt_lan_send_turbo(struct net_device *dev, u32 tmsg)
--
drivers/message/fusion/mptlan.c-577- dev->stats.tx_packets++;
drivers/message/fusion/mptlan.c:578: dev->stats.tx_bytes += sent->len;
drivers/message/fusion/mptlan.c-579-
--
drivers/message/fusion/mptlan.c=599=mpt_lan_send_reply(struct net_device *dev, LANSendReply_t *pSendRep)
--
drivers/message/fusion/mptlan.c-643- sent = priv->SendCtl[ctx].skb;
drivers/message/fusion/mptlan.c:644: dev->stats.tx_bytes += sent->len;
drivers/message/fusion/mptlan.c-645-
--
drivers/misc/sgi-xp/xpnet.c=412=xpnet_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
--
drivers/misc/sgi-xp/xpnet.c-485- dev->stats.tx_packets++;
drivers/misc/sgi-xp/xpnet.c:486: dev->stats.tx_bytes += skb->len;
drivers/misc/sgi-xp/xpnet.c-487-
--
drivers/net/arcnet/arcnet.c=696=netdev_tx_t arcnet_send_packet(struct sk_buff *skb,
--
drivers/net/arcnet/arcnet.c-743- */
drivers/net/arcnet/arcnet.c:744: dev->stats.tx_bytes += skb->len;
drivers/net/arcnet/arcnet.c-745- } else {
--
drivers/net/arcnet/arcnet.c=854=irqreturn_t arcnet_interrupt(int irq, void *dev_id)
--
drivers/net/arcnet/arcnet.c-1007- /* that was the last segment */
drivers/net/arcnet/arcnet.c:1008: dev->stats.tx_bytes += lp->outgoing.skb->len;
drivers/net/arcnet/arcnet.c-1009- if (!lp->outgoing.proto->ack_tx) {
--
drivers/net/bonding/bond_alb.c=74=static inline void tlb_init_table_entry(struct tlb_client_info *entry, int save_load)
--
drivers/net/bonding/bond_alb.c-76- if (save_load) {
drivers/net/bonding/bond_alb.c:77: entry->load_history = 1 + entry->tx_bytes /
drivers/net/bonding/bond_alb.c-78- BOND_TLB_REBALANCE_INTERVAL;
drivers/net/bonding/bond_alb.c:79: entry->tx_bytes = 0;
drivers/net/bonding/bond_alb.c-80- }
--
drivers/net/bonding/bond_alb.c=191=static struct slave *__tlb_choose_channel(struct bonding *bond, u32 hash_index,
--
drivers/net/bonding/bond_alb.c-221- if (assigned_slave)
drivers/net/bonding/bond_alb.c:222: hash_table[hash_index].tx_bytes += skb_len;
drivers/net/bonding/bond_alb.c-223-
--
drivers/net/caif/caif_serial.c=194=static int handle_tx(struct ser_device *ser)
--
drivers/net/caif/caif_serial.c-228- ser->dev->stats.tx_packets++;
drivers/net/caif/caif_serial.c:229: ser->dev->stats.tx_bytes += tty_wr;
drivers/net/caif/caif_serial.c-230-
--
drivers/net/caif/caif_virtio.c=522=static netdev_tx_t cfv_netdev_tx(struct sk_buff *skb, struct net_device *netdev)
--
drivers/net/caif/caif_virtio.c-584- cfv->ndev->stats.tx_packets++;
drivers/net/caif/caif_virtio.c:585: cfv->ndev->stats.tx_bytes += skb->len;
drivers/net/caif/caif_virtio.c-586- spin_unlock_irqrestore(&cfv->tx_lock, flags);
--
drivers/net/can/at91_can.c=651=static void at91_irq_tx(struct net_device *dev, u32 reg_sr)
--
drivers/net/can/at91_can.c-677- /* _NOTE_: subtract AT91_MB_TX_FIRST offset from mb! */
drivers/net/can/at91_can.c:678: dev->stats.tx_bytes +=
drivers/net/can/at91_can.c-679- can_get_echo_skb(dev, mb - get_mb_tx_first(priv), NULL);
--
drivers/net/can/bxcan.c=429=static irqreturn_t bxcan_tx_isr(int irq, void *dev_id)
--
drivers/net/can/bxcan.c-448- stats->tx_packets++;
drivers/net/can/bxcan.c:449: stats->tx_bytes += can_get_echo_skb(ndev, idx, NULL);
drivers/net/can/bxcan.c-450- priv->tx_tail++;
--
drivers/net/can/c_can/c_can_main.c=720=static void c_can_do_tx(struct net_device *dev)
--
drivers/net/can/c_can/c_can_main.c-758-
drivers/net/can/c_can/c_can_main.c:759: stats->tx_bytes += bytes;
drivers/net/can/c_can/c_can_main.c-760- stats->tx_packets += pkts;
--
drivers/net/can/can327.c=812=static netdev_tx_t can327_netdev_start_xmit(struct sk_buff *skb,
--
drivers/net/can/can327.c-838- dev->stats.tx_packets++;
drivers/net/can/can327.c:839: dev->stats.tx_bytes += frame->can_id & CAN_RTR_FLAG ? 0 : frame->len;
drivers/net/can/can327.c-840-
--
drivers/net/can/cc770/cc770.c=664=static void cc770_tx_interrupt(struct net_device *dev, unsigned int o)
--
drivers/net/can/cc770/cc770.c-700- can_put_echo_skb(priv->tx_skb, dev, 0, 0);
drivers/net/can/cc770/cc770.c:701: stats->tx_bytes += can_get_echo_skb(dev, 0, NULL);
drivers/net/can/cc770/cc770.c-702- stats->tx_packets++;
--
drivers/net/can/ctucanfd/ctucanfd_base.c=995=static void ctucan_tx_interrupt(struct net_device *ndev)
--
drivers/net/can/ctucanfd/ctucanfd_base.c-1023- ctucan_netdev_dbg(ndev, "TXT_OK\n");
drivers/net/can/ctucanfd/ctucanfd_base.c:1024: stats->tx_bytes += can_get_echo_skb(ndev, txtb_id, NULL);
drivers/net/can/ctucanfd/ctucanfd_base.c-1025- stats->tx_packets++;
--
drivers/net/can/dummy_can.c=209=static netdev_tx_t dummy_can_start_xmit(struct sk_buff *skb,
--
drivers/net/can/dummy_can.c-216- dev->stats.tx_packets++;
drivers/net/can/dummy_can.c:217: dev->stats.tx_bytes += can_get_echo_skb(dev, 0, NULL);
drivers/net/can/dummy_can.c-218-
--
drivers/net/can/esd/esdacc.c=405=static void handle_core_msg_rxtxdone(struct acc_core *core,
--
drivers/net/can/esd/esdacc.c-428- stats->tx_packets++;
drivers/net/can/esd/esdacc.c:429: stats->tx_bytes += can_get_echo_skb(core->netdev, tx_fifo_tail,
drivers/net/can/esd/esdacc.c-430- NULL);
--
drivers/net/can/flexcan/flexcan-core.c=1073=static irqreturn_t flexcan_irq(int irq, void *dev_id)
--
drivers/net/can/flexcan/flexcan-core.c-1121- handled = IRQ_HANDLED;
drivers/net/can/flexcan/flexcan-core.c:1122: stats->tx_bytes +=
drivers/net/can/flexcan/flexcan-core.c-1123- can_rx_offload_get_echo_skb_queue_timestamp(&priv->offload, 0,
--
drivers/net/can/grcan.c=499=static int catch_up_echo_skb(struct net_device *dev, int budget, bool echo)
--
drivers/net/can/grcan.c-519- stats->tx_packets++;
drivers/net/can/grcan.c:520: stats->tx_bytes += can_get_echo_skb(dev, i, NULL);
drivers/net/can/grcan.c-521- } else {
--
drivers/net/can/ifi_canfd/ifi_canfd.c=613=static irqreturn_t ifi_canfd_isr(int irq, void *dev_id)
--
drivers/net/can/ifi_canfd/ifi_canfd.c-645- if (isr & IFI_CANFD_INTERRUPT_TXFIFO_REMOVE) {
drivers/net/can/ifi_canfd/ifi_canfd.c:646: stats->tx_bytes += can_get_echo_skb(ndev, 0, NULL);
drivers/net/can/ifi_canfd/ifi_canfd.c-647- stats->tx_packets++;
--
drivers/net/can/janz-ican3.c=1376=static int ican3_recv_skb(struct ican3_dev *mod)
--
drivers/net/can/janz-ican3.c-1420- stats->tx_packets++;
drivers/net/can/janz-ican3.c:1421: stats->tx_bytes += ican3_get_echo_skb(mod);
drivers/net/can/janz-ican3.c-1422- kfree_skb(skb);
--
drivers/net/can/kvaser_pciefd/kvaser_pciefd.h=60=struct kvaser_pciefd_can {
--
drivers/net/can/kvaser_pciefd/kvaser_pciefd.h-72- unsigned int completed_tx_pkts;
drivers/net/can/kvaser_pciefd/kvaser_pciefd.h:73: unsigned int completed_tx_bytes;
drivers/net/can/kvaser_pciefd/kvaser_pciefd.h-74- spinlock_t lock; /* Locks sensitive registers (e.g. MODE) */
--
drivers/net/can/kvaser_pciefd/kvaser_pciefd_core.c=938=static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
]
] |
| 27/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 71296 output: 25 thoughts: 136 Thoughts: **Debugging Wireguard Network Traffic: A Focused Approach** Okay, too much extraneous output, this is going to take too long. I can't just `grep` the local file system. I should probably use `codesearch-find-references` on `tx_bytes` again, although I already tried that and it only turned up a couple of instances. Hmm, wait. Instead of repeatedly searching, let me focus. Does `tx_bytes` appear in `drivers/net/wireguard/netlink.c` outside of those initial references? I've already read that file and I think it's only used within `get_peer`, but it's worth a second look. Alright, what about `drivers/net/wireguard/stats.c`? Does it even exist? I should just quickly check the directory structure. Focus, focus, eliminate possibilities systematically. |
| 28/2 | 2026/02/05 10:26 | tool | codesearch-dir-index |
Args:map[Dir:drivers/net/wireguard/] Results: map[Files:[allowedips.c allowedips.h cookie.c cookie.h device.c device.h main.c messages.h netlink.c netlink.h noise.c noise.h peer.c peer.h peerlookup.c peerlookup.h queueing.c queueing.h ratelimiter.c ratelimiter.h receive.c send.c socket.c socket.h timers.c timers.h version.h] Subdirs:[generated selftest]] |
| 29/2 | 2026/02/05 10:26 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 71843 output: 28 thoughts: 1121 Thoughts: **Analyzing Potential Data Races in WireGuard Peer Statistics** Okay, so there's no `stats.c` file involved here, which simplifies things. It looks like `tx_bytes` and `rx_bytes` are purely for statistics collection. While data races on counters are generally considered harmless in the Linux kernel, especially with 64-bit aligned variables on 64-bit architectures, I still need to verify there aren't any other issues. The report highlights read-write access to the same 8-byte address by two different tasks, both within the `wg_socket_send_skb_to_peer` function, specifically at the line `peer->tx_bytes += skb_len;`. I need to review this function. Let's quickly review the `wg_socket_send_skb_to_peer` function. The code obtains a `read_lock_bh` on `peer->endpoint_lock` before calling either `send4` or `send6`. This lock protects access to `peer->endpoint` and `peer->endpoint_cache`. However, it *doesn't* protect `peer->tx_bytes`. `skb_len` is copied at the beginning of the function, so we don't need to worry about the race in terms of accessing deallocated `skb` memory. Also, the `else` branch, triggered if there's a problem with the socket family, won't execute the offending line, so there is no race. It's clear that the race *is* on `tx_bytes`. Since `send4` and `send6` return 0 on success, we do have a race. Now, I'm going to look deeper. Are there any other fields within `wg_peer` that are modified *within* `send4` or `send6`? Going into `send4`, I see `endpoint` and `cache` are passed and modified *under a read lock*. Looking at the code, in `send4`, the function calls `inet_confirm_addr`, and the function potentially modifies `endpoint->src4.s_addr` and resets the dst cache. This is problematic, because if `send4` modifies `endpoint->src4.s_addr`, and another thread is also calling `send4`, both could try to modify it. 
While `endpoint->src4.s_addr = 0;` is an idempotent operation, `dst_cache_reset(cache)` might not be thread-safe if called concurrently on the same cache. So, now I need to check `dst_cache_reset`. |
| 30/2 | 2026/02/05 10:27 | tool | codesearch-definition-source |
Error:missing argument "ContextFile" Args: map[IncludeLines:true Name:dst_cache_reset] |
| 31/2 | 2026/02/05 10:27 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 73574 output: 44 thoughts: 3 Thoughts: Okay, I'm ready. Here's how I'll summarize the text, assuming I'm an expert in the field and the text is, well, my own internal monologue: **My Internal Processing & Response** My immediate reaction is to offer a sincere apology. I understand that the original intention of what was being expressed might not be entirely clear. It might have been clumsily presented, or perhaps I'm anticipating some confusion or potential for a misunderstanding of some sort. Therefore, my first instinct is to preemptively acknowledge any possibility of a miscommunication and apologize in advance. |
| 32/2 | 2026/02/05 10:27 | tool | codesearch-definition-source |
Args:map[ContextFile:drivers/net/wireguard/socket.c IncludeLines:true Name:dst_cache_reset] Results: map[SourceCode: 70: /**
71: * dst_cache_reset - invalidate the cache contents
72: * @dst_cache: the cache
73: *
74: * This does not free the cached dst to avoid races and contentions.
75: * the dst will be freed on later cache lookup.
76: */
77: static inline void dst_cache_reset(struct dst_cache *dst_cache)
78: {
79: WRITE_ONCE(dst_cache->reset_ts, jiffies);
80: }
SourceFile:include/net/dst_cache.h] |
| 33/2 | 2026/02/05 10:27 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 73931 output: 24 thoughts: 159 Thoughts: **Examining the dst_cache Functions** Okay, so `dst_cache_reset` just uses `WRITE_ONCE` to update the `reset_ts` field with the current jiffies. That's good; it's thread-safe, no concerns there. But what about the other functions interacting with the cache, specifically `dst_cache_get_ip4` and `dst_cache_set_ip4`? I need to analyze those more closely. Looking at this code snippet, it seems they're used in a lookup/set pattern. First, it tries to retrieve a route using `dst_cache_get_ip4`. If that fails (rt is null), then a new route is established. In that situation, we're calling `dst_cache_set_ip4` to store the new route in the cache. This seems like the critical section I need to pay attention to. The thread-safety there needs to be absolutely rock solid to prevent data corruption or race conditions. I'll need to dig into the implementations of those functions to see how they're handling concurrent access to the cache. |
| 34/2 | 2026/02/05 10:27 | tool | read-file |
Args:map[File:include/net/dst_cache.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _NET_DST_CACHE_H
#define _NET_DST_CACHE_H
#include <linux/jiffies.h>
#include <net/dst.h>
#if IS_ENABLED(CONFIG_IPV6)
#include <net/ip6_fib.h>
#endif
struct dst_cache {
struct dst_cache_pcpu __percpu *cache;
unsigned long reset_ts;
};
/**
* dst_cache_get - perform cache lookup
* @dst_cache: the cache
*
* The caller should use dst_cache_get_ip4() if it need to retrieve the
* source address to be used when xmitting to the cached dst.
* local BH must be disabled.
*/
struct dst_entry *dst_cache_get(struct dst_cache *dst_cache);
/**
* dst_cache_get_ip4 - perform cache lookup and fetch ipv4 source address
* @dst_cache: the cache
* @saddr: return value for the retrieved source address
*
* local BH must be disabled.
*/
struct rtable *dst_cache_get_ip4(struct dst_cache *dst_cache, __be32 *saddr);
/**
* dst_cache_set_ip4 - store the ipv4 dst into the cache
* @dst_cache: the cache
* @dst: the entry to be cached
* @saddr: the source address to be stored inside the cache
*
* local BH must be disabled.
*/
void dst_cache_set_ip4(struct dst_cache *dst_cache, struct dst_entry *dst,
__be32 saddr);
#if IS_ENABLED(CONFIG_IPV6)
/**
* dst_cache_set_ip6 - store the ipv6 dst into the cache
* @dst_cache: the cache
* @dst: the entry to be cached
* @saddr: the source address to be stored inside the cache
*
* local BH must be disabled.
*/
void dst_cache_set_ip6(struct dst_cache *dst_cache, struct dst_entry *dst,
const struct in6_addr *saddr);
/**
* dst_cache_get_ip6 - perform cache lookup and fetch ipv6 source address
* @dst_cache: the cache
* @saddr: return value for the retrieved source address
*
* local BH must be disabled.
*/
struct dst_entry *dst_cache_get_ip6(struct dst_cache *dst_cache,
struct in6_addr *saddr);
#endif
/**
* dst_cache_reset - invalidate the cache contents
* @dst_cache: the cache
*
* This does not free the cached dst to avoid races and contentions.
* the dst will be freed on later cache lookup.
*/
static inline void dst_cache_reset(struct dst_cache *dst_cache)
{
WRITE_ONCE(dst_cache->reset_ts, jiffies);
}
/**
* dst_cache_reset_now - invalidate the cache contents immediately
* @dst_cache: the cache
*
* The caller must be sure there are no concurrent users, as this frees
* all dst_cache users immediately, rather than waiting for the next
* per-cpu usage like dst_cache_reset does. Most callers should use the
* higher speed lazily-freed dst_cache_reset function instead.
*/
void dst_cache_reset_now(struct dst_cache *dst_cache);
/**
* dst_cache_init - initialize the cache, allocating the required storage
* @dst_cache: the cache
* @gfp: allocation flags
*/
int dst_cache_init(struct dst_cache *dst_cache, gfp_t gfp);
/**
* dst_cache_destroy - empty the cache and free the allocated storage
* @dst_cache: the cache
*
* No synchronization is enforced: it must be called only when the cache
* is unused.
*/
void dst_cache_destroy(struct dst_cache *dst_cache);
#endif
] |
| 35/2 | 2026/02/05 10:27 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 75366 output: 25 thoughts: 1281 Thoughts: **Analyzing the WireGuard Data Race Report** Okay, let's break this down. First, regarding the `dst_cache`, since it's per-CPU, the `dst_cache_get_ip4` and `dst_cache_set_ip4` operations should be thread-safe. That's straightforward. However, the potential for races comes in with the `endpoint->src4.s_addr = 0;` code. While two threads writing zero is fine, what if one is reading while the other writes? The `send4` function *reads* this value at the beginning, so if it's subsequently changed within the *same* function, it only impacts the current thread. And since it is a 32-bit value, it is atomically updated. This should not be a problem. The WireGuard author, Jason Donenfeld, addresses a similar issue explicitly in `wg_socket_set_peer_endpoint`. He seems okay with potential races during endpoint updates, stating that even if two CPUs write slightly different things, it's not a major issue. Now, the KCSAN report. The report specifically targets `tx_bytes`, a 64-bit counter. Here's the key: on a 64-bit system, `peer->tx_bytes += skb_len` involves a read-modify-write operation: read 8 bytes, add, write 8 bytes. If two CPUs do this concurrently, one update will be lost. That's a classic data race. But is it *benign*? If it's *just* statistics, it should be fine. I'll need to check if this is being used for anything *other* than statistics. I'll also ensure that it doesn't overflow. It is a 64-bit counter, so no, it won't overflow. Now, that "value changed" message, from 0x000000000000014c to 0x000000000000016c, is crucial. It means KCSAN *observed* the write *changing* the value. If there were another concurrent write, KCSAN would have detected that as well. This "read-write" reported by KCSAN points to a non-atomic update (like `+=`). Now, in the C code, it's NOT atomic, but the compiler *might* generate a single instruction if it's `add [mem], reg`. 
On x86, `add [mem], reg` *is* a single instruction but it's *not* atomic unless it's got a `lock` prefix. Without a `lock` prefix, it's still a read-modify-write cycle at the hardware level, though it might be one instruction in the ISA. KCSAN detects this because it's *not* an atomic operation. Again, if this is just a statistics counter, it should be benign. The kernel has numerous such races in network statistics. They're often marked with `data_race()` or outright ignored. WireGuard, being a newer driver in `drivers/net/wireguard`, raises this question in my mind. I tried looking for other related KCSAN reports, but unfortunately, it is not something I can readily check at this time. Looking back at the stack traces, I see Task 1: `wg_packet_tx_worker` -> `wg_packet_create_data_done` -> `wg_socket_send_skb_to_peer` and Task 2: `wg_packet_handshake_send_worker` -> `wg_packet_send_handshake_initiation` -> `wg_socket_send_buffer_to_peer` -> `wg_socket_send_skb_to_peer`. Both tasks are trying to send something to the same peer. One is data and the other is a handshake initiation. They both update `tx_bytes`. If the handshake initiation packet size gets lost from the counter, is it a problem? The initiation packet is 148 bytes, and the data packet size will depend on the payload. The change in the report was 32 bytes. But wait, is `MESSAGE_HANDSHAKE_INITIATION` 148 bytes? I need to check `messages.h` just to be sure... |
| 36/2 | 2026/02/05 10:27 | tool | read-file |
Args:map[File:drivers/net/wireguard/messages.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_MESSAGES_H
#define _WG_MESSAGES_H
#include <crypto/curve25519.h>
#include <crypto/chacha20poly1305.h>
#include <crypto/blake2s.h>
#include <linux/kernel.h>
#include <linux/param.h>
#include <linux/skbuff.h>
enum noise_lengths {
NOISE_PUBLIC_KEY_LEN = CURVE25519_KEY_SIZE,
NOISE_SYMMETRIC_KEY_LEN = CHACHA20POLY1305_KEY_SIZE,
NOISE_TIMESTAMP_LEN = sizeof(u64) + sizeof(u32),
NOISE_AUTHTAG_LEN = CHACHA20POLY1305_AUTHTAG_SIZE,
NOISE_HASH_LEN = BLAKE2S_HASH_SIZE
};
#define noise_encrypted_len(plain_len) ((plain_len) + NOISE_AUTHTAG_LEN)
enum cookie_values {
COOKIE_SECRET_MAX_AGE = 2 * 60,
COOKIE_SECRET_LATENCY = 5,
COOKIE_NONCE_LEN = XCHACHA20POLY1305_NONCE_SIZE,
COOKIE_LEN = 16
};
enum counter_values {
COUNTER_BITS_TOTAL = 8192,
COUNTER_REDUNDANT_BITS = BITS_PER_LONG,
COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS
};
enum limits {
REKEY_AFTER_MESSAGES = 1ULL << 60,
REJECT_AFTER_MESSAGES = U64_MAX - COUNTER_WINDOW_SIZE - 1,
REKEY_TIMEOUT = 5,
REKEY_TIMEOUT_JITTER_MAX_JIFFIES = HZ / 3,
REKEY_AFTER_TIME = 120,
REJECT_AFTER_TIME = 180,
INITIATIONS_PER_SECOND = 50,
MAX_PEERS_PER_DEVICE = 1U << 20,
KEEPALIVE_TIMEOUT = 10,
MAX_TIMER_HANDSHAKES = 90 / REKEY_TIMEOUT,
MAX_QUEUED_INCOMING_HANDSHAKES = 4096, /* TODO: replace this with DQL */
MAX_STAGED_PACKETS = 128,
MAX_QUEUED_PACKETS = 1024 /* TODO: replace this with DQL */
};
enum message_type {
MESSAGE_INVALID = 0,
MESSAGE_HANDSHAKE_INITIATION = 1,
MESSAGE_HANDSHAKE_RESPONSE = 2,
MESSAGE_HANDSHAKE_COOKIE = 3,
MESSAGE_DATA = 4
};
struct message_header {
/* The actual layout of this that we want is:
* u8 type
* u8 reserved_zero[3]
*
* But it turns out that by encoding this as little endian,
* we achieve the same thing, and it makes checking faster.
*/
__le32 type;
};
struct message_macs {
u8 mac1[COOKIE_LEN];
u8 mac2[COOKIE_LEN];
};
struct message_handshake_initiation {
struct message_header header;
__le32 sender_index;
u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
u8 encrypted_static[noise_encrypted_len(NOISE_PUBLIC_KEY_LEN)];
u8 encrypted_timestamp[noise_encrypted_len(NOISE_TIMESTAMP_LEN)];
struct message_macs macs;
};
struct message_handshake_response {
struct message_header header;
__le32 sender_index;
__le32 receiver_index;
u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
u8 encrypted_nothing[noise_encrypted_len(0)];
struct message_macs macs;
};
struct message_handshake_cookie {
struct message_header header;
__le32 receiver_index;
u8 nonce[COOKIE_NONCE_LEN];
u8 encrypted_cookie[noise_encrypted_len(COOKIE_LEN)];
};
struct message_data {
struct message_header header;
__le32 key_idx;
__le64 counter;
u8 encrypted_data[];
};
#define message_data_len(plain_len) \
(noise_encrypted_len(plain_len) + sizeof(struct message_data))
enum message_alignments {
MESSAGE_PADDING_MULTIPLE = 16,
MESSAGE_MINIMUM_LENGTH = message_data_len(0)
};
#define SKB_HEADER_LEN \
(max(sizeof(struct iphdr), sizeof(struct ipv6hdr)) + \
sizeof(struct udphdr) + NET_SKB_PAD)
#define DATA_PACKET_HEAD_ROOM \
ALIGN(sizeof(struct message_data) + SKB_HEADER_LEN, 4)
enum { HANDSHAKE_DSCP = 0x88 /* AF41, plus 00 ECN */ };
#endif /* _WG_MESSAGES_H */
] |
| 37/2 | 2026/02/05 10:27 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 78985 output: 569 thoughts: 1797 Thoughts: **Benign Data Race in WireGuard's TX Byte Counter**
Okay, let's break this down. KCSAN reported a potential data race on `tx_bytes` within the `wg_peer` struct. First, I needed to understand the context. Based on the size of the packet change (32 bytes), I deduced the race was likely between a handshake initiation and a keepalive packet, or two keepalive packets.
I know that `tx_bytes` and `rx_bytes` in WireGuard are primarily for user-space reporting. They aren't used internally for critical logic like rekeying or rate limiting, which rely on `sending_counter` (an `atomic64_t`) and `last_sent_handshake` (also `atomic64_t`). So, the immediate concern is lower because the race doesn't impact critical network function.
Then, I looked at other potential 64-bit non-atomic fields in the `wg_peer` struct that *could* be racing. `internal_id` is set once at creation, and `walltime_last_handshake` is used for statistics, so I deemed those as not problematic.
Given that it's a statistic, a lost update wouldn't lead to memory corruption or crashes. It's a "benign" data race. I verified that this is a 64-bit system, so each aligned 64-bit load and store is individually atomic (no torn values), even though the `+=` read-modify-write as a whole is not.
I checked if `tx_bytes` was used in `wg_packet_create_data_done`, and it isn't.
Now, I checked the relevant function `wg_socket_send_skb_to_peer` and the disassembly to confirm that the reported address corresponds to `peer->tx_bytes += skb_len`. This is exactly what the report suggests: a read-write operation on the same 8-byte location.
I then verified that `tx_bytes` is exported to netlink via `wg_get_device_dumpit`. However, the netlink function `nla_put_u64_64bit` is an atomic read of the 64-bit value, so there's no risk of torn reads.
I looked at the lock and confirmed that `wg_socket_send_skb_to_peer` is called under a `read_lock_bh(&peer->endpoint_lock)`, which is a `rwlock_t`. This means multiple CPUs *can* execute that `tx_bytes +=` statement concurrently within the read section. This is consistent with a statistics counter.
The lack of a `spinlock_t` or `atomic64_t` indicates that the developer likely accepts the possibility of occasional lost updates in the statistics.
I re-checked `wg_packet_tx_worker` and confirmed `tx_bytes` isn't used there.
I confirmed the offset: `tx_bytes` sits at `0x358` = 856 bytes into the struct (6848 bits / 8 = 856), which matches the location where the race happens.
I'm confident the data race on `tx_bytes` is benign. It's a statistics counter, and any lost updates don't lead to errors or crashes.
The data race occurs on the `tx_bytes` field of the `struct wg_peer` in the WireGuard network driver. This field is a 64-bit counter used to track the total number of bytes transmitted to a specific peer.
The race happens in the `wg_socket_send_skb_to_peer` function in `drivers/net/wireguard/socket.c`:
```c
int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
{
...
read_lock_bh(&peer->endpoint_lock);
...
if (likely(!ret))
peer->tx_bytes += skb_len;
read_unlock_bh(&peer->endpoint_lock);
...
}
```
The `endpoint_lock` is a reader-writer lock (`rwlock_t`), and it is acquired as a reader in this function. This allows multiple threads (e.g., multiple worker threads sending packets to the same peer) to enter the critical section concurrently. Since `tx_bytes` is a plain `u64` and the increment operation `+=` is not atomic, concurrent updates from multiple CPUs can lead to lost updates (i.e., some packet sizes might not be added to the total).
However, this data race is considered benign for the following reasons:
1. **Purpose of the field**: The `tx_bytes` field is used exclusively for statistics. It is exported to userspace via Netlink in `drivers/net/wireguard/netlink.c` but is not used for any internal protocol logic, such as rekeying, rate limiting, or flow control.
2. **Memory Safety**: On a 64-bit architecture (like the x86_64 system in the report), individual 64-bit aligned loads and stores are atomic, even though the `+=` read-modify-write sequence is not. Since `tx_bytes` is 64-bit aligned within the `wg_peer` struct, there is no risk of "torn" reads or writes that could lead to memory corruption or invalid values being read. A lost update only results in a slightly inaccurate counter.
3. **System Stability**: The race does not involve any pointers, complex data structures, or state variables that affect the driver's control flow or stability.
In summary, while the race can cause the transmission statistics to be slightly lower than the actual value under high concurrency, it does not affect the correctness or safety of the WireGuard protocol or the Linux kernel.
|
| 38/2 | 2026/02/05 10:27 | tool | set-results |
Args:map[Benign:true Confident:true] Results: map[Benign:true Confident:true] |
| 39/2 | 2026/02/05 10:27 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 82080 output: 549 thoughts: 0 |