syzbot |
| ID | Workflow | Result | Correct | Bug | Created | Started | Finished | Revision | Error |
|---|---|---|---|---|---|---|---|---|---|
| 56182841-9cb6-47db-a946-0bc9bdc54da3 | assessment-kcsan | Benign: ✅ Confident: ✅ | ❓ | KCSAN: data-race in br_fdb_update / br_fdb_update (8) | 2026/02/25 02:02 | 2026/02/25 02:02 | 2026/02/25 02:13 | 305c0ec5cd886e2d13738e28e1b2df9b0ec20fc9 | |
The data race occurs in the `br_fdb_update` function in `net/bridge/br_fdb.c` on the `updated` field of `struct net_bridge_fdb_entry`. This field is an `unsigned long` that stores the `jiffies` timestamp of when the FDB entry was last updated (i.e., when a packet with the corresponding source MAC address was last seen).
### Analysis:
1. **Nature of the field**: `fdb->updated` is a timestamp used primarily for the aging of FDB entries. The bridge garbage collector (`br_fdb_cleanup`) periodically checks if `fdb->updated + hold_time` is before the current `jiffies` to decide whether to delete an entry.
2. **Access Pattern**: In `br_fdb_update`, the code checks `if (now != fdb->updated)` and then performs `fdb->updated = now;`. This is done without holding the bridge `hash_lock` to keep the packet processing fast path lockless.
3. **Atomicity**: On the reported architecture (x86_64), 8-byte aligned memory accesses are atomic. The `updated` field is 8-byte aligned within the struct. Thus, there is no risk of "torn" reads or writes where a reader would see a partially updated value.
4. **Consequences of the race**: If two CPUs receive packets for the same MAC address simultaneously, they may both observe that `now != fdb->updated` and both attempt to update it. Since they are writing almost identical `jiffies` values, the result is consistent. Even if one CPU's write is slightly delayed and overwrites a newer timestamp with an older one (by a few jiffies), it only marginally affects the aging timing, which is not critical for correctness.
5. **Synchronization of other fields**: Other fields in the same structure that require stricter synchronization are handled appropriately:
* `fdb->dst` (the destination port) is accessed using `READ_ONCE` and `WRITE_ONCE`.
* `fdb->flags` are modified using atomic bit operations (e.g., `test_and_clear_bit`).
* The `__fdb_mark_active` call, which determines if a notification should be sent, uses atomic bitops on the flags, ensuring that even if multiple threads enter the update block, the notification logic remains correct.
### Conclusion:
The data race is **benign**. It is an intentional lockless update of a timestamp where absolute precision is not required, and the hardware guarantees atomic access to the value. Similar patterns are used elsewhere in the bridge code (e.g., for the `used` field in `br_handle_frame_finish`). While adding `READ_ONCE`/`WRITE_ONCE` or `data_race()` annotations would silence KCSAN, the current code does not lead to memory corruption or logic errors.

bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
==================================================================
BUG: KCSAN: data-race in br_fdb_update / br_fdb_update

read to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 1:
 br_fdb_update+0x106/0x460 net/bridge/br_fdb.c:1005
 br_handle_frame_finish+0x340/0xfc0 net/bridge/br_input.c:144
 br_nf_hook_thresh+0x1eb/0x220 net/bridge/br_netfilter_hooks.c:-1
 br_nf_pre_routing_finish_ipv6+0x4d1/0x570 net/bridge/br_netfilter_ipv6.c:-1
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x1fa/0x2b0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x52b/0xbd0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0x4f0/0x9e0 net/bridge/br_input.c:442
 __netif_receive_skb_core+0x5df/0x1920 net/core/dev.c:6026
 __netif_receive_skb_one_core net/core/dev.c:6137 [inline]
 __netif_receive_skb+0x59/0x270 net/core/dev.c:6252
 process_backlog+0x228/0x420 net/core/dev.c:6604
 __napi_poll+0x5f/0x300 net/core/dev.c:7668
 napi_poll net/core/dev.c:7731 [inline]
 net_rx_action+0x425/0x8c0 net/core/dev.c:7883
 handle_softirqs+0xba/0x290 kernel/softirq.c:622
 do_softirq+0x45/0x60 kernel/softirq.c:523
 __local_bh_enable_ip+0x70/0x80 kernel/softirq.c:450
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 __alloc_skb+0x476/0x4b0 net/core/skbuff.c:674
 alloc_skb include/linux/skbuff.h:1383 [inline]
 wg_socket_send_buffer_to_peer+0x35/0x120 drivers/net/wireguard/socket.c:192
 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:40 [inline]
 wg_packet_handshake_send_worker+0x10d/0x160 drivers/net/wireguard/send.c:51
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0x4ce/0x9d0 kernel/workqueue.c:3340
 worker_thread+0x582/0x770 kernel/workqueue.c:3421
 kthread+0x489/0x510 kernel/kthread.c:463
 ret_from_fork+0x149/0x290 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

write to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 0:
 br_fdb_update+0x13e/0x460 net/bridge/br_fdb.c:1006
 br_handle_frame_finish+0x340/0xfc0 net/bridge/br_input.c:144
 br_nf_hook_thresh+0x1eb/0x220 net/bridge/br_netfilter_hooks.c:-1
 br_nf_pre_routing_finish_ipv6+0x4d1/0x570 net/bridge/br_netfilter_ipv6.c:-1
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x1fa/0x2b0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x52b/0xbd0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0x4f0/0x9e0 net/bridge/br_input.c:442
 __netif_receive_skb_core+0x5df/0x1920 net/core/dev.c:6026
 __netif_receive_skb_one_core net/core/dev.c:6137 [inline]
 __netif_receive_skb+0x59/0x270 net/core/dev.c:6252
 process_backlog+0x228/0x420 net/core/dev.c:6604
 __napi_poll+0x5f/0x300 net/core/dev.c:7668
 napi_poll net/core/dev.c:7731 [inline]
 net_rx_action+0x425/0x8c0 net/core/dev.c:7883
 handle_softirqs+0xba/0x290 kernel/softirq.c:622
 do_softirq+0x45/0x60 kernel/softirq.c:523
 __local_bh_enable_ip+0x70/0x80 kernel/softirq.c:450
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline]
 kernel_fpu_end+0x6c/0x80 arch/x86/kernel/fpu/core.c:480
 blake2s_compress+0x67/0x1740 lib/crypto/x86/blake2s.h:42
 blake2s_update+0xa3/0x160 lib/crypto/blake2s.c:119
 hmac+0x141/0x270 drivers/net/wireguard/noise.c:324
 kdf+0x10b/0x1d0 drivers/net/wireguard/noise.c:375
 mix_dh drivers/net/wireguard/noise.c:413 [inline]
 wg_noise_handshake_create_initiation+0x1ac/0x520 drivers/net/wireguard/noise.c:550
 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:34 [inline]
 wg_packet_handshake_send_worker+0xb2/0x160 drivers/net/wireguard/send.c:51
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0x4ce/0x9d0 kernel/workqueue.c:3340
 worker_thread+0x582/0x770 kernel/workqueue.c:3421
 kthread+0x489/0x510 kernel/kthread.c:463
 ret_from_fork+0x149/0x290 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

value changed: 0x0000000100026abc -> 0x0000000100026abd

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 8678 Comm: kworker/u8:42 Not tainted syzkaller #0 PREEMPT(voluntary)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker
==================================================================
net_ratelimit: 6540 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
net_ratelimit: 7050 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
| Seq | Timestamp | Type | Name | Duration |
|---|---|---|---|---|
| 0/0 | 2026/02/25 02:02 | flow | assessment-kcsan | 10m |
Results:map[Benign:true Confident:true Explanation:The data race occurs in the `br_fdb_update` function in `net/bridge/br_fdb.c` on the `updated` field of `struct net_bridge_fdb_entry`. This field is an `unsigned long` that stores the `jiffies` timestamp of when the FDB entry was last updated (i.e., when a packet with the corresponding source MAC address was last seen).
### Analysis:
1. **Nature of the field**: `fdb->updated` is a timestamp used primarily for the aging of FDB entries. The bridge garbage collector (`br_fdb_cleanup`) periodically checks if `fdb->updated + hold_time` is before the current `jiffies` to decide whether to delete an entry.
2. **Access Pattern**: In `br_fdb_update`, the code checks `if (now != fdb->updated)` and then performs `fdb->updated = now;`. This is done without holding the bridge `hash_lock` to keep the packet processing fast path lockless.
3. **Atomicity**: On the reported architecture (x86_64), 8-byte aligned memory accesses are atomic. The `updated` field is 8-byte aligned within the struct. Thus, there is no risk of "torn" reads or writes where a reader would see a partially updated value.
4. **Consequences of the race**: If two CPUs receive packets for the same MAC address simultaneously, they may both observe that `now != fdb->updated` and both attempt to update it. Since they are writing almost identical `jiffies` values, the result is consistent. Even if one CPU's write is slightly delayed and overwrites a newer timestamp with an older one (by a few jiffies), it only marginally affects the aging timing, which is not critical for correctness.
5. **Synchronization of other fields**: Other fields in the same structure that require stricter synchronization are handled appropriately:
* `fdb->dst` (the destination port) is accessed using `READ_ONCE` and `WRITE_ONCE`.
* `fdb->flags` are modified using atomic bit operations (e.g., `test_and_clear_bit`).
* The `__fdb_mark_active` call, which determines if a notification should be sent, uses atomic bitops on the flags, ensuring that even if multiple threads enter the update block, the notification logic remains correct.
### Conclusion:
The data race is **benign**. It is an intentional lockless update of a timestamp where absolute precision is not required, and the hardware guarantees atomic access to the value. Similar patterns are used elsewhere in the bridge code (e.g., for the `used` field in `br_handle_frame_finish`). While adding `READ_ONCE`/`WRITE_ONCE` or `data_race()` annotations would silence KCSAN, the current code does not lead to memory corruption or logic errors.] |
| 1/1 | 2026/02/25 02:02 | action | kernel-checkouter | 0m |
Results:map[KernelSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/cb24f0835974de80148d21636f6b653d93682021] |
| 2/1 | 2026/02/25 02:03 | action | kernel-builder | 4m |
Results:map[KernelObj:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/a1cc52ae30092c0dcc3322e73fc71400fd9f4da9] |
| 3/1 | 2026/02/25 02:07 | action | codesearch-prepare | 3m |
Results:map[Index:codesearch-index] |
| 4/1 | 2026/02/25 02:10 | agent | expert | 2m |
Model:gemini-3-flash-preview Results: map[Benign:true Confident:true] Instruction: You are an experienced Linux kernel developer tasked with determining if the given kernel data race is benign or not. The data race report is from the KCSAN tool. It contains 2 stack traces of the memory accesses that constitute a data race. A "benign" data race is on a simple int/bool variable or similar field, where the accesses are not supposed to be protected by any mutual exclusion primitives. Common examples of such "benign" data races are accesses to various flags fields, statistics counters, and similar. A "benign" data race does not lead to memory corruption/crash with a conservative compiler that compiles memory accesses to primitive types effectively as atomic. A non-benign (or "harmful") data race can lead to corruption/crash even with a conservative compiler that compiles memory accesses to primitive types effectively as atomic. A common example of a "harmful" data race is a race on a complex container (list/hashmap/etc), where accesses are supposed to be protected by a mutual exclusion primitive. Also consider races that happen at the same time as the given one. If there is no synchronization in between, other memory accesses in the involved threads race with each other if they access the same memory. For example, if both threads execute: some_struct->have_elements = true; list_add(new_node, &some_struct->list_head); then the race on some_struct->have_elements may appear benign; however, it also implies there is a race on some_struct->list_head, which is not benign, since the list is not thread-safe. Take into account that on 32-bit systems 64-bit memory accesses may be split into two accesses, and thus even with a conservative compiler may not be fully atomic. However, such races may still be benign depending on how writes are done, and how read data is used. In the final reply explain why you think the given data race is benign or is harmful. 
Use the provided tools to confirm any assumptions, variables/fields being accessed, etc. In particular, don't make assumptions about the kernel source code, use codesearch tools to read the actual source code. Prefer calling several tools at the same time to save round-trips. Use set-results tool to provide results of the analysis. It must be called exactly once before the final reply. Ignore results of this tool. Prompt: The data race report is: bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) ================================================================== BUG: KCSAN: data-race in br_fdb_update / br_fdb_update read to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 1: br_fdb_update+0x106/0x460 net/bridge/br_fdb.c:1005 br_handle_frame_finish+0x340/0xfc0 net/bridge/br_input.c:144 br_nf_hook_thresh+0x1eb/0x220 net/bridge/br_netfilter_hooks.c:-1 br_nf_pre_routing_finish_ipv6+0x4d1/0x570 net/bridge/br_netfilter_ipv6.c:-1 NF_HOOK include/linux/netfilter.h:318 [inline] br_nf_pre_routing_ipv6+0x1fa/0x2b0 net/bridge/br_netfilter_ipv6.c:184 br_nf_pre_routing+0x52b/0xbd0 net/bridge/br_netfilter_hooks.c:508 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline] nf_hook_bridge_pre net/bridge/br_input.c:291 [inline] br_handle_frame+0x4f0/0x9e0 net/bridge/br_input.c:442 __netif_receive_skb_core+0x5df/0x1920 net/core/dev.c:6026 __netif_receive_skb_one_core net/core/dev.c:6137 [inline] __netif_receive_skb+0x59/0x270 net/core/dev.c:6252 process_backlog+0x228/0x420 net/core/dev.c:6604 __napi_poll+0x5f/0x300 net/core/dev.c:7668 napi_poll net/core/dev.c:7731 [inline] net_rx_action+0x425/0x8c0 net/core/dev.c:7883 handle_softirqs+0xba/0x290 kernel/softirq.c:622 do_softirq+0x45/0x60 kernel/softirq.c:523 __local_bh_enable_ip+0x70/0x80 kernel/softirq.c:450 local_bh_enable include/linux/bottom_half.h:33 
[inline] __alloc_skb+0x476/0x4b0 net/core/skbuff.c:674 alloc_skb include/linux/skbuff.h:1383 [inline] wg_socket_send_buffer_to_peer+0x35/0x120 drivers/net/wireguard/socket.c:192 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:40 [inline] wg_packet_handshake_send_worker+0x10d/0x160 drivers/net/wireguard/send.c:51 process_one_work kernel/workqueue.c:3257 [inline] process_scheduled_works+0x4ce/0x9d0 kernel/workqueue.c:3340 worker_thread+0x582/0x770 kernel/workqueue.c:3421 kthread+0x489/0x510 kernel/kthread.c:463 ret_from_fork+0x149/0x290 arch/x86/kernel/process.c:158 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246 write to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 0: br_fdb_update+0x13e/0x460 net/bridge/br_fdb.c:1006 br_handle_frame_finish+0x340/0xfc0 net/bridge/br_input.c:144 br_nf_hook_thresh+0x1eb/0x220 net/bridge/br_netfilter_hooks.c:-1 br_nf_pre_routing_finish_ipv6+0x4d1/0x570 net/bridge/br_netfilter_ipv6.c:-1 NF_HOOK include/linux/netfilter.h:318 [inline] br_nf_pre_routing_ipv6+0x1fa/0x2b0 net/bridge/br_netfilter_ipv6.c:184 br_nf_pre_routing+0x52b/0xbd0 net/bridge/br_netfilter_hooks.c:508 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline] nf_hook_bridge_pre net/bridge/br_input.c:291 [inline] br_handle_frame+0x4f0/0x9e0 net/bridge/br_input.c:442 __netif_receive_skb_core+0x5df/0x1920 net/core/dev.c:6026 __netif_receive_skb_one_core net/core/dev.c:6137 [inline] __netif_receive_skb+0x59/0x270 net/core/dev.c:6252 process_backlog+0x228/0x420 net/core/dev.c:6604 __napi_poll+0x5f/0x300 net/core/dev.c:7668 napi_poll net/core/dev.c:7731 [inline] net_rx_action+0x425/0x8c0 net/core/dev.c:7883 handle_softirqs+0xba/0x290 kernel/softirq.c:622 do_softirq+0x45/0x60 kernel/softirq.c:523 __local_bh_enable_ip+0x70/0x80 kernel/softirq.c:450 local_bh_enable include/linux/bottom_half.h:33 [inline] fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline] kernel_fpu_end+0x6c/0x80 arch/x86/kernel/fpu/core.c:480 blake2s_compress+0x67/0x1740 
lib/crypto/x86/blake2s.h:42 blake2s_update+0xa3/0x160 lib/crypto/blake2s.c:119 hmac+0x141/0x270 drivers/net/wireguard/noise.c:324 kdf+0x10b/0x1d0 drivers/net/wireguard/noise.c:375 mix_dh drivers/net/wireguard/noise.c:413 [inline] wg_noise_handshake_create_initiation+0x1ac/0x520 drivers/net/wireguard/noise.c:550 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:34 [inline] wg_packet_handshake_send_worker+0xb2/0x160 drivers/net/wireguard/send.c:51 process_one_work kernel/workqueue.c:3257 [inline] process_scheduled_works+0x4ce/0x9d0 kernel/workqueue.c:3340 worker_thread+0x582/0x770 kernel/workqueue.c:3421 kthread+0x489/0x510 kernel/kthread.c:463 ret_from_fork+0x149/0x290 arch/x86/kernel/process.c:158 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246 value changed: 0x0000000100026abc -> 0x0000000100026abd Reported by Kernel Concurrency Sanitizer on: CPU: 0 UID: 0 PID: 8678 Comm: kworker/u8:42 Not tainted syzkaller #0 PREEMPT(voluntary) Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker ================================================================== net_ratelimit: 6540 callbacks suppressed bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, 
vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) net_ratelimit: 7050 callbacks suppressed bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:aa, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:6e:14:75:db:9d, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0) Reply: The data race occurs in the `br_fdb_update` function in `net/bridge/br_fdb.c` on the `updated` field of `struct net_bridge_fdb_entry`. This field is an `unsigned long` that stores the `jiffies` timestamp of when the FDB entry was last updated (i.e., when a packet with the corresponding source MAC address was last seen).
### Analysis:
1. **Nature of the field**: `fdb->updated` is a timestamp used primarily for the aging of FDB entries. The bridge garbage collector (`br_fdb_cleanup`) periodically checks if `fdb->updated + hold_time` is before the current `jiffies` to decide whether to delete an entry.
2. **Access Pattern**: In `br_fdb_update`, the code checks `if (now != fdb->updated)` and then performs `fdb->updated = now;`. This is done without holding the bridge `hash_lock` to keep the packet processing fast path lockless.
3. **Atomicity**: On the reported architecture (x86_64), 8-byte aligned memory accesses are atomic. The `updated` field is 8-byte aligned within the struct. Thus, there is no risk of "torn" reads or writes where a reader would see a partially updated value.
4. **Consequences of the race**: If two CPUs receive packets for the same MAC address simultaneously, they may both observe that `now != fdb->updated` and both attempt to update it. Since they are writing almost identical `jiffies` values, the result is consistent. Even if one CPU's write is slightly delayed and overwrites a newer timestamp with an older one (by a few jiffies), it only marginally affects the aging timing, which is not critical for correctness.
5. **Synchronization of other fields**: Other fields in the same structure that require stricter synchronization are handled appropriately:
* `fdb->dst` (the destination port) is accessed using `READ_ONCE` and `WRITE_ONCE`.
* `fdb->flags` are modified using atomic bit operations (e.g., `test_and_clear_bit`).
* The `__fdb_mark_active` call, which determines if a notification should be sent, uses atomic bitops on the flags, ensuring that even if multiple threads enter the update block, the notification logic remains correct.
### Conclusion:
The data race is **benign**. It is an intentional lockless update of a timestamp where absolute precision is not required, and the hardware guarantees atomic access to the value. Similar patterns are used elsewhere in the bridge code (e.g., for the `used` field in `br_handle_frame_finish`). While adding `READ_ONCE`/`WRITE_ONCE` or `data_race()` annotations would silence KCSAN, the current code does not lead to memory corruption or logic errors. |
| 5/2 | 2026/02/25 02:10 | llm | expert | 0m |
Model:gemini-3-flash-preview Tokens: input: 6246 output: 45 thoughts: 226 Thoughts: **Analyzing a Data Race in br_fdb_update** Okay, so the report flags a data race within `br_fdb_update`, residing in `net/bridge/br_fdb.c`. I need to dig into this. It's pinpointing a read and a write within the same function at offsets `0x106` and `0x13e` respectively. The critical part is the value changing from `0x0000000100026abc` to `0x0000000100026abd`. My initial thought is it's likely an increment operation – either on a 64-bit counter or possibly two 32-bit values packed together. Let's confirm: `0x0000000100026abc` to `0x0000000100026abd`. Yes, it's just the least significant hex digit incrementing from 'c' to 'd'. So, that's what's getting modified. Now I need to understand what this value represents within the context of the FDB update logic. I'll need to jump into the code and examine `br_fdb_update` in `net/bridge/br_fdb.c`. |
| 6/2 | 2026/02/25 02:11 | tool | codesearch-definition-source | 0m |
Args:map[ContextFile:net/bridge/br_fdb.c IncludeLines:true Name:br_fdb_update] Results: map[SourceCode: 985: void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source,
986: const unsigned char *addr, u16 vid, unsigned long flags)
987: {
988: struct net_bridge_fdb_entry *fdb;
989:
990: /* some users want to always flood. */
991: if (hold_time(br) == 0)
992: return;
993:
994: fdb = fdb_find_rcu(&br->fdb_hash_tbl, addr, vid);
995: if (likely(fdb)) {
996: /* attempt to update an entry for a local interface */
997: if (unlikely(test_bit(BR_FDB_LOCAL, &fdb->flags))) {
998: if (net_ratelimit())
999: br_warn(br, "received packet on %s with own address as source address (addr:%pM, vlan:%u)\n",
1000: source->dev->name, addr, vid);
1001: } else {
1002: unsigned long now = jiffies;
1003: bool fdb_modified = false;
1004:
1005: if (now != fdb->updated) {
1006: fdb->updated = now;
1007: fdb_modified = __fdb_mark_active(fdb);
1008: }
1009:
1010: /* fastpath: update of existing entry */
1011: if (unlikely(source != READ_ONCE(fdb->dst) &&
1012: !test_bit(BR_FDB_STICKY, &fdb->flags))) {
1013: br_switchdev_fdb_notify(br, fdb, RTM_DELNEIGH);
1014: WRITE_ONCE(fdb->dst, source);
1015: fdb_modified = true;
1016: /* Take over HW learned entry */
1017: if (unlikely(test_bit(BR_FDB_ADDED_BY_EXT_LEARN,
1018: &fdb->flags)))
1019: clear_bit(BR_FDB_ADDED_BY_EXT_LEARN,
1020: &fdb->flags);
1021: /* Clear locked flag when roaming to an
1022: * unlocked port.
1023: */
1024: if (unlikely(test_bit(BR_FDB_LOCKED, &fdb->flags)))
1025: clear_bit(BR_FDB_LOCKED, &fdb->flags);
1026: }
1027:
1028: if (unlikely(test_bit(BR_FDB_ADDED_BY_USER, &flags))) {
1029: set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
1030: if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED,
1031: &fdb->flags))
1032: atomic_dec(&br->fdb_n_learned);
1033: }
1034: if (unlikely(fdb_modified)) {
1035: trace_br_fdb_update(br, source, addr, vid, flags);
1036: fdb_notify(br, fdb, RTM_NEWNEIGH, true);
1037: }
1038: }
1039: } else {
1040: spin_lock(&br->hash_lock);
1041: fdb = fdb_create(br, source, addr, vid, flags);
1042: if (fdb) {
1043: trace_br_fdb_update(br, source, addr, vid, flags);
1044: fdb_notify(br, fdb, RTM_NEWNEIGH, true);
1045: }
1046: /* else we lose race and someone else inserts
1047: * it first, don't bother updating
1048: */
1049: spin_unlock(&br->hash_lock);
1050: }
1051: }
SourceFile:net/bridge/br_fdb.c] |
| 7/2 | 2026/02/25 02:11 | llm | expert | 0m |
Model:gemini-3-flash-preview Tokens: input: 8005 output: 28 thoughts: 276 Thoughts: **Analyzing a Potential Race Condition in the FDB Update** Okay, so I'm looking at this code related to updating the Forwarding Database (FDB). The race is on `fdb->updated`. I see that on line 1005, there's a read of this value: `if (now != fdb->updated)`. Then, on line 1006, it's immediately written to: `fdb->updated = now;`. `fdb->updated` is an `unsigned long`, and from the context, I'm pretty sure that it stores a value for `jiffies`. I can see from the logs that the value went from `0x0000000100026abc` to `0x0000000100026abd`. Definitely jiffies. The function `br_fdb_update` gets called whenever a packet comes in and it looks like it's responsible for updating the FDB. It tries to locate the FDB entry using `fdb_find_rcu`. If it finds it, this update timestamp logic is triggered. It seems designed to record when an entry was last *seen*, assuming it's different from the current jiffies. Now, this smells like a classic "update timestamp" race condition. If two CPUs receive packets for the same FDB entry very close in time, they might *both* read the same value for `fdb->updated`. Since the read and write aren't atomic, both CPUs could pass the check `now != fdb->updated` and then both write the updated timestamp. The question then is, is this harmful? Considering that `fdb->updated` is used for aging out FDB entries, this could indeed be a problem. Entries could potentially "age out" more quickly than intended if they are not consistently updated, or potentially not at all if the jiffies wrap around! I'll need to investigate where else `fdb->updated` is used to confirm this. |
| 8/2 | 2026/02/25 02:11 | tool | codesearch-find-references | 0m |
Args:map[Name:net_bridge_fdb_entry::updated] Results: map[References:[map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:br_fdb_cleanup SourceFile:net/bridge/br_fdb.c SourceLine:554] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:br_fdb_external_learn_add SourceFile:net/bridge/br_fdb.c SourceLine:1559] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:br_fdb_fillbuf SourceFile:net/bridge/br_fdb.c SourceLine:957] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:br_fdb_update SourceFile:net/bridge/br_fdb.c SourceLine:1005] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:br_fdb_update SourceFile:net/bridge/br_fdb.c SourceLine:1006] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fdb_add_entry SourceFile:net/bridge/br_fdb.c SourceLine:1248] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fdb_create SourceFile:net/bridge/br_fdb.c SourceLine:410] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fdb_fill_info SourceFile:net/bridge/br_fdb.c SourceLine:131] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:has_expired SourceFile:net/bridge/br_fdb.c SourceLine:73] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:has_expired SourceFile:net/bridge/br_fdb.c SourceLine:73]] TruncatedOutput:false] |
| 9/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 9298 output: 42 thoughts: 19 Thoughts: Okay, here's my interpretation of those thoughts, tailored for an expert audience, presented in the first person: **Analyzing `has_expired` and `br_fdb_cleanup`**
Right, let's dive into these two. Specifically, I need to consider `has_expired` in the context of the potential garbage collection work of `br_fdb_cleanup`. My initial thought is to cross-reference their functionality. How are they interacting? Is `has_expired` a predicate that drives cleanup in `br_fdb_cleanup`, or is it independent? Are there any temporal dependencies I should be aware of? Are race conditions possible between those functions? Are we dealing with any shared resources? I need to think about that.
I need to understand if `br_fdb_cleanup` depends on an explicit `has_expired` call, or if it internally determines expiration. Essentially, I'm trying to mentally map the logical flow and the potential for any edge cases related to resource management and lifecycle concerns. There might be some subtle interplay between those functions that needs a deeper look.
|
| 10/2 | 2026/02/25 02:11 | tool | codesearch-definition-source |
Args:map[ContextFile:net/bridge/br_fdb.c IncludeLines:true Name:has_expired] Results: map[SourceCode: 68: static inline int has_expired(const struct net_bridge *br,
69: const struct net_bridge_fdb_entry *fdb)
70: {
71: return !test_bit(BR_FDB_STATIC, &fdb->flags) &&
72: !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags) &&
73: time_before_eq(fdb->updated + hold_time(br), jiffies);
74: }
SourceFile:net/bridge/br_fdb.c] |
| 11/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 9767 output: 45 thoughts: 9 Thoughts: Okay, here's my interpretation of that thought process, tailored for an expert audience, and written in the first person: **My Initial Assessment of `br_fdb_cleanup`**
Right, so I see `br_fdb_cleanup` and my immediate thought is "Okay, this is about cleaning up the FDB – probably related to bridging or forwarding database entries." My brain starts ticking through the potential scenarios: Are we talking about stale entries, inconsistencies, memory leaks, or perhaps a more active garbage collection strategy for dynamically learned MAC addresses?
My experience tells me that these types of cleanup routines can be critical for maintaining the stability and performance of a network device, especially under heavy load or in environments with rapid topology changes. I'd expect this function to be carefully designed, as a poorly implemented cleanup routine can introduce significant performance bottlenecks, or worse, lead to incorrect forwarding decisions.
I'd want to know the triggers – is it time-based, event-driven (e.g., interface down), or a combination? And, crucially, what are the implications on forwarding while this cleanup is happening? Are there any lock contention issues that I need to be aware of? I'd immediately be looking for the surrounding code to determine its impact. This is more than a trivial function; it's probably pretty important.
|
| 12/2 | 2026/02/25 02:11 | tool | codesearch-definition-source |
Args:map[ContextFile:net/bridge/br_fdb.c IncludeLines:true Name:br_fdb_cleanup] Results: map[SourceCode: 539: void br_fdb_cleanup(struct work_struct *work)
540: {
541: struct net_bridge *br = container_of(work, struct net_bridge,
542: gc_work.work);
543: struct net_bridge_fdb_entry *f = NULL;
544: unsigned long delay = hold_time(br);
545: unsigned long work_delay = delay;
546: unsigned long now = jiffies;
547:
548: /* this part is tricky, in order to avoid blocking learning and
549: * consequently forwarding, we rely on rcu to delete objects with
550: * delayed freeing allowing us to continue traversing
551: */
552: rcu_read_lock();
553: hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) {
554: unsigned long this_timer = f->updated + delay;
555:
556: if (test_bit(BR_FDB_STATIC, &f->flags) ||
557: test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &f->flags)) {
558: if (test_bit(BR_FDB_NOTIFY, &f->flags)) {
559: if (time_after(this_timer, now))
560: work_delay = min(work_delay,
561: this_timer - now);
562: else if (!test_and_set_bit(BR_FDB_NOTIFY_INACTIVE,
563: &f->flags))
564: fdb_notify(br, f, RTM_NEWNEIGH, false);
565: }
566: continue;
567: }
568:
569: if (time_after(this_timer, now)) {
570: work_delay = min(work_delay, this_timer - now);
571: } else {
572: spin_lock_bh(&br->hash_lock);
573: if (!hlist_unhashed(&f->fdb_node))
574: fdb_delete(br, f, true);
575: spin_unlock_bh(&br->hash_lock);
576: }
577: }
578: rcu_read_unlock();
579:
580: /* Cleanup minimum 10 milliseconds apart */
581: work_delay = max_t(unsigned long, work_delay, msecs_to_jiffies(10));
582: mod_delayed_work(system_long_wq, &br->gc_work, work_delay);
583: }
SourceFile:net/bridge/br_fdb.c] |
| 13/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 10900 output: 46 thoughts: 252 Thoughts: **Analyzing the FDB Update Race Condition**
Okay, so I'm looking at this `fdb->updated` field, and it seems like the update race is happening within `br_fdb_update`. Specifically, it's being modified without any explicit locking except during the `fdb_create` path. This is interesting. Then, `fdb_find_rcu` is called under an `rcu_read_lock`, implied by the packet processing context.
It's pretty clear that if two threads are racing to update `fdb->updated`, one is going to win and write its `jiffies` value. Chances are these writes are going to be nearly simultaneous, and the values are going to be very close. Even if there were a torn write (unlikely for an `unsigned long` on a 64-bit architecture, though conceivable on a 32-bit one), it's still just a timestamp for the aging process, so it's not critical.
Hold on, there's a bit more in `br_fdb_update` that I need to consider:
```c
1005: if (now != fdb->updated) {
1006: fdb->updated = now;
1007: fdb_modified = __fdb_mark_active(fdb);
1008: }
```
Okay, so there's a check to see if the value is different before the update. But what's the purpose of `__fdb_mark_active(fdb)`? I need to investigate that function to see what it does.
|
| 14/2 | 2026/02/25 02:11 | tool | codesearch-definition-source |
Args:map[ContextFile:net/bridge/br_fdb.c IncludeLines:true Name:__fdb_mark_active] Results: map[SourceCode: 978: /* returns true if the fdb was modified */
979: static bool __fdb_mark_active(struct net_bridge_fdb_entry *fdb)
980: {
981: return !!(test_bit(BR_FDB_NOTIFY_INACTIVE, &fdb->flags) &&
982: test_and_clear_bit(BR_FDB_NOTIFY_INACTIVE, &fdb->flags));
983: }
SourceFile:net/bridge/br_fdb.c] |
| 15/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 11718 output: 42 thoughts: 317 Thoughts: **Analyzing the Concurrency in `__fdb_mark_active`**
Okay, so I'm looking at this function `__fdb_mark_active`. I see it uses atomic bit operations, which is good. That's the expected way to handle this kind of concurrency. The issue KCSAN flagged centers around `fdb->updated`. Initially, I thought the race on the `fdb->updated` timestamp itself was probably benign, since it's just a timestamp. It seems like it would be handled through the if statement, where a thread might read an old or new value but the system is designed to respond correctly to either.
But then KCSAN reported a specific race between a read and a write on lines 1005 and 1006. Let's break it down:
* **Thread 1:** `if (now != fdb->updated)` - This reads the value of `fdb->updated`.
* **Thread 2:** `fdb->updated = now;` - This writes a new value to `fdb->updated`.
The scenario seems to be that if Thread 1 reads an *old* value, it enters the `if` block and writes a new value, as designed. If Thread 1 reads the *new* value (written by Thread 2), it skips the `if` block, which is the desired behavior and the whole point of using the timestamp. This prevents redundant updates. That all sounds correct in principle.
However, I immediately get a red flag: Could a 64-bit read/write be an issue on a 32-bit system? It's a timestamp, so 64 bits makes sense. But the KCSAN report gives me pause. It mentions `CPU: 0 ... arch/x86/kernel/process.c:158`, and the address `0xffff88811a0655c0` is 64-bit. Wait...it's an x86\_64 system. On x86\_64, 64-bit *aligned* loads and stores are atomic, so we should be good! But is `fdb->updated` aligned? I need to check the struct definition, specifically `struct net_bridge_fdb_entry` and confirm.
|
| 16/2 | 2026/02/25 02:11 | tool | codesearch-struct-layout |
Args:map[ContextFile:net/bridge/br_fdb.c Name:net_bridge_fdb_entry] Results: map[Fields:[
map[Name:rhnode OffsetBits:0 SizeBits:64]
map[Name:dst OffsetBits:64 SizeBits:64]
map[Name:key OffsetBits:128 SizeBits:64]
map[Name:fdb_node OffsetBits:192 SizeBits:128]
map[Name:flags OffsetBits:320 SizeBits:64]
map[Name:updated OffsetBits:512 SizeBits:64]
map[Name:used OffsetBits:576 SizeBits:64]
map[Name:rcu OffsetBits:640 SizeBits:128]]] |
| 17/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 12748 output: 28 thoughts: 2449 Thoughts: **Analysis of the Bridge FDB Update Race Condition**
Okay, so KCSAN's flagged a data race on `fdb->updated`. Let's break this down. From what I can see, `fdb->updated` is at offset 64 bytes, meaning it's 8-byte aligned, which is a good start. The code's clearly designed for a fast path update, aiming to avoid locking.
My immediate thought is that this isn't a *critical* race in the traditional sense. `fdb->updated` appears to be a timestamp, likely representing the last time an entry was seen. The potential race stems from multiple CPUs potentially updating this at the same time. The code checks `now != fdb->updated` and, if true, updates it. The expectation is that if two CPUs see the condition true, they *both* update it. That doesn't break anything functionally, because it's just a time-of-last-use value for aging.
The more pressing concern is ensuring *atomicity* in related operations, not just the timestamp itself. We have `fdb->dst`, which is updated using `WRITE_ONCE` and read with `READ_ONCE`. This is as expected for lockless access, and *should* prevent tearing of the field. KCSAN's specifically flagging `fdb->updated` though. The KCSAN report gives away that the field is updated to `jiffies`. So, the code isn't using `READ_ONCE` or `WRITE_ONCE` for this timestamp. While that could theoretically allow some weird compiler optimizations, this is typically fine for a timestamp, especially when the exact value isn't crucial for correctness, and the field is aligned to prevent tearing. Many such timestamp updates in the network stack are marked with `data_race()` or use `READ_ONCE`/`WRITE_ONCE` to avoid KCSAN reports if they are intended to be lockless.
Let's look at what's dependent on it. `fdb_modified` is also updated. It's set if `__fdb_mark_active(fdb)` is called. If `fdb_modified` becomes true, `fdb_notify` is called. The race around `__fdb_mark_active` is already handled, though: it is atomic, so only one notification is generated even with concurrent updates. This reinforces the assumption that this is designed to be a lockless fast-path. Now, considering the timing, one thread may perform an update, changing `fdb->updated` while the other thread is still checking. That's fine. It means that the second thread skips marking the entry active if the time difference is small, which is completely expected.
The most critical part is whether `fdb->updated` is ever read in such a way that a torn read would be detrimental. I see its use in `has_expired` and `br_fdb_cleanup`. They compare it against `jiffies`. The key is that `unsigned long` is the size of the register on the system architecture. Therefore, whether we are on a 32- or 64-bit system, we are using the full register, so we *can't* have tearing.
So, the potential race on `fdb->updated` is, in all probability, benign. It's a timestamp used for aging, and the code intentionally avoids locking. We can tolerate a slightly off timestamp for aging, and the race is *not* on the critical `fdb->dst` field, which *is* protected with `READ_ONCE` and `WRITE_ONCE`. The comment at line 1010 confirms this.
We should, however, consider what other fields are being touched. `fdb->flags` is updated with atomic bitops (like `test_bit` etc.), which is correct. The `fdb->dst` uses `WRITE_ONCE`, ensuring no torn reads. We should also consider `fdb->used`, which I know I've seen in the struct layout. But it's not being mentioned by KCSAN and doesn't appear to be related to the race.
So, in conclusion, the race on `fdb->updated` is likely a false positive or an intentional design choice for performance. It *might* lead to slightly incorrect aging of FDB entries, but because `jiffies` is always going to be updated to a reasonable value, this is negligible in the grand scheme of things. It's a classic example of prioritizing speed over strict correctness. |
| 18/2 | 2026/02/25 02:11 | tool | codesearch-find-references |
0mArgs:map[Name:net_bridge_fdb_entry::used] Results: map[References:[map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:br_fdb_external_learn_add SourceFile:net/bridge/br_fdb.c SourceLine:1568] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:br_handle_frame_finish SourceFile:net/bridge/br_input.c SourceLine:224] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:br_handle_frame_finish SourceFile:net/bridge/br_input.c SourceLine:225] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fdb_add_entry SourceFile:net/bridge/br_fdb.c SourceLine:1245] map[ReferenceKind:writes ReferencingEntityKind:function ReferencingEntityName:fdb_create SourceFile:net/bridge/br_fdb.c SourceLine:410] map[ReferenceKind:reads ReferencingEntityKind:function ReferencingEntityName:fdb_fill_info SourceFile:net/bridge/br_fdb.c SourceLine:129]] TruncatedOutput:false] |
| 19/2 | 2026/02/25 02:11 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 16592 output: 45 thoughts: 164 Thoughts: **Examining Data Consistency in the Bridge Forwarding Database**
Okay, so I'm digging into this bridge code, specifically looking at the `br_handle_frame_finish` function. Lines 224 and 225 immediately catch my eye: they're using `READ_ONCE` and `WRITE_ONCE` on the `fdb->used` field. Right, that makes sense – we're dealing with concurrent access, and these primitives guarantee atomic reads and writes, crucial for maintaining data integrity in a multithreaded environment, especially when handling packet processing.
But then my mind immediately jumps to `br_fdb_update`. Why isn't `READ_ONCE`/`WRITE_ONCE` used there for the `updated` field? It's a bit puzzling. Did someone miss it during the code review, or was it a conscious design decision? Perhaps the thinking was that `updated` is *only* used for aging the entries, and therefore the consistency requirements aren't as stringent as for `used`, which *might* be used in more critical operations. Could be. I'd need to look at all of the code that uses the `used` field to be sure.
Let's drill down and get to the bottom of this. Alright, I'm going to pull up `br_handle_frame_finish` in `net/bridge/br_input.c` and take a closer look at the context.
|
| 20/2 | 2026/02/25 02:12 | tool | codesearch-definition-source |
Args:map[ContextFile:net/bridge/br_input.c IncludeLines:true Name:br_handle_frame_finish] Results: map[SourceCode: 75: /* note: already called with rcu_read_lock */
76: int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
77: {
78: enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
79: struct net_bridge_port *p = br_port_get_rcu(skb->dev);
80: enum br_pkt_type pkt_type = BR_PKT_UNICAST;
81: struct net_bridge_fdb_entry *dst = NULL;
82: struct net_bridge_mcast_port *pmctx;
83: struct net_bridge_mdb_entry *mdst;
84: bool local_rcv, mcast_hit = false;
85: struct net_bridge_mcast *brmctx;
86: struct net_bridge_vlan *vlan;
87: struct net_bridge *br;
88: bool promisc;
89: u16 vid = 0;
90: u8 state;
91:
92: if (!p)
93: goto drop;
94:
95: br = p->br;
96:
97: if (br_mst_is_enabled(p)) {
98: state = BR_STATE_FORWARDING;
99: } else {
100: if (p->state == BR_STATE_DISABLED) {
101: reason = SKB_DROP_REASON_BRIDGE_INGRESS_STP_STATE;
102: goto drop;
103: }
104:
105: state = p->state;
106: }
107:
108: brmctx = &p->br->multicast_ctx;
109: pmctx = &p->multicast_ctx;
110: if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid,
111: &state, &vlan))
112: goto out;
113:
114: if (p->flags & BR_PORT_LOCKED) {
115: struct net_bridge_fdb_entry *fdb_src =
116: br_fdb_find_rcu(br, eth_hdr(skb)->h_source, vid);
117:
118: if (!fdb_src) {
119: /* FDB miss. Create locked FDB entry if MAB is enabled
120: * and drop the packet.
121: */
122: if (p->flags & BR_PORT_MAB)
123: br_fdb_update(br, p, eth_hdr(skb)->h_source,
124: vid, BIT(BR_FDB_LOCKED));
125: goto drop;
126: } else if (READ_ONCE(fdb_src->dst) != p ||
127: test_bit(BR_FDB_LOCAL, &fdb_src->flags)) {
128: /* FDB mismatch. Drop the packet without roaming. */
129: goto drop;
130: } else if (test_bit(BR_FDB_LOCKED, &fdb_src->flags)) {
131: /* FDB match, but entry is locked. Refresh it and drop
132: * the packet.
133: */
134: br_fdb_update(br, p, eth_hdr(skb)->h_source, vid,
135: BIT(BR_FDB_LOCKED));
136: goto drop;
137: }
138: }
139:
140: nbp_switchdev_frame_mark(p, skb);
141:
142: /* insert into forwarding database after filtering to avoid spoofing */
143: if (p->flags & BR_LEARNING)
144: br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, 0);
145:
146: promisc = !!(br->dev->flags & IFF_PROMISC);
147: local_rcv = promisc;
148:
149: if (is_multicast_ether_addr(eth_hdr(skb)->h_dest)) {
150: /* by definition the broadcast is also a multicast address */
151: if (is_broadcast_ether_addr(eth_hdr(skb)->h_dest)) {
152: pkt_type = BR_PKT_BROADCAST;
153: local_rcv = true;
154: } else {
155: pkt_type = BR_PKT_MULTICAST;
156: if (br_multicast_rcv(&brmctx, &pmctx, vlan, skb, vid))
157: goto drop;
158: }
159: }
160:
161: if (state == BR_STATE_LEARNING) {
162: reason = SKB_DROP_REASON_BRIDGE_INGRESS_STP_STATE;
163: goto drop;
164: }
165:
166: BR_INPUT_SKB_CB(skb)->brdev = br->dev;
167: BR_INPUT_SKB_CB(skb)->src_port_isolated = !!(p->flags & BR_ISOLATED);
168:
169: if (IS_ENABLED(CONFIG_INET) &&
170: (skb->protocol == htons(ETH_P_ARP) ||
171: skb->protocol == htons(ETH_P_RARP))) {
172: br_do_proxy_suppress_arp(skb, br, vid, p);
173: } else if (IS_ENABLED(CONFIG_IPV6) &&
174: skb->protocol == htons(ETH_P_IPV6) &&
175: br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) &&
176: pskb_may_pull(skb, sizeof(struct ipv6hdr) +
177: sizeof(struct nd_msg)) &&
178: ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) {
179: struct nd_msg *msg, _msg;
180:
181: msg = br_is_nd_neigh_msg(skb, &_msg);
182: if (msg)
183: br_do_suppress_nd(skb, br, vid, p, msg);
184: }
185:
186: switch (pkt_type) {
187: case BR_PKT_MULTICAST:
188: mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
189: if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
190: br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) {
191: if ((mdst && mdst->host_joined) ||
192: br_multicast_is_router(brmctx, skb) ||
193: br->dev->flags & IFF_ALLMULTI) {
194: local_rcv = true;
195: DEV_STATS_INC(br->dev, multicast);
196: }
197: mcast_hit = true;
198: } else {
199: local_rcv = true;
200: DEV_STATS_INC(br->dev, multicast);
201: }
202: break;
203: case BR_PKT_UNICAST:
204: dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, vid);
205: if (unlikely(!dst && vid &&
206: br_opt_get(br, BROPT_FDB_LOCAL_VLAN_0))) {
207: dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, 0);
208: if (dst &&
209: (!test_bit(BR_FDB_LOCAL, &dst->flags) ||
210: test_bit(BR_FDB_ADDED_BY_USER, &dst->flags)))
211: dst = NULL;
212: }
213: break;
214: default:
215: break;
216: }
217:
218: if (dst) {
219: unsigned long now = jiffies;
220:
221: if (test_bit(BR_FDB_LOCAL, &dst->flags))
222: return br_pass_frame_up(skb, false);
223:
224: if (now != dst->used)
225: dst->used = now;
226: br_forward(dst->dst, skb, local_rcv, false);
227: } else {
228: if (!mcast_hit)
229: br_flood(br, skb, pkt_type, local_rcv, false, vid);
230: else
231: br_multicast_flood(mdst, skb, brmctx, local_rcv, false);
232: }
233:
234: if (local_rcv)
235: return br_pass_frame_up(skb, promisc);
236:
237: out:
238: return 0;
239: drop:
240: kfree_skb_reason(skb, reason);
241: goto out;
242: }
SourceFile:net/bridge/br_input.c] |
| 21/2 | 2026/02/25 02:12 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 19957 output: 43 thoughts: 1266 Thoughts: **Analyzing the Potential Data Race in `br_fdb_update`**
Okay, so I was looking at `br_handle_frame_finish`, specifically this code snippet:
```c
224: if (now != dst->used)
225: dst->used = now;
```
And I noticed that there's no `READ_ONCE` or `WRITE_ONCE` here. I thought I remembered seeing it in a `codesearch-find-references` output, but after re-checking, I think I misread or it was from a different version. I need to be more careful with that. Let me look at the references:
```
{ReferenceKind: 'reads', ReferencingEntityKind: 'function', ReferencingEntityName: 'br_handle_frame_finish', SourceFile: 'net/bridge/br_input.c', SourceLine: 224}
{ReferenceKind: 'writes', ReferencingEntityKind: 'function', ReferencingEntityName: 'br_handle_frame_finish', SourceFile: 'net/bridge/br_input.c', SourceLine: 225}
```
Yeah, it doesn't say `READ_ONCE`.
Anyway, both `updated` and `used` are timestamps that are updated in the fast path without any locks. This seems like a common pattern in the bridge code, as I've seen before.
Now, if I go back and look at `br_fdb_update`:
```c
1005: if (now != fdb->updated) {
1006: fdb->updated = now;
```
It's *exactly* the same pattern as the `used` in `br_handle_frame_finish`. This means the race is on `fdb->updated`. Since it's an `unsigned long` and just used for aging, I think the race condition is probably benign because the updates will be to the same or very similar values.
But, I need to be thorough. Could it be harmful for another reason? Could `fdb->updated` be used to synchronize something else? From my inspection, it doesn't seem like it. It just looks like a timestamp.
However, there was a recent patch adding `READ_ONCE` and `WRITE_ONCE` to some of these fields to silence KCSAN. So, I need to verify if it should have them here as well. Let's look:
```c
1005: if (now != fdb->updated) {
1006: fdb->updated = now;
1007: fdb_modified = __fdb_mark_active(fdb);
1008: }
```
Okay, if we want to be *strictly* correct and avoid KCSAN reports, we should use `READ_ONCE` and `WRITE_ONCE`. But, the more important question is whether it's actually *harmful*. In the kernel, many of these races on statistics or timestamps are considered benign.
Let me just double-check one last time that `fdb->updated` is *only* used for aging. It's used in `br_fdb_cleanup` (for garbage collection) and `has_expired`, but both of those just check if `fdb->updated + hold_time < jiffies`.
Alright, a torn read *could* be bad, but that's not possible on x86_64, where aligned 8-byte loads and stores are atomic.
Wait, the KCSAN report shows:
```
read to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 1:
br_fdb_update+0x106/0x460 net/bridge/br_fdb.c:1005
write to 0xffff88811a0655c0 of 8 bytes by interrupt on cpu 0:
br_fdb_update+0x13e/0x460 net/bridge/br_fdb.c:1006
```
Both are 8-byte accesses, which confirms my understanding.
Is there any other field that could be racing? The report explicitly says "data-race in br_fdb_update / br_fdb_update". Okay, so both stacks are `br_fdb_update`, which is important.
One stack is `wg_packet_handshake_send_worker` -> `wg_socket_send_buffer_to_peer` -> `alloc_skb` -> `local_bh_
|
| 22/2 | 2026/02/25 02:12 | tool | codesearch-definition-source |
Args:map[ContextFile:net/bridge/br_fdb.c IncludeLines:true Name:fdb_delete] Results: map[SourceCode: 311: static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
312: bool swdev_notify)
313: {
314: trace_fdb_delete(br, f);
315:
316: if (test_bit(BR_FDB_STATIC, &f->flags))
317: fdb_del_hw_addr(br, f->key.addr.addr);
318:
319: hlist_del_init_rcu(&f->fdb_node);
320: rhashtable_remove_fast(&br->fdb_hash_tbl, &f->rhnode,
321: br_fdb_rht_params);
322: if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
323: atomic_dec(&br->fdb_n_learned);
324: fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
325: kfree_rcu(f, rcu);
326: }
SourceFile:net/bridge/br_fdb.c] |
| 23/2 | 2026/02/25 02:12 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 22640 output: 17 thoughts: 2032 Thoughts: **Analyzing a Potential Data Race in `br_fdb_update`**
Okay, so I'm looking at `kfree_rcu(f, rcu)` and confirming that the memory is safe during the RCU read-side critical section. My focus now is on the race condition around the `fdb->updated` value.
The code snippet seems to be dealing with a local variable `fdb_modified`. If two threads enter the `if` block, they both call `__fdb_mark_active(fdb)`. Good, as I expected, `__fdb_mark_active` is atomic.
I need to make sure I haven't missed something. There's no `else` block for the `if (now != fdb->updated)` check, which seems fine. But there *is* an `else` block for `if (unlikely(test_bit(BR_FDB_LOCAL, &fdb->flags)))`, so I'll keep that in mind.
I have to consider all writes to `fdb->flags` or `fdb->dst` that could be involved. I see that `fdb->dst` is updated with `WRITE_ONCE`, which protects it. And `fdb->flags` are updated with bitops, which is also protected.
Except for the missing `READ_ONCE`/`WRITE_ONCE` around `fdb->updated`, everything else looks correct.
Is this race known? I've seen this kind of "benign" race with timestamps in network drivers before.
But KCSAN is designed to flag these, so let's see what the history of `br_fdb.c` says. Did anyone try to fix this?
Hmm, let's see how `fdb->updated` is actually used. Does it *need* consistency with other fields? It's used for aging, which is independent, so that looks fine.
Wait, the `fdb_notify` call:
```c
1034: if (unlikely(fdb_modified)) {
1035: trace_br_fdb_update(br, source, addr, vid, flags);
1036: fdb_notify(br, fdb, RTM_NEWNEIGH, true);
1037: }
```
`fdb_notify` sends a netlink message. If `fdb_modified` is true, it means either `__fdb_mark_active` returned true, or the `dst` changed. Both warrant a notification.
If two threads race, both might see `now != fdb->updated` and try to mark the entry active. One will succeed, and the notification will fire.
What if one thread sees the *updated* `fdb->updated` and skips the block? That means the other thread *already* updated it and called `__fdb_mark_active`. The notification *still* happens.
Or if Thread 1 updates `fdb->updated`, but hasn't called `__fdb_mark_active` yet?
Thread 2 sees the new `fdb->updated` and skips the block. Thread 2 won't call `__fdb_mark_active`. But Thread 1 *will*. The entry will still get marked active and notification will be sent.
If Thread 1 is *delayed* between `fdb->updated = now` and `__fdb_mark_active(fdb)`? Same situation: Everything's fine.
The only strange case is if `now` is different. Two threads, two different `now` values.
Let's say, Thread 1: `now = 100`. Then `fdb->updated = 100`.
Then, Thread 2: `now = 101`. It sees `fdb->updated == 100`. `101 != 100` is true. Then, `fdb->updated = 101`. Then calls `__fdb_mark_active`.
Thread 1: Now calls `__fdb_mark_active`.
Both called `__fdb_mark_active`. Only one of the calls will have the desired effect on the flags (if the flag wasn't already set), one returns true, one returns false, and the notification will be sent one time.
So, it *seems* the race is benign.
But I need to be thorough. I should check how `fdb->updated` is used in `fdb_fill_info` (for netlink notifications).
```c
131: if (nla_put_u32
```
|
| 24/2 | 2026/02/25 02:12 | tool | grepper |
Args:map[Expression:data_race] Results: map[Output:
Full output is too long, showing 500 out of 1143 lines.
Use more precise expression if possible.
[Documentation/dev-tools/checkpatch.rst=456=Comments
--
Documentation/dev-tools/checkpatch.rst-476- **DATA_RACE**
Documentation/dev-tools/checkpatch.rst:477: Applications of data_race() should have a comment so as to document the
Documentation/dev-tools/checkpatch.rst-478- reasoning behind why it was deemed safe.
--
Documentation/dev-tools/kcsan.rst=87=the below options are available:
Documentation/dev-tools/kcsan.rst-88-
Documentation/dev-tools/kcsan.rst:89:* KCSAN understands the ``data_race(expr)`` annotation, which tells KCSAN that
Documentation/dev-tools/kcsan.rst-90- any data races due to accesses in ``expr`` should be ignored and resulting
--
Documentation/dev-tools/kcsan.rst-93-
Documentation/dev-tools/kcsan.rst:94:* Similar to ``data_race(...)``, the type qualifier ``__data_racy`` can be used
Documentation/dev-tools/kcsan.rst-95- to document that all data races due to accesses to a variable are intended
--
Documentation/translations/zh_CN/dev-tools/kcsan.rst=24=KCSAN 提供了几个其他的配置选项来自定义行为(见 ``lib/Kconfig.kcsan`` 中的各自的
--
Documentation/translations/zh_CN/dev-tools/kcsan.rst-87-
Documentation/translations/zh_CN/dev-tools/kcsan.rst:88:* KCSAN 支持使用 ``data_race(expr)`` 注解,这个注解告诉 KCSAN 任何由访问
Documentation/translations/zh_CN/dev-tools/kcsan.rst-89- ``expr`` 所引起的数据竞争都应该被忽略,其产生的行为后果被认为是安全的。请查阅
--
Documentation/translations/zh_CN/dev-tools/kcsan.rst-91-
Documentation/translations/zh_CN/dev-tools/kcsan.rst:92:* 与 ``data_race(...)`` 相似,可以使用类型限定符 ``__data_racy`` 来标记一个变量
Documentation/translations/zh_CN/dev-tools/kcsan.rst-93- ,所有访问该变量而导致的数据竞争都是故意为之并且应该被 KCSAN 忽略::
--
arch/powerpc/kernel/interrupt.c=103=static notrace void check_return_regs_valid(struct pt_regs *regs)
--
arch/powerpc/kernel/interrupt.c-171-
arch/powerpc/kernel/interrupt.c:172: if (!data_race(warned)) {
arch/powerpc/kernel/interrupt.c:173: data_race(warned = true);
arch/powerpc/kernel/interrupt.c-174- printk("%sSRR0 was: %lx should be: %lx\n", h, srr0, regs->nip);
--
arch/x86/mm/pat/set_memory.c=161=static inline void cpa_inc_4k_install(void)
arch/x86/mm/pat/set_memory.c-162-{
arch/x86/mm/pat/set_memory.c:163: data_race(cpa_4k_install++);
arch/x86/mm/pat/set_memory.c-164-}
--
drivers/block/loop.c=1852=static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
--
drivers/block/loop.c-1860-
drivers/block/loop.c:1861: if (data_race(READ_ONCE(lo->lo_state)) != Lo_bound)
drivers/block/loop.c-1862- return BLK_STS_IOERR;
--
drivers/block/loop.c=2191=static int loop_control_get_free(int idx)
--
drivers/block/loop.c-2204- if (lo->idr_visible &&
drivers/block/loop.c:2205: data_race(READ_ONCE(lo->lo_state)) == Lo_unbound)
drivers/block/loop.c-2206- goto found;
--
drivers/block/zloop.c=693=static blk_status_t zloop_queue_rq(struct blk_mq_hw_ctx *hctx,
--
drivers/block/zloop.c-699-
drivers/block/zloop.c:700: if (data_race(READ_ONCE(zlo->state)) == Zlo_deleting)
drivers/block/zloop.c-701- return BLK_STS_IOERR;
--
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c=1135=void ath9k_htc_rxep(void *drv_priv, struct sk_buff *skb,
--
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c-1144- /* Check if ath9k_rx_init() completed. */
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:1145: if (!data_race(priv->rx.initialized))
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c-1146- goto err;
--
drivers/net/wireless/ath/ath9k/wmi.c=138=void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
--
drivers/net/wireless/ath/ath9k/wmi.c-158- /* Check if ath9k_htc_probe_device() completed. */
drivers/net/wireless/ath/ath9k/wmi.c:159: if (!data_race(priv->initialized)) {
drivers/net/wireless/ath/ath9k/wmi.c-160- kfree_skb(skb);
--
drivers/virtio/virtio_ring.c=2651=bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
--
drivers/virtio/virtio_ring.c-2655- if (vq->event_triggered)
drivers/virtio/virtio_ring.c:2656: data_race(vq->event_triggered = false);
drivers/virtio/virtio_ring.c-2657-
--
drivers/virtio/virtio_ring.c=2693=irqreturn_t vring_interrupt(int irq, void *_vq)
--
drivers/virtio/virtio_ring.c-2713- if (vq->event)
drivers/virtio/virtio_ring.c:2714: data_race(vq->event_triggered = true);
drivers/virtio/virtio_ring.c-2715-
--
fs/btrfs/block-group.c=1788=static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
--
fs/btrfs/block-group.c-1802- */
fs/btrfs/block-group.c:1803: return data_race(bg1->used > bg2->used);
fs/btrfs/block-group.c-1804-}
--
fs/btrfs/block-rsv.h=107=static inline bool btrfs_block_rsv_full(const struct btrfs_block_rsv *rsv)
fs/btrfs/block-rsv.h-108-{
fs/btrfs/block-rsv.h:109: return data_race(rsv->full);
fs/btrfs/block-rsv.h-110-}
--
fs/btrfs/extent-tree.c=1999=static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
--
fs/btrfs/extent-tree.c-2016- */
fs/btrfs/extent-tree.c:2017: max_count = data_race(delayed_refs->num_heads_ready);
fs/btrfs/extent-tree.c-2018- min_bytes = U64_MAX;
--
fs/btrfs/file.c=137=int btrfs_drop_extents(struct btrfs_trans_handle *trans,
--
fs/btrfs/file.c-179-
fs/btrfs/file.c:180: if (data_race(args->start >= inode->disk_i_size) && !args->replace_extent)
fs/btrfs/file.c-181- modify_tree = 0;
--
fs/btrfs/space-info.c=1970=u64 btrfs_account_ro_block_groups_free_space(struct btrfs_space_info *sinfo)
--
fs/btrfs/space-info.c-1976- /* It's df, we don't care if it's racy */
fs/btrfs/space-info.c:1977: if (data_race(list_empty(&sinfo->ro_bgs)))
fs/btrfs/space-info.c-1978- return 0;
--
fs/btrfs/tree-log.c=3732=static int inode_logged(const struct btrfs_trans_handle *trans,
--
fs/btrfs/tree-log.c-3743- */
fs/btrfs/tree-log.c:3744: if (data_race(inode->logged_trans) == trans->transid)
fs/btrfs/tree-log.c-3745- return 1;
--
fs/inode.c=111=static unsigned long get_mg_ctime_updates(void)
--
fs/inode.c-116- for_each_possible_cpu(i)
fs/inode.c:117: sum += data_race(per_cpu(mg_ctime_updates, i));
fs/inode.c-118- return sum;
--
fs/inode.c=121=static unsigned long get_mg_fine_stamps(void)
--
fs/inode.c-126- for_each_possible_cpu(i)
fs/inode.c:127: sum += data_race(per_cpu(mg_fine_stamps, i));
fs/inode.c-128- return sum;
--
fs/inode.c=131=static unsigned long get_mg_ctime_swaps(void)
--
fs/inode.c-136- for_each_possible_cpu(i)
fs/inode.c:137: sum += data_race(per_cpu(mg_ctime_swaps, i));
fs/inode.c-138- return sum;
--
fs/jbd2/transaction.c=312=static int start_this_handle(journal_t *journal, handle_t *handle,
--
fs/jbd2/transaction.c-343- */
fs/jbd2/transaction.c:344: if (!data_race(journal->j_running_transaction)) {
fs/jbd2/transaction.c-345- /*
--
fs/jbd2/transaction.c=1491=int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
--
fs/jbd2/transaction.c-1514- */
fs/jbd2/transaction.c:1515: if (data_race(jh->b_transaction != transaction &&
fs/jbd2/transaction.c-1516- jh->b_next_transaction != transaction)) {
--
fs/jbd2/transaction.c-1521- }
fs/jbd2/transaction.c:1522: if (data_race(jh->b_modified == 1)) {
fs/jbd2/transaction.c-1523- /* If it's in our transaction it must be in BJ_Metadata list. */
fs/jbd2/transaction.c:1524: if (data_race(jh->b_transaction == transaction &&
fs/jbd2/transaction.c-1525- jh->b_jlist != BJ_Metadata)) {
--
fs/xfs/xfs_trace.h=1324=DECLARE_EVENT_CLASS(xfs_dquot_class,
--
fs/xfs/xfs_trace.h-1352- __entry->flags = dqp->q_flags;
fs/xfs/xfs_trace.h:1353: __entry->nrefs = data_race(dqp->q_lockref.count);
fs/xfs/xfs_trace.h-1354-
--
include/linux/bpf.h=518=static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
--
include/linux/bpf.h-524- while (size--)
include/linux/bpf.h:525: data_race(*ldst++ = *lsrc++);
include/linux/bpf.h-526-}
--
include/linux/compiler.h=15=void ftrace_likely_update(struct ftrace_likely_data *f, int val,
--
include/linux/compiler.h-172-/**
include/linux/compiler.h:173: * data_race - mark an expression as containing intentional data races
include/linux/compiler.h-174- *
include/linux/compiler.h:175: * This data_race() macro is useful for situations in which data races
include/linux/compiler.h-176- * should be forgiven. One example is diagnostic code that accesses
--
include/linux/compiler.h-180- * be plain C-language accesses and those in the diagnostic code should
include/linux/compiler.h:181: * use data_race(). This way, KCSAN will complain if buggy lockless
include/linux/compiler.h-182- * accesses to that variable are introduced, even if the buggy accesses
--
include/linux/compiler.h-186- * to tooling that data races here are to be ignored. If the access must
include/linux/compiler.h:187: * be atomic *and* KCSAN should ignore the access, use both data_race()
include/linux/compiler.h:188: * and READ_ONCE(), for example, data_race(READ_ONCE(x)).
include/linux/compiler.h-189- */
include/linux/compiler.h:190:#define data_race(expr) \
include/linux/compiler.h-191-({ \
--
include/linux/console.h=522=static inline short console_srcu_read_flags(const struct console *con)
--
include/linux/console.h-529- */
include/linux/console.h:530: return data_race(READ_ONCE(con->flags));
include/linux/console.h-531-}
--
include/linux/mm.h=2914=static inline void update_hiwater_rss(struct mm_struct *mm)
--
include/linux/mm.h-2917-
include/linux/mm.h:2918: if (data_race(mm->hiwater_rss) < _rss)
include/linux/mm.h:2919: data_race(mm->hiwater_rss = _rss);
include/linux/mm.h-2920-}
--
include/linux/page-flags.h=952=static inline bool page_has_type(const struct page *page)
include/linux/page-flags.h-953-{
include/linux/page-flags.h:954: return page_type_has_type(data_race(page->page_type));
include/linux/page-flags.h-955-}
--
include/linux/page-flags.h=958=static __always_inline bool folio_test_##fname(const struct folio *folio) \
include/linux/page-flags.h-959-{ \
include/linux/page-flags.h:960: return data_race(folio->page.page_type >> 24) == PGTY_##lname; \
include/linux/page-flags.h-961-} \
include/linux/page-flags.h=962=static __always_inline void __folio_set_##fname(struct folio *folio) \
--
include/linux/page-flags.h-965- return; \
include/linux/page-flags.h:966: VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
include/linux/page-flags.h-967- folio); \
--
include/linux/page-flags.h=980=static __always_inline int Page##uname(const struct page *page) \
include/linux/page-flags.h-981-{ \
include/linux/page-flags.h:982: return data_race(page->page_type >> 24) == PGTY_##lname; \
include/linux/page-flags.h-983-} \
include/linux/page-flags.h=984=static __always_inline void __SetPage##uname(struct page *page) \
--
include/linux/page-flags.h-987- return; \
include/linux/page-flags.h:988: VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
include/linux/page-flags.h-989- page->page_type = (unsigned int)PGTY_##lname << 24; \
--
include/linux/srcutiny.h=133=static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
--
include/linux/srcutiny.h-137-
include/linux/srcutiny.h:138: idx = ((data_race(READ_ONCE(ssp->srcu_idx)) + 1) & 0x2) >> 1;
include/linux/srcutiny.h-139- pr_alert("%s%s Tiny SRCU per-CPU(idx=%d): (%hd,%hd) gp: %lu->%lu\n",
include/linux/srcutiny.h-140- tt, tf, idx,
include/linux/srcutiny.h:141: data_race(READ_ONCE(ssp->srcu_lock_nesting[!idx])),
include/linux/srcutiny.h:142: data_race(READ_ONCE(ssp->srcu_lock_nesting[idx])),
include/linux/srcutiny.h:143: data_race(READ_ONCE(ssp->srcu_idx)),
include/linux/srcutiny.h:144: data_race(READ_ONCE(ssp->srcu_idx_max)));
include/linux/srcutiny.h-145-}
--
include/net/sock.h=2508=static inline int sock_error(struct sock *sk)
--
include/net/sock.h-2514- */
include/net/sock.h:2515: if (likely(data_race(!sk->sk_err)))
include/net/sock.h-2516- return 0;
--
io_uring/fdinfo.c=60=static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
--
io_uring/fdinfo.c-86- seq_printf(m, "SqTail:\t%u\n", sq_tail);
io_uring/fdinfo.c:87: seq_printf(m, "CachedSqHead:\t%u\n", data_race(ctx->cached_sq_head));
io_uring/fdinfo.c-88- seq_printf(m, "CqMask:\t0x%x\n", cq_mask);
--
io_uring/fdinfo.c-90- seq_printf(m, "CqTail:\t%u\n", cq_tail);
io_uring/fdinfo.c:91: seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail));
io_uring/fdinfo.c-92- seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head);
--
io_uring/refs.h=20=static inline bool req_ref_put_and_test_atomic(struct io_kiocb *req)
io_uring/refs.h-21-{
io_uring/refs.h:22: WARN_ON_ONCE(!(data_race(req->flags) & REQ_F_REFCOUNT));
io_uring/refs.h-23- WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
--
io_uring/timeout.c=595=int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
--
io_uring/timeout.c-614-
io_uring/timeout.c:615: tail = data_race(ctx->cached_cq_tail) - atomic_read(&ctx->cq_timeouts);
io_uring/timeout.c-616- timeout->target_seq = tail + off;
--
kernel/fork.c=1966=__latent_entropy struct task_struct *copy_process(
--
kernel/fork.c-2103- retval = -EAGAIN;
kernel/fork.c:2104: if (data_race(nr_threads >= max_threads))
kernel/fork.c-2105- goto bad_fork_cleanup_count;
--
kernel/irq/irqdesc.c=919=static unsigned int kstat_irqs_desc(struct irq_desc *desc, const struct cpumask *cpumask)
--
kernel/irq/irqdesc.c-926- !irq_is_nmi(desc))
kernel/irq/irqdesc.c:927: return data_race(desc->tot_count);
kernel/irq/irqdesc.c-928-
kernel/irq/irqdesc.c-929- for_each_cpu(cpu, cpumask)
kernel/irq/irqdesc.c:930: sum += data_race(per_cpu(desc->kstat_irqs->cnt, cpu));
kernel/irq/irqdesc.c-931- return sum;
--
kernel/kcsan/core.c=527=kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned long ip)
--
kernel/kcsan/core.c-643- */
kernel/kcsan/core.c:644: if (diff && !kcsan_ignore_data_race(size, type, old, new, diff))
kernel/kcsan/core.c-645- value_change = KCSAN_VALUE_CHANGE_TRUE;
--
kernel/kcsan/core.c-667- /*
kernel/kcsan/core.c:668: * No need to increment 'data_races' counter, as the racing
kernel/kcsan/core.c-669- * thread already did.
--
kernel/kcsan/debugfs.c=26=static const char *const counter_names[] = {
--
kernel/kcsan/debugfs.c-28- [KCSAN_COUNTER_SETUP_WATCHPOINTS] = "setup_watchpoints",
kernel/kcsan/debugfs.c:29: [KCSAN_COUNTER_DATA_RACES] = "data_races",
kernel/kcsan/debugfs.c-30- [KCSAN_COUNTER_ASSERT_FAILURES] = "assert_failures",
--
kernel/kcsan/debugfs.c=138=static ssize_t insert_report_filterlist(const char *func)
--
kernel/kcsan/debugfs.c-156- */
kernel/kcsan/debugfs.c:157: if (data_race(report_filterlist.used == report_filterlist.size)) {
kernel/kcsan/debugfs.c-158- new_size = (report_filterlist.size ?: 4) * 2;
--
kernel/kcsan/kcsan_test.c=358=static noinline void test_kernel_write_uninstrumented(void) { test_var++; }
kernel/kcsan/kcsan_test.c-359-
kernel/kcsan/kcsan_test.c:360:static noinline void test_kernel_data_race(void) { data_race(test_var++); }
kernel/kcsan/kcsan_test.c-361-
--
kernel/kcsan/kcsan_test.c=976=static void test_zero_size_access(struct kunit *test)
--
kernel/kcsan/kcsan_test.c-1001-
kernel/kcsan/kcsan_test.c:1002:/* Test the data_race() macro. */
kernel/kcsan/kcsan_test.c-1003-__no_kcsan
kernel/kcsan/kcsan_test.c:1004:static void test_data_race(struct kunit *test)
kernel/kcsan/kcsan_test.c-1005-{
--
kernel/kcsan/kcsan_test.c-1007-
kernel/kcsan/kcsan_test.c:1008: begin_test_checks(test_kernel_data_race, test_kernel_data_race);
kernel/kcsan/kcsan_test.c-1009- do {
--
kernel/kcsan/kcsan_test.c=1428=static struct kunit_case kcsan_test_cases[] = {
--
kernel/kcsan/kcsan_test.c-1441- KCSAN_KUNIT_CASE(test_zero_size_access),
kernel/kcsan/kcsan_test.c:1442: KCSAN_KUNIT_CASE(test_data_race),
kernel/kcsan/kcsan_test.c-1443- KCSAN_KUNIT_CASE(test_data_racy_qualifier),
--
kernel/kcsan/permissive.h=37=static bool
kernel/kcsan/permissive.h:38:kcsan_ignore_data_race(size_t size, int type, u64 old, u64 new, u64 diff)
kernel/kcsan/permissive.h-39-{
--
kernel/locking/locktorture.c=1001=static void __torture_print_stats(char *page,
--
kernel/locking/locktorture.c-1006- int i, n_stress;
kernel/locking/locktorture.c:1007: long max = 0, min = statp ? data_race(statp[0].n_lock_acquired) : 0;
kernel/locking/locktorture.c-1008- long long sum = 0;
--
kernel/locking/locktorture.c-1011- for (i = 0; i < n_stress; i++) {
kernel/locking/locktorture.c:1012: if (data_race(statp[i].n_lock_fail))
kernel/locking/locktorture.c-1013- fail = true;
kernel/locking/locktorture.c:1014: cur = data_race(statp[i].n_lock_acquired);
kernel/locking/locktorture.c-1015- sum += cur;
--
kernel/locking/osq_lock.c=93=bool osq_lock(struct optimistic_spin_queue *lock)
--
kernel/locking/osq_lock.c-163- */
kernel/locking/osq_lock.c:164: if (data_race(prev->next) == node &&
kernel/locking/osq_lock.c-165- cmpxchg(&prev->next, node, NULL) == node)
--
kernel/printk/nbcon.c=966=static bool nbcon_emit_next_record(struct nbcon_write_context *wctxt, bool use_atomic)
--
kernel/printk/nbcon.c-1011- */
kernel/printk/nbcon.c:1012: con_dropped = data_race(READ_ONCE(con->dropped));
kernel/printk/nbcon.c-1013-
--
kernel/rcu/rcutorture.c=2717=rcu_torture_stats_print(void)
--
kernel/rcu/rcutorture.c-2776- pr_cont("barrier: %ld/%ld:%ld ",
kernel/rcu/rcutorture.c:2777: data_race(n_barrier_successes),
kernel/rcu/rcutorture.c:2778: data_race(n_barrier_attempts),
kernel/rcu/rcutorture.c:2779: data_race(n_rcu_torture_barrier_error));
kernel/rcu/rcutorture.c:2780: pr_cont("read-exits: %ld ", data_race(n_read_exits)); // Statistic.
kernel/rcu/rcutorture.c-2781- pr_cont("nocb-toggles: %ld:%ld ",
--
kernel/rcu/srcutree.c=2037=void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
--
kernel/rcu/srcutree.c-2062- sdp = per_cpu_ptr(ssp->sda, cpu);
kernel/rcu/srcutree.c:2063: u0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_unlocks));
kernel/rcu/srcutree.c:2064: u1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_unlocks));
kernel/rcu/srcutree.c-2065-
--
kernel/rcu/srcutree.c-2071-
kernel/rcu/srcutree.c:2072: l0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_locks));
kernel/rcu/srcutree.c:2073: l1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_locks));
kernel/rcu/srcutree.c-2074-
--
kernel/rcu/tasks.h=238=static const char *tasks_gp_state_getname(struct rcu_tasks *rtp)
kernel/rcu/tasks.h-239-{
kernel/rcu/tasks.h:240: int i = data_race(rtp->gp_state); // Let KCSAN detect update races
kernel/rcu/tasks.h-241- int j = READ_ONCE(i); // Prevent the compiler from reading twice
--
kernel/rcu/tasks.h=251=static void cblist_init_generic(struct rcu_tasks *rtp)
--
kernel/rcu/tasks.h-302- pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d rcu_task_cpu_ids=%d.\n",
kernel/rcu/tasks.h:303: rtp->name, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim),
kernel/rcu/tasks.h-304- rcu_task_cb_adjust, rcu_task_cpu_ids);
--
kernel/rcu/tasks.h=723=static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
--
kernel/rcu/tasks.h-732-
kernel/rcu/tasks.h:733: if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)))
kernel/rcu/tasks.h-734- havecbs = true;
kernel/rcu/tasks.h:735: if (data_race(rtpcp->urgent_gp))
kernel/rcu/tasks.h-736- haveurgent = true;
kernel/rcu/tasks.h:737: if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)) && data_race(rtpcp->urgent_gp))
kernel/rcu/tasks.h-738- haveurgentcbs = true;
--
kernel/rcu/tasks.h-743- rtp->kname,
kernel/rcu/tasks.h:744: tasks_gp_state_getname(rtp), data_race(rtp->gp_state),
kernel/rcu/tasks.h:745: jiffies - data_race(rtp->gp_jiffies),
kernel/rcu/tasks.h:746: data_race(rcu_seq_current(&rtp->tasks_gp_seq)),
kernel/rcu/tasks.h:747: data_race(rtp->n_ipis_fails), data_race(rtp->n_ipis),
kernel/rcu/tasks.h:748: ".k"[!!data_race(rtp->kthread_ptr)],
kernel/rcu/tasks.h-749- ".C"[havecbs],
--
kernel/rcu/tasks.h=757=static void rcu_tasks_torture_stats_print_generic(struct rcu_tasks *rtp, char *tt,
--
kernel/rcu/tasks.h-765- pr_alert("%s%s Tasks%s RCU g%ld gp_start %lu gp_jiffies %lu gp_state %d (%s).\n",
kernel/rcu/tasks.h:766: tt, tf, tst, data_race(rtp->tasks_gp_seq),
kernel/rcu/tasks.h:767: j - data_race(rtp->gp_start), j - data_race(rtp->gp_jiffies),
kernel/rcu/tasks.h:768: data_race(rtp->gp_state), tasks_gp_state_getname(rtp));
kernel/rcu/tasks.h-769- pr_alert("\tEnqueue shift %d limit %d Dequeue limit %d gpseq %lu.\n",
kernel/rcu/tasks.h:770: data_race(rtp->percpu_enqueue_shift),
kernel/rcu/tasks.h:771: data_race(rtp->percpu_enqueue_lim),
kernel/rcu/tasks.h:772: data_race(rtp->percpu_dequeue_lim),
kernel/rcu/tasks.h:773: data_race(rtp->percpu_dequeue_gpseq));
kernel/rcu/tasks.h-774- (void)zalloc_cpumask_var(&cm, GFP_KERNEL);
--
kernel/rcu/tasks.h-792- pr_alert("\tBarrier seq %lu start %lu count %d holdout CPUs ",
kernel/rcu/tasks.h:793: data_race(rtp->barrier_q_seq), j - data_race(rtp->barrier_q_start),
kernel/rcu/tasks.h-794- atomic_read(&rtp->barrier_q_count));
--
kernel/rcu/tasks.h=1094=static void check_holdout_task(struct task_struct *t,
--
kernel/rcu/tasks.h-1120- t->rcu_tasks_nvcsw, t->nvcsw, t->rcu_tasks_holdout,
kernel/rcu/tasks.h:1121: data_race(t->rcu_tasks_idle_cpu), cpu);
kernel/rcu/tasks.h-1122- sched_show_task(t);
--
kernel/rcu/tasks.h=1893=static void show_stalled_task_trace(struct task_struct *t, bool *firstreport)
--
kernel/rcu/tasks.h-1914- ".N"[cpu >= 0 && tick_nohz_full_cpu(cpu)],
kernel/rcu/tasks.h:1915: ".B"[!!data_race(t->trc_reader_special.b.blocked)],
kernel/rcu/tasks.h-1916- trc_rdr.nesting,
--
kernel/rcu/tasks.h=2085=void show_rcu_tasks_trace_gp_kthread(void)
--
kernel/rcu/tasks.h-2089- snprintf(buf, sizeof(buf), "N%lu h:%lu/%lu/%lu",
kernel/rcu/tasks.h:2090: data_race(n_trc_holdouts),
kernel/rcu/tasks.h:2091: data_race(n_heavy_reader_ofl_updates),
kernel/rcu/tasks.h:2092: data_race(n_heavy_reader_updates),
kernel/rcu/tasks.h:2093: data_race(n_heavy_reader_attempts));
kernel/rcu/tasks.h-2094- show_rcu_tasks_generic_gp_kthread(&rcu_tasks_trace, buf);
--
kernel/rcu/tree.c=1015=static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
--
kernel/rcu/tree.c-1072- }
kernel/rcu/tree.c:1073: trace_rcu_grace_period(rcu_state.name, data_race(rcu_state.gp_seq), TPS("newreq"));
kernel/rcu/tree.c-1074- ret = true; /* Caller must wake GP kthread. */
--
kernel/rcu/tree_exp.h=558=static void synchronize_rcu_expedited_stall(unsigned long jiffies_start, unsigned long j)
--
kernel/rcu/tree_exp.h-585- "N."[!!(rdp->grpmask & rnp->expmaskinitnext)],
kernel/rcu/tree_exp.h:586: "D."[!!data_race(rdp->cpu_no_qs.b.exp)]);
kernel/rcu/tree_exp.h-587- }
--
kernel/rcu/tree_exp.h-589- pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
kernel/rcu/tree_exp.h:590: j - jiffies_start, rcu_state.expedited_sequence, data_race(rnp_root->expmask),
kernel/rcu/tree_exp.h:591: ".T"[!!data_race(rnp_root->exp_tasks)]);
kernel/rcu/tree_exp.h-592- if (ndetected) {
--
kernel/rcu/tree_exp.h-599- pr_cont(" l=%u:%d-%d:%#lx/%c",
kernel/rcu/tree_exp.h:600: rnp->level, rnp->grplo, rnp->grphi, data_race(rnp->expmask),
kernel/rcu/tree_exp.h:601: ".T"[!!data_race(rnp->exp_tasks)]);
kernel/rcu/tree_exp.h-602- }
--
kernel/rcu/tree_plugin.h=864=dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
--
kernel/rcu/tree_plugin.h-879- pr_info("%s: ->gp_tasks %p ->boost_tasks %p ->exp_tasks %p\n",
kernel/rcu/tree_plugin.h:880: __func__, READ_ONCE(rnp->gp_tasks), data_race(rnp->boost_tasks),
kernel/rcu/tree_plugin.h-881- READ_ONCE(rnp->exp_tasks));
--
kernel/rcu/tree_stall.h=396=static void rcu_dump_cpu_stacks(unsigned long gp_seq)
--
kernel/rcu/tree_stall.h-404- for_each_leaf_node_possible_cpu(rnp, cpu) {
kernel/rcu/tree_stall.h:405: if (gp_seq != data_race(rcu_state.gp_seq)) {
kernel/rcu/tree_stall.h-406- printk_deferred_exit();
--
kernel/rcu/tree_stall.h-409- }
kernel/rcu/tree_stall.h:410: if (!(data_race(rnp->qsmask) & leaf_node_cpu_bit(rnp, cpu)))
kernel/rcu/tree_stall.h-411- continue;
--
kernel/rcu/tree_stall.h=518=static void print_cpu_stall_info(int cpu)
--
kernel/rcu/tree_stall.h-560- rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
kernel/rcu/tree_stall.h:561: data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
kernel/rcu/tree_stall.h-562- rcuc_starved ? buf : "",
--
kernel/rcu/tree_stall.h=569=static void rcu_check_gp_kthread_starvation(void)
--
kernel/rcu/tree_stall.h-579- (long)rcu_seq_current(&rcu_state.gp_seq),
kernel/rcu/tree_stall.h:580: data_race(READ_ONCE(rcu_state.gp_flags)),
kernel/rcu/tree_stall.h-581- gp_state_getname(rcu_state.gp_state),
kernel/rcu/tree_stall.h:582: data_race(READ_ONCE(rcu_state.gp_state)),
kernel/rcu/tree_stall.h:583: gpk ? data_race(READ_ONCE(gpk->__state)) : ~0, cpu);
kernel/rcu/tree_stall.h-584- if (gpk) {
--
kernel/rcu/tree_stall.h-591- pr_err("RCU GP kthread last ran on offline CPU %d.\n", cpu);
kernel/rcu/tree_stall.h:592: } else if (!(data_race(READ_ONCE(rdp->mynode->qsmask)) & rdp->grpmask)) {
kernel/rcu/tree_stall.h-593- pr_err("Stack dump where RCU GP kthread last ran:\n");
--
kernel/rcu/tree_stall.h=602=static void rcu_check_gp_kthread_expired_fqs_timer(void)
--
kernel/rcu/tree_stall.h-622- (long)rcu_seq_current(&rcu_state.gp_seq),
kernel/rcu/tree_stall.h:623: data_race(READ_ONCE(rcu_state.gp_flags)), // Diagnostic read
kernel/rcu/tree_stall.h-624- gp_state_getname(RCU_GP_WAIT_FQS), RCU_GP_WAIT_FQS,
kernel/rcu/tree_stall.h:625: data_race(READ_ONCE(gpk->__state)));
kernel/rcu/tree_stall.h-626- pr_err("\tPossible timer handling issue on cpu=%d timer-softirq=%u\n",
--
kernel/rcu/tree_stall.h=631=static void print_other_cpu_stall(unsigned long gp_seq, unsigned long gps)
]
] |
| 25/2 | 2026/02/25 02:12 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 37222 output: 28 thoughts: 52 Thoughts: Okay, here's my take on those thoughts, framed as my own: **Analyzing `data_race()` Usage in the Kernel** Alright, the `grepper` output is interesting. Seems like `data_race()` is sprinkled throughout the kernel, popping up in quite a few places. The initial impression is that it's primarily used for diagnostic purposes, some basic statistics gathering, and, surprisingly, for intentional race conditions. That's a good starting point, but I need to dig deeper to understand the nuances of how it's being employed. Let's home in on `net/bridge/br_fdb.c`. Bridge forwarding databases are always a hotbed of concurrency issues, and that area is ripe for potential data races. It seems like a good example of a place where concurrent access to shared structures might be frequent. I'm keen to see how `data_race()` is used within that context. Is it being used to detect or diagnose potential issues, or perhaps to deliberately relax ordering constraints in a controlled manner for performance reasons? Time to dive into that specific file and see what the code reveals. |
| 26/2 | 2026/02/25 02:12 | tool | grepper |
Error:no matches Args: map[Expression:data_race.*net/bridge/br_fdb.c] Results: map[Output:] |
| 27/2 | 2026/02/25 02:12 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 37552 output: 23 thoughts: 46 Thoughts: Okay, here's my take on those thoughts, framed as my own mental processing: **Analyzing Potential Data Races in net/bridge/br_fdb.c** Alright, I'm taking a look at the `net/bridge/br_fdb.c` file, and the initial assessment is that there's no immediate indication of a `data_race` issue. That's good to see, at least at first glance. However, I always need to be thorough. My experience tells me not to jump to conclusions, so I need to dig deeper. The next thing I need to check, and *this is crucial* when dealing with concurrent access, is whether the code uses any `READ_ONCE` or `WRITE_ONCE` macros on the `updated` variable. Those macros are the telltale signs that someone *was* concerned about potential race conditions and attempted to address them explicitly. Ignoring this step is the hallmark of sloppy debugging, and I'm not that. If I see those, it'll tell me whether they are being used properly. |
| 28/2 | 2026/02/25 02:12 | tool | grepper |
Args:map[Expression:(READ|WRITE)_ONCE.*updated] Results: map[Output:drivers/net/vxlan/vxlan_core.c=156=static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
--
drivers/net/vxlan/vxlan_core.c-237- ci.ndm_confirmed = 0;
drivers/net/vxlan/vxlan_core.c:238: ci.ndm_updated = jiffies_to_clock_t(now - READ_ONCE(fdb->updated));
drivers/net/vxlan/vxlan_core.c-239- ci.ndm_refcnt = 0;
--
drivers/net/vxlan/vxlan_core.c=968=static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
--
drivers/net/vxlan/vxlan_core.c-1039- if (ndm_flags & NTF_USE)
drivers/net/vxlan/vxlan_core.c:1040: WRITE_ONCE(f->updated, jiffies);
drivers/net/vxlan/vxlan_core.c-1041-
--
drivers/net/vxlan/vxlan_core.c-1045-
drivers/net/vxlan/vxlan_core.c:1046: WRITE_ONCE(f->updated, jiffies);
drivers/net/vxlan/vxlan_core.c-1047- err = vxlan_fdb_notify(vxlan, f, rd, RTM_NEWNEIGH,
--
drivers/net/vxlan/vxlan_core.c=1422=static enum skb_drop_reason vxlan_snoop(struct net_device *dev,
--
drivers/net/vxlan/vxlan_core.c-1445-
drivers/net/vxlan/vxlan_core.c:1446: if (READ_ONCE(f->updated) != now)
drivers/net/vxlan/vxlan_core.c:1447: WRITE_ONCE(f->updated, now);
drivers/net/vxlan/vxlan_core.c-1448-
--
drivers/net/vxlan/vxlan_core.c=2836=static void vxlan_cleanup(struct timer_list *t)
--
drivers/net/vxlan/vxlan_core.c-2854-
drivers/net/vxlan/vxlan_core.c:2855: timeout = READ_ONCE(f->updated) + vxlan->cfg.age_interval * HZ;
drivers/net/vxlan/vxlan_core.c-2856- if (time_before_eq(timeout, jiffies)) {
] |
| 29/2 | 2026/02/25 02:12 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 38551 output: 23 thoughts: 450 Thoughts: **Analyzing Data Race Concerns in Bridge FDB Updates**
Okay, this is intriguing. I'm looking at how `vxlan` handles the `fdb->updated` timestamp and it's using `READ_ONCE`/`WRITE_ONCE`. But the bridge code doesn't. Hmm, let me go back and double-check `net/bridge/br_fdb.c`.
Hold on, before I jump to any conclusions, I should figure out *why* the bridge code might be different. Maybe it's just older code that hasn't been brought up-to-date with KCSAN requirements?
Let me re-examine the `br_fdb_update` function:
```c
1005: if (now != fdb->updated) {
1006: fdb->updated = now;
```
If I were to change this to:
```c
if (now != READ_ONCE(fdb->updated)) {
	WRITE_ONCE(fdb->updated, now);
```
it *would* make it KCSAN-compliant and align with the `vxlan` approach. But the real question is: Is the current code actually causing any harm?
In the Linux kernel, a data race on a simple timestamp like this is typically considered benign, unless it's leading to crashes or severe logic errors. Here, it's just an aging timestamp. So, is there any problem?
Wait, I should check if there are any other fields updated in that function *without* `WRITE_ONCE` that could be more problematic. Let's see...
In `br_fdb_update`, `fdb->updated` gets updated without `WRITE_ONCE`. `fdb->dst` *does* get updated with `WRITE_ONCE`. And `fdb->flags` are updated using atomic bitops.
So, whoever added `WRITE_ONCE(fdb->dst, source)` (at line 1014) clearly cared about data races on `dst`. Why didn't they apply the same logic to `updated`? Probably because `dst` is a pointer, and a torn write there could be catastrophic (leading to a crash), while `updated` is just a timestamp.
Okay, let's look back to when `WRITE_ONCE(fdb->dst)` was originally added. That would tell me a lot about the context and why it was introduced.
|
| 30/2 | 2026/02/25 02:13 | tool | grepper |
Args:map[Expression:WRITE_ONCE.*fdb->dst] Results: map[Output:net/bridge/br_fdb.c=382=static struct net_bridge_fdb_entry *fdb_create(struct net_bridge *br,
--
net/bridge/br_fdb.c-406- memcpy(fdb->key.addr.addr, addr, ETH_ALEN);
net/bridge/br_fdb.c:407: WRITE_ONCE(fdb->dst, source);
net/bridge/br_fdb.c-408- fdb->key.vlan_id = vid;
--
net/bridge/br_fdb.c=985=void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source,
--
net/bridge/br_fdb.c-1013- br_switchdev_fdb_notify(br, fdb, RTM_DELNEIGH);
net/bridge/br_fdb.c:1014: WRITE_ONCE(fdb->dst, source);
net/bridge/br_fdb.c-1015- fdb_modified = true;
--
net/bridge/br_fdb.c=1158=static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
--
net/bridge/br_fdb.c-1206- if (READ_ONCE(fdb->dst) != source) {
net/bridge/br_fdb.c:1207: WRITE_ONCE(fdb->dst, source);
net/bridge/br_fdb.c-1208- modified = true;
--
net/bridge/br_fdb.c=1517=int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
--
net/bridge/br_fdb.c-1561- if (READ_ONCE(fdb->dst) != p) {
net/bridge/br_fdb.c:1562: WRITE_ONCE(fdb->dst, p);
net/bridge/br_fdb.c-1563- modified = true;
] |
| 31/2 | 2026/02/25 02:13 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 40109 output: 18 thoughts: 57 Thoughts: Okay, here's my take on those observations. **Analyzing Data Consistency in the Kernel Module** Right, let me see. It's becoming pretty clear that the `dst` variable is being consistently accessed using `READ_ONCE` and `WRITE_ONCE`. That immediately raises a flag. We're obviously trying to enforce some form of atomic or memory barrier ordering on that specific field, likely due to it being shared across multiple threads or contexts. My initial hunch is this is the critical shared state, and the barriers are ensuring that read/write operations on this field are correct. Now, the interesting thing is that `updated` doesn't seem to get the same treatment. We have to consider why, specifically. Is there a race condition concern? Does the compiler know, from the context in which `updated` is used, that an atomic barrier isn't necessary? We need to keep a close eye on the logic surrounding `updated`. Okay, let's cast our gaze around at the other fields. Let me see... yes, I remember seeing `fdb->used` within `br_input.c`. That one warrants a closer look now that we're talking about this. I'll need to go back and double check that. Is it handled the same way? The context it's being used in might give us hints as to whether or not barriers are applied there, too. It's likely that it has similar concurrency demands to `dst`. I need to ensure that this isn't another point where race conditions may occur. |
| 32/2 | 2026/02/25 02:13 | tool | grepper |
Args:map[Expression:fdb->used] Results: map[Output:drivers/net/vxlan/vxlan_core.c=156=static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
--
drivers/net/vxlan/vxlan_core.c-235-
drivers/net/vxlan/vxlan_core.c:236: ci.ndm_used = jiffies_to_clock_t(now - READ_ONCE(fdb->used));
drivers/net/vxlan/vxlan_core.c-237- ci.ndm_confirmed = 0;
--
net/bridge/br_fdb.c=89=static int fdb_fill_info(struct sk_buff *skb, const struct net_bridge *br,
--
net/bridge/br_fdb.c-128-
net/bridge/br_fdb.c:129: ci.ndm_used = jiffies_to_clock_t(now - fdb->used);
net/bridge/br_fdb.c-130- ci.ndm_confirmed = 0;
--
net/bridge/br_fdb.c=382=static struct net_bridge_fdb_entry *fdb_create(struct net_bridge *br,
--
net/bridge/br_fdb.c-409- fdb->flags = flags;
net/bridge/br_fdb.c:410: fdb->updated = fdb->used = jiffies;
net/bridge/br_fdb.c-411- err = rhashtable_lookup_insert_fast(&br->fdb_hash_tbl, &fdb->rhnode,
--
net/bridge/br_fdb.c=1158=static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
--
net/bridge/br_fdb.c-1244-
net/bridge/br_fdb.c:1245: fdb->used = jiffies;
net/bridge/br_fdb.c-1246- if (modified) {
--
net/bridge/br_fdb.c=1517=int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
--
net/bridge/br_fdb.c-1567- /* Refresh entry */
net/bridge/br_fdb.c:1568: fdb->used = jiffies;
net/bridge/br_fdb.c-1569- } else {
] |
| 33/2 | 2026/02/25 02:13 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 41173 output: 28 thoughts: 54 Thoughts: Okay, here's my interpretation of those thoughts, tailored for an expert audience, as if they were my own: **Missed Target & Refining Search Strategy** Damn it, I missed `br_input.c` in that initial `grep`! Rookie mistake, really. I bet I know why, though... I didn't explicitly include it in the search. That explains it. Okay, let's try a more targeted approach. I'm going to focus specifically on `fdb->used` within `net/bridge/br_input.c` now. This is where the magic should be happening. |
| 34/2 | 2026/02/25 02:13 | tool | grepper |
Error:no matches Args: map[Expression:fdb->used.*net/bridge/br_input.c] Results: map[Output:] |
| 35/2 | 2026/02/25 02:13 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 41400 output: 45 thoughts: 44 Thoughts: Here's my attempt at summarizing the thought process, tailored for an expert audience: **Memory Jog and Code Dive** Okay, hold on a second... didn't I just glimpse this particular issue, or a related one, within the `br_handle_frame_finish` function? That feels familiar. I need to refresh my memory. Let's get the source code up... specifically, the `br_input.c` file located in the `net/bridge` directory. Time to revisit `br_handle_frame_finish`. Hopefully, that function provides a clue I am looking for. |
| 36/2 | 2026/02/25 02:13 | tool | codesearch-definition-source |
Args: map[ContextFile:net/bridge/br_input.c IncludeLines:true Name:br_handle_frame_finish] Results: map[SourceCode: 75: /* note: already called with rcu_read_lock */
76: int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
77: {
78: enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
79: struct net_bridge_port *p = br_port_get_rcu(skb->dev);
80: enum br_pkt_type pkt_type = BR_PKT_UNICAST;
81: struct net_bridge_fdb_entry *dst = NULL;
82: struct net_bridge_mcast_port *pmctx;
83: struct net_bridge_mdb_entry *mdst;
84: bool local_rcv, mcast_hit = false;
85: struct net_bridge_mcast *brmctx;
86: struct net_bridge_vlan *vlan;
87: struct net_bridge *br;
88: bool promisc;
89: u16 vid = 0;
90: u8 state;
91:
92: if (!p)
93: goto drop;
94:
95: br = p->br;
96:
97: if (br_mst_is_enabled(p)) {
98: state = BR_STATE_FORWARDING;
99: } else {
100: if (p->state == BR_STATE_DISABLED) {
101: reason = SKB_DROP_REASON_BRIDGE_INGRESS_STP_STATE;
102: goto drop;
103: }
104:
105: state = p->state;
106: }
107:
108: brmctx = &p->br->multicast_ctx;
109: pmctx = &p->multicast_ctx;
110: if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid,
111: &state, &vlan))
112: goto out;
113:
114: if (p->flags & BR_PORT_LOCKED) {
115: struct net_bridge_fdb_entry *fdb_src =
116: br_fdb_find_rcu(br, eth_hdr(skb)->h_source, vid);
117:
118: if (!fdb_src) {
119: /* FDB miss. Create locked FDB entry if MAB is enabled
120: * and drop the packet.
121: */
122: if (p->flags & BR_PORT_MAB)
123: br_fdb_update(br, p, eth_hdr(skb)->h_source,
124: vid, BIT(BR_FDB_LOCKED));
125: goto drop;
126: } else if (READ_ONCE(fdb_src->dst) != p ||
127: test_bit(BR_FDB_LOCAL, &fdb_src->flags)) {
128: /* FDB mismatch. Drop the packet without roaming. */
129: goto drop;
130: } else if (test_bit(BR_FDB_LOCKED, &fdb_src->flags)) {
131: /* FDB match, but entry is locked. Refresh it and drop
132: * the packet.
133: */
134: br_fdb_update(br, p, eth_hdr(skb)->h_source, vid,
135: BIT(BR_FDB_LOCKED));
136: goto drop;
137: }
138: }
139:
140: nbp_switchdev_frame_mark(p, skb);
141:
142: /* insert into forwarding database after filtering to avoid spoofing */
143: if (p->flags & BR_LEARNING)
144: br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, 0);
145:
146: promisc = !!(br->dev->flags & IFF_PROMISC);
147: local_rcv = promisc;
148:
149: if (is_multicast_ether_addr(eth_hdr(skb)->h_dest)) {
150: /* by definition the broadcast is also a multicast address */
151: if (is_broadcast_ether_addr(eth_hdr(skb)->h_dest)) {
152: pkt_type = BR_PKT_BROADCAST;
153: local_rcv = true;
154: } else {
155: pkt_type = BR_PKT_MULTICAST;
156: if (br_multicast_rcv(&brmctx, &pmctx, vlan, skb, vid))
157: goto drop;
158: }
159: }
160:
161: if (state == BR_STATE_LEARNING) {
162: reason = SKB_DROP_REASON_BRIDGE_INGRESS_STP_STATE;
163: goto drop;
164: }
165:
166: BR_INPUT_SKB_CB(skb)->brdev = br->dev;
167: BR_INPUT_SKB_CB(skb)->src_port_isolated = !!(p->flags & BR_ISOLATED);
168:
169: if (IS_ENABLED(CONFIG_INET) &&
170: (skb->protocol == htons(ETH_P_ARP) ||
171: skb->protocol == htons(ETH_P_RARP))) {
172: br_do_proxy_suppress_arp(skb, br, vid, p);
173: } else if (IS_ENABLED(CONFIG_IPV6) &&
174: skb->protocol == htons(ETH_P_IPV6) &&
175: br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) &&
176: pskb_may_pull(skb, sizeof(struct ipv6hdr) +
177: sizeof(struct nd_msg)) &&
178: ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) {
179: struct nd_msg *msg, _msg;
180:
181: msg = br_is_nd_neigh_msg(skb, &_msg);
182: if (msg)
183: br_do_suppress_nd(skb, br, vid, p, msg);
184: }
185:
186: switch (pkt_type) {
187: case BR_PKT_MULTICAST:
188: mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
189: if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
190: br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) {
191: if ((mdst && mdst->host_joined) ||
192: br_multicast_is_router(brmctx, skb) ||
193: br->dev->flags & IFF_ALLMULTI) {
194: local_rcv = true;
195: DEV_STATS_INC(br->dev, multicast);
196: }
197: mcast_hit = true;
198: } else {
199: local_rcv = true;
200: DEV_STATS_INC(br->dev, multicast);
201: }
202: break;
203: case BR_PKT_UNICAST:
204: dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, vid);
205: if (unlikely(!dst && vid &&
206: br_opt_get(br, BROPT_FDB_LOCAL_VLAN_0))) {
207: dst = br_fdb_find_rcu(br, eth_hdr(skb)->h_dest, 0);
208: if (dst &&
209: (!test_bit(BR_FDB_LOCAL, &dst->flags) ||
210: test_bit(BR_FDB_ADDED_BY_USER, &dst->flags)))
211: dst = NULL;
212: }
213: break;
214: default:
215: break;
216: }
217:
218: if (dst) {
219: unsigned long now = jiffies;
220:
221: if (test_bit(BR_FDB_LOCAL, &dst->flags))
222: return br_pass_frame_up(skb, false);
223:
224: if (now != dst->used)
225: dst->used = now;
226: br_forward(dst->dst, skb, local_rcv, false);
227: } else {
228: if (!mcast_hit)
229: br_flood(br, skb, pkt_type, local_rcv, false, vid);
230: else
231: br_multicast_flood(mdst, skb, brmctx, local_rcv, false);
232: }
233:
234: if (local_rcv)
235: return br_pass_frame_up(skb, promisc);
236:
237: out:
238: return 0;
239: drop:
240: kfree_skb_reason(skb, reason);
241: goto out;
242: }
SourceFile:net/bridge/br_input.c] |
| 37/2 | 2026/02/25 02:13 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 44460 output: 19 thoughts: 822 Thoughts: **Benign Data Race in Bridge FDB Update**
Okay, so I've been digging into this KCSAN report, specifically lines 224-225, where `dst->used` is updated. Yep, it's a data race, KCSAN probably nailed it. But is it *bad*? Let's break this down.
First, I need to understand why `updated` and `used` are separate fields. Right, `updated` tracks when the *source* MAC was learned, and `used` tracks when the *destination* MAC was last used for forwarding. Both are simply timestamps, and updating them without locks is a common pattern in the bridge code.
I'm pretty confident this race is benign. The KCSAN report shows a change in the value from `0x0000000100026abc` to `0x0000000100026abd`, a difference of just 1. A torn write of a 64-bit value on a 32-bit system would typically show a much larger discrepancy; here only the low bit changed, and on x86_64 an aligned 8-byte access cannot tear in any case. So this race is just two threads writing the same, or nearly the same, value.
Let me do a final check. Does any *other* code rely on `fdb->updated`? I checked `has_expired`, `br_fdb_cleanup`, `fdb_fill_info`, and `br_fdb_update` logic and they're all fine. So the race is benign.
Wait, hold on, I should check if there's *another* race somewhere else in `br_fdb_update`. Okay, on line 1011, there's this block:
```c
1011: if (unlikely(source != READ_ONCE(fdb->dst) &&
1012: !test_bit(BR_FDB_STICKY, &fdb->flags))) {
1013: br_switchdev_fdb_notify(br, fdb, RTM_DELNEIGH);
1014: WRITE_ONCE(fdb->dst, source);
1015: fdb_modified = true;
```
If two threads race here, they'll both see the condition in line 1011 as true, so they'll both call `br_switchdev_fdb_notify` and `WRITE_ONCE`. This is an expected "roaming" race, where the MAC address is moving rapidly between ports.
But wait a minute... `br_switchdev_fdb_notify` is called *before* `WRITE_ONCE`. This means if two threads race here, two `DELNEIGH` notifications will be sent, then two `WRITE_ONCE`s, and then two `NEWNEIGH` notifications will be sent later, which seems messy but okay. Roaming behavior is a little messy in general if it happens from two ports simultaneously.
But the KCSAN report is specifically on `fdb->updated`.
Alright, I'm sticking with benign.
|
| 38/2 | 2026/02/25 02:13 | tool | set-results |
Args: map[Benign:true Confident:true] Results: map[Benign:true Confident:true] |
| 39/2 | 2026/02/25 02:13 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 46034 output: 646 thoughts: 144 Thoughts: **Analyzing the Data Race - My Perspective** Okay, so I've looked into this and I'm confident that the data race we're seeing here is, ultimately, benign. The core of the issue revolves around `fdb->updated` within `br_fdb_update`. This is that timestamp, right? The one we use for aging our FDB entries. And, as I understand it, it's getting updated in the fast path, without any explicit locking. Now, my initial concern, as it always is when seeing a potential race condition, was the integrity of the data. But, thinking through the hardware architecture, specifically x86_64, those 8-byte aligned accesses are atomic, which is good. That eliminates a substantial class of potential corruption issues. Looking closer at the logic that *uses* that `fdb->updated` field – the aging process, the notification mechanisms – it's designed to be robust against occasional, minor discrepancies. We're talking about a timestamp here, and slight variations in the timestamp aren't critical. We're not using it as an absolute ordering mechanism, right? So, missing an update or getting a slightly out-of-date value during the aging process is not a big deal. The overall system should still function as expected. And hey, I've noticed a similar pattern for `fdb->used` too, if I recall. Plus, reviewing the structure as a whole, it seems that other fields in the same struct are either accessed using `READ_ONCE`/`WRITE_ONCE` primitives, or with atomic bit operations. That's consistent with this benign race pattern. So, everything's looking fine to me. |