======================================================
WARNING: possible circular locking dependency detected
4.14.285-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:6/9270 is trying to acquire lock:
((&(&cp->cp_send_w)->work)){+.+.}, at: [<ffffffff813679d8>] flush_work+0x88/0x770 kernel/workqueue.c:2887
but task is already holding lock:
(k-sk_lock-AF_INET){+.+.}, at: [<ffffffff869c2101>] lock_sock include/net/sock.h:1473 [inline]
(k-sk_lock-AF_INET){+.+.}, at: [<ffffffff869c2101>] rds_tcp_reset_callbacks+0x181/0x450 net/rds/tcp.c:165
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (k-sk_lock-AF_INET){+.+.}:
lock_sock_nested+0xb7/0x100 net/core/sock.c:2813
lock_sock include/net/sock.h:1473 [inline]
do_tcp_setsockopt.constprop.0+0xfb/0x1c10 net/ipv4/tcp.c:2564
tcp_setsockopt net/ipv4/tcp.c:2832 [inline]
tcp_setsockopt+0xa7/0xc0 net/ipv4/tcp.c:2824
kernel_setsockopt+0xfb/0x1b0 net/socket.c:3396
rds_tcp_cork net/rds/tcp_send.c:43 [inline]
rds_tcp_xmit_path_prepare+0xaf/0xe0 net/rds/tcp_send.c:50
rds_send_xmit+0x1ae/0x1c00 net/rds/send.c:187
rds_send_worker+0x6d/0x240 net/rds/threads.c:189
process_one_work+0x793/0x14a0 kernel/workqueue.c:2117
worker_thread+0x5cc/0xff0 kernel/workqueue.c:2251
kthread+0x30d/0x420 kernel/kthread.c:232
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
-> #0 ((&(&cp->cp_send_w)->work)){+.+.}:
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
flush_work+0xad/0x770 kernel/workqueue.c:2890
__cancel_work_timer+0x321/0x460 kernel/workqueue.c:2965
rds_tcp_reset_callbacks+0x18d/0x450 net/rds/tcp.c:167
rds_tcp_accept_one+0x61a/0x8b0 net/rds/tcp_listen.c:194
rds_tcp_accept_worker+0x4d/0x70 net/rds/tcp.c:407
process_one_work+0x793/0x14a0 kernel/workqueue.c:2117
worker_thread+0x5cc/0xff0 kernel/workqueue.c:2251
kthread+0x30d/0x420 kernel/kthread.c:232
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(k-sk_lock-AF_INET);
                               lock((&(&cp->cp_send_w)->work));
                               lock(k-sk_lock-AF_INET);
  lock((&(&cp->cp_send_w)->work));
*** DEADLOCK ***
4 locks held by kworker/u4:6/9270:
#0: ("%s""krdsd"){+.+.}, at: [<ffffffff81364ef0>] process_one_work+0x6b0/0x14a0 kernel/workqueue.c:2088
#1: ((&rtn->rds_tcp_accept_w)){+.+.}, at: [<ffffffff81364f26>] process_one_work+0x6e6/0x14a0 kernel/workqueue.c:2092
#2: (&tc->t_conn_path_lock){+.+.}, at: [<ffffffff869c3ef2>] rds_tcp_accept_one+0x502/0x8b0 net/rds/tcp_listen.c:186
#3: (k-sk_lock-AF_INET){+.+.}, at: [<ffffffff869c2101>] lock_sock include/net/sock.h:1473 [inline]
#3: (k-sk_lock-AF_INET){+.+.}, at: [<ffffffff869c2101>] rds_tcp_reset_callbacks+0x181/0x450 net/rds/tcp.c:165
stack backtrace:
CPU: 1 PID: 9270 Comm: kworker/u4:6 Not tainted 4.14.285-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: krdsd rds_tcp_accept_worker
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x1b2/0x281 lib/dump_stack.c:58
print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1905 [inline]
check_prevs_add kernel/locking/lockdep.c:2022 [inline]
validate_chain kernel/locking/lockdep.c:2464 [inline]
__lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
flush_work+0xad/0x770 kernel/workqueue.c:2890
__cancel_work_timer+0x321/0x460 kernel/workqueue.c:2965
rds_tcp_reset_callbacks+0x18d/0x450 net/rds/tcp.c:167
rds_tcp_accept_one+0x61a/0x8b0 net/rds/tcp_listen.c:194
rds_tcp_accept_worker+0x4d/0x70 net/rds/tcp.c:407
process_one_work+0x793/0x14a0 kernel/workqueue.c:2117
worker_thread+0x5cc/0xff0 kernel/workqueue.c:2251
kthread+0x30d/0x420 kernel/kthread.c:232
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
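
In short, the report shows two inverted orderings between the kernel socket lock (k-sk_lock-AF_INET) and the RDS send work item: the cp_send_w work (rds_send_worker -> rds_tcp_cork -> kernel_setsockopt) takes lock_sock() while running, whereas rds_tcp_reset_callbacks() takes lock_sock() first and then cancel_work_sync()s that same work, waiting for it to finish. The userspace sketch below is only an analogy of that circular wait using pthreads; the names sk_lock, send_work and the use of pthread_join in place of flush_work/cancel_work_sync are illustrative assumptions, not the RDS/TCP kernel code. Running it hangs deterministically, which is exactly the hang lockdep is warning about.

/*
 * Minimal pthreads model of the inversion above (assumed, illustrative names).
 * "sk_lock" stands in for k-sk_lock-AF_INET, the worker thread stands in for
 * the cp_send_w work item, and pthread_join() stands in for flush_work().
 * Build: cc -pthread deadlock.c && ./a.out   (the program intentionally hangs)
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the send work: like rds_tcp_cork() -> kernel_setsockopt(), it must
 * take the socket lock before it can make progress. */
static void *send_work(void *arg)
{
	pthread_mutex_lock(&sk_lock);	/* blocks: main already holds sk_lock */
	printf("send_work: got sk_lock, transmitting\n");
	pthread_mutex_unlock(&sk_lock);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	/* Models rds_tcp_reset_callbacks(): the socket lock is taken first. */
	pthread_mutex_lock(&sk_lock);

	/* The work is already queued/running in the kernel case; starting it
	 * after taking the lock just makes the hang deterministic here. */
	pthread_create(&worker, NULL, send_work, NULL);

	/* Models cancel_work_sync()/flush_work(): wait for the work to finish
	 * while still holding sk_lock.  The work needs sk_lock to finish, so
	 * both sides wait on each other -- the circular dependency lockdep
	 * reports. */
	pthread_join(worker, NULL);

	pthread_mutex_unlock(&sk_lock);
	printf("never reached\n");
	return 0;
}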