Results for igt@gem_busy@close-race

Result: Abort

Artifacts: git-log-oneline, i915_display_info5, igt_runner5, runtimes5, results5.json, guc_logs5.tar, i915_display_info_post_exec5, boot5, dmesg5

Detail       Value
Duration     unknown
Hostname     shard-tglu-1
IGT-Version  2.4-g65fb5a161 (x86_64) (Linux: 7.0.0-CI_DRM_18350-g898b5aa235c5+ x86_64)
Out
Using IGT_SRANDOM=1776770303 for randomisation
Opened device: /dev/dri/card0
Starting subtest: close-race
Opened device: /dev/dri/card1
runner: This test was killed due to a kernel taint (0x4244).

This test caused an abort condition: Child refuses to die, tainted 0x4244.
Err
Starting subtest: close-race
Received signal SIGQUIT.
Stack trace:
 #0 [fatal_sig_handler+0x17b]
 #1 [__sigaction+0x50]
 #2 [poll+0x4d]
 #3 [igt_fork_hang_detector+0x138]
 #4 [__igt_unique____real_main472+0x158d]
 #5 [main+0x2d]
 #6 [__libc_init_first+0x8a]
 #7 [__libc_start_main+0x8b]
 #8 [_start+0x25]
Received signal SIGQUIT.
Stack trace:
 #0 [fatal_sig_handler+0x17b]
 #1 [__sigaction+0x50]
 #2 [wait4+0x51]
 #3 [__igt_waitchildren+0x57]
 #4 [igt_waitchildren+0x2d]
 #5 [__igt_unique____real_main472+0x1d01]
 #6 [main+0x2d]
 #7 [__libc_init_first+0x8a]
 #8 [__libc_start_main+0x8b]
 #9 [_start+0x25]
Dmesg
<7> [174.280890] i915 0000:00:02.0: [drm:intel_power_well_disable [i915]] disabling DC_off
<7> [174.281118] i915 0000:00:02.0: [drm:skl_enable_dc6 [i915]] Enabling DC6
<7> [174.281290] i915 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [i915]] Setting DC state from 00 to 02
<6> [174.287746] Console: switching to colour dummy device 80x25
<6> [174.288248] [IGT] gem_busy: executing
<7> [174.291835] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.292489] i915 0000:00:02.0: [drm:i915_drop_caches_set [i915]] Dropping caches: 0x000001dc [0x000001dc]
<7> [174.292791] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.293265] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<6> [174.303212] [IGT] gem_busy: starting subtest close-race
<7> [174.304636] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.310497] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.316564] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.322858] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.329645] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.334672] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [174.335191] i915 0000:00:02.0: [drm:i915_drop_caches_set [i915]] Dropping caches: 0x00000070 [0x00000070]
<7> [174.437669] i915 0000:00:02.0: [drm:i915_drop_caches_set [i915]] Dropping caches: 0x00000070 [0x00000070]
<3> [260.915495] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
<3> [260.915512] rcu: Tasks blocked on level-0 rcu_node (CPUs 0-7): P7326
<3> [260.915696] rcu: (detected by 2, t=65002 jiffies, g=100473, q=33516 ncpus=8)
<6> [260.915704] task:gem_busy state:R running task stack:0 pid:7326 tgid:7326 ppid:7323 task_flags:0x400040 flags:0x00080800
<6> [260.915709] Call Trace:
<6> [260.915710] <TASK>
<6> [260.915714] sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915719] ? sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915721] ? sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915723] ? sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915726] ? irqentry_exit+0x6a/0x780
<6> [260.915728] ? sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915730] ? trace_hardirqs_on+0x22/0x100
<6> [260.915734] ? irqentry_exit+0x6a/0x780
<6> [260.915736] ? trace_hardirqs_off_finish+0xe1/0x100
<6> [260.915740] ? sysvec_apic_timer_interrupt+0x57/0xc0
<6> [260.915742] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20
<6> [260.915748] ? lock_is_held_type+0xa3/0x130
<6> [260.915751] ? lock_is_held_type+0xa3/0x130
<6> [260.915754] ? rcu_read_lock_held+0x43/0x60
<6> [260.915757] ? dma_resv_list_entry+0x7a/0xd0
<6> [260.915762] ? dma_resv_iter_walk_unlocked.part.0+0x179/0x1d0
<6> [260.915766] ? dma_resv_iter_first_unlocked+0x69/0x290
<6> [260.915770] ? i915_gem_busy_ioctl+0xb8/0xb80 [i915]
<6> [260.915931] ? __pfx_i915_gem_busy_ioctl+0x10/0x10 [i915]
<6> [260.916059] ? drm_ioctl_kernel+0xb3/0x120
<6> [260.916065] ? drm_ioctl+0x2d4/0x5a0
<6> [260.916067] ? __pfx_i915_gem_busy_ioctl+0x10/0x10 [i915]
<6> [260.916188] ? do_syscall_64+0x22e/0x1440
<6> [260.916196] ? __x64_sys_ioctl+0xa5/0x100
<6> [260.916202] ? x64_sys_call+0x1250/0x26e0
<6> [260.916205] ? do_syscall_64+0xdd/0x1440
<6> [260.916207] ? __x64_sys_ioctl+0xa5/0x100
<6> [260.916211] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [260.916214] ? do_syscall_64+0x22e/0x1440
<6> [260.916216] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [260.916218] ? do_syscall_64+0x22e/0x1440
<6> [260.916221] ? do_syscall_64+0x22e/0x1440
<6> [260.916223] ? do_syscall_64+0x22e/0x1440
<6> [260.916225] ? clear_bhb_loop+0x30/0x80
<6> [260.916229] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [260.916237] </TASK>
<3> [308.095208] INFO: task kworker/4:2:221 blocked for more than 61 seconds.
<3> [308.095283] Tainted: G S U W L 7.0.0-CI_DRM_18350-g898b5aa235c5+ #1
<3> [308.095302] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [308.095321] task:kworker/4:2 state:D stack:0 pid:221 tgid:221 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [308.095328] Workqueue: inode_switch_wbs inode_switch_wbs_work_fn
<6> [308.095337] Call Trace:
<6> [308.095340] <TASK>
<6> [308.095346] __schedule+0x5bc/0x1dd0
<6> [308.095352] ? mark_held_locks+0x46/0x90
<6> [308.095361] ? schedule+0x10e/0x180
<6> [308.095366] ? lock_release+0xd0/0x2b0
<6> [308.095375] schedule+0x3a/0x180
<6> [308.095380] schedule_timeout+0x117/0x130
<6> [308.095384] ? __wait_for_common+0xb4/0x1f0
<6> [308.095390] ? _raw_spin_unlock_irq+0x27/0x70
<6> [308.095393] ? __wait_for_common+0xb4/0x1f0
<6> [308.095397] ? trace_hardirqs_on+0x22/0x100
<6> [308.095407] __wait_for_common+0xc0/0x1f0
<6> [308.095412] ? __pfx_schedule_timeout+0x10/0x10
<6> [308.095423] wait_for_completion+0x24/0x40
<6> [308.095427] synchronize_rcu_normal+0xc1/0xf0
<6> [308.095443] synchronize_rcu+0x1c4/0x1d0
<6> [308.095449] inode_switch_wbs_work_fn+0xe5/0xdf0
<6> [308.095459] ? lock_acquire+0xc4/0x2f0
<6> [308.095464] ? process_one_work+0x1f9/0x760
<6> [308.095472] ? lock_release+0xd0/0x2b0
<6> [308.095476] ? _raw_spin_unlock_irq+0x27/0x70
<6> [308.095479] ? process_one_work+0x1c2/0x760
<6> [308.095488] process_one_work+0x239/0x760
<6> [308.095501] worker_thread+0x200/0x3f0
<6> [308.095507] ? __pfx_worker_thread+0x10/0x10
<6> [308.095511] kthread+0x10d/0x150
<6> [308.095517] ? __pfx_kthread+0x10/0x10
<6> [308.095524] ret_from_fork+0x3d4/0x480
<6> [308.095529] ? __pfx_kthread+0x10/0x10
<6> [308.095535] ret_from_fork_asm+0x1a/0x30
<6> [308.095554] </TASK>
<4> [308.095569]
Showing all locks held in the system:
<4> [308.095572] 5 locks held by systemd/1:
<4> [308.095574] #0: ffff88810ad0c420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x6f/0xf0
<4> [308.095589] #1: ffff88811063dc88 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x127/0x240
<4> [308.095604] #2: ffffffff8369bb08 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0x4f/0x250
<4> [308.095616] #3: ffffffff83489470 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_procs_write_start+0x227/0x3f0
<4> [308.095628] #4: ffffffff8369b8d0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x233/0x3f0
<4> [308.095644] 1 lock held by khungtaskd/69:
<4> [308.095647] #0: ffffffff835c5ce0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [308.095663] 2 locks held by kworker/4:2/221:
<4> [308.095666] #0: ffff888103014548 ((wq_completion)inode_switch_wbs){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [308.095676] #1: ffffc900004fbe30 ((work_completion)(&wb->switch_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [308.095690] 1 lock held by in:imklog/681:
<4> [308.095692] #0: ffff888121644538 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [308.095708] 1 lock held by dmesg/2040:
<4> [308.095711] #0: ffff888118dd40d0 (&user->lock){+.+.}-{3:3}, at: devkmsg_read+0x5a/0x210
<4> [308.095723] 2 locks held by gem_busy/7326:
<4> [308.095726] 3 locks held by (sa1)/7334:
<4> [308.095728] #0: ffff88810ad0c420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x6f/0xf0
<4> [308.095740] #1: ffff888110639488 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x127/0x240
<4> [308.095753] #2: ffffffff8369bb08 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0x4f/0x250
<4> [308.095764]
<4> [308.095766] =============================================
<3> [369.535152] INFO: task systemd:1 blocked for more than 61 seconds.
<3> [369.535188] Tainted: G S U W L 7.0.0-CI_DRM_18350-g898b5aa235c5+ #1
<3> [369.535209] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [369.535229] task:systemd state:D stack:0 pid:1 tgid:1 ppid:0 task_flags:0x400100 flags:0x00080000
<6> [369.535236] Call Trace:
<6> [369.535239] <TASK>
<6> [369.535246] __schedule+0x5bc/0x1dd0
<6> [369.535254] ? lock_acquire+0xc4/0x2f0
<6> [369.535264] ? schedule+0x10e/0x180
<6> [369.535270] ? lock_release+0xd0/0x2b0
<6> [369.535280] schedule+0x3a/0x180
<6> [369.535285] schedule_timeout+0x117/0x130
<6> [369.535291] ? __wait_for_common+0xb4/0x1f0
<6> [369.535297] ? _raw_spin_unlock_irq+0x27/0x70
<6> [369.535301] ? __wait_for_common+0xb4/0x1f0
<6> [369.535305] ? trace_hardirqs_on+0x22/0x100
<6> [369.535316] __wait_for_common+0xc0/0x1f0
<6> [369.535321] ? __pfx_schedule_timeout+0x10/0x10
<6> [369.535331] ? cgroup_procs_write_start+0x5e/0x3f0
<6> [369.535338] wait_for_completion+0x24/0x40
<6> [369.535343] synchronize_rcu_normal+0xc1/0xf0
<6> [369.535360] synchronize_rcu+0x1c4/0x1d0
<6> [369.535366] rcu_sync_enter+0x58/0x110
<6> [369.535377] percpu_down_write+0x56/0x2f0
<6> [369.535382] ? cgroup_procs_write_start+0x233/0x3f0
<6> [369.535387] ? cgroup_procs_write_start+0x5e/0x3f0
<6> [369.535394] ? cgroup_procs_write_start+0x5e/0x3f0
<6> [369.535399] cgroup_procs_write_start+0x233/0x3f0
<6> [369.535403] ? mutex_lock_nested+0x1b/0x30
<6> [369.535412] __cgroup_procs_write+0x5c/0x2e0
<6> [369.535425] cgroup_procs_write+0x17/0x30
<6> [369.535430] cgroup_file_write+0xa6/0x2c0
<6> [369.535435] ? lock_is_held_type+0xa3/0x130
<6> [369.535444] kernfs_fop_write_iter+0x188/0x240
<6> [369.535455] vfs_write+0x283/0x540
<6> [369.535460] ? _raw_spin_unlock_irq+0x27/0x70
<6> [369.535475] ksys_write+0x6f/0xf0
<6> [369.535484] __x64_sys_write+0x19/0x30
<6> [369.535488] x64_sys_call+0x259/0x26e0
<6> [369.535495] do_syscall_64+0xdd/0x1440
<6> [369.535506] ? __x64_sys_fcntl+0x97/0x130
<6> [369.535514] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [369.535520] ? do_syscall_64+0x22e/0x1440
<6> [369.535524] ? irqentry_exit+0x17d/0x780
<6> [369.535532] ? clear_bhb_loop+0x30/0x80
<6> [369.535541] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [369.535545] RIP: 0033:0x75c4e331c5a4
<6> [369.535550] RSP: 002b:00007fff7937ab98 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
<6> [369.535555] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 000075c4e331c5a4
<6> [369.535558] RDX: 0000000000000005 RSI: 00007fff7937ad6a RDI: 0000000000000043
<6> [369.535560] RBP: 00007fff7937abc0 R08: 000075c4e3402228 R09: 0000000000000000
<6> [369.535563] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000005
<6> [369.535566] R13: 00007fff7937ad6a R14: 000062d80d6be910 R15: 0000000000000005
<6> [369.535583] </TASK>
<3> [369.535600] INFO: task kworker/4:2:221 blocked for more than 122 seconds.
<3> [369.535619] Tainted: G S U W L 7.0.0-CI_DRM_18350-g898b5aa235c5+ #1
<3> [369.535640] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [369.535661] task:kworker/4:2 state:D stack:0 pid:221 tgid:221 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [369.535667] Workqueue: inode_switch_wbs inode_switch_wbs_work_fn
<6> [369.535675] Call Trace:
<6> [369.535677] <TASK>
<6> [369.535683] __schedule+0x5bc/0x1dd0
<6> [369.535688] ? mark_held_locks+0x46/0x90
<6> [369.535696] ? schedule+0x10e/0x180
<6> [369.535702] ? lock_release+0xd0/0x2b0
<6> [369.535712] schedule+0x3a/0x180
<6> [369.535717] schedule_timeout+0x117/0x130
<6> [369.535722] ? __wait_for_common+0xb4/0x1f0
<6> [369.535727] ? _raw_spin_unlock_irq+0x27/0x70
<6> [369.535731] ? __wait_for_common+0xb4/0x1f0
<6> [369.535735] ? trace_hardirqs_on+0x22/0x100
<6> [369.535744] __wait_for_common+0xc0/0x1f0
<6> [369.535750] ? __pfx_schedule_timeout+0x10/0x10
<6> [369.535761] wait_for_completion+0x24/0x40
<6> [369.535766] synchronize_rcu_normal+0xc1/0xf0
<6> [369.535782] synchronize_rcu+0x1c4/0x1d0
<6> [369.535788] inode_switch_wbs_work_fn+0xe5/0xdf0
<6> [369.535799] ? lock_acquire+0xc4/0x2f0
<6> [369.535804] ? process_one_work+0x1f9/0x760
<6> [369.535812] ? lock_release+0xd0/0x2b0
<6> [369.535817] ? _raw_spin_unlock_irq+0x27/0x70
<6> [369.535821] ? process_one_work+0x1c2/0x760
<6> [369.535830] process_one_work+0x239/0x760
<6> [369.535845] worker_thread+0x200/0x3f0
<6> [369.535851] ? __pfx_worker_thread+0x10/0x10
<6> [369.535856] kthread+0x10d/0x150
<6> [369.535863] ? __pfx_kthread+0x10/0x10
<6> [369.535871] ret_from_fork+0x3d4/0x480
<6> [369.535876] ? __pfx_kthread+0x10/0x10
<6> [369.535883] ret_from_fork_asm+0x1a/0x30
<6> [369.535902] </TASK>
<3> [369.535923] INFO: task (sa1):7334 blocked for more than 61 seconds.
<3> [369.535940] Tainted: G S U W L 7.0.0-CI_DRM_18350-g898b5aa235c5+ #1
<3> [369.535961] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [369.535981] task:(sa1) state:D stack:0 pid:7334 tgid:7334 ppid:1 task_flags:0x400100 flags:0x00080000
<6> [369.535987] Call Trace:
<6> [369.535989] <TASK>
<6> [369.535995] __schedule+0x5bc/0x1dd0
<6> [369.535999] ? lock_acquire+0xc4/0x2f0
<6> [369.536007] ? schedule+0x10e/0x180
<6> [369.536013] ? lock_release+0xd0/0x2b0
<6> [369.536023] schedule+0x3a/0x180
<6> [369.536028] schedule_preempt_disabled+0x15/0x30
<6> [369.536033] __mutex_lock+0x738/0x10e0
<6> [369.536040] ? cgroup_kn_lock_live+0x4f/0x250
<6> [369.536056] mutex_lock_nested+0x1b/0x30
<6> [369.536060] ? mutex_lock_nested+0x1b/0x30
<6> [369.536064] cgroup_kn_lock_live+0x4f/0x250
<6> [369.536072] __cgroup_procs_write+0x3d/0x2e0
<6> [369.536079] ? kernfs_root+0x80/0x1b0
<6> [369.536088] cgroup_procs_write+0x17/0x30
<6> [369.536093] cgroup_file_write+0xa6/0x2c0
<6> [369.536098] ? lock_is_held_type+0xa3/0x130
<6> [369.536106] kernfs_fop_write_iter+0x188/0x240
<6> [369.536116] vfs_write+0x283/0x540
<6> [369.536119] ? find_held_lock+0x31/0x90
<6> [369.536135] ksys_write+0x6f/0xf0
<6> [369.536152] __x64_sys_write+0x19/0x30
<6> [369.536156] x64_sys_call+0x259/0x26e0
<6> [369.536161] do_syscall_64+0xdd/0x1440
<6> [369.536166] ? __slab_free+0x129/0x2b0
<6> [369.536175] ? __pcs_replace_full_main+0x2ad/0x710
<6> [369.536183] ? putname+0x41/0x90
<6> [369.536188] ? kmem_cache_free+0x165/0x510
<6> [369.536198] ? putname+0x41/0x90
<6> [369.536204] ? do_sys_openat2+0x85/0xd0
<6> [369.536214] ? __x64_sys_openat+0x54/0xa0
<6> [369.536220] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [369.536226] ? do_syscall_64+0x22e/0x1440
<6> [369.536231] ? rcu_read_unlock+0x26/0x80
<6> [369.536246] ? find_held_lock+0x31/0x90
<6> [369.536251] ? rcu_read_unlock+0x1c/0x80
<6> [369.536260] ? __x64_sys_fcntl+0x97/0x130
<6> [369.536267] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [369.536273] ? do_syscall_64+0x22e/0x1440
<6> [369.536278] ? lock_release+0xd0/0x2b0
<6> [369.536288] ? trace_hardirqs_on_prepare+0xe1/0x100
<6> [369.536294] ? irqentry_exit+0x17d/0x780
<6> [369.536302] ? clear_bhb_loop+0x30/0x80
<6> [369.536310] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [369.536314] RIP: 0033:0x7422b951c5a4
<6> [369.536318] RSP: 002b:00007ffc994c7c78 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
<6> [369.536322] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007422b951c5a4
<6> [369.536325] RDX: 0000000000000005 RSI: 00007ffc994c7e4a RDI: 0000000000000003
<6> [369.536328] RBP: 00007ffc994c7ca0 R08: 00007422b9602228 R09: 0000000000000000
<6> [369.536331] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000005
<6> [369.536333] R13: 00007ffc994c7e4a R14: 0000641c5085a2d0 R15: 0000000000000005
<6> [369.536351] </TASK>
<3> [369.536354] INFO: task (sa1):7334 is blocked on a mutex likely owned by task systemd:1.
<4> [369.536376]
Showing all locks held in the system:
<4> [369.536379] 5 locks held by systemd/1:
<4> [369.536382] #0: ffff88810ad0c420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x6f/0xf0
<4> [369.536396] #1: ffff88811063dc88 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x127/0x240
<4> [369.536411] #2: ffffffff8369bb08 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0x4f/0x250
<4> [369.536423] #3: ffffffff83489470 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_procs_write_start+0x227/0x3f0
<4> [369.536436] #4: ffffffff8369b8d0 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x233/0x3f0
<4> [369.536451] 1 lock held by khungtaskd/69:
<4> [369.536454] #0: ffffffff835c5ce0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [369.536471] 2 locks held by kworker/4:2/221:
<4> [369.536474] #0: ffff888103014548 ((wq_completion)inode_switch_wbs){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [369.536486] #1: ffffc900004fbe30 ((work_completion)(&wb->switch_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [369.536501] 1 lock held by in:imklog/681:
<4> [369.536504] #0: ffff888121644538 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [369.536522] 3 locks held by dmesg/2040:
<4> [369.536525] 2 locks held by gem_busy/7326:
<4> [369.536527] 3 locks held by (sa1)/7334:
<4> [369.536530] #0: ffff88810ad0c420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x6f/0xf0
<4> [369.536543] #1: ffff888110639488 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x127/0x240
<4> [369.536557] #2: ffffffff8369bb08 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0x4f/0x250
<4> [369.536569]
<4> [369.536571] =============================================
Created at 2026-04-21 12:45:40