Result: 7 Warning(s)
i915_display_info1 igt_runner1 results1.json results1-xe-load.json guc_logs1.tar i915_display_info_post_exec1 boot1 dmesg1
| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-lnl-4 |
| Igt-Version | IGT-Version: 2.3-g8e023b202 (x86_64) (Linux: 7.0.0-rc1-lgci-xe-xe-pw-162087v1-debug+ x86_64) |
**Out**

```
Using IGT_SRANDOM=1771995879 for randomisation
Opened device: /dev/dri/card0
Starting subtest: twice-bindexecqueue-imm
Subtest twice-bindexecqueue-imm: SUCCESS (0.030s)
This test caused an abort condition: Lockdep not active
/proc/lockdep_stats contents:
 lock-classes: 2224 [max: 8192]
 dynamic-keys: 355
 direct dependencies: 22288 [max: 524288]
 indirect dependencies: 163842
 all direct dependencies: 442106
 dependency chains: 30119 [max: 524288]
 dependency chain hlocks used: 120299 [max: 2621440]
 dependency chain hlocks lost: 0
 in-hardirq chains: 201
 in-softirq chains: 600
 in-process chains: 29318
 stack-trace entries: 246456 [max: 524288]
 number of stack traces: 11246
 number of stack hash chains: 8105
 combined max dependencies: 3559385238
 hardirq-safe locks: 92
 hardirq-unsafe locks: 1337
 softirq-safe locks: 189
 softirq-unsafe locks: 1242
 irq-safe locks: 222
 irq-unsafe locks: 1337
 hardirq-read-safe locks: 3
 hardirq-read-unsafe locks: 439
 softirq-read-safe locks: 7
 softirq-read-unsafe locks: 437
 irq-read-safe locks: 8
 irq-read-unsafe locks: 439
 uncategorized locks: 373
 unused locks: 1
 max locking depth: 17
 max bfs queue depth: 405
 max lock class index: 2223
 debug_locks: 0
 zapped classes: 2
 zapped lock chains: 141
 large chain blocks: 1
```
**Err**

```
Starting subtest: twice-bindexecqueue-imm
Subtest twice-bindexecqueue-imm: SUCCESS (0.030s)
```
**Dmesg**

```
<6> [64.355675] Console: switching to colour dummy device 80x25
<6> [64.356208] [IGT] xe_exec_fault_mode: executing
<6> [64.374036] [IGT] xe_exec_fault_mode: starting subtest twice-bindexecqueue-imm
<4> [64.381016]
<4> [64.381026] ============================================
<4> [64.381029] WARNING: possible recursive locking detected
<4> [64.381032] 7.0.0-rc1-lgci-xe-xe-pw-162087v1-debug+ #1 Tainted: G U
<4> [64.381036] --------------------------------------------
<4> [64.381038] xe_exec_fault_m/2099 is trying to acquire lock:
<4> [64.381040] ffff8881534ad018 (&fence->inline_lock){....}-{2:2}, at: dma_fence_add_callback+0x46/0x110
<4> [64.381052]
but task is already holding lock:
<4> [64.381054] ffff8881069fef58 (&fence->inline_lock){....}-{2:2}, at: dma_fence_add_callback+0x46/0x110
<4> [64.381060]
other info that might help us debug this:
<4> [64.381062] Possible unsafe locking scenario:
<4> [64.381064] CPU0
<4> [64.381065] ----
<4> [64.381066] lock(&fence->inline_lock);
<4> [64.381069] lock(&fence->inline_lock);
<4> [64.381071]
*** DEADLOCK ***
<4> [64.381074] May be due to missing lock nesting notation
<4> [64.381077] 6 locks held by xe_exec_fault_m/2099:
<4> [64.381080] #0: ffff88813effc6a0 (&vm->lock){++++}-{3:3}, at: xe_vm_bind_ioctl+0xdbb/0x1970 [xe]
<4> [64.381207] #1: ffff88814b6c4410 (&val->lock){.+.+}-{3:3}, at: xe_validation_lock+0x38/0xa0 [xe]
<4> [64.381309] #2: ffffc9000b82f748 (reservation_ww_class_acquire){+.+.}-{0:0}, at: vm_bind_ioctl_ops_execute+0x158/0x980 [xe]
<4> [64.381406] #3: ffffc9000b82f770 (reservation_ww_class_mutex){+.+.}-{3:3}, at: vm_bind_ioctl_ops_execute+0x158/0x980 [xe]
<4> [64.381500] #4: ffff8881069fef58 (&fence->inline_lock){....}-{2:2}, at: dma_fence_add_callback+0x46/0x110
<4> [64.381507] #5: ffffffff835c5be0 (rcu_read_lock){....}-{1:2}, at: __dma_fence_enable_signaling+0x53/0x360
<4> [64.381513]
stack backtrace:
<4> [64.381517] CPU: 5 UID: 0 PID: 2099 Comm: xe_exec_fault_m Tainted: G U 7.0.0-rc1-lgci-xe-xe-pw-162087v1-debug+ #1 PREEMPT(lazy)
<4> [64.381519] Tainted: [U]=USER
<4> [64.381520] Hardware name: Intel Corporation Lunar Lake Client Platform/LNL-M LP5 RVP1, BIOS LNLMFWI1.R00.3223.D99.2508051732 08/05/2025
<4> [64.381521] Call Trace:
<4> [64.381522] <TASK>
<4> [64.381523] dump_stack_lvl+0x91/0xf0
<4> [64.381526] dump_stack+0x10/0x20
<4> [64.381527] print_deadlock_bug+0x23f/0x320
<4> [64.381531] __lock_acquire+0x146e/0x2790
<4> [64.381533] ? stack_trace_save+0x4a/0x70
<4> [64.381537] lock_acquire+0xc4/0x2f0
<4> [64.381539] ? dma_fence_add_callback+0x46/0x110
<4> [64.381541] ? __pfx_dma_fence_chain_cb+0x10/0x10
<4> [64.381544] _raw_spin_lock_irqsave+0x49/0x80
<4> [64.381546] ? dma_fence_add_callback+0x46/0x110
<4> [64.381547] dma_fence_add_callback+0x46/0x110
<4> [64.381549] dma_fence_chain_enable_signaling+0x102/0x290
<4> [64.381551] __dma_fence_enable_signaling+0xbd/0x360
<4> [64.381553] ? __pfx_user_fence_cb+0x10/0x10 [xe]
<4> [64.381641] dma_fence_add_callback+0x51/0x110
<4> [64.381643] xe_sync_entry_signal+0xe1/0x1b0 [xe]
<4> [64.381727] vm_bind_ioctl_ops_fini+0x163/0x230 [xe]
<4> [64.381818] vm_bind_ioctl_ops_execute+0x5f3/0x980 [xe]
<4> [64.381906] ? mark_held_locks+0x46/0x90
<4> [64.381908] ? __create_object+0x68/0xc0
<4> [64.381911] ? _raw_spin_unlock_irqrestore+0x51/0x80
<4> [64.381912] ? __create_object+0x68/0xc0
<4> [64.381913] ? trace_hardirqs_on+0x22/0x100
<4> [64.381919] xe_vm_bind_ioctl+0x17df/0x1970 [xe]
<4> [64.382009] ? lock_acquire+0xc4/0x2f0
<4> [64.382013] ? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
<4> [64.382102] drm_ioctl_kernel+0xb3/0x120
<4> [64.382105] ? drm_ioctl_kernel+0xb3/0x120
<4> [64.382108] drm_ioctl+0x2d4/0x5a0
<4> [64.382109] ? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
<4> [64.382198] ? _raw_spin_unlock_irqrestore+0x51/0x80
<4> [64.382199] ? __pm_runtime_resume+0x5b/0x90
<4> [64.382201] ? trace_hardirqs_on+0x22/0x100
<4> [64.382203] ? _raw_spin_unlock_irqrestore+0x51/0x80
<4> [64.382205] ? __pm_runtime_resume+0x5b/0x90
<4> [64.382206] ? xe_pm_runtime_get_ioctl+0x61/0x160 [xe]
<4> [64.382290] xe_drm_ioctl+0x64/0xb0 [xe]
<4> [64.382353] __x64_sys_ioctl+0xa5/0x100
<4> [64.382357] x64_sys_call+0x1250/0x26e0
<4> [64.382359] do_syscall_64+0xdd/0x1470
<4> [64.382362] ? __might_fault+0x53/0xb0
<4> [64.382364] ? __might_fault+0x89/0xb0
<4> [64.382365] ? __might_fault+0x53/0xb0
<4> [64.382366] ? _copy_to_user+0x23/0x70
<4> [64.382369] ? drm_ioctl+0x30f/0x5a0
<4> [64.382370] ? __pfx_drm_version+0x10/0x10
<4> [64.382373] ? _raw_spin_unlock_irqrestore+0x51/0x80
<4> [64.382374] ? ktime_get_mono_fast_ns+0x3c/0xd0
<4> [64.382377] ? __pm_runtime_idle+0xb5/0x160
<4> [64.382378] ? xe_pm_runtime_put+0x66/0x150 [xe]
<4> [64.382460] ? xe_drm_ioctl+0x7c/0xb0 [xe]
<4> [64.382521] ? __x64_sys_ioctl+0xa5/0x100
<4> [64.382523] ? trace_hardirqs_on_prepare+0xe1/0x100
<4> [64.382525] ? do_syscall_64+0x22e/0x1470
<4> [64.382526] ? lock_release+0xd0/0x2b0
<4> [64.382528] ? find_held_lock+0x31/0x90
<4> [64.382529] ? exc_page_fault+0x96/0x2c0
<4> [64.382531] ? lock_release+0xd0/0x2b0
<4> [64.382533] ? trace_hardirqs_on_prepare+0xe1/0x100
<4> [64.382535] ? irqentry_exit+0x17d/0x7c0
<4> [64.382537] ? exc_page_fault+0xbd/0x2c0
<4> [64.382539] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<4> [64.382541] RIP: 0033:0x74401e724e1d
<4> [64.382543] Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
<4> [64.382544] RSP: 002b:00007ffcef397250 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
<4> [64.382546] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 000074401e724e1d
<4> [64.382547] RDX: 00007ffcef3972e0 RSI: 0000000040886445 RDI: 0000000000000005
<4> [64.382548] RBP: 00007ffcef3972a0 R08: 0000000000000000 R09: 0000000000000000
<4> [64.382548] R10: 000074401e9ac058 R11: 0000000000000246 R12: 00007ffcef3972e0
<4> [64.382549] R13: 0000000040886445 R14: 0000000000000005 R15: 0000000000000000
<4> [64.382551] </TASK>
<6> [64.404726] [IGT] xe_exec_fault_mode: finished subtest twice-bindexecqueue-imm, SUCCESS
<7> [64.405510] xe 0000:00:02.0: [drm:intel_power_well_enable [xe]] enabling DC_off
<7> [64.405670] xe 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [xe]] Setting DC state from 02 to 00
<7> [64.409163] xe 0000:00:02.0: [drm:drm_client_dev_restore] fbdev: ret=0
<6> [64.409498] [IGT] xe_exec_fault_mode: exiting, ret=0
<6> [64.425932] Console: switching to colour frame buffer device 180x56
```