Results for igt@xe_exec_reset@virtual-close-fd-no-exec

Result: Abort (6 warnings)

Artifacts: i915_display_info4, igt_runner4, results4.json, results4-xe-load.json, guc_logs4.tar, i915_display_info_post_exec4, serial_data4, boot4, dmesg4

Duration: unknown
Hostname: shard-bmg-2
IGT-Version: 2.4-gb187936fd (x86_64) (Linux: 7.1.0-rc3-lgci-xe-xe-5043-0ce1c813197dfbe15-debug+ x86_64)

Out:
Using IGT_SRANDOM=1778565385 for randomisation
Opened device: /dev/dri/card1
Starting subtest: virtual-close-fd-no-exec
Subtest virtual-close-fd-no-exec: SUCCESS (0.392s)

This test caused an abort condition: Lockdep not active (debug_locks: 0 below — lockdep disabled itself after reporting the recursive-locking warning in dmesg, so the runner aborts the remainder of the run)

/proc/lockdep_stats contents:
 lock-classes:                         2363 [max: 8192]
 dynamic-keys:                          374
 direct dependencies:                 28802 [max: 524288]
 indirect dependencies:              276062
 all direct dependencies:            555476
 dependency chains:                   48564 [max: 524288]
 dependency chain hlocks used:       229357 [max: 2621440]
 dependency chain hlocks lost:            0
 in-hardirq chains:                     366
 in-softirq chains:                     862
 in-process chains:                   47336
 stack-trace entries:                319268 [max: 524288]
 number of stack traces:              14432
 number of stack hash chains:          9621
 combined max dependencies:      2107720089
 hardirq-safe locks:                    109
 hardirq-unsafe locks:                 1439
 softirq-safe locks:                    240
 softirq-unsafe locks:                 1336
 irq-safe locks:                        257
 irq-unsafe locks:                     1439
 hardirq-read-safe locks:                 3
 hardirq-read-unsafe locks:             438
 softirq-read-safe locks:                 8
 softirq-read-unsafe locks:             435
 irq-read-safe locks:                     8
 irq-read-unsafe locks:                 438
 uncategorized locks:                   396
 unused locks:                            1
 max locking depth:                      21
 max bfs queue depth:                   652
 max lock class index:                 2362
 debug_locks:                             0

 zapped classes:                         48
 zapped lock chains:                   5782
 large chain blocks:                      2
Err:
Starting subtest: virtual-close-fd-no-exec
Subtest virtual-close-fd-no-exec: SUCCESS (0.392s)

Dmesg:

<4> [394.416013] WARNING: possible recursive locking detected
<4> [394.416015] 7.1.0-rc3-lgci-xe-xe-5043-0ce1c813197dfbe15-debug+ #1 Tainted: G S U
<4> [394.416016] --------------------------------------------
<4> [394.416017] kworker/12:7/2571 is trying to acquire lock:
<4> [394.416018] ffff88812b79b940 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x2b/0xc0
<4> [394.416026]
but task is already holding lock:
<4> [394.416026] ffff88812b79b940 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: process_one_work+0x4c4/0x740
<4> [394.416030]
other info that might help us debug this:
<4> [394.416031] Possible unsafe locking scenario:
<4> [394.416031] CPU0
<4> [394.416032] ----
<4> [394.416032] lock((wq_completion)xe-destroy-wq);
<4> [394.416033] lock((wq_completion)xe-destroy-wq);
<4> [394.416034]
*** DEADLOCK ***
<4> [394.416035] May be due to missing lock nesting notation
<4> [394.416035] 2 locks held by kworker/12:7/2571:
<4> [394.416036] #0: ffff88812b79b940 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: process_one_work+0x4c4/0x740
<4> [394.416040] #1: ffffc9000178be30 ((work_completion)(&q->guc->destroy_async)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x740
<4> [394.416043]
stack backtrace:
<4> [394.416045] CPU: 12 UID: 0 PID: 2571 Comm: kworker/12:7 Tainted: G S U 7.1.0-rc3-lgci-xe-xe-5043-0ce1c813197dfbe15-debug+ #1 PREEMPT(lazy)
<4> [394.416048] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER
<4> [394.416049] Hardware name: ASUS System Product Name/PRIME Z790-P WIFI, BIOS 1645 03/15/2024
<4> [394.416050] Workqueue: xe-destroy-wq __guc_exec_queue_destroy_async [xe]
<4> [394.416194] Call Trace:
<4> [394.416195] <TASK>
<4> [394.416196] dump_stack_lvl+0x91/0xf0
<4> [394.416200] dump_stack+0x10/0x20
<4> [394.416202] print_deadlock_bug+0x23f/0x320
<4> [394.416205] __lock_acquire+0x146e/0x2790
<4> [394.416207] ? __lock_acquire+0x43e/0x2790
<4> [394.416211] lock_acquire+0xc4/0x300
<4> [394.416212] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [394.416214] ? lockdep_init_map_type+0x5c/0x270
<4> [394.416217] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [394.416218] touch_wq_lockdep_map+0x41/0xc0
<4> [394.416219] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [394.416220] __flush_workqueue+0x8d/0x4d0
<4> [394.416222] ? lock_release+0xd0/0x2b0
<4> [394.416227] drain_workqueue+0x9b/0x140
<4> [394.416229] guc_submit_sw_fini+0xbb/0x2d0 [xe]
<4> [394.416351] ? drm_managed_release+0xd1/0x170
<4> [394.416354] ? __pcs_replace_full_main+0x2a6/0x720
<4> [394.416358] ? drm_managed_release+0xd1/0x170
<4> [394.416359] ? kfree+0x23d/0x4d0
<4> [394.416362] drm_managed_release+0x8b/0x170
<4> [394.416364] drm_dev_put+0x69/0x90
<4> [394.416366] drm_gpuvm_free+0xc2/0x280 [drm_gpuvm]
<4> [394.416371] drm_gpuvm_put+0x48/0x80 [drm_gpuvm]
<4> [394.416375] __xe_exec_queue_free+0x79/0x210 [xe]
<4> [394.416485] xe_exec_queue_fini+0x33/0x60 [xe]
<4> [394.416594] __guc_exec_queue_destroy_async+0x6c/0x190 [xe]
<4> [394.416713] process_one_work+0x239/0x740
<4> [394.416717] worker_thread+0x200/0x3f0
<4> [394.416719] ? __pfx_worker_thread+0x10/0x10
<4> [394.416721] kthread+0x10d/0x150
<4> [394.416723] ? __pfx_kthread+0x10/0x10
<4> [394.416724] ret_from_fork+0x3bd/0x470
<4> [394.416727] ? __pfx_kthread+0x10/0x10
<4> [394.416728] ret_from_fork_asm+0x1a/0x30
<4> [394.416733] </TASK>
<6> [394.725813] Console: switching to colour dummy device 80x25
<6> [394.731819] [IGT] xe_exec_reset: executing
<6> [394.751434] [IGT] xe_exec_reset: starting subtest virtual-close-fd-no-exec
<6> [395.150538] [IGT] xe_exec_reset: finished subtest virtual-close-fd-no-exec, SUCCESS
<6> [395.158652] [IGT] xe_exec_reset: exiting, ret=0
<6> [395.171091] Console: switching to colour frame buffer device 240x67
<7> [395.254295] xe 0000:03:00.0: [drm:drm_client_dev_restore] fbdev: ret=0
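
Reading the trace: the work item __guc_exec_queue_destroy_async is executing on xe-destroy-wq when, via xe_exec_queue_fini → drm_gpuvm_put → drm_dev_put → drm_managed_release, it reaches guc_submit_sw_fini, which calls drain_workqueue() on that same xe-destroy-wq. __flush_workqueue() then re-acquires the (wq_completion)xe-destroy-wq lockdep map that process_one_work() already holds, which is exactly the recursive acquisition lockdep reports. A minimal kernel-style sketch of the pattern (illustrative only, names hypothetical, not the actual xe source):

```c
/* Illustrative kernel-style C, not the real driver code. */
static struct workqueue_struct *destroy_wq;   /* stands in for xe-destroy-wq */

static void destroy_async_fn(struct work_struct *work)
{
	/*
	 * We are running *on* destroy_wq, so lockdep already holds the
	 * (wq_completion)destroy_wq map, acquired in process_one_work().
	 */

	/* ... drop the last device reference, triggering managed release ... */

	/*
	 * If a release action reached from here drains the same queue,
	 * touch_wq_lockdep_map() re-acquires (wq_completion)destroy_wq and
	 * lockdep emits the report above: should the drain ever have to
	 * wait for another item on this queue, it would wait on itself.
	 */
	drain_workqueue(destroy_wq);
}
```

The usual remedies for this pattern are to drain or flush the queue from a context outside the workqueue (e.g. during device fini, before such work can be queued) or to move the final teardown step off the queue being drained.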
Created at 2026-05-12 06:44:39