Results for igt@xe_exec_system_allocator@many-large-mmap-file

Result: Abort, 6 warning(s)

Artifacts: i915_display_info23, igt_runner23, results23.json, results23-xe-load.json, guc_logs23.tar, i915_display_info_post_exec23, serial_data23, boot23, dmesg23

Detail       Value
Duration     unknown
Hostname     shard-bmg-5
Igt-Version  2.4-g1ff14e3a8 (x86_64) (Linux: 7.1.0-rc2-lgci-xe-xe-5033-bcbe7be7961edce8f-debug+ x86_64)

Out:
Using IGT_SRANDOM=1778334358 for randomisation
Opened device: /dev/dri/card1
Starting subtest: many-large-mmap-file
Subtest many-large-mmap-file: SUCCESS (1.803s)

This test caused an abort condition: Lockdep not active

/proc/lockdep_stats contents:
 lock-classes:                         2334 [max: 8192]
 dynamic-keys:                          374
 direct dependencies:                 28020 [max: 524288]
 indirect dependencies:              259868
 all direct dependencies:            533163
 dependency chains:                   46640 [max: 524288]
 dependency chain hlocks used:       219554 [max: 2621440]
 dependency chain hlocks lost:            0
 in-hardirq chains:                     332
 in-softirq chains:                     844
 in-process chains:                   45464
 stack-trace entries:                307752 [max: 524288]
 number of stack traces:              14018
 number of stack hash chains:          9480
 combined max dependencies:      4203234433
 hardirq-safe locks:                    105
 hardirq-unsafe locks:                 1431
 softirq-safe locks:                    234
 softirq-unsafe locks:                 1331
 irq-safe locks:                        248
 irq-unsafe locks:                     1431
 hardirq-read-safe locks:                 3
 hardirq-read-unsafe locks:             426
 softirq-read-safe locks:                 8
 softirq-read-unsafe locks:             423
 irq-read-safe locks:                     8
 irq-read-unsafe locks:                 426
 uncategorized locks:                   391
 unused locks:                            1
 max locking depth:                      21
 max bfs queue depth:                   578
 max lock class index:                 2333
 debug_locks:                             0

 zapped classes:                         14
 zapped lock chains:                   2877
 large chain blocks:                      2
Err:
Starting subtest: many-large-mmap-file
Subtest many-large-mmap-file: SUCCESS (1.803s)
Dmesg:

<4> [149.487280] ============================================
<4> [149.487281] WARNING: possible recursive locking detected
<4> [149.487283] 7.1.0-rc2-lgci-xe-xe-5033-bcbe7be7961edce8f-debug+ #1 Tainted: G U
<4> [149.487284] --------------------------------------------
<4> [149.487285] kworker/14:15/3751 is trying to acquire lock:
<4> [149.487286] ffff888121b9dd40 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x2b/0xc0
<4> [149.487293]
but task is already holding lock:
<4> [149.487294] ffff888121b9dd40 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: process_one_work+0x4c4/0x740
<4> [149.487298]
other info that might help us debug this:
<4> [149.487298] Possible unsafe locking scenario:
<4> [149.487299] CPU0
<4> [149.487299] ----
<4> [149.487299] lock((wq_completion)xe-destroy-wq);
<4> [149.487301] lock((wq_completion)xe-destroy-wq);
<4> [149.487301]
*** DEADLOCK ***
<4> [149.487302] May be due to missing lock nesting notation
<4> [149.487302] 2 locks held by kworker/14:15/3751:
<4> [149.487303] #0: ffff888121b9dd40 ((wq_completion)xe-destroy-wq){+.+.}-{0:0}, at: process_one_work+0x4c4/0x740
<4> [149.487307] #1: ffffc90002b33e30 ((work_completion)(&q->guc->destroy_async)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x740
<4> [150.935980]
stack backtrace:
<4> [150.935983] CPU: 14 UID: 0 PID: 3751 Comm: kworker/14:15 Tainted: G U 7.1.0-rc2-lgci-xe-xe-5033-bcbe7be7961edce8f-debug+ #1 PREEMPT(lazy)
<4> [150.935985] Tainted: [U]=USER
<4> [150.935986] Hardware name: ASUS System Product Name/PRIME Z790-P WIFI, BIOS 1825 10/09/2025
<4> [150.935988] Workqueue: xe-destroy-wq __guc_exec_queue_destroy_async [xe]
<4> [150.936134] Call Trace:
<4> [150.936135] <TASK>
<4> [150.936137] dump_stack_lvl+0x91/0xf0
<4> [150.936141] dump_stack+0x10/0x20
<4> [150.936143] print_deadlock_bug+0x23f/0x320
<4> [150.936146] __lock_acquire+0x146e/0x2790
<4> [150.936148] ? __lock_acquire+0x43e/0x2790
<4> [150.936152] lock_acquire+0xc4/0x300
<4> [150.936154] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [150.936156] ? lockdep_init_map_type+0x5c/0x270
<4> [150.936158] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [150.936160] touch_wq_lockdep_map+0x41/0xc0
<4> [150.936161] ? touch_wq_lockdep_map+0x2b/0xc0
<4> [150.936162] __flush_workqueue+0x8d/0x4d0
<4> [150.936164] ? lock_release+0xd0/0x2b0
<4> [150.936168] drain_workqueue+0x9b/0x140
<4> [150.936171] guc_submit_sw_fini+0xbb/0x2d0 [xe]
<4> [150.936293] ? drm_managed_release+0xd1/0x170
<4> [150.936295] ? __pcs_replace_full_main+0x2a6/0x720
<4> [150.936299] ? drm_managed_release+0xd1/0x170
<4> [150.936301] ? kfree+0x23d/0x4d0
<4> [150.936304] drm_managed_release+0x8b/0x170
<4> [150.936305] drm_dev_put+0x69/0x90
<4> [150.936308] drm_gpuvm_free+0xc2/0x280 [drm_gpuvm]
<4> [150.936313] drm_gpuvm_put+0x48/0x80 [drm_gpuvm]
<4> [150.936317] __xe_exec_queue_free+0x79/0x210 [xe]
<4> [150.936426] xe_exec_queue_fini+0x33/0x60 [xe]
<4> [150.936535] __guc_exec_queue_destroy_async+0x6c/0x190 [xe]
<4> [150.936654] process_one_work+0x239/0x740
<4> [150.936658] worker_thread+0x200/0x3f0
<4> [150.936660] ? __pfx_worker_thread+0x10/0x10
<4> [150.936662] kthread+0x10d/0x150
<4> [150.936663] ? __pfx_kthread+0x10/0x10
<4> [150.936665] ret_from_fork+0x3bd/0x470
<4> [150.936668] ? __pfx_kthread+0x10/0x10
<4> [150.936669] ret_from_fork_asm+0x1a/0x30
<4> [150.936674] </TASK>
<7> [151.137651] xe 0000:03:00.0: [drm:intel_power_well_enable [xe]] enabling AUX_TC2
<6> [153.228236] Console: switching to colour dummy device 80x25
<6> [153.295251] [IGT] xe_exec_system_allocator: executing
<7> [153.345367] xe 0000:03:00.0: [drm:intel_power_well_disable [xe]] disabling AUX_TC2
<6> [153.489733] [IGT] xe_exec_system_allocator: starting subtest many-large-mmap-file
<7> [153.579094] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [153.603218] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [153.827468] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [153.933094] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [153.961622] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [154.183608] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [154.289695] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [154.461414] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [154.566661] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [154.672466] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [154.700687] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [154.921448] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.028012] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [155.029074] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x25242022
<7> [155.056379] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.056587] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.172022] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [155.207663] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.207856] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.273423] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x24242426
<7> [155.296266] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.296461] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [155.378003] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [155.378682] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<6> [155.382674] [IGT] xe_exec_system_allocator: finished subtest many-large-mmap-file, SUCCESS
<7> [155.938595] xe 0000:03:00.0: [drm:drm_client_dev_restore] fbdev: ret=0
<6> [156.613933] [IGT] xe_exec_system_allocator: exiting, ret=0
<6> [156.690732] Console: switching to colour frame buffer device 240x67
Created at 2026-05-09 14:06:18