Results for igt@xe_exec_reset@parallel-gt-reset

Machine description: shard-bmg-1

Result: Abort (23 warnings)

Artifacts: i915_display_info4, igt_runner4, results4.json, results4-xe-load.json, i915_display_info_post_exec4, boot4, dmesg4

Detail          Value
Duration        unknown
Hostname        shard-bmg-1
Igt-Version     IGT-Version: 1.30-g7187a77fa (x86_64) (Linux: 6.14.0-rc2-xe+ x86_64)
Out
Using IGT_SRANDOM=1739650373 for randomisation
Opened device: /dev/dri/card0
Starting subtest: parallel-gt-reset
Subtest parallel-gt-reset: SUCCESS (10.556s)

This test caused an abort condition: Lockdep not active

/proc/lockdep_stats contents:
 lock-classes:                         2115 [max: 8192]
 direct dependencies:                 22992 [max: 524288]
 indirect dependencies:              166369
 all direct dependencies:            429703
 dependency chains:                   34377 [max: 524288]
 dependency chain hlocks used:       145326 [max: 2621440]
 dependency chain hlocks lost:            0
 in-hardirq chains:                     329
 in-softirq chains:                     745
 in-process chains:                   33303
 stack-trace entries:                240900 [max: 524288]
 number of stack traces:              11313
 number of stack hash chains:          8118
 combined max dependencies:      3903811424
 hardirq-safe locks:                     91
 hardirq-unsafe locks:                 1283
 softirq-safe locks:                    214
 softirq-unsafe locks:                 1198
 irq-safe locks:                        226
 irq-unsafe locks:                     1283
 hardirq-read-safe locks:                 4
 hardirq-read-unsafe locks:             394
 softirq-read-safe locks:                10
 softirq-read-unsafe locks:             390
 irq-read-safe locks:                    10
 irq-read-unsafe locks:                 394
 uncategorized locks:                   340
 unused locks:                            1
 max locking depth:                      18
 max bfs queue depth:                   343
 max lock class index:                 2114
 debug_locks:                             0

 zapped classes:                          2
 zapped lock chains:                    156
 large chain blocks:                      1
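The abort condition ("Lockdep not active") follows from the `debug_locks: 0` line in the stats above: lockdep disables itself after reporting its first splat, and the runner aborts the shard because later tests could no longer be lock-checked. As a rough sketch (not the actual igt_runner code), the check amounts to parsing that one field:

```python
# Sketch of deriving the "Lockdep not active" abort from /proc/lockdep_stats
# text. Field names match the dump above; this is illustrative, not the
# actual runner implementation.
def lockdep_active(stats_text: str) -> bool:
    for line in stats_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "debug_locks":
            # debug_locks is 1 while lockdep is armed; it drops to 0
            # once lockdep has fired and turned itself off.
            return value.split()[0] == "1"
    return False  # no debug_locks line: lockdep not available at all

sample = """
 max locking depth:                      18
 debug_locks:                             0
"""
assert lockdep_active(sample) is False  # matches the abort reported above
```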
Err
Starting subtest: parallel-gt-reset
Subtest parallel-gt-reset: SUCCESS (10.556s)
Dmesg

<6> [200.060588] Console: switching to colour dummy device 80x25
<6> [200.060870] [IGT] xe_exec_reset: executing
<6> [200.069110] [IGT] xe_exec_reset: starting subtest parallel-gt-reset
<6> [200.093651] xe 0000:03:00.0: [drm] GT1: trying reset from force_reset [xe]
<6> [200.093917] xe 0000:03:00.0: [drm] GT1: reset queued
<6> [200.094358] xe 0000:03:00.0: [drm] GT1: reset started
<4> [200.094854]
<4> [200.094863] ======================================================
<4> [200.094868] WARNING: possible circular locking dependency detected
<4> [200.094874] 6.14.0-rc2-xe+ #1 Tainted: G W
<4> [200.094880] ------------------------------------------------------
<4> [200.094884] kworker/u64:46/2383 is trying to acquire lock:
<4> [200.094890] ffffffff834c9500 (fs_reclaim){+.+.}-{0:0}, at: __kmalloc_cache_noprof+0x58/0x490
<4> [200.094908]
but task is already holding lock:
<4> [200.094913] ffff88813d792158 (&guc->submission_state.lock){+.+.}-{3:3}, at: xe_guc_submit_stop+0x6c/0x590 [xe]
<4> [200.095047]
which lock already depends on the new lock.
<4> [200.095053]
the existing dependency chain (in reverse order) is:
<4> [200.095059]
-> #1 (&guc->submission_state.lock){+.+.}-{3:3}:
<4> [200.095068] __mutex_lock+0xdc/0xe60
<4> [200.095076] mutex_lock_nested+0x1b/0x30
<4> [200.095082] xe_guc_submit_init+0xf0/0x130 [xe]
<4> [200.095204] xe_guc_init_post_hwconfig+0x352/0x11c0 [xe]
<4> [200.095318] xe_uc_init_post_hwconfig+0x3c/0x70 [xe]
<4> [200.095407] xe_gt_init+0x3df/0x910 [xe]
<4> [200.095464] xe_device_probe+0x5d1/0x820 [xe]
<4> [200.095524] xe_pci_probe+0x35b/0x5f0 [xe]
<4> [200.095595] local_pci_probe+0x44/0xb0
<4> [200.095600] pci_device_probe+0xf4/0x270
<4> [200.095603] really_probe+0xee/0x3c0
<4> [200.095607] __driver_probe_device+0x8c/0x180
<4> [200.095610] driver_probe_device+0x24/0xd0
<4> [200.095613] __driver_attach+0x10f/0x220
<4> [200.095616] bus_for_each_dev+0x8d/0xf0
<4> [200.095619] driver_attach+0x1e/0x30
<4> [200.095622] bus_add_driver+0x151/0x290
<4> [200.095625] driver_register+0x5e/0x130
<4> [200.095628] __pci_register_driver+0x7d/0x90
<4> [200.095631] xe_register_pci_driver+0x23/0x30 [xe]
<4> [200.095701] __pfx_mei_component_master_bind+0x3/0x10 [mei_gsc_proxy]
<4> [200.095705] do_one_initcall+0x76/0x400
<4> [200.095709] do_init_module+0x97/0x2a0
<4> [200.095713] load_module+0x2c23/0x2f60
<4> [200.095716] init_module_from_file+0x97/0xe0
<4> [200.095719] idempotent_init_module+0x134/0x350
<4> [200.095722] __x64_sys_finit_module+0x77/0x100
<4> [200.095725] x64_sys_call+0x1f37/0x2650
<4> [200.095729] do_syscall_64+0x91/0x180
<4> [200.095732] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<4> [200.095737]
-> #0 (fs_reclaim){+.+.}-{0:0}:
<4> [200.095741] __lock_acquire+0x1637/0x2810
<4> [200.095746] lock_acquire+0xc9/0x300
<4> [200.095749] fs_reclaim_acquire+0xc5/0x100
<4> [200.095753] __kmalloc_cache_noprof+0x58/0x490
<4> [200.095757] xe_drm_client_add_blame+0x68/0x330 [xe]
<4> [200.095818] xe_guc_submit_stop+0x21e/0x590 [xe]
<4> [200.095882] xe_guc_stop+0x21/0x30 [xe]
<4> [200.095944] xe_uc_stop+0x2a/0x40 [xe]
<4> [200.096024] gt_reset_worker+0x13e/0x1e0 [xe]
<4> [200.096086] process_one_work+0x21c/0x740
<4> [200.096090] worker_thread+0x1db/0x3c0
<4> [200.096093] kthread+0x10d/0x270
<4> [200.096096] ret_from_fork+0x44/0x70
<4> [200.096100] ret_from_fork_asm+0x1a/0x30
<4> [200.096103]
other info that might help us debug this:
<4> [200.096107] Possible unsafe locking scenario:
<4> [200.096110]        CPU0                    CPU1
<4> [200.096112]        ----                    ----
<4> [200.096114]   lock(&guc->submission_state.lock);
<4> [200.096117]                                lock(fs_reclaim);
<4> [200.096121]                                lock(&guc->submission_state.lock);
<4> [200.096124]   lock(fs_reclaim);
<4> [200.096127]
*** DEADLOCK ***
<4> [200.096130] 3 locks held by kworker/u64:46/2383:
<4> [200.096132] #0: ffff88813d784d48 ((wq_completion)gt-ordered-wq#2){+.+.}-{0:0}, at: process_one_work+0x444/0x740
<4> [200.096140] #1: ffffc90004d53e20 ((work_completion)(&gt->reset.worker)){+.+.}-{0:0}, at: process_one_work+0x1da/0x740
<4> [200.096147] #2: ffff88813d792158 (&guc->submission_state.lock){+.+.}-{3:3}, at: xe_guc_submit_stop+0x6c/0x590 [xe]
<4> [200.096216]
stack backtrace:
<4> [200.096219] CPU: 4 UID: 0 PID: 2383 Comm: kworker/u64:46 Tainted: G W 6.14.0-rc2-xe+ #1
<4> [200.096221] Tainted: [W]=WARN
<4> [200.096222] Hardware name: ASUS System Product Name/PRIME Z790-P WIFI, BIOS 1645 03/15/2024
<4> [200.096222] Workqueue: gt-ordered-wq gt_reset_worker [xe]
<4> [200.096283] Call Trace:
<4> [200.096283] <TASK>
<4> [200.096284] dump_stack_lvl+0x91/0xf0
<4> [200.096287] dump_stack+0x10/0x20
<4> [200.096288] print_circular_bug+0x285/0x360
<4> [200.096290] check_noncircular+0x150/0x170
<4> [200.096293] __lock_acquire+0x1637/0x2810
<4> [200.096296] lock_acquire+0xc9/0x300
<4> [200.096298] ? __kmalloc_cache_noprof+0x58/0x490
<4> [200.096300] ? __lock_acquire+0x1166/0x2810
<4> [200.096301] ? __flush_work+0x4a5/0x5f0
<4> [200.096303] ? xe_drm_client_add_blame+0x68/0x330 [xe]
<4> [200.096366] fs_reclaim_acquire+0xc5/0x100
<4> [200.096369] ? __kmalloc_cache_noprof+0x58/0x490
<4> [200.096372] __kmalloc_cache_noprof+0x58/0x490
<4> [200.096376] xe_drm_client_add_blame+0x68/0x330 [xe]
<4> [200.096503] ? xe_drm_client_add_blame+0x68/0x330 [xe]
<4> [200.096639] ? xe_lrc_seqno+0x2c/0x70 [xe]
<4> [200.096786] xe_guc_submit_stop+0x21e/0x590 [xe]
<4> [200.096925] ? trace_hardirqs_on+0x1e/0xe0
<4> [200.096931] ? enable_work+0x8c/0x110
<4> [200.096938] xe_guc_stop+0x21/0x30 [xe]
<4> [200.097075] xe_uc_stop+0x2a/0x40 [xe]
<4> [200.097250] gt_reset_worker+0x13e/0x1e0 [xe]
<4> [200.097386] process_one_work+0x21c/0x740
<4> [200.097392] worker_thread+0x1db/0x3c0
<4> [200.097395] ? __pfx_worker_thread+0x10/0x10
<4> [200.097399] kthread+0x10d/0x270
<4> [200.097401] ? __pfx_kthread+0x10/0x10
<4> [200.097403] ret_from_fork+0x44/0x70
<4> [200.097406] ? __pfx_kthread+0x10/0x10
<4> [200.097408] ret_from_fork_asm+0x1a/0x30
<4> [200.097415] </TASK>
<7> [200.097690] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying GT save-restore MMIOs
<7> [200.097860] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x4148] = 0x00000000
<7> [200.098022] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0xa210] = 0x00000002
<7> [200.098164] xe 0000:03:00.0: [drm:xe_wopcm_init [xe]] WOPCM: 4096K
<7> [200.098332] xe 0000:03:00.0: [drm:xe_wopcm_init [xe]] GuC WOPCM is already locked [5136K, 832K)
<7> [200.099480] xe 0000:03:00.0: [drm:xe_guc_ads_populate [xe]] GT1: ADS capture alloc size changed from 45056 to 20480
<7> [200.100451] xe 0000:03:00.0: [drm:__xe_guc_upload [xe]] GT1: load still in progress, timeouts = 0, freq = 1500MHz (req 1500MHz), status = 0x00000072 [0x39/00]
<7> [200.110578] xe 0000:03:00.0: [drm:__xe_guc_upload [xe]] GT1: init took 10ms, freq = 1500MHz (req = 1500MHz), before = 1500MHz, status = 0x8002F034, timeouts = 0
<7> [200.111008] xe 0000:03:00.0: [drm:xe_guc_ct_enable [xe]] GT1: GuC CT communication channel enabled
<7> [200.111564] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: flag:0x1
<7> [200.111820] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: mocs entries: 16
<7> [200.112067] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[0] 0x4000 0xc
<7> [200.112297] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[1] 0x4004 0x10c
<7> [200.112535] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[2] 0x4008 0x130
<7> [200.112754] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[3] 0x400c 0x13c
<7> [200.112970] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[4] 0x4010 0x100
<7> [200.113179] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[5] 0x4014 0x100
<7> [200.113391] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[6] 0x4018 0x100
<7> [200.113635] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[7] 0x401c 0x100
<7> [200.113845] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[8] 0x4020 0x100
<7> [200.114060] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[9] 0x4024 0x100
<7> [200.114278] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[10] 0x4028 0x100
<7> [200.114497] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[11] 0x402c 0x100
<7> [200.114711] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[12] 0x4030 0x100
<7> [200.114920] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[13] 0x4034 0x100
<7> [200.115129] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[14] 0x4038 0x100
<7> [200.115318] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[15] 0x403c 0x100
<7> [200.116462] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vcs0 save-restore MMIOs
<7> [200.116684] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1c00d4] = 0xc000c000
<7> [200.116899] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vcs2 save-restore MMIOs
<7> [200.117109] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1d00d4] = 0xc000c000
<7> [200.117328] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vecs0 save-restore MMIOs
<7> [200.117578] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1c80d4] = 0xc000c000
<7> [200.117803] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vecs1 save-restore MMIOs
<7> [200.118030] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1d80d4] = 0xc000c000
<6> [200.118271] xe 0000:03:00.0: [drm] GT1: reset done
<7> [200.118485] xe 0000:03:00.0: [drm:xe_hw_engine_snapshot_capture [xe]] GT1: Proceeding with manual engine snapshot
<5> [200.118707] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=0, flags=0x0 in xe_exec_reset [5117]
<6> [200.148276] xe 0000:03:00.0: [drm] Xe device coredump has been created
<6> [200.148280] xe 0000:03:00.0: [drm] Check your /sys/class/drm/card0/device/devcoredump/data
<6> [200.160941] xe 0000:03:00.0: [drm] GT1: trying reset from force_reset [xe]
<6> [200.161031] xe 0000:03:00.0: [drm] GT1: reset queued
<6> [200.161048] xe 0000:03:00.0: [drm] GT1: reset started
<7> [200.161201] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying GT save-restore MMIOs
<7> [200.161286] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x4148] = 0x00000000
<7> [200.161360] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0xa210] = 0x00000002
<7> [200.161423] xe 0000:03:00.0: [drm:xe_wopcm_init [xe]] WOPCM: 4096K
<7> [200.161543] xe 0000:03:00.0: [drm:xe_wopcm_init [xe]] GuC WOPCM is already locked [5136K, 832K)
<7> [200.162567] xe 0000:03:00.0: [drm:xe_guc_ads_populate [xe]] GT1: ADS capture alloc size changed from 45056 to 20480
<7> [200.163453] xe 0000:03:00.0: [drm:__xe_guc_upload [xe]] GT1: load still in progress, timeouts = 0, freq = 1500MHz (req 1500MHz), status = 0x00000072 [0x39/00]
<7> [200.173448] xe 0000:03:00.0: [drm:__xe_guc_upload [xe]] GT1: init took 9ms, freq = 1500MHz (req = 1500MHz), before = 1500MHz, status = 0x8002F034, timeouts = 0
<7> [200.173758] xe 0000:03:00.0: [drm:xe_guc_ct_enable [xe]] GT1: GuC CT communication channel enabled
<7> [200.174140] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: flag:0x1
<7> [200.174260] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: mocs entries: 16
<7> [200.174375] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[0] 0x4000 0xc
<7> [200.174508] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[1] 0x4004 0x10c
<7> [200.174623] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[2] 0x4008 0x130
<7> [200.174734] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[3] 0x400c 0x13c
<7> [200.174843] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[4] 0x4010 0x100
<7> [200.174952] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[5] 0x4014 0x100
<7> [200.175060] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[6] 0x4018 0x100
<7> [200.175197] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[7] 0x401c 0x100
<7> [200.175309] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[8] 0x4020 0x100
<7> [200.175421] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[9] 0x4024 0x100
<7> [200.175554] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[10] 0x4028 0x100
<7> [200.175665] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[11] 0x402c 0x100
<7> [200.175772] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[12] 0x4030 0x100
<7> [200.175881] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[13] 0x4034 0x100
<7> [200.175988] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[14] 0x4038 0x100
<7> [200.176098] xe 0000:03:00.0: [drm:xe_mocs_init [xe]] GT1: GLOB_MOCS[15] 0x403c 0x100
<7> [200.176983] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vcs0 save-restore MMIOs
<7> [200.177124] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1c00d4] = 0xc000c000
<7> [200.177248] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vcs2 save-restore MMIOs
<7> [200.177372] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1d00d4] = 0xc000c000
<7> [200.177510] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vecs0 save-restore MMIOs
<7> [200.177636] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1c80d4] = 0xc000c000
<7> [200.177758] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: Applying vecs1 save-restore MMIOs
<7> [200.177886] xe 0000:03:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT1: REG[0x1d80d4] = 0xc000c000
<6> [200.178017] xe 0000:03:00.0: [drm] GT1: reset done
<5> [200.178024] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=0, flags=0x0 in xe_exec_reset [5117]
<7> [200.178033] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [205.497901] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=18, not started
<4> [205.498347] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=16, not started
<4> [205.498801] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=14, not started
<4> [205.499194] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=12, not started
<4> [205.499497] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=10, not started
<4> [205.499789] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=8, not started
<4> [205.500062] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=6, not started
<4> [210.617852] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=6, not started
<5> [210.617891] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=6, flags=0x0 in xe_exec_reset [5117]
<7> [210.617915] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<3> [210.618246] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:321
<3> [210.618263] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2381, name: kworker/u64:44
<3> [210.618279] preempt_count: 1, expected: 0
<3> [210.618289] RCU nest depth: 0, expected: 0
<4> [210.618298] INFO: lockdep is turned off.
<4> [210.618311] CPU: 11 UID: 0 PID: 2381 Comm: kworker/u64:44 Tainted: G W 6.14.0-rc2-xe+ #1
<4> [210.618320] Tainted: [W]=WARN
<4> [210.618322] Hardware name: ASUS System Product Name/PRIME Z790-P WIFI, BIOS 1645 03/15/2024
<4> [210.618325] Workqueue: gt-ordered-wq drm_sched_job_timedout [gpu_sched]
<4> [210.618344] Call Trace:
<4> [210.618347] <TASK>
<4> [210.618351] dump_stack_lvl+0xc1/0xf0
<4> [210.618367] dump_stack+0x10/0x20
<4> [210.618375] __might_resched+0x191/0x2e0
<4> [210.618387] __might_sleep+0x58/0xa0
<4> [210.618396] ? xe_drm_client_add_blame+0x164/0x330 [xe]
<4> [210.618705] __kmalloc_cache_noprof+0x3a4/0x490
<4> [210.618719] xe_drm_client_add_blame+0x164/0x330 [xe]
<4> [210.619022] ? xe_drm_client_add_blame+0x164/0x330 [xe]
<4> [210.619329] guc_exec_queue_timedout_job+0x670/0xe70 [xe]
<4> [210.619677] ? __smp_call_single_queue+0xee/0x250
<4> [210.619686] ? sched_clock_cpu+0x10/0x1e0
<4> [210.619695] ? lock_release+0x1e9/0x2b0
<4> [210.619710] ? __pfx_autoremove_wake_function+0x10/0x10
<4> [210.619721] drm_sched_job_timedout+0x91/0x130 [gpu_sched]
<4> [210.619738] process_one_work+0x21c/0x740
<4> [210.619754] worker_thread+0x1db/0x3c0
<4> [210.619762] ? __pfx_worker_thread+0x10/0x10
<4> [210.619769] kthread+0x10d/0x270
<4> [210.619775] ? __pfx_kthread+0x10/0x10
<4> [210.619780] ret_from_fork+0x44/0x70
<4> [210.619789] ? __pfx_kthread+0x10/0x10
<4> [210.619793] ret_from_fork_asm+0x1a/0x30
<4> [210.619810] </TASK>
<4> [210.620303] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=10, not started
<5> [210.620325] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=10, flags=0x0 in xe_exec_reset [5117]
<7> [210.620348] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [210.620833] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=18, not started
<5> [210.620852] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=18, flags=0x0 in xe_exec_reset [5117]
<7> [210.620872] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [210.621359] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=8, not started
<5> [210.621379] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=8, flags=0x0 in xe_exec_reset [5117]
<7> [210.621399] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [210.621928] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=12, not started
<5> [210.621947] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=12, flags=0x0 in xe_exec_reset [5117]
<7> [210.621967] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [210.622467] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=14, not started
<5> [210.622486] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=14, flags=0x0 in xe_exec_reset [5117]
<7> [210.622506] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [210.622931] xe 0000:03:00.0: [drm] GT1: Check job timeout: seqno=0, lrc_seqno=4294967169, guc_id=16, not started
<5> [210.622949] xe 0000:03:00.0: [drm] GT1: Timedout job: seqno=0, lrc_seqno=4294967169, guc_id=16, flags=0x0 in xe_exec_reset [5117]
<7> [210.622969] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [210.625006] xe 0000:03:00.0: [drm] GT1: Engine reset: engine_class=vecs, logical_mask: 0x1, guc_id=4
<6> [210.626401] [IGT] xe_exec_reset: finished subtest parallel-gt-reset, SUCCESS
<7> [210.645467] xe 0000:03:00.0: [drm:drm_client_dev_restore] intel-fbdev: ret=0
<6> [210.646828] [IGT] xe_exec_reset: exiting, ret=0
<6> [210.665452] Console: switching to colour frame buffer device 240x67
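The circular dependency reported in the dmesg above is a classic lock-order inversion: lockdep had already recorded fs_reclaim being held before &guc->submission_state.lock (via the probe-time chain through xe_guc_submit_init), and the reset path then allocated memory (which may enter fs_reclaim) while holding &guc->submission_state.lock, closing the cycle. A minimal sketch of that detection logic (a toy order graph, not the kernel's lockdep) looks like:

```python
# Toy model of lock-order cycle detection: record each "new acquired while
# held" edge and flag an acquisition that would close a cycle. Lock names
# below mirror the splat above; the class itself is purely illustrative.
class LockOrderGraph:
    def __init__(self):
        self.edges = {}  # lock -> set of locks acquired while it was held

    def acquire(self, held, new):
        """Record edge held -> new; return True if it would create a cycle."""
        if self._reachable(new, held):
            return True  # a path new -> ... -> held already exists
        self.edges.setdefault(held, set()).add(new)
        return False

    def _reachable(self, src, dst):
        seen, stack = set(), [src]
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(self.edges.get(n, ()))
        return False

g = LockOrderGraph()
# Probe time: submission_state.lock taken in a context ordered after
# fs_reclaim, recording fs_reclaim -> submission_state.lock.
assert g.acquire("fs_reclaim", "guc->submission_state.lock") is False
# Reset path: allocating (which may enter fs_reclaim) while holding
# submission_state.lock closes the cycle -- what lockdep reports above.
assert g.acquire("guc->submission_state.lock", "fs_reclaim") is True
```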
Created at 2025-02-15 20:53:43