Result: 214 Warning(s)
i915_display_info5 igt_runner5 results5.json results5-xe-load.json guc_logs5.tar i915_display_info_post_exec5 boot5 dmesg5
| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-bmg-2 |
| Igt-Version | IGT-Version: 2.3-gcc3169e72 (x86_64) (Linux: 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ x86_64) |
| Out | Using IGT_SRANDOM=1774049812 for randomisation Opened device: /dev/dri/card0 Starting subtest: threads-hang-shared-vm-userptr-rebind This test caused an abort condition: Child refuses to die, tainted 0x4244. |
| Err | Starting subtest: threads-hang-shared-vm-userptr-rebind Received signal SIGQUIT. Stack trace: #0 [fatal_sig_handler+0x17b] #1 [__sigaction+0x50] #2 [__nptl_death_event+0x17f] #3 [pthread_join+0x173] #4 [__igt_unique____real_main1261+0x1626] #5 [main+0x2d] #6 [__libc_init_first+0x8a] #7 [__libc_start_main+0x8b] #8 [_start+0x25] |

Dmesg:
<6> [490.996701] Console: switching to colour dummy device 80x25
<6> [490.997012] [IGT] xe_exec_threads: executing
<6> [491.005995] [IGT] xe_exec_threads: starting subtest threads-hang-shared-vm-userptr-rebind
<6> [491.680752] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vcs, logical_mask: 0x1, guc_id=33, state=0x3
<6> [491.682852] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vcs, logical_mask: 0x2, guc_id=31, state=0x3
<6> [491.683831] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vecs, logical_mask: 0x1, guc_id=34, state=0x3
<6> [491.683867] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vecs, logical_mask: 0x2, guc_id=36, state=0x3
<5> [491.685948] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [491.694815] xe 0000:03:00.0: [drm:xe_exec_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_exec.c:153: q->ops->reset_status(q)
(previous xe_exec_ioctl message repeated 71 times, [491.697460] to [491.848055])
<7> [491.848917] xe 0000:03:00.0: [drm:xe_hw_engine_snapshot_capture [xe]] Tile0: GT1: Found and locked GuC-err-capture node
<7> [491.849114] xe 0000:03:00.0: [drm:xe_exec_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_exec.c:153: q->ops->reset_status(q)
<6> [491.849249] xe 0000:03:00.0: [drm] Xe device coredump has been created
<6> [491.849301] xe 0000:03:00.0: [drm] Check your /sys/class/drm/card0/device/devcoredump/data
<7> [491.849683] xe 0000:03:00.0: [drm:xe_exec_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_exec.c:153: q->ops->reset_status(q)
<5> [491.849785] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [491.849792] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [491.850082] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [491.850087] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [491.850512] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [491.850517] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
(previous xe_exec_ioctl message repeated 76 times, [491.851643] to [492.076239])
<6> [492.317647] xe 0000:03:00.0: [drm] Tile0: GT0: Engine reset: engine_class=rcs, logical_mask: 0x1, guc_id=27, state=0x3
<6> [492.318875] xe 0000:03:00.0: [drm] Tile0: GT0: Engine reset: engine_class=ccs, logical_mask: 0x1, guc_id=28, state=0x3
<5> [492.319178] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [492.319241] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [492.320231] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [492.320250] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [496.481935] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=50, not started
<4> [496.482437] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=47, not started
<4> [496.482477] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=44, not started
<4> [496.483042] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=42, not started
<4> [496.483077] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=39, not started
<4> [496.483132] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=35, not started
<4> [496.483163] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=32, not started
<7> [496.483215] xe 0000:03:00.0: [drm:guc_exec_queue_timedout_job [xe]] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=29, running_time_ms=338, timeout_ms=5000, diff=0x005e9751
<6> [496.801094] xe 0000:03:00.0: [drm] Tile0: GT0: Engine reset: engine_class=bcs, logical_mask: 0x1, guc_id=29, state=0x3
<5> [496.801279] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [496.801294] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [496.993083] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=1, not started
<6> [497.101577] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [497.101603] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [497.101612] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [497.101621] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [499.087698] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x34343131
<7> [499.087843] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x34343334
<5> [500.650046] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [500.650082] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [500.650715] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [500.650735] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [500.651149] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967172, lrc_seqno=4294967172, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [500.651179] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.288728] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [501.288755] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.289702] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<5> [501.289755] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [501.289731] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [501.290084] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.290296] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [501.290608] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.290872] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967171, lrc_seqno=4294967171, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [501.291193] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967172, lrc_seqno=4294967172, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [501.291076] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [501.291484] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.292744] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<5> [501.292834] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967172, lrc_seqno=4294967172, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [501.292776] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.293275] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [501.293147] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [501.294047] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [501.294183] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967172, lrc_seqno=4294967172, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [501.294186] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [506.720489] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967171, lrc_seqno=4294967171, guc_id=39, not started
<4> [506.720600] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967171, lrc_seqno=4294967171, guc_id=50, not started
<4> [506.720637] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967171, lrc_seqno=4294967171, guc_id=47, not started
<4> [506.720669] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967172, lrc_seqno=4294967172, guc_id=4, not started
<4> [506.720721] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967171, lrc_seqno=4294967171, guc_id=42, not started
<4> [506.720773] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967171, lrc_seqno=4294967171, guc_id=44, not started
<5> [510.897399] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [510.897426] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.898150] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [510.898222] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [510.898181] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [510.898564] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.898903] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [510.898980] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.899249] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967174, lrc_seqno=4294967174, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [510.899354] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.899623] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967174, lrc_seqno=4294967174, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [510.899776] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967173, lrc_seqno=4294967173, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [510.899695] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.900410] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [510.900442] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [510.900983] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [510.900855] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [510.901321] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [514.143155] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x33333030
<7> [514.143513] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x33333233
<4> [515.936175] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=42, not started
<4> [515.936278] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967174, lrc_seqno=4294967174, guc_id=4, not started
<4> [515.936317] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=35, not started
<4> [515.936347] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=47, not started
<4> [515.936383] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=1, not started
<4> [515.936425] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=50, not started
<4> [515.936479] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=39, not started
<4> [515.936511] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967173, lrc_seqno=4294967173, guc_id=44, not started
<6> [516.044668] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [516.044693] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [516.044702] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [516.044711] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [516.151761] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [516.151785] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [516.151795] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [516.151804] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [519.863410] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [519.863436] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [519.864790] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967176, lrc_seqno=4294967176, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [519.864817] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.503716] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [520.503749] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.503946] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [520.504142] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.504606] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [520.504792] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967175, lrc_seqno=4294967175, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [520.504634] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.505223] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [520.505115] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.505557] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967176, lrc_seqno=4294967176, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [520.505451] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [520.506059] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967176, lrc_seqno=4294967176, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [520.505929] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [520.506233] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [525.663634] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967176, lrc_seqno=4294967176, guc_id=4, not started
<4> [525.663731] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967175, lrc_seqno=4294967175, guc_id=47, not started
<4> [525.663771] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967175, lrc_seqno=4294967175, guc_id=42, not started
<4> [525.663802] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967175, lrc_seqno=4294967175, guc_id=44, not started
<4> [525.663842] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967175, lrc_seqno=4294967175, guc_id=50, not started
<4> [525.663895] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967175, lrc_seqno=4294967175, guc_id=39, not started
<6> [529.075918] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [529.075940] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [529.075948] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [529.075956] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [529.096755] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x32322f30
<7> [529.097061] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x32323233
<5> [529.468570] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [529.468600] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [529.469133] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967178, lrc_seqno=4294967178, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [529.469153] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.108692] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [530.108720] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.109239] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [530.109263] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.109731] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [530.109831] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.110283] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [530.110316] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.110699] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967178, lrc_seqno=4294967178, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [530.111035] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.112216] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967177, lrc_seqno=4294967177, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [530.112244] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [530.112969] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [530.113063] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [534.879735] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967177, lrc_seqno=4294967177, guc_id=7, not started
<4> [534.879864] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967177, lrc_seqno=4294967177, guc_id=39, not started
<4> [534.879920] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967178, lrc_seqno=4294967178, guc_id=4, not started
<4> [534.879953] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967177, lrc_seqno=4294967177, guc_id=47, not started
<4> [534.879992] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967177, lrc_seqno=4294967177, guc_id=44, not started
<4> [534.880024] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967177, lrc_seqno=4294967177, guc_id=42, not started
<5> [538.432144] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [538.432171] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [538.432900] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967180, lrc_seqno=4294967180, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [538.432926] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.072096] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<5> [539.072309] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [539.072123] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [539.072550] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.073139] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [539.073165] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.073597] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [539.074119] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.074294] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967179, lrc_seqno=4294967179, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [539.074549] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967180, lrc_seqno=4294967180, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [539.074445] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.075007] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967180, lrc_seqno=4294967180, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [539.074891] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [539.075222] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [539.075839] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [539.075868] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [539.182683] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [539.182709] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [539.182718] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [539.182726] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [543.585147] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967179, lrc_seqno=4294967179, guc_id=23, not started
<4> [543.585340] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967179, lrc_seqno=4294967179, guc_id=47, not started
<4> [543.585409] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967179, lrc_seqno=4294967179, guc_id=50, not started
<4> [543.585460] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967179, lrc_seqno=4294967179, guc_id=44, not started
<4> [543.585492] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967180, lrc_seqno=4294967180, guc_id=4, not started
<4> [543.585527] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967179, lrc_seqno=4294967179, guc_id=39, not started
<7> [544.079857] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x32322f2f
<7> [544.080031] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x32323132
<5> [548.035413] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [548.035441] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.035789] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [548.036842] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.037187] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [548.037318] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.037577] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<5> [548.037822] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [548.037728] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.038285] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967181, lrc_seqno=4294967181, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [548.038160] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.038563] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967182, lrc_seqno=4294967182, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [548.038469] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [548.039008] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967182, lrc_seqno=4294967182, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [548.038907] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [548.039210] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [553.312100] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967182, lrc_seqno=4294967182, guc_id=4, not started
<4> [553.312176] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967181, lrc_seqno=4294967181, guc_id=7, not started
<4> [553.312231] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967182, lrc_seqno=4294967182, guc_id=17, not started
<4> [553.312303] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967181, lrc_seqno=4294967181, guc_id=23, not started
<4> [553.312337] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967181, lrc_seqno=4294967181, guc_id=50, not started
<4> [553.312375] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967254, lrc_seqno=4294967254, guc_id=1, not started
<4> [553.312406] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967181, lrc_seqno=4294967181, guc_id=44, not started
<5> [557.638865] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [557.638900] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.639654] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<5> [557.642276] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [557.642208] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.642783] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [557.642715] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.643244] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [557.643196] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.643334] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [557.643319] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.643392] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967183, lrc_seqno=4294967183, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [557.643379] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.643492] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967184, lrc_seqno=4294967184, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [557.643477] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [557.643551] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967184, lrc_seqno=4294967184, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [557.643535] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [557.643633] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [559.058483] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [559.058489] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [559.058491] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [559.058496] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [559.074957] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x32322e2f
<7> [559.075130] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x32313132
<4> [563.040152] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967183, lrc_seqno=4294967183, guc_id=39, not started
<4> [563.040252] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967183, lrc_seqno=4294967183, guc_id=50, not started
<4> [563.040300] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967183, lrc_seqno=4294967183, guc_id=44, not started
<4> [563.040359] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967184, lrc_seqno=4294967184, guc_id=4, not started
<4> [563.040416] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967183, lrc_seqno=4294967183, guc_id=23, not started
<4> [563.040451] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967183, lrc_seqno=4294967183, guc_id=7, not started
<4> [563.040493] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967267, lrc_seqno=4294967267, guc_id=1, not started
<5> [566.616117] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [566.616144] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [566.616564] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [566.616577] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [566.617050] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967186, lrc_seqno=4294967186, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [566.617062] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [566.728537] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [566.728562] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [566.728571] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [566.728581] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [567.255882] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [567.255965] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.256739] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [567.256770] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.258270] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<5> [567.258343] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [567.258297] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.258843] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [567.258742] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.259291] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967185, lrc_seqno=4294967185, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [567.259210] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.260111] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967186, lrc_seqno=4294967186, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [567.259965] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.260406] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [567.260293] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [567.260763] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [567.261147] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967188, lrc_seqno=4294967188, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [567.261149] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [572.768247] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967186, lrc_seqno=4294967186, guc_id=4, not started
<4> [572.768315] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967285, lrc_seqno=4294967285, guc_id=1, not started
<4> [572.768351] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967185, lrc_seqno=4294967185, guc_id=23, not started
<4> [572.768395] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967185, lrc_seqno=4294967185, guc_id=44, not started
<4> [572.768427] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967185, lrc_seqno=4294967185, guc_id=7, not started
<4> [572.768467] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967185, lrc_seqno=4294967185, guc_id=17, not started
<4> [572.768499] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967185, lrc_seqno=4294967185, guc_id=50, not started
<7> [574.082755] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2f
<7> [574.082916] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x32313132
<5> [576.224807] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<5> [576.224821] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [576.224844] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [576.225448] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.226150] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967188, lrc_seqno=4294967188, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [576.226171] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.865215] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<5> [576.865361] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [576.865252] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [576.865689] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.866299] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967187, lrc_seqno=4294967187, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [576.866328] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.866561] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [576.866686] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.868085] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [576.868112] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.868752] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967188, lrc_seqno=4294967188, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [576.868769] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [576.869346] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [576.869351] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [581.984209] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=50, not started
<4> [581.984260] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=44, not started
<4> [581.984298] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=23, not started
<4> [581.984354] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=47, not started
<4> [581.984419] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=7, not started
<4> [581.986026] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967188, lrc_seqno=4294967188, guc_id=4, not started
<4> [581.986088] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967187, lrc_seqno=4294967187, guc_id=17, not started
<5> [585.830291] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [585.830318] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [585.831607] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967190, lrc_seqno=4294967190, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [585.831633] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.470474] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<5> [586.470578] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [586.470509] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [586.470943] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.471463] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967189, lrc_seqno=4294967189, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [586.471601] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.471891] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [586.472028] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.472736] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967190, lrc_seqno=4294967190, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [586.472759] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.473293] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [586.473325] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [586.473989] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [586.474010] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [589.061284] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2e
<7> [589.061443] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313132
<4> [591.200233] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967189, lrc_seqno=4294967189, guc_id=7, not started
<4> [591.200304] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967189, lrc_seqno=4294967189, guc_id=50, not started
<4> [591.200358] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967189, lrc_seqno=4294967189, guc_id=23, not started
<4> [591.200390] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967189, lrc_seqno=4294967189, guc_id=44, not started
<4> [591.200419] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967189, lrc_seqno=4294967189, guc_id=17, not started
<6> [591.313332] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [591.313359] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [591.313367] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [591.313376] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [594.793066] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [594.793099] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [594.793635] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<7> [594.793666] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [594.794044] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [594.794146] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [594.794703] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967192, lrc_seqno=4294967192, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [594.794789] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967191, lrc_seqno=4294967191, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [594.794731] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [594.795271] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [594.795196] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [594.795599] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967192, lrc_seqno=4294967192, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [594.795507] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [594.795950] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [594.911911] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [594.911937] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [594.911946] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [594.911955] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [599.904331] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=7, not started
<4> [599.904410] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=17, not started
<4> [599.904466] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967192, lrc_seqno=4294967192, guc_id=4, not started
<4> [599.904520] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=47, not started
<4> [599.904576] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=50, not started
<4> [599.904609] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=30, lrc_seqno=30, guc_id=1, not started
<4> [599.904647] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=44, not started
<4> [599.904699] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967191, lrc_seqno=4294967191, guc_id=23, not started
<6> [600.017373] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [600.017398] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [600.017407] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [600.017416] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [603.759358] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [603.759387] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [603.759647] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [603.759829] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [603.760216] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967194, lrc_seqno=4294967194, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [603.760479] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [603.761119] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967194, lrc_seqno=4294967194, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [603.761137] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [604.092743] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2f
<7> [604.092910] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313032
<5> [604.399352] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [604.399381] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.399878] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [604.399913] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.400767] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<5> [604.400924] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967195, lrc_seqno=4294967195, guc_id=33, flags=0x0 in xe_exec_threads [5182]
<7> [604.400798] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [604.401384] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.401755] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967194, lrc_seqno=4294967194, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [604.401795] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.402155] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967193, lrc_seqno=4294967193, guc_id=31, flags=0x0 in xe_exec_threads [5182]
<5> [604.402349] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967195, lrc_seqno=4294967195, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [604.402286] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.402726] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967194, lrc_seqno=4294967194, guc_id=34, flags=0x0 in xe_exec_threads [5182]
<7> [604.402614] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [604.403053] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [604.403576] xe 0000:03:00.0: [drm] Tile0: GT1: Timedout job: seqno=4294967195, lrc_seqno=4294967195, guc_id=36, flags=0x0 in xe_exec_threads [5182]
<7> [604.403601] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [609.632151] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967193, lrc_seqno=4294967193, guc_id=17, not started
<4> [609.632266] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967194, lrc_seqno=4294967194, guc_id=4, not started
<4> [609.632338] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967193, lrc_seqno=4294967193, guc_id=7, not started
<4> [609.632397] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967193, lrc_seqno=4294967193, guc_id=23, not started
<4> [609.632434] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967193, lrc_seqno=4294967193, guc_id=50, not started
<4> [609.632480] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967193, lrc_seqno=4294967193, guc_id=44, not started
<6> [609.743852] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [609.743878] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [609.743887] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [609.743896] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [614.002814] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967195, lrc_seqno=4294967195, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [614.002841] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [614.003609] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967195, lrc_seqno=4294967195, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [614.003634] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [614.004520] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967196, lrc_seqno=4294967196, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [614.004552] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [614.006573] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967197, lrc_seqno=4294967197, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [614.006604] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [614.117376] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [614.117400] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [614.117409] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [614.117419] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [614.227270] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [614.227296] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [614.227305] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [614.227314] nvme 0000:05:00.0: [ 0] RxErr (First)
<3> [615.264200] INFO: task kworker/u65:2:2215 blocked for more than 61 seconds.
<3> [615.264330] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.264374] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.264408] task:kworker/u65:2 state:D stack:0 pid:2215 tgid:2215 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.264428] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.264471] Call Trace:
<6> [615.264478] <TASK>
<6> [615.264493] __schedule+0x5bc/0x1dd0
<6> [615.264512] ? lock_acquire+0xc4/0x2f0
<6> [615.264533] ? schedule+0x10e/0x180
<6> [615.264548] ? lock_release+0xd0/0x2b0
<6> [615.264569] schedule+0x3a/0x180
<6> [615.264582] schedule_timeout+0x117/0x130
<6> [615.264597] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.264610] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.264625] ? trace_hardirqs_on+0x22/0x100
<6> [615.264648] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.264660] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.264673] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.264696] dma_fence_wait_timeout+0x192/0x570
<6> [615.264716] dma_resv_wait_timeout+0x7b/0x100
<6> [615.264739] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.264767] process_one_work+0x239/0x760
<6> [615.264795] worker_thread+0x200/0x3f0
<6> [615.264807] ? __pfx_worker_thread+0x10/0x10
<6> [615.264816] kthread+0x10d/0x150
<6> [615.264830] ? __pfx_kthread+0x10/0x10
<6> [615.264846] ret_from_fork+0x3d4/0x480
<6> [615.264855] ? __pfx_kthread+0x10/0x10
<6> [615.264870] ret_from_fork_asm+0x1a/0x30
<6> [615.264979] </TASK>
<3> [615.264986] INFO: task kworker/u65:4:2217 blocked for more than 61 seconds.
<3> [615.265022] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.265064] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.265098] task:kworker/u65:4 state:D stack:0 pid:2217 tgid:2217 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.265113] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.265142] Call Trace:
<6> [615.265147] <TASK>
<6> [615.265159] __schedule+0x5bc/0x1dd0
<6> [615.265173] ? lock_acquire+0xc4/0x2f0
<6> [615.265193] ? schedule+0x10e/0x180
<6> [615.265208] ? lock_release+0xd0/0x2b0
<6> [615.265230] schedule+0x3a/0x180
<6> [615.265245] schedule_timeout+0x117/0x130
<6> [615.265260] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.265273] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.265287] ? trace_hardirqs_on+0x22/0x100
<6> [615.265309] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.265322] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.265337] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.265357] dma_fence_wait_timeout+0x192/0x570
<6> [615.265376] dma_resv_wait_timeout+0x7b/0x100
<6> [615.265399] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.265426] process_one_work+0x239/0x760
<6> [615.265453] worker_thread+0x200/0x3f0
<6> [615.265466] ? __pfx_worker_thread+0x10/0x10
<6> [615.265477] kthread+0x10d/0x150
<6> [615.265491] ? __pfx_kthread+0x10/0x10
<6> [615.265508] ret_from_fork+0x3d4/0x480
<6> [615.265517] ? __pfx_kthread+0x10/0x10
<6> [615.265533] ret_from_fork_asm+0x1a/0x30
<6> [615.265567] </TASK>
<3> [615.265574] INFO: task kworker/u65:9:2222 blocked for more than 61 seconds.
<3> [615.265608] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.265649] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.265684] task:kworker/u65:9 state:D stack:0 pid:2222 tgid:2222 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.265696] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.265722] Call Trace:
<6> [615.265728] <TASK>
<6> [615.265739] __schedule+0x5bc/0x1dd0
<6> [615.265753] ? lock_acquire+0xc4/0x2f0
<6> [615.265772] ? schedule+0x10e/0x180
<6> [615.265788] ? lock_release+0xd0/0x2b0
<6> [615.265810] schedule+0x3a/0x180
<6> [615.265825] schedule_timeout+0x117/0x130
<6> [615.265840] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.265854] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.265868] ? trace_hardirqs_on+0x22/0x100
<6> [615.265906] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.265919] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.265933] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.265954] dma_fence_wait_timeout+0x192/0x570
<6> [615.265973] dma_resv_wait_timeout+0x7b/0x100
<6> [615.265997] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.266023] process_one_work+0x239/0x760
<6> [615.266049] worker_thread+0x200/0x3f0
<6> [615.266062] ? __pfx_worker_thread+0x10/0x10
<6> [615.266072] kthread+0x10d/0x150
<6> [615.266087] ? __pfx_kthread+0x10/0x10
<6> [615.266105] ret_from_fork+0x3d4/0x480
<6> [615.266113] ? __pfx_kthread+0x10/0x10
<6> [615.266130] ret_from_fork_asm+0x1a/0x30
<6> [615.266164] </TASK>
<3> [615.266200] INFO: task kworker/u65:6:5307 blocked for more than 61 seconds.
<3> [615.266234] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.266276] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.266314] task:kworker/u65:6 state:D stack:0 pid:5307 tgid:5307 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.266326] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.266351] Call Trace:
<6> [615.266357] <TASK>
<6> [615.266369] __schedule+0x5bc/0x1dd0
<6> [615.266382] ? lock_acquire+0xc4/0x2f0
<6> [615.266402] ? schedule+0x10e/0x180
<6> [615.266418] ? lock_release+0xd0/0x2b0
<6> [615.266439] schedule+0x3a/0x180
<6> [615.266454] schedule_timeout+0x117/0x130
<6> [615.266469] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.266482] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.266497] ? trace_hardirqs_on+0x22/0x100
<6> [615.266518] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.266531] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.266545] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.266565] dma_fence_wait_timeout+0x192/0x570
<6> [615.266585] dma_resv_wait_timeout+0x7b/0x100
<6> [615.266608] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.266633] process_one_work+0x239/0x760
<6> [615.266660] worker_thread+0x200/0x3f0
<6> [615.266672] ? __pfx_worker_thread+0x10/0x10
<6> [615.266682] kthread+0x10d/0x150
<6> [615.266696] ? __pfx_kthread+0x10/0x10
<6> [615.266713] ret_from_fork+0x3d4/0x480
<6> [615.266722] ? __pfx_kthread+0x10/0x10
<6> [615.266739] ret_from_fork_asm+0x1a/0x30
<6> [615.266774] </TASK>
<3> [615.266780] INFO: task kworker/u65:10:5309 blocked for more than 61 seconds.
<3> [615.266815] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.266858] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.266916] task:kworker/u65:10 state:D stack:0 pid:5309 tgid:5309 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.266929] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.266953] Call Trace:
<6> [615.266958] <TASK>
<6> [615.266970] __schedule+0x5bc/0x1dd0
<6> [615.266984] ? lock_acquire+0xc4/0x2f0
<6> [615.267004] ? schedule+0x10e/0x180
<6> [615.267021] ? lock_release+0xd0/0x2b0
<6> [615.267042] schedule+0x3a/0x180
<6> [615.267057] schedule_timeout+0x117/0x130
<6> [615.267073] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.267085] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.267100] ? trace_hardirqs_on+0x22/0x100
<6> [615.267121] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.267134] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.267149] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.267169] dma_fence_wait_timeout+0x192/0x570
<6> [615.267189] dma_resv_wait_timeout+0x7b/0x100
<6> [615.267212] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.267237] process_one_work+0x239/0x760
<6> [615.267264] worker_thread+0x200/0x3f0
<6> [615.267276] ? __pfx_worker_thread+0x10/0x10
<6> [615.267286] kthread+0x10d/0x150
<6> [615.267300] ? __pfx_kthread+0x10/0x10
<6> [615.267318] ret_from_fork+0x3d4/0x480
<6> [615.267326] ? __pfx_kthread+0x10/0x10
<6> [615.267342] ret_from_fork_asm+0x1a/0x30
<6> [615.267377] </TASK>
<3> [615.267383] INFO: task kworker/u65:12:5311 blocked for more than 61 seconds.
<3> [615.267417] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [615.267458] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [615.267495] task:kworker/u65:12 state:D stack:0 pid:5311 tgid:5311 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [615.267506] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [615.267532] Call Trace:
<6> [615.267537] <TASK>
<6> [615.267549] __schedule+0x5bc/0x1dd0
<6> [615.267562] ? lock_acquire+0xc4/0x2f0
<6> [615.267581] ? schedule+0x10e/0x180
<6> [615.267596] ? lock_release+0xd0/0x2b0
<6> [615.267617] schedule+0x3a/0x180
<6> [615.267631] schedule_timeout+0x117/0x130
<6> [615.267645] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [615.267657] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [615.267670] ? trace_hardirqs_on+0x22/0x100
<6> [615.267692] dma_fence_default_wait+0x1fe/0x2d0
<6> [615.267705] ? dma_fence_default_wait+0x125/0x2d0
<6> [615.267719] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [615.267739] dma_fence_wait_timeout+0x192/0x570
<6> [615.267758] dma_resv_wait_timeout+0x7b/0x100
<6> [615.267780] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [615.267805] process_one_work+0x239/0x760
<6> [615.267831] worker_thread+0x200/0x3f0
<6> [615.267843] ? __pfx_worker_thread+0x10/0x10
<6> [615.267852] kthread+0x10d/0x150
<6> [615.267866] ? __pfx_kthread+0x10/0x10
<6> [615.267897] ret_from_fork+0x3d4/0x480
<6> [615.267904] ? __pfx_kthread+0x10/0x10
<6> [615.267920] ret_from_fork_asm+0x1a/0x30
<6> [615.267953] </TASK>
<4> [615.267961]
Showing all locks held in the system:
<4> [615.267981] 1 lock held by khungtaskd/118:
<4> [615.267989] #0: ffffffff835c5be0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [615.268041] 3 locks held by rs:main Q:Reg/814:
<4> [615.268055] 4 locks held by dmesg/2090:
<4> [615.268061] 2 locks held by igt_runner/2207:
<4> [615.268067] 2 locks held by kworker/u65:2/2215:
<4> [615.268072] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268100] #1: ffffc9000362fe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268124] 2 locks held by kworker/u65:3/2216:
<4> [615.268129] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268152] #1: ffffc90003637e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268174] 2 locks held by kworker/u65:4/2217:
<4> [615.268180] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268204] #1: ffffc9000364fe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268229] 2 locks held by kworker/u65:7/2220:
<4> [615.268234] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268255] #1: ffffc90003667e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268278] 2 locks held by kworker/u65:9/2222:
<4> [615.268284] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268309] #1: ffffc90003793e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268331] 2 locks held by kworker/u65:13/2226:
<4> [615.268336] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268358] #1: ffffc90003d37e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268383] 2 locks held by kworker/u65:14/3365:
<4> [615.268389] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268413] #1: ffffc9000289be30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268436] 2 locks held by kworker/u65:15/3366:
<4> [615.268441] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268462] #1: ffffc900026b7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268497] 2 locks held by kworker/u65:0/5304:
<4> [615.268503] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268526] #1: ffffc900078b7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268549] 2 locks held by kworker/u65:1/5305:
<4> [615.268554] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268577] #1: ffffc900078bfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268602] 2 locks held by kworker/u65:5/5306:
<4> [615.268608] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268630] #1: ffffc900078c7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268652] 2 locks held by kworker/u65:6/5307:
<4> [615.268658] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268682] #1: ffffc900078cfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268705] 2 locks held by kworker/u65:8/5308:
<4> [615.268711] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268732] #1: ffffc900078d7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268754] 2 locks held by kworker/u65:10/5309:
<4> [615.268759] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268784] #1: ffffc900078dfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268809] 2 locks held by kworker/u65:11/5310:
<4> [615.268814] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268836] #1: ffffc900078e7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268858] 2 locks held by kworker/u65:12/5311:
<4> [615.268864] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [615.268899] #1: ffffc900078efe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [615.268925]
<4> [615.268930] =============================================
<6> [615.282010] Console: switching to colour frame buffer device 240x67
<7> [619.100996] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2e
<7> [619.101158] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313031
<6> [619.211626] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [619.211652] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [619.211661] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [619.211670] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [619.360158] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967195, lrc_seqno=4294967195, guc_id=23, not started
<4> [619.360271] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967196, lrc_seqno=4294967196, guc_id=4, not started
<4> [619.360314] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967195, lrc_seqno=4294967195, guc_id=50, not started
<4> [619.360356] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967195, lrc_seqno=4294967195, guc_id=17, not started
<4> [619.360407] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967195, lrc_seqno=4294967195, guc_id=7, not started
<4> [619.360447] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967195, lrc_seqno=4294967195, guc_id=44, not started
<6> [619.468582] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [619.468608] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [619.468617] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [619.468626] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [623.615773] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967197, lrc_seqno=4294967197, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [623.615804] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [623.616669] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967197, lrc_seqno=4294967197, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [623.616702] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [623.617533] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967199, lrc_seqno=4294967199, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [623.617561] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [629.088033] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967197, lrc_seqno=4294967197, guc_id=50, not started
<4> [629.088097] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967197, lrc_seqno=4294967197, guc_id=17, not started
<4> [629.088133] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967197, lrc_seqno=4294967197, guc_id=44, not started
<4> [629.088172] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967197, lrc_seqno=4294967197, guc_id=20, not started
<4> [629.088204] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967197, lrc_seqno=4294967197, guc_id=7, not started
<4> [629.088251] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967198, lrc_seqno=4294967198, guc_id=4, not started
<5> [632.578813] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967199, lrc_seqno=4294967199, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [632.578843] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [632.579475] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967200, lrc_seqno=4294967200, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [632.579503] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [633.221595] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967199, lrc_seqno=4294967199, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [633.221622] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [633.222515] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967200, lrc_seqno=4294967200, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [633.222544] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [634.056862] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2e
<7> [634.057021] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313031
<6> [634.163556] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [634.163582] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [634.163591] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [634.163600] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [637.792018] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967200, lrc_seqno=4294967200, guc_id=4, not started
<4> [637.792173] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967199, lrc_seqno=4294967199, guc_id=17, not started
<4> [637.792231] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967199, lrc_seqno=4294967199, guc_id=50, not started
<4> [637.792270] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967199, lrc_seqno=4294967199, guc_id=23, not started
<4> [637.792313] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967199, lrc_seqno=4294967199, guc_id=7, not started
<4> [637.792370] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967199, lrc_seqno=4294967199, guc_id=44, not started
<5> [642.180964] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967201, lrc_seqno=4294967201, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [642.180991] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [642.181788] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967201, lrc_seqno=4294967201, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [642.181809] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [642.182748] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967202, lrc_seqno=4294967202, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [642.182776] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [642.183505] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967201, lrc_seqno=4294967201, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [642.183523] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [642.295410] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [642.295435] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [642.295444] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [642.295453] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [647.519970] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=35, not started
<4> [647.520023] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=7, not started
<4> [647.520057] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=20, not started
<4> [647.520094] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=17, not started
<4> [647.520136] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967202, lrc_seqno=4294967202, guc_id=4, not started
<4> [647.520193] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967201, lrc_seqno=4294967201, guc_id=50, not started
<6> [647.632637] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [647.632663] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [647.632672] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [647.632681] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [649.056978] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [649.057121] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313031
<6> [649.163632] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [649.163657] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [649.163666] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [649.163675] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [651.147123] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967203, lrc_seqno=4294967203, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [651.147149] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [651.147672] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967204, lrc_seqno=4294967204, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [651.147686] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [651.255099] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [651.255124] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [651.255133] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [651.255142] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [651.788041] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967203, lrc_seqno=4294967203, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [651.788070] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [651.788973] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967203, lrc_seqno=4294967203, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [651.788998] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [651.789823] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967204, lrc_seqno=4294967204, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [651.789849] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [656.223786] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=20, not started
<4> [656.223923] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=17, not started
<4> [656.223980] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967204, lrc_seqno=4294967204, guc_id=4, not started
<4> [656.224031] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=50, not started
<4> [656.224061] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=7, not started
<4> [656.224096] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=44, not started
<4> [656.224133] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967203, lrc_seqno=4294967203, guc_id=23, not started
<5> [660.110751] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967205, lrc_seqno=4294967205, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [660.110778] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [660.111271] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967206, lrc_seqno=4294967206, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [660.111284] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [660.751352] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967205, lrc_seqno=4294967205, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [660.751377] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [660.753513] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967205, lrc_seqno=4294967205, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [660.753541] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [660.754241] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967206, lrc_seqno=4294967206, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [660.754266] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [664.143253] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [664.143619] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31303031
<4> [665.439619] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967205, lrc_seqno=4294967205, guc_id=7, not started
<4> [665.439789] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967205, lrc_seqno=4294967205, guc_id=17, not started
<4> [665.439862] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967206, lrc_seqno=4294967206, guc_id=4, not started
<4> [665.439930] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967205, lrc_seqno=4294967205, guc_id=35, not started
<4> [665.439989] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967205, lrc_seqno=4294967205, guc_id=44, not started
<4> [665.440047] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967205, lrc_seqno=4294967205, guc_id=50, not started
<5> [669.073453] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967207, lrc_seqno=4294967207, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [669.073481] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [669.073983] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967208, lrc_seqno=4294967208, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [669.073995] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [669.713499] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967207, lrc_seqno=4294967207, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [669.713528] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [669.715960] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967207, lrc_seqno=4294967207, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [669.715986] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [669.716592] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967208, lrc_seqno=4294967208, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [669.716618] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [674.143582] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=50, not started
<4> [674.144517] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=7, not started
<4> [674.144570] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=20, not started
<4> [674.144624] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=35, not started
<4> [674.144664] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=17, not started
<4> [674.144695] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967207, lrc_seqno=4294967207, guc_id=44, not started
<4> [675.167447] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=143, lrc_seqno=143, guc_id=1, not started
<3> [676.703505] INFO: task kworker/u65:2:2215 blocked for more than 122 seconds.
<3> [676.705007] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [676.705074] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [676.705128] task:kworker/u65:2 state:D stack:0 pid:2215 tgid:2215 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [676.705180] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [676.705222] Call Trace:
<6> [676.705228] <TASK>
<6> [676.705243] __schedule+0x5bc/0x1dd0
<6> [676.705262] ? lock_acquire+0xc4/0x2f0
<6> [676.705284] ? schedule+0x10e/0x180
<6> [676.705299] ? lock_release+0xd0/0x2b0
<6> [676.705320] schedule+0x3a/0x180
<6> [676.705334] schedule_timeout+0x117/0x130
<6> [676.705350] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [676.705363] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [676.705380] ? trace_hardirqs_on+0x22/0x100
<6> [676.705403] dma_fence_default_wait+0x1fe/0x2d0
<6> [676.705416] ? dma_fence_default_wait+0x125/0x2d0
<6> [676.705429] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [676.705453] dma_fence_wait_timeout+0x192/0x570
<6> [676.705473] dma_resv_wait_timeout+0x7b/0x100
<6> [676.705498] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [676.705527] process_one_work+0x239/0x760
<6> [676.705557] worker_thread+0x200/0x3f0
<6> [676.705569] ? __pfx_worker_thread+0x10/0x10
<6> [676.705580] kthread+0x10d/0x150
<6> [676.705595] ? __pfx_kthread+0x10/0x10
<6> [676.705613] ret_from_fork+0x3d4/0x480
<6> [676.705623] ? __pfx_kthread+0x10/0x10
<6> [676.705640] ret_from_fork_asm+0x1a/0x30
<6> [676.705676] </TASK>
<3> [676.705685] INFO: task kworker/u65:4:2217 blocked for more than 122 seconds.
<3> [676.705738] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [676.705801] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [676.705853] task:kworker/u65:4 state:D stack:0 pid:2217 tgid:2217 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [676.705868] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [676.705896] Call Trace:
<6> [676.705902] <TASK>
<6> [676.705914] __schedule+0x5bc/0x1dd0
<6> [676.705928] ? lock_acquire+0xc4/0x2f0
<6> [676.705948] ? schedule+0x10e/0x180
<6> [676.705964] ? lock_release+0xd0/0x2b0
<6> [676.705986] schedule+0x3a/0x180
<6> [676.706001] schedule_timeout+0x117/0x130
<6> [676.706016] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [676.706030] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [676.706044] ? trace_hardirqs_on+0x22/0x100
<6> [676.706066] dma_fence_default_wait+0x1fe/0x2d0
<6> [676.706079] ? dma_fence_default_wait+0x125/0x2d0
<6> [676.706093] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [676.706115] dma_fence_wait_timeout+0x192/0x570
<6> [676.706134] dma_resv_wait_timeout+0x7b/0x100
<6> [676.706180] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [676.706207] process_one_work+0x239/0x760
<6> [676.706235] worker_thread+0x200/0x3f0
<6> [676.706248] ? __pfx_worker_thread+0x10/0x10
<6> [676.706259] kthread+0x10d/0x150
<6> [676.706273] ? __pfx_kthread+0x10/0x10
<6> [676.706291] ret_from_fork+0x3d4/0x480
<6> [676.706299] ? __pfx_kthread+0x10/0x10
<6> [676.706316] ret_from_fork_asm+0x1a/0x30
<6> [676.706352] </TASK>
<3> [676.706358] INFO: task kworker/u65:9:2222 blocked for more than 122 seconds.
<3> [676.706409] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [676.706470] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [676.706523] task:kworker/u65:9 state:D stack:0 pid:2222 tgid:2222 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [676.706536] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [676.706562] Call Trace:
<6> [676.706567] <TASK>
<6> [676.706579] __schedule+0x5bc/0x1dd0
<6> [676.706593] ? lock_acquire+0xc4/0x2f0
<6> [676.706613] ? schedule+0x10e/0x180
<6> [676.706629] ? lock_release+0xd0/0x2b0
<6> [676.706651] schedule+0x3a/0x180
<6> [676.706666] schedule_timeout+0x117/0x130
<6> [676.706680] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [676.706692] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [676.706705] ? trace_hardirqs_on+0x22/0x100
<6> [676.706727] dma_fence_default_wait+0x1fe/0x2d0
<6> [676.706739] ? dma_fence_default_wait+0x125/0x2d0
<6> [676.706752] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [676.706773] dma_fence_wait_timeout+0x192/0x570
<6> [676.706793] dma_resv_wait_timeout+0x7b/0x100
<6> [676.706816] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [676.706842] process_one_work+0x239/0x760
<6> [676.706868] worker_thread+0x200/0x3f0
<6> [676.706880] ? __pfx_worker_thread+0x10/0x10
<6> [676.706889] kthread+0x10d/0x150
<6> [676.706902] ? __pfx_kthread+0x10/0x10
<6> [676.706920] ret_from_fork+0x3d4/0x480
<6> [676.706928] ? __pfx_kthread+0x10/0x10
<6> [676.706944] ret_from_fork_asm+0x1a/0x30
<6> [676.706978] </TASK>
<3> [676.706984] INFO: task kworker/u65:13:2226 blocked for more than 61 seconds.
<3> [676.707034] Tainted: G S U W L 7.0.0-rc4-lgci-xe-xe-4749-4ae9f18564e78a544-debug+ #1
<3> [676.707093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [676.707171] task:kworker/u65:13 state:D stack:0 pid:2226 tgid:2226 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [676.707183] Workqueue: ttm ttm_bo_delayed_delete [ttm]
<6> [676.707207] Call Trace:
<6> [676.707212] <TASK>
<6> [676.707223] __schedule+0x5bc/0x1dd0
<6> [676.707236] ? lock_acquire+0xc4/0x2f0
<6> [676.707255] ? schedule+0x10e/0x180
<6> [676.707270] ? lock_release+0xd0/0x2b0
<6> [676.707291] schedule+0x3a/0x180
<6> [676.707305] schedule_timeout+0x117/0x130
<6> [676.707319] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [676.707331] ? dma_fence_default_wait+0x1f6/0x2d0
<6> [676.707344] ? trace_hardirqs_on+0x22/0x100
<6> [676.707365] dma_fence_default_wait+0x1fe/0x2d0
<6> [676.707377] ? dma_fence_default_wait+0x125/0x2d0
<6> [676.707390] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [676.707411] dma_fence_wait_timeout+0x192/0x570
<6> [676.707429] dma_resv_wait_timeout+0x7b/0x100
<6> [676.707452] ttm_bo_delayed_delete+0x33/0xc0 [ttm]
<6> [676.707477] process_one_work+0x239/0x760
<6> [676.707503] worker_thread+0x200/0x3f0
<6> [676.707514] ? __pfx_worker_thread+0x10/0x10
<6> [676.707524] kthread+0x10d/0x150
<6> [676.707537] ? __pfx_kthread+0x10/0x10
<6> [676.707553] ret_from_fork+0x3d4/0x480
<6> [676.707560] ? __pfx_kthread+0x10/0x10
<6> [676.707576] ret_from_fork_asm+0x1a/0x30
<6> [676.707609] </TASK>
<6> [676.707614] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
<4> [676.707659]
Showing all locks held in the system:
<4> [676.707680] 1 lock held by khungtaskd/118:
<4> [676.707687] #0: ffffffff835c5be0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [676.707723] 3 locks held by kworker/u64:1/119:
<4> [676.707730] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.707754] #1: ffffc90000553e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.707776] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.707811] 3 locks held by kworker/u64:2/121:
<4> [676.707816] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.707838] #1: ffffc90000563e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.707859] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.707898] 3 locks held by kworker/u64:3/198:
<4> [676.707903] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.707925] #1: ffffc9000149be30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.707947] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.707976] 3 locks held by kworker/u64:11/206:
<4> [676.707982] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708004] #1: ffffc90001647e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708026] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.708062] 1 lock held by in:imklog/813:
<4> [676.708067] #0: ffff88812775ef38 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [676.708093] 3 locks held by kworker/u64:12/983:
<4> [676.708098] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708120] #1: ffffc9000211fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708251] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.708287] 3 locks held by dmesg/2090:
<4> [676.708294] 1 lock held by igt_runner/2207:
<4> [676.708302] 2 locks held by kworker/u65:2/2215:
<4> [676.708308] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708332] #1: ffffc9000362fe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708357] 2 locks held by kworker/u65:3/2216:
<4> [676.708364] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708387] #1: ffffc90003637e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708412] 2 locks held by kworker/u65:4/2217:
<4> [676.708418] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708441] #1: ffffc9000364fe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708465] 2 locks held by kworker/u65:7/2220:
<4> [676.708471] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708495] #1: ffffc90003667e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708520] 2 locks held by kworker/u65:9/2222:
<4> [676.708527] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708550] #1: ffffc90003793e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708572] 2 locks held by kworker/u65:13/2226:
<4> [676.708578] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708599] #1: ffffc90003d37e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708624] 3 locks held by kworker/u64:14/3363:
<4> [676.708629] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708651] #1: ffffc900029cfe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708672] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.708701] 2 locks held by kworker/u65:14/3365:
<4> [676.708706] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708728] #1: ffffc9000289be30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708751] 2 locks held by kworker/u65:15/3366:
<4> [676.708756] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.708777] #1: ffffc900026b7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.708800] 1 lock held by xe_exec_threads/5188:
<4> [676.708806] #0: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: xe_vma_userptr_invalidate_finish+0x44/0x490 [xe]
<4> [676.709069] 2 locks held by kworker/u65:0/5304:
<4> [676.709070] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709074] #1: ffffc900078b7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709078] 2 locks held by kworker/u65:1/5305:
<4> [676.709079] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709083] #1: ffffc900078bfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709087] 2 locks held by kworker/u65:5/5306:
<4> [676.709088] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709092] #1: ffffc900078c7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709096] 2 locks held by kworker/u65:6/5307:
<4> [676.709097] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709101] #1: ffffc900078cfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709105] 2 locks held by kworker/u65:8/5308:
<4> [676.709106] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709110] #1: ffffc900078d7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709114] 2 locks held by kworker/u65:10/5309:
<4> [676.709114] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709118] #1: ffffc900078dfe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709122] 2 locks held by kworker/u65:11/5310:
<4> [676.709123] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709127] #1: ffffc900078e7e30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709131] 2 locks held by kworker/u65:12/5311:
<4> [676.709132] #0: ffff8881249e8548 ((wq_completion)ttm){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709140] #1: ffffc900078efe30 ((work_completion)(&bo->delayed_delete)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709145] 3 locks held by kworker/u64:0/5313:
<4> [676.709146] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709150] #1: ffffc900078ffe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709155] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709161] 3 locks held by kworker/u64:4/5314:
<4> [676.709162] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709167] #1: ffffc900054ebe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709171] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709177] 3 locks held by kworker/u64:5/5315:
<4> [676.709178] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709182] #1: ffffc90007907e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709186] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709191] 3 locks held by kworker/u64:6/5319:
<4> [676.709192] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709196] #1: ffffc900039b3e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709200] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709206] 3 locks held by kworker/u64:7/5320:
<4> [676.709207] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709211] #1: ffffc90005333e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709215] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709221] 3 locks held by kworker/u64:8/5321:
<4> [676.709222] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709226] #1: ffffc90007927e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709230] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709236] 3 locks held by kworker/u64:9/5322:
<4> [676.709237] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709241] #1: ffffc900027b7e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709245] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709250] 3 locks held by kworker/u64:10/5323:
<4> [676.709251] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709255] #1: ffffc90002717e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709260] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709266] 3 locks held by kworker/u64:13/5324:
<4> [676.709267] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709271] #1: ffffc9000792fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709275] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709281] 3 locks held by kworker/u64:15/5326:
<4> [676.709282] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709285] #1: ffffc9000793fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709290] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709295] 3 locks held by kworker/u64:16/5327:
<4> [676.709296] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709300] #1: ffffc90007947e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709304] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709309] 3 locks held by kworker/u64:17/5328:
<4> [676.709310] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709314] #1: ffffc9000316be30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709319] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709324] 3 locks held by kworker/u64:18/5332:
<4> [676.709325] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709329] #1: ffffc90007967e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709333] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709339] 3 locks held by kworker/u64:19/5333:
<4> [676.709340] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709344] #1: ffffc9000796fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709348] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709354] 3 locks held by kworker/u64:20/5334:
<4> [676.709354] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709358] #1: ffffc90007977e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709362] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709368] 3 locks held by kworker/u64:21/5335:
<4> [676.709369] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709373] #1: ffffc9000797fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709377] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709382] 3 locks held by kworker/u64:22/5336:
<4> [676.709383] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709387] #1: ffffc90007987e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709391] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709397] 3 locks held by kworker/u64:23/5337:
<4> [676.709398] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709402] #1: ffffc9000798fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709405] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709410] 3 locks held by kworker/u64:24/5338:
<4> [676.709411] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709415] #1: ffffc90007997e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709420] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709425] 3 locks held by kworker/u64:25/5339:
<4> [676.709426] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709430] #1: ffffc9000799fe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709434] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709439] 3 locks held by kworker/u64:26/5340:
<4> [676.709440] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709445] #1: ffffc900079a7e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709448] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709454] 3 locks held by kworker/u64:27/5341:
<4> [676.709454] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709459] #1: ffffc900072ffe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709463] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709468] 3 locks held by kworker/u64:28/5342:
<4> [676.709469] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709473] #1: ffffc90007317e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709478] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709483] 3 locks held by kworker/u64:29/5343:
<4> [676.709483] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709487] #1: ffffc900079afe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709491] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709496] 3 locks held by kworker/u64:30/5344:
<4> [676.709497] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709502] #1: ffffc900079b7e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709506] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709511] 3 locks held by kworker/u64:31/5345:
<4> [676.709512] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709515] #1: ffffc900079bfe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709520] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709526] 3 locks held by kworker/u64:32/5346:
<4> [676.709526] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709530] #1: ffffc900079c7e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709534] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709540] 3 locks held by kworker/u64:33/5347:
<4> [676.709540] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709544] #1: ffffc90007537e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709548] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709554] 3 locks held by kworker/u64:34/5348:
<4> [676.709555] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709559] #1: ffffc90000367e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709563] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709568] 3 locks held by kworker/u64:35/5349:
<4> [676.709569] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709573] #1: ffffc900024bfe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709577] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709582] 3 locks held by kworker/u64:36/5350:
<4> [676.709583] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709587] #1: ffffc900079cfe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709591] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709597] 3 locks held by kworker/u64:38/5352:
<4> [676.709598] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709601] #1: ffffc900079dfe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709605] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709610] 3 locks held by kworker/u64:39/5353:
<4> [676.709612] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709616] #1: ffffc900079e7e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709620] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709625] 3 locks held by kworker/u64:40/5354:
<4> [676.709626] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709630] #1: ffffc900079efe30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709635] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709640] 3 locks held by kworker/u64:42/5356:
<4> [676.709640] #0: ffff8881002b1d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x4d8/0x760
<4> [676.709644] #1: ffffc90007387e30 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1f9/0x760
<4> [676.709649] #2: ffff888137034248 (&gpusvm->notifier_lock){++++}-{3:3}, at: drm_gpusvm_free_pages+0x25/0x90 [drm_gpusvm_helper]
<4> [676.709655]
<4> [676.709656] =============================================
<5> [678.037749] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967209, lrc_seqno=4294967209, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [678.037777] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [678.039615] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967209, lrc_seqno=4294967209, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [678.039642] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [678.040561] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967209, lrc_seqno=4294967209, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [678.040587] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [678.041237] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967210, lrc_seqno=4294967210, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [678.041263] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [678.991440] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [678.991453] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [678.991457] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [678.991461] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [679.047897] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2e
<7> [679.048042] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31303031
<4> [683.359339] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967209, lrc_seqno=4294967209, guc_id=20, not started
<4> [683.359433] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967210, lrc_seqno=4294967210, guc_id=4, not started
<4> [683.359472] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967209, lrc_seqno=4294967209, guc_id=17, not started
<4> [683.359505] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967209, lrc_seqno=4294967209, guc_id=35, not started
<4> [683.359545] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967209, lrc_seqno=4294967209, guc_id=50, not started
<4> [683.359600] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967209, lrc_seqno=4294967209, guc_id=7, not started
<4> [683.359655] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=159, lrc_seqno=159, guc_id=1, not started
<5> [687.643254] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967211, lrc_seqno=4294967211, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [687.643282] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [687.643880] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967211, lrc_seqno=4294967211, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [687.643892] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [687.644739] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967212, lrc_seqno=4294967212, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [687.644750] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [687.645470] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967211, lrc_seqno=4294967211, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [687.645481] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [687.752726] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [687.752751] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [687.752760] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [687.752769] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [693.087210] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967211, lrc_seqno=4294967211, guc_id=20, not started
<4> [693.087274] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967211, lrc_seqno=4294967211, guc_id=44, not started
<4> [693.087319] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967211, lrc_seqno=4294967211, guc_id=35, not started
<4> [693.087371] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967211, lrc_seqno=4294967211, guc_id=50, not started
<4> [693.087428] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967212, lrc_seqno=4294967212, guc_id=4, not started
<6> [693.199416] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [693.199424] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [693.199426] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [693.199428] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [694.034502] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312d2e
<7> [694.034655] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31303031
<6> [694.140324] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [694.140349] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [694.140358] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [694.140367] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [694.248055] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [694.248079] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [694.248088] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [694.248096] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [695.967024] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967213, lrc_seqno=4294967213, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [695.967050] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [695.967578] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967214, lrc_seqno=4294967214, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [695.967591] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [696.074731] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [696.074757] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [696.074766] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [696.074775] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [696.607507] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967213, lrc_seqno=4294967213, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [696.607556] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [696.608837] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967213, lrc_seqno=4294967213, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [696.608869] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [696.609815] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967214, lrc_seqno=4294967214, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [696.609835] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [696.716927] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [696.716952] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [696.716962] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [696.716971] nvme 0000:05:00.0: [ 0] RxErr (First)
<4> [701.279017] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967214, lrc_seqno=4294967214, guc_id=4, not started
<4> [701.279072] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967213, lrc_seqno=4294967213, guc_id=42, not started
<4> [701.279112] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967213, lrc_seqno=4294967213, guc_id=35, not started
<4> [701.279171] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967213, lrc_seqno=4294967213, guc_id=20, not started
<4> [701.279223] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967213, lrc_seqno=4294967213, guc_id=17, not started
<4> [701.279261] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967213, lrc_seqno=4294967213, guc_id=50, not started
<5> [704.931676] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967215, lrc_seqno=4294967215, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [704.931703] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [704.932217] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967216, lrc_seqno=4294967216, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [704.932230] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [705.571423] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967215, lrc_seqno=4294967215, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [705.571454] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [705.575262] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967215, lrc_seqno=4294967215, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [705.575289] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [705.575848] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967216, lrc_seqno=4294967216, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [705.575864] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [705.683477] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [705.683502] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [705.683511] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [705.683520] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [709.082908] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [709.083070] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31303031
<4> [709.982908] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967215, lrc_seqno=4294967215, guc_id=35, not started
<4> [709.983007] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967215, lrc_seqno=4294967215, guc_id=20, not started
<4> [709.983065] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967215, lrc_seqno=4294967215, guc_id=50, not started
<4> [709.983105] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967216, lrc_seqno=4294967216, guc_id=4, not started
<4> [709.983145] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967215, lrc_seqno=4294967215, guc_id=17, not started
<4> [709.983177] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967215, lrc_seqno=4294967215, guc_id=44, not started
<5> [713.255214] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967217, lrc_seqno=4294967217, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [713.255243] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [713.255920] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967218, lrc_seqno=4294967218, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [713.255955] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [713.897903] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967217, lrc_seqno=4294967217, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [713.897932] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [713.899286] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967217, lrc_seqno=4294967217, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [713.899313] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [713.899877] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967218, lrc_seqno=4294967218, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [713.899891] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [718.686737] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967217, lrc_seqno=4294967217, guc_id=20, not started
<4> [718.686851] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967217, lrc_seqno=4294967217, guc_id=42, not started
<4> [718.686911] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967217, lrc_seqno=4294967217, guc_id=44, not started
<4> [718.686975] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967218, lrc_seqno=4294967218, guc_id=4, not started
<4> [718.687038] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967217, lrc_seqno=4294967217, guc_id=35, not started
<4> [718.687082] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967217, lrc_seqno=4294967217, guc_id=7, not started
<5> [722.217739] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967219, lrc_seqno=4294967219, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [722.217765] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [722.218404] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967220, lrc_seqno=4294967220, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [722.218434] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [722.326479] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [722.326511] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [722.326522] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [722.326532] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [722.435474] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [722.435498] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [722.435508] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [722.435517] nvme 0000:05:00.0: [ 0] RxErr (First)
<5> [722.857997] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967219, lrc_seqno=4294967219, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [722.858023] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [722.860272] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967220, lrc_seqno=4294967220, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [722.860304] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [722.861469] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967219, lrc_seqno=4294967219, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [722.861491] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [724.062402] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [724.062565] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x30303031
<4> [727.390444] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967219, lrc_seqno=4294967219, guc_id=35, not started
<4> [727.390946] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967219, lrc_seqno=4294967219, guc_id=44, not started
<4> [727.390994] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967219, lrc_seqno=4294967219, guc_id=20, not started
<4> [727.391055] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967219, lrc_seqno=4294967219, guc_id=50, not started
<4> [727.391113] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967219, lrc_seqno=4294967219, guc_id=17, not started
<4> [727.391271] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967220, lrc_seqno=4294967220, guc_id=4, not started
<5> [731.180494] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967221, lrc_seqno=4294967221, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [731.180526] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [731.181208] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967222, lrc_seqno=4294967222, guc_id=29, flags=0x0 in xe_exec_threads [5182]
<7> [731.181230] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [731.822521] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967221, lrc_seqno=4294967221, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [731.822548] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [731.824637] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967222, lrc_seqno=4294967222, guc_id=27, flags=0x0 in xe_exec_threads [5182]
<7> [731.824651] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<5> [731.825036] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967221, lrc_seqno=4294967221, guc_id=28, flags=0x0 in xe_exec_threads [5182]
<7> [731.825064] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<4> [736.606364] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967222, lrc_seqno=4294967222, guc_id=4, not started
<4> [736.606544] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967221, lrc_seqno=4294967221, guc_id=42, not started
<4> [736.606596] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967221, lrc_seqno=4294967221, guc_id=7, not started
<4> [736.606634] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967221, lrc_seqno=4294967221, guc_id=35, not started
<4> [736.606689] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967221, lrc_seqno=4294967221, guc_id=17, not started
<4> [736.606725] xe 0000:03:00.0: [drm] Tile0: GT0: Check job timeout: seqno=4294967221, lrc_seqno=4294967221, guc_id=44, not started