Result: 2 Warning(s)
i915_display_info10 igt_runner10 results10.json results10-xe-load.json guc_logs10.tar i915_display_info_post_exec10 boot10 dmesg10
| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-bmg-2 |
| Igt-Version | IGT-Version: 2.3-g2020b0bf9 (x86_64) (Linux: 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ x86_64) |
| Out | Using IGT_SRANDOM=1771528747 for randomisation Opened device: /dev/dri/card0 Starting subtest: threads-bal-mixed-fd-userptr This test caused an abort condition: Child refuses to die, tainted 0x4244. |
| Err | Starting subtest: threads-bal-mixed-fd-userptr (xe_exec_threads:11899) [thread:11905] xe/xe_ioctl-CRITICAL: Test assertion failure function xe_wait_ufence, file ../lib/xe/xe_ioctl.c:712: (xe_exec_threads:11899) [thread:11905] xe/xe_ioctl-CRITICAL: Failed assertion: __xe_wait_ufence(fd, addr, value, exec_queue, &timeout) == 0 (xe_exec_threads:11899) [thread:11905] xe/xe_ioctl-CRITICAL: Last errno: 62, Timer expired (xe_exec_threads:11899) [thread:11905] xe/xe_ioctl-CRITICAL: error: -62 != 0 (xe_exec_threads:11899) [thread:11903] xe/xe_ioctl-CRITICAL: Test assertion failure function xe_wait_ufence, file ../lib/xe/xe_ioctl.c:712: (xe_exec_threads:11899) [thread:11903] xe/xe_ioctl-CRITICAL: Failed assertion: __xe_wait_ufence(fd, addr, value, exec_queue, &timeout) == 0 (xe_exec_threads:11899) [thread:11903] xe/xe_ioctl-CRITICAL: Last errno: 62, Timer expired (xe_exec_threads:11899) [thread:11903] xe/xe_ioctl-CRITICAL: error: -62 != 0 Received signal SIGQUIT. Stack trace: |
| Dmesg |
<6> [651.216639] Console: switching to colour dummy device 80x25
<6> [651.216936] [IGT] xe_exec_threads: executing
<6> [651.225714] [IGT] xe_exec_threads: starting subtest threads-bal-mixed-fd-userptr
<6> [651.368853] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vecs, logical_mask: 0x1, guc_id=32, state=0xc3
<6> [651.369157] xe 0000:03:00.0: [drm] Tile0: GT1:
ASID: 952
Faulted Address: 0x00000000001a0000
FaultType: 0
AccessType: 0
FaultLevel: 4
EngineClass: 2 vecs
EngineInstance: 0
<6> [651.369163] xe 0000:03:00.0: [drm] Tile0: GT1: Fault response: Unsuccessful -EINVAL
<6> [651.369263] xe 0000:03:00.0: [drm] Tile0: GT1: Engine memory CAT error [18]: class=vecs, logical_mask: 0x1, guc_id=34
<6> [651.782597] xe 0000:03:00.0: [drm] Tile0: GT1: Engine reset: engine_class=vcs, logical_mask: 0x2, guc_id=16, state=0xc3
<3> [652.863072] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: GuC engine reset request failed on 1:2 because 0x00000000
<6> [652.864001] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from xe_guc_exec_queue_reset_failure_handler [xe]
<6> [652.864499] xe 0000:03:00.0: [drm] Tile0: GT1: reset queued
<7> [652.864574] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.864954] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.866129] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.866696] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.867458] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.867809] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.868222] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.868586] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.868971] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.869263] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.869745] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870056] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870302] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870356] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870438] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870493] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870579] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870631] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870702] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870757] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870820] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870871] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.870940] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.870993] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871062] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871116] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871189] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871263] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871349] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871418] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871489] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871549] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871625] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871680] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871754] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871813] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.871887] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.871944] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872020] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872091] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872177] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872237] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872315] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872375] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872463] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872520] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872593] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872653] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872727] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872784] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872854] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.872912] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.872987] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873043] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873114] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873172] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873246] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873302] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873371] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873446] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873522] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873577] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873647] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873706] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873780] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873836] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.873904] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.873962] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874035] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874091] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874160] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874219] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874292] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874348] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874428] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874487] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874562] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874617] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874686] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874745] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874819] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.874874] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.874943] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875002] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.875076] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875131] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.875201] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875260] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.875333] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875395] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.875467] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875526] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [652.875600] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [652.875656] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<6> [652.875759] xe 0000:03:00.0: [drm] Tile0: GT1: reset started
<7> [652.875884] xe 0000:03:00.0: [drm:guc_ct_change_state [xe]] Tile0: GT1: GuC CT communication channel stopped
<3> [652.882397] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: failed to clear GRDOM_FULL (-ETIMEDOUT)
<6> [655.181600] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [655.182693] xe 0000:03:00.0: [drm] Tile0: GT1: reset queued
<7> [656.827985] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x2d2d2a2b
<7> [656.828183] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x2d2d2d2d
<6> [657.477514] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [657.585371] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [657.585395] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [657.585403] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [657.585412] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [659.773490] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [662.069422] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [662.182714] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [662.182739] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [662.182747] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [662.182756] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [662.291479] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [662.291503] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [662.291511] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [662.291519] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [662.399327] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [662.399351] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [662.399359] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [662.399368] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [664.365402] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [664.473614] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [664.473638] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [664.473646] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [664.473655] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [666.661356] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [668.957291] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [671.253157] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [671.366092] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [671.366116] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [671.366124] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [671.366132] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [671.821314] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x2e2f2b2c
<7> [671.821482] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x2e2d2d2e
<6> [673.549210] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [675.845122] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [678.141004] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [680.436962] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [682.280616] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [682.280640] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [682.280648] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [682.280657] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [682.732969] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [682.841150] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [682.841174] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [682.841182] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [682.841191] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [685.028993] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [686.828865] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x2e2f2c2c
<7> [686.829005] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x2e2e2e2f
<6> [686.935393] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [686.935417] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [686.935425] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [686.935434] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [687.324924] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [689.620859] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [691.916730] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [692.029957] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [692.029981] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [692.029989] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [692.029998] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [694.212541] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [696.508697] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [698.804610] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [701.100591] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [701.213273] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [701.213297] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [701.213305] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [701.213313] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [701.320473] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [701.320497] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [701.320505] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [701.320513] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [701.856672] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x2f302c2d
<7> [701.856852] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x2f2f2f2f
<6> [701.963469] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [701.963493] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [701.963501] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [701.963510] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [703.396577] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [705.692489] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [707.988410] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [710.284355] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [712.580385] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [712.693013] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [712.693037] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [712.693045] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [712.693053] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [712.800750] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [712.800774] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [712.800782] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [712.800790] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [714.876330] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [716.911395] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x2f302d2d
<7> [716.911770] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x2f2f2f30
<6> [717.172284] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [719.468272] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [721.764190] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [724.060171] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [726.356143] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [728.652142] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [730.948097] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [731.831352] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [731.831631] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x302f2f30
<6> [733.243990] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [733.352473] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [733.352497] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [733.352505] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [733.352514] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [733.461250] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [733.461274] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [733.461281] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [733.461290] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [735.539842] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [735.647764] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [735.647788] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [735.647797] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [735.647805] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [737.835837] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [737.949182] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [737.949207] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [737.949215] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [737.949224] nvme 0000:05:00.0: [ 0] RxErr (First)
<3> [738.492069] INFO: task kworker/u64:33:6203 blocked for more than 61 seconds.
<3> [738.492786] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [738.492854] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [738.492906] task:kworker/u64:33 state:D stack:0 pid:6203 tgid:6203 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [738.492926] Workqueue: events_unbound vma_destroy_work_func [xe]
<6> [738.493532] Call Trace:
<6> [738.493539] <TASK>
<6> [738.493558] __schedule+0x5bf/0x1d20
<6> [738.493576] ? lock_acquire+0xc4/0x2c0
<6> [738.493598] ? schedule+0x10e/0x180
<6> [738.493613] ? lock_release+0xce/0x280
<6> [738.493638] schedule+0x3a/0x180
<6> [738.493686] mmu_interval_notifier_remove+0x22d/0x2a0
<6> [738.493707] ? __pfx_autoremove_wake_function+0x10/0x10
<6> [738.493733] xe_userptr_remove+0x37/0x50 [xe]
<6> [738.494235] xe_vma_destroy_late+0x86/0x90 [xe]
<6> [738.494813] vma_destroy_work_func+0x15/0x20 [xe]
<6> [738.495357] process_one_work+0x22e/0x6b0
<6> [738.495395] worker_thread+0x1e8/0x3d0
<6> [738.495411] ? __pfx_worker_thread+0x10/0x10
<6> [738.495423] kthread+0x11f/0x250
<6> [738.495440] ? __pfx_kthread+0x10/0x10
<6> [738.495459] ret_from_fork+0x344/0x3a0
<6> [738.495470] ? __pfx_kthread+0x10/0x10
<6> [738.495485] ret_from_fork_asm+0x1a/0x30
<6> [738.495527] </TASK>
<3> [738.495570] INFO: task xe_exec_threads:11902 blocked for more than 61 seconds.
<3> [738.495625] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [738.495712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [738.495767] task:xe_exec_threads state:D stack:0 pid:11902 tgid:11899 ppid:2201 task_flags:0x400140 flags:0x00080000
<6> [738.495784] Call Trace:
<6> [738.495790] <TASK>
<6> [738.495806] __schedule+0x5bf/0x1d20
<6> [738.495820] ? lock_acquire+0xc4/0x2c0
<6> [738.495840] ? schedule+0x10e/0x180
<6> [738.495855] ? lock_release+0xce/0x280
<6> [738.495875] ? dma_fence_default_wait+0x100/0x2b0
<6> [738.495894] schedule+0x3a/0x180
<6> [738.495906] schedule_timeout+0x117/0x130
<6> [738.495914] ? mark_held_locks+0x46/0x90
<6> [738.495931] ? dma_fence_default_wait+0x100/0x2b0
<6> [738.495948] ? trace_hardirqs_on+0x63/0xd0
<6> [738.495970] dma_fence_default_wait+0x22a/0x2b0
<6> [738.495988] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [738.496011] dma_fence_wait_timeout+0x332/0x420
<6> [738.496032] dma_resv_wait_timeout+0x7b/0x100
<6> [738.496061] vma_userptr_invalidate+0x29c/0x520 [xe]
<6> [738.496529] ? lock_acquire+0xc4/0x2c0
<6> [738.496573] __mmu_notifier_invalidate_range_start+0x222/0x2c0
<6> [738.496592] __split_huge_pmd+0x191/0x1b0
<6> [738.496619] vma_adjust_trans_huge+0x10d/0x1b0
<6> [738.496639] __split_vma+0x225/0x330
<6> [738.496697] vms_gather_munmap_vmas+0x4d/0x2d0
<6> [738.496710] ? trace_hardirqs_on+0x63/0xd0
<6> [738.496729] do_vmi_align_munmap+0x16a/0x220
<6> [738.496741] ? __pm_runtime_idle+0xb1/0x130
<6> [738.496771] ? lock_is_held_type+0xa3/0x130
<6> [738.496825] do_vmi_munmap+0xd3/0x1a0
<6> [738.496849] __vm_munmap+0xc3/0x1b0
<6> [738.496888] __x64_sys_munmap+0x1b/0x30
<6> [738.496900] x64_sys_call+0x1c00/0x26b0
<6> [738.496910] do_syscall_64+0x93/0x1470
<6> [738.496924] ? _copy_to_user+0x23/0x70
<6> [738.496944] ? drm_ioctl+0x30f/0x5a0
<6> [738.496960] ? __pfx_drm_syncobj_destroy_ioctl+0x10/0x10
<6> [738.496993] ? trace_hardirqs_on+0x63/0xd0
<6> [738.497007] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [738.497021] ? ktime_get_mono_fast_ns+0x3c/0xd0
<6> [738.497033] ? __pm_runtime_idle+0xb1/0x130
<6> [738.497050] ? xe_pm_runtime_put+0x64/0x110 [xe]
<6> [738.497547] ? xe_drm_ioctl+0x7c/0xb0 [xe]
<6> [738.497653] ? __x64_sys_ioctl+0xbf/0x100
<6> [738.497659] ? do_syscall_64+0x1e4/0x1470
<6> [738.497663] ? do_syscall_64+0x1e4/0x1470
<6> [738.497668] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [738.497670] RIP: 0033:0x7a8f2bb25d7b
<6> [738.497673] RSP: 002b:00007a8f28859518 EFLAGS: 00000207 ORIG_RAX: 000000000000000b
<6> [738.497677] RAX: ffffffffffffffda RBX: fffffffffffffed8 RCX: 00007a8f2bb25d7b
<6> [738.497679] RDX: 0000000000000000 RSI: 00000000000a1000 RDI: 00007a8f23e70000
<6> [738.497680] RBP: 00007a8f28859550 R08: 00000000000a0010 R09: 0000000000000400
<6> [738.497682] R10: 0000000000000010 R11: 0000000000000207 R12: 0000000000000000
<6> [738.497684] R13: 00007a8f23e70ff0 R14: 00007a8f288596f0 R15: 00007a8f28859670
<6> [738.497693] </TASK>
<4> [738.497712]
Showing all locks held in the system:
<4> [738.497720] 1 lock held by khungtaskd/117:
<4> [738.497722] #0: ffffffff83593f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [738.497734] 3 locks held by kworker/u64:8/203:
<4> [738.497736] #0: ffff8881578a5948 ((wq_completion)gt-ordered-wq#2){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [738.497745] #1: ffffc900014c3e20 ((work_completion)(&gt->reset.worker)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [738.497752] #2: ffff88812f899558 (&guc->submission_state.lock){+.+.}-{3:3}, at: xe_guc_submit_start+0x45/0x4c0 [xe]
<4> [738.497871] 1 lock held by in:imklog/794:
<4> [738.497873] #0: ffff88812b814338 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [738.497880] 5 locks held by rs:main Q:Reg/795:
<4> [738.497891] 2 locks held by kworker/u64:33/6203:
<4> [738.497892] #0: ffff8881002b9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [738.497898] #1: ffffc90005c07e20 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [738.497909] 1 lock held by xe_exec_threads/11900:
<4> [738.497911] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [738.497917] 4 locks held by xe_exec_threads/11902:
<4> [738.497919] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [738.497924] #1: ffff888133d19710 (&anon_vma->rwsem){++++}-{3:3}, at: vma_prepare+0x9a/0x170
<4> [738.497931] #2: ffffffff836d5b40 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}, at: __split_huge_pmd+0x7a/0x1b0
<4> [738.497938] #3: ffff888118a19248 (&gpusvm->notifier_lock){++++}-{3:3}, at: vma_userptr_invalidate+0x1b9/0x520 [xe]
<4> [738.498052] 3 locks held by xe_exec_threads/11903:
<4> [738.498053] #0: ffff888113412738 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [738.498060] #1: ffff888122e213e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [738.498066] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [738.498074] 3 locks held by xe_exec_threads/11905:
<4> [738.498075] #0: ffff88812f156938 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [738.498081] #1: ffff888122e21e20 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [738.498087] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [738.498100]
<4> [738.498101] =============================================
<6> [740.131829] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [742.427793] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [744.723803] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [746.876061] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x30312d2e
<7> [746.876224] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x30303031
<6> [747.019752] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [749.315716] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [751.611698] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [753.907641] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [754.020740] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [754.020765] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [754.020773] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [754.020782] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [754.128699] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [754.128723] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [754.128731] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [754.128740] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [756.203641] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [756.311771] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [756.311795] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [756.311803] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [756.311811] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [756.421561] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [756.421585] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [756.421593] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [756.421602] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [758.499544] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [760.795509] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [760.907189] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [760.907213] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [760.907221] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [760.907230] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [761.842001] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31312e2e
<7> [761.842156] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31303031
<6> [763.091547] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [763.199618] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [763.199641] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [763.199649] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [763.199658] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [765.387427] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [767.683298] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [769.979359] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [772.275314] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [774.571333] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [776.861981] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31322e2e
<7> [776.862155] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313031
<6> [776.866998] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [779.163017] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [779.272843] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [779.272867] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [779.272875] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [779.272884] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [779.382163] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [779.382188] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [779.382195] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [779.382204] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [781.459188] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [781.567100] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [781.567124] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [781.567132] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [781.567141] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [783.755142] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [786.051034] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [786.164285] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [786.164309] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [786.164317] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [786.164326] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [788.347100] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [790.643054] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [791.831920] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31322e2f
<7> [791.832074] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313131
<6> [792.938992] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [795.234957] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [797.530845] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [799.826867] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<3> [799.931006] INFO: task kworker/u64:33:6203 blocked for more than 122 seconds.
<3> [799.931065] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [799.931108] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [799.931145] task:kworker/u64:33 state:D stack:0 pid:6203 tgid:6203 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [799.931164] Workqueue: events_unbound vma_destroy_work_func [xe]
<6> [799.931702] Call Trace:
<6> [799.931709] <TASK>
<6> [799.931724] __schedule+0x5bf/0x1d20
<6> [799.931741] ? lock_acquire+0xc4/0x2c0
<6> [799.931760] ? schedule+0x10e/0x180
<6> [799.931774] ? lock_release+0xce/0x280
<6> [799.931795] schedule+0x3a/0x180
<6> [799.931807] mmu_interval_notifier_remove+0x22d/0x2a0
<6> [799.931826] ? __pfx_autoremove_wake_function+0x10/0x10
<6> [799.931850] xe_userptr_remove+0x37/0x50 [xe]
<6> [799.932271] xe_vma_destroy_late+0x86/0x90 [xe]
<6> [799.932746] vma_destroy_work_func+0x15/0x20 [xe]
<6> [799.933179] process_one_work+0x22e/0x6b0
<6> [799.933209] worker_thread+0x1e8/0x3d0
<6> [799.933223] ? __pfx_worker_thread+0x10/0x10
<6> [799.933233] kthread+0x11f/0x250
<6> [799.933250] ? __pfx_kthread+0x10/0x10
<6> [799.933266] ret_from_fork+0x344/0x3a0
<6> [799.933276] ? __pfx_kthread+0x10/0x10
<6> [799.933290] ret_from_fork_asm+0x1a/0x30
<6> [799.933321] </TASK>
<3> [799.933356] INFO: task xe_exec_threads:11902 blocked for more than 122 seconds.
<3> [799.933397] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [799.933439] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [799.933475] task:xe_exec_threads state:D stack:0 pid:11902 tgid:11899 ppid:2201 task_flags:0x400140 flags:0x00080000
<6> [799.933489] Call Trace:
<6> [799.933494] <TASK>
<6> [799.933505] __schedule+0x5bf/0x1d20
<6> [799.933516] ? lock_acquire+0xc4/0x2c0
<6> [799.933533] ? schedule+0x10e/0x180
<6> [799.933547] ? lock_release+0xce/0x280
<6> [799.933565] ? dma_fence_default_wait+0x100/0x2b0
<6> [799.933581] schedule+0x3a/0x180
<6> [799.933592] schedule_timeout+0x117/0x130
<6> [799.933601] ? mark_held_locks+0x46/0x90
<6> [799.933616] ? dma_fence_default_wait+0x100/0x2b0
<6> [799.933652] ? trace_hardirqs_on+0x63/0xd0
<6> [799.933671] dma_fence_default_wait+0x22a/0x2b0
<6> [799.933687] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [799.933707] dma_fence_wait_timeout+0x332/0x420
<6> [799.933725] dma_resv_wait_timeout+0x7b/0x100
<6> [799.933748] vma_userptr_invalidate+0x29c/0x520 [xe]
<6> [799.934147] ? lock_acquire+0xc4/0x2c0
<6> [799.934181] __mmu_notifier_invalidate_range_start+0x222/0x2c0
<6> [799.934198] __split_huge_pmd+0x191/0x1b0
<6> [799.934221] vma_adjust_trans_huge+0x10d/0x1b0
<6> [799.934238] __split_vma+0x225/0x330
<6> [799.934289] vms_gather_munmap_vmas+0x4d/0x2d0
<6> [799.934307] ? trace_hardirqs_on+0x63/0xd0
<6> [799.934334] do_vmi_align_munmap+0x16a/0x220
<6> [799.934347] ? __pm_runtime_idle+0xb1/0x130
<6> [799.934371] ? lock_is_held_type+0xa3/0x130
<6> [799.934412] do_vmi_munmap+0xd3/0x1a0
<6> [799.934433] __vm_munmap+0xc3/0x1b0
<6> [799.934462] __x64_sys_munmap+0x1b/0x30
<6> [799.934473] x64_sys_call+0x1c00/0x26b0
<6> [799.934482] do_syscall_64+0x93/0x1470
<6> [799.934504] ? _copy_to_user+0x23/0x70
<6> [799.934531] ? drm_ioctl+0x30f/0x5a0
<6> [799.934550] ? __pfx_drm_syncobj_destroy_ioctl+0x10/0x10
<6> [799.934592] ? trace_hardirqs_on+0x63/0xd0
<6> [799.934605] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [799.934618] ? ktime_get_mono_fast_ns+0x3c/0xd0
<6> [799.934655] ? __pm_runtime_idle+0xb1/0x130
<6> [799.934669] ? xe_pm_runtime_put+0x64/0x110 [xe]
<6> [799.935072] ? xe_drm_ioctl+0x7c/0xb0 [xe]
<6> [799.935368] ? __x64_sys_ioctl+0xbf/0x100
<6> [799.935386] ? do_syscall_64+0x1e4/0x1470
<6> [799.935402] ? do_syscall_64+0x1e4/0x1470
<6> [799.935416] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [799.935425] RIP: 0033:0x7a8f2bb25d7b
<6> [799.935437] RSP: 002b:00007a8f28859518 EFLAGS: 00000207 ORIG_RAX: 000000000000000b
<6> [799.935448] RAX: ffffffffffffffda RBX: fffffffffffffed8 RCX: 00007a8f2bb25d7b
<6> [799.935455] RDX: 0000000000000000 RSI: 00000000000a1000 RDI: 00007a8f23e70000
<6> [799.935460] RBP: 00007a8f28859550 R08: 00000000000a0010 R09: 0000000000000400
<6> [799.935466] R10: 0000000000000010 R11: 0000000000000207 R12: 0000000000000000
<6> [799.935472] R13: 00007a8f23e70ff0 R14: 00007a8f288596f0 R15: 00007a8f28859670
<6> [799.935501] </TASK>
<4> [799.935542]
Showing all locks held in the system:
<4> [799.935563] 1 lock held by khungtaskd/117:
<4> [799.935570] #0: ffffffff83593f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [799.935613] 3 locks held by kworker/u64:8/203:
<4> [799.935638] #0: ffff8881578a5948 ((wq_completion)gt-ordered-wq#2){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [799.935670] #1: ffffc900014c3e20 ((work_completion)(&gt->reset.worker)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [799.935694] #2: ffff88812f899558 (&guc->submission_state.lock){+.+.}-{3:3}, at: xe_guc_submit_start+0x45/0x4c0 [xe]
<4> [799.936257] 1 lock held by in:imklog/794:
<4> [799.936267] #0: ffff88812b814338 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [799.936321] 4 locks held by dmesg/2083:
<4> [799.936347] 2 locks held by kworker/u64:33/6203:
<4> [799.936357] #0: ffff8881002b9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [799.936400] #1: ffffc90005c07e20 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [799.936456] 1 lock held by xe_exec_threads/11899:
<4> [799.936466] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x17d/0x730
<4> [799.936515] 1 lock held by xe_exec_threads/11900:
<4> [799.936524] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [799.936566] 4 locks held by xe_exec_threads/11902:
<4> [799.936575] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [799.936616] #1: ffff888133d19710 (&anon_vma->rwsem){++++}-{3:3}, at: vma_prepare+0x9a/0x170
<4> [799.936675] #2: ffffffff836d5b40 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}, at: __split_huge_pmd+0x7a/0x1b0
<4> [799.936683] #3: ffff888118a19248 (&gpusvm->notifier_lock){++++}-{3:3}, at: vma_userptr_invalidate+0x1b9/0x520 [xe]
<4> [799.936812] 3 locks held by xe_exec_threads/11903:
<4> [799.936814] #0: ffff888113412738 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [799.936823] #1: ffff888122e213e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [799.936831] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [799.936839] 3 locks held by xe_exec_threads/11905:
<4> [799.936841] #0: ffff88812f156938 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [799.936849] #1: ffff888122e21e20 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [799.936856] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [799.936868]
<4> [799.936870] =============================================
<6> [802.122832] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [804.418843] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [806.714790] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [806.822981] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [806.822988] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [806.822990] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [806.822992] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [806.825338] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31322e2f
<7> [806.825475] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313132
<6> [809.010743] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [810.276074] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [810.276098] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [810.276106] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [810.276115] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [811.306680] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [811.414814] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [811.414838] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [811.414846] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [811.414854] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [813.602672] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [813.715632] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [813.715656] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [813.715664] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [813.715673] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [815.396896] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [815.396920] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [815.396928] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [815.396937] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [815.505338] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [815.505361] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [815.505369] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [815.505463] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [815.898522] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [816.006453] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [816.006477] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [816.006486] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [816.006494] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [818.194640] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [820.490566] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [821.833239] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x31322f2f
<7> [821.833413] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313132
<6> [822.786444] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [825.082535] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [827.378369] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [827.490064] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [827.490088] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [827.490096] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [827.490104] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [829.674394] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [829.787656] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [829.787680] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [829.787688] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [829.787696] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [831.970384] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [834.266355] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [836.562237] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [836.670460] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [836.670484] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [836.670492] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [836.670500] nvme 0000:05:00.0: [ 0] RxErr (First)
<7> [836.834753] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x32322f2f
<7> [836.834999] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x31313132
<6> [838.858345] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [841.154238] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [841.263116] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [841.263141] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [841.263149] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [841.263157] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [843.450168] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [843.563095] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [843.563119] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [843.563127] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [843.563135] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [845.746206] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [848.042161] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [848.150211] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [848.150235] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [848.150243] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [848.150252] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [850.338177] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<7> [851.920359] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x32322f2f
<7> [851.920520] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x32313232
<6> [852.026996] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [852.027020] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [852.027028] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [852.027037] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [852.634095] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [854.930065] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [857.225942] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [857.337657] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [857.337680] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [857.337688] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [857.337697] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [859.521951] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<3> [861.370080] INFO: task kworker/u64:33:6203 blocked for more than 184 seconds.
<3> [861.370141] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [861.370185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [861.370221] task:kworker/u64:33 state:D stack:0 pid:6203 tgid:6203 ppid:2 task_flags:0x4208060 flags:0x00080000
<6> [861.370241] Workqueue: events_unbound vma_destroy_work_func [xe]
<6> [861.370894] Call Trace:
<6> [861.370903] <TASK>
<6> [861.370921] __schedule+0x5bf/0x1d20
<6> [861.370939] ? lock_acquire+0xc4/0x2c0
<6> [861.370959] ? schedule+0x10e/0x180
<6> [861.370972] ? lock_release+0xce/0x280
<6> [861.370993] schedule+0x3a/0x180
<6> [861.371004] mmu_interval_notifier_remove+0x22d/0x2a0
<6> [861.371022] ? __pfx_autoremove_wake_function+0x10/0x10
<6> [861.371045] xe_userptr_remove+0x37/0x50 [xe]
<6> [861.371471] xe_vma_destroy_late+0x86/0x90 [xe]
<6> [861.371956] vma_destroy_work_func+0x15/0x20 [xe]
<6> [861.372383] process_one_work+0x22e/0x6b0
<6> [861.372413] worker_thread+0x1e8/0x3d0
<6> [861.372426] ? __pfx_worker_thread+0x10/0x10
<6> [861.372437] kthread+0x11f/0x250
<6> [861.372453] ? __pfx_kthread+0x10/0x10
<6> [861.372469] ret_from_fork+0x344/0x3a0
<6> [861.372481] ? __pfx_kthread+0x10/0x10
<6> [861.372495] ret_from_fork_asm+0x1a/0x30
<6> [861.372526] </TASK>
<3> [861.372561] INFO: task xe_exec_threads:11902 blocked for more than 184 seconds.
<3> [861.372603] Tainted: G S U W L 6.19.0-lgci-xe-xe-4576-cc2c646d39200973c-debug+ #1
<3> [861.372644] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6> [861.372680] task:xe_exec_threads state:D stack:0 pid:11902 tgid:11899 ppid:2201 task_flags:0x400140 flags:0x00080000
<6> [861.372696] Call Trace:
<6> [861.372713] <TASK>
<6> [861.372724] __schedule+0x5bf/0x1d20
<6> [861.372736] ? lock_acquire+0xc4/0x2c0
<6> [861.372752] ? schedule+0x10e/0x180
<6> [861.372766] ? lock_release+0xce/0x280
<6> [861.372784] ? dma_fence_default_wait+0x100/0x2b0
<6> [861.372801] schedule+0x3a/0x180
<6> [861.372812] schedule_timeout+0x117/0x130
<6> [861.372821] ? mark_held_locks+0x46/0x90
<6> [861.372836] ? dma_fence_default_wait+0x100/0x2b0
<6> [861.372852] ? trace_hardirqs_on+0x63/0xd0
<6> [861.372871] dma_fence_default_wait+0x22a/0x2b0
<6> [861.372887] ? __pfx_dma_fence_default_wait_cb+0x10/0x10
<6> [861.372907] dma_fence_wait_timeout+0x332/0x420
<6> [861.372924] dma_resv_wait_timeout+0x7b/0x100
<6> [861.372947] vma_userptr_invalidate+0x29c/0x520 [xe]
<6> [861.373352] ? lock_acquire+0xc4/0x2c0
<6> [861.373386] __mmu_notifier_invalidate_range_start+0x222/0x2c0
<6> [861.373403] __split_huge_pmd+0x191/0x1b0
<6> [861.373426] vma_adjust_trans_huge+0x10d/0x1b0
<6> [861.373443] __split_vma+0x225/0x330
<6> [861.373473] vms_gather_munmap_vmas+0x4d/0x2d0
<6> [861.373485] ? trace_hardirqs_on+0x63/0xd0
<6> [861.373502] do_vmi_align_munmap+0x16a/0x220
<6> [861.373514] ? __pm_runtime_idle+0xb1/0x130
<6> [861.373538] ? lock_is_held_type+0xa3/0x130
<6> [861.373578] do_vmi_munmap+0xd3/0x1a0
<6> [861.373599] __vm_munmap+0xc3/0x1b0
<6> [861.373628] __x64_sys_munmap+0x1b/0x30
<6> [861.373639] x64_sys_call+0x1c00/0x26b0
<6> [861.373649] do_syscall_64+0x93/0x1470
<6> [861.373661] ? _copy_to_user+0x23/0x70
<6> [861.373679] ? drm_ioctl+0x30f/0x5a0
<6> [861.373691] ? __pfx_drm_syncobj_destroy_ioctl+0x10/0x10
<6> [861.373734] ? trace_hardirqs_on+0x63/0xd0
<6> [861.373749] ? _raw_spin_unlock_irqrestore+0x51/0x80
<6> [861.373762] ? ktime_get_mono_fast_ns+0x3c/0xd0
<6> [861.373773] ? __pm_runtime_idle+0xb1/0x130
<6> [861.373789] ? xe_pm_runtime_put+0x64/0x110 [xe]
<6> [861.374208] ? xe_drm_ioctl+0x7c/0xb0 [xe]
<6> [861.374509] ? __x64_sys_ioctl+0xbf/0x100
<6> [861.374527] ? do_syscall_64+0x1e4/0x1470
<6> [861.374543] ? do_syscall_64+0x1e4/0x1470
<6> [861.374558] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<6> [861.374567] RIP: 0033:0x7a8f2bb25d7b
<6> [861.374579] RSP: 002b:00007a8f28859518 EFLAGS: 00000207 ORIG_RAX: 000000000000000b
<6> [861.374590] RAX: ffffffffffffffda RBX: fffffffffffffed8 RCX: 00007a8f2bb25d7b
<6> [861.374597] RDX: 0000000000000000 RSI: 00000000000a1000 RDI: 00007a8f23e70000
<6> [861.374603] RBP: 00007a8f28859550 R08: 00000000000a0010 R09: 0000000000000400
<6> [861.374608] R10: 0000000000000010 R11: 0000000000000207 R12: 0000000000000000
<6> [861.374614] R13: 00007a8f23e70ff0 R14: 00007a8f288596f0 R15: 00007a8f28859670
<6> [861.374644] </TASK>
<4> [861.374683]
Showing all locks held in the system:
<4> [861.374719] 1 lock held by khungtaskd/117:
<4> [861.374731] #0: ffffffff83593f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x37/0x220
<4> [861.374752] 3 locks held by kworker/u64:8/203:
<4> [861.374753] #0: ffff8881578a5948 ((wq_completion)gt-ordered-wq#2){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [861.374761] #1: ffffc900014c3e20 ((work_completion)(&gt->reset.worker)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [861.374773] #2: ffff88812f899558 (&guc->submission_state.lock){+.+.}-{3:3}, at: xe_guc_submit_start+0x45/0x4c0 [xe]
<4> [861.374850] 3 locks held by dmesg/2083:
<4> [861.374856] 2 locks held by kworker/u64:33/6203:
<4> [861.374857] #0: ffff8881002b9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x441/0x6b0
<4> [861.374862] #1: ffffc90005c07e20 ((work_completion)(&vma->destroy_work)){+.+.}-{0:0}, at: process_one_work+0x1ee/0x6b0
<4> [861.374871] 1 lock held by xe_exec_threads/11899:
<4> [861.374872] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x17d/0x730
<4> [861.374878] 1 lock held by xe_exec_threads/11900:
<4> [861.374879] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [861.374883] 4 locks held by xe_exec_threads/11902:
<4> [861.374884] #0: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: __vm_munmap+0x8b/0x1b0
<4> [861.374888] #1: ffff888133d19710 (&anon_vma->rwsem){++++}-{3:3}, at: vma_prepare+0x9a/0x170
<4> [861.374893] #2: ffffffff836d5b40 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}, at: __split_huge_pmd+0x7a/0x1b0
<4> [861.374898] #3: ffff888118a19248 (&gpusvm->notifier_lock){++++}-{3:3}, at: vma_userptr_invalidate+0x1b9/0x520 [xe]
<4> [861.374980] 3 locks held by xe_exec_threads/11903:
<4> [861.374981] #0: ffff888113412738 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [861.374986] #1: ffff888122e213e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [861.374991] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [861.374995] 3 locks held by xe_exec_threads/11905:
<4> [861.374996] #0: ffff88812f156938 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x81/0xd0
<4> [861.375000] #1: ffff888122e21e20 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x65/0x4f0
<4> [861.375004] #2: ffff888114380fc0 (&mm->mmap_lock){++++}-{3:3}, at: lock_next_vma+0x1d0/0x6c0
<4> [861.375012]
<4> [861.375013] =============================================
<6> [861.481002] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [861.481027] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [861.481035] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [861.481044] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [861.589879] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [861.589902] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [861.589910] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [861.589919] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [861.699764] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [861.699789] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [861.699796] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [861.699805] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [861.807880] pcieport 0000:00:06.0: AER: Multiple Correctable error message received from 0000:05:00.0
<4> [861.807904] nvme 0000:05:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
<4> [861.807912] nvme 0000:05:00.0: device [15b7:5017] error status/mask=00000001/0000e000
<4> [861.807921] nvme 0000:05:00.0: [ 0] RxErr (First)
<6> [861.817886] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from wq_wait_for_space [xe]
<6> [862.823912] seq_file: buggy .next function m_next did not update position index
|
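
For context on the Err output above: the failing check in lib/xe/xe_ioctl.c asserts that the low-level user-fence wait returns 0, and the ETIME (errno 62) returned by the kernel when the fence never signals is what surfaces as "Last errno: 62, Timer expired" and "error: -62 != 0". Below is a minimal, self-contained sketch of that assert-on-wait pattern only; the function names, the stubbed wait, the timeout value, and the abort path are illustrative assumptions, not the actual IGT helpers.

```c
/*
 * Sketch of the assert-on-wait pattern behind the CRITICAL lines above.
 * All names here are illustrative stand-ins, not real IGT code.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the low-level wait: pretend the user fence never signalled. */
static int fake_wait_ufence(int fd, uint64_t addr, uint64_t value,
			    uint32_t exec_queue, int64_t *timeout)
{
	(void)fd; (void)addr; (void)value; (void)exec_queue;
	*timeout = 0;	/* wait budget exhausted */
	return -ETIME;	/* kernel-style "Timer expired" (errno 62) */
}

/* Stand-in for the asserting wrapper: any non-zero return fails the test. */
static void assert_wait_ufence(int fd, uint64_t addr, uint64_t value,
			       uint32_t exec_queue, int64_t *timeout)
{
	int ret = fake_wait_ufence(fd, addr, value, exec_queue, timeout);

	if (ret != 0) {
		/* Mirrors the CRITICAL report format seen in Err. */
		fprintf(stderr,
			"CRITICAL: Failed assertion: wait_ufence(...) == 0\n"
			"CRITICAL: Last errno: %d, %s\n"
			"CRITICAL: error: %d != 0\n",
			-ret, strerror(-ret), ret);
		abort();	/* the real wrapper would fail the subtest here */
	}
}

int main(void)
{
	int64_t timeout = 62LL * 1000 * 1000 * 1000;	/* illustrative, in ns */

	/* addr echoes the faulted address from the dmesg, purely as an example. */
	assert_wait_ufence(-1, 0x1a0000, 1, 0, &timeout);
	return 0;
}
```

In the real run the wait times out because the GT1 engine resets never complete (the log shows the reset worker stuck in wq_wait_for_space and the userptr invalidation blocked on a dma-fence), so the asserting wrapper reports -62 and the test process is eventually sent SIGQUIT by the runner, as recorded in Out.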