Result: 8 Warning(s)
Artifacts: i915_display_info7 igt_runner7 results7.json results7-xe-load.json guc_logs7.tar boot7 dmesg7
| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-bmg-8 |
| Igt-Version | IGT-Version: 2.4-g1c23bc1bd (x86_64) (Linux: 7.0.0-lgci-xe-xe-4924-84de0c4efa971d100-debug+ x86_64) |
| Out | Using IGT_SRANDOM=1776844574 for randomisation Opened device: /dev/dri/card0 Starting subtest: threads-shared-vm-many-large-mmap-shared |
| Err | Starting subtest: threads-shared-vm-many-large-mmap-shared (xe_exec_system_allocator:2773) [thread:2782] xe/xe_ioctl-CRITICAL: Test assertion failure function xe_wait_ufence, file ../lib/xe/xe_ioctl.c:763: (xe_exec_system_allocator:2773) [thread:2782] xe/xe_ioctl-CRITICAL: Failed assertion: __xe_wait_ufence(fd, addr, value, exec_queue, &timeout) == 0 (xe_exec_system_allocator:2773) [thread:2782] xe/xe_ioctl-CRITICAL: Last errno: 5, Input/output error |
Dmesg:

<6> [67.410237] Console: switching to colour dummy device 80x25
<6> [67.410411] [IGT] xe_exec_system_allocator: executing
<6> [67.485408] [IGT] xe_exec_system_allocator: starting subtest threads-shared-vm-many-large-mmap-shared
<7> [67.485736] xe 0000:03:00.0: [drm:drm_pagemap_dev_unhold_work [drm_gpusvm_helper]] Releasing reference on provider device and module.
<7> [67.537104] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.537480] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.538147] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.538389] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.540025] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.541933] xe 0000:03:00.0: [drm:xe_svm_garbage_collector [xe]] Skipping madvise reset for vma.
<7> [67.664116] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 0 val 0x24232122
<7> [67.664264] xe 0000:03:00.0: [drm:xe_hwmon_read [xe]] thermal data for group 1 val 0x25252525
<3> [68.921122] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: GuC engine reset request failed on 2:0 because 0x00000000
<6> [68.931629] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from xe_guc_exec_queue_reset_failure_handler [xe]
<6> [68.931742] xe 0000:03:00.0: [drm] Tile0: GT1: reset queued
<7> [68.931788] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.931862] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.931945] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932002] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932076] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932136] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932213] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932270] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932384] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932443] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932522] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932579] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932655] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932713] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932788] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932846] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.932925] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.932998] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933086] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933147] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933229] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933292] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933372] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933432] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933504] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933558] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933628] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933682] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933752] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933805] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.933876] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.933931] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934000] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934053] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934122] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934176] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934247] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934350] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934436] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934492] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934566] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934621] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934690] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934745] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934817] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934870] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.934938] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.934993] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935063] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935117] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935185] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935239] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935341] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935400] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935473] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935528] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935598] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935653] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935723] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935777] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935848] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.935902] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<7> [68.935970] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<6> [68.936025] xe 0000:03:00.0: [drm] Tile0: GT1: trying reset from guc_exec_queue_timedout_job [xe]
<6> [68.936135] xe 0000:03:00.0: [drm] Tile0: GT1: reset started
<7> [68.936254] xe 0000:03:00.0: [drm:guc_ct_change_state [xe]] Tile0: GT1: GuC CT communication channel stopped
<3> [68.942278] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: failed to clear GRDOM_FULL (-ETIMEDOUT)
<3> [68.951453] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: reset failed (-ETIMEDOUT)
<3> [68.959057] xe 0000:03:00.0: [drm] *ERROR* CRITICAL: Xe has declared device 0000:03:00.0 as wedged.
IOCTLs and executions are blocked.
For recovery procedure, refer to https://docs.kernel.org/gpu/drm-uapi.html#device-wedging
Please file a _new_ bug report at https://gitlab.freedesktop.org/drm/xe/kernel/issues/new
<7> [68.991054] xe 0000:03:00.0: [drm:guc_ct_change_state [xe]] Tile0: GT0: GuC CT communication channel stopped
<7> [68.991351] xe 0000:03:00.0: [drm:guc_ct_change_state [xe]] Tile0: GT1: GuC CT communication channel stopped
<3> [69.051082] xe 0000:03:00.0: [drm] *ERROR* Tile0: GT1: GuC mmio request 0x5507: no reply 0x5507
<6> [69.059853] xe 0000:03:00.0: [drm] exec queue reset detected
<6> [69.059888] xe 0000:03:00.0: [drm] exec queue reset detected
<6> [69.059890] xe 0000:03:00.0: [drm] exec queue reset detected
<6> [69.059907] xe 0000:03:00.0: [drm] exec queue reset detected
<6> [69.060014] xe 0000:03:00.0: [drm] device wedged, needs recovery
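The wedging notice in the dmesg defers to the kernel's device-wedging documentation rather than spelling out a procedure. As a triage aid, here is a minimal dry-run sketch of the driver-rebind recovery path, assuming the standard PCI driver `bind`/`unbind` sysfs interface and the BDF `0000:03:00.0` taken from this log; it only prints the commands so they can be reviewed (and any needed devcoredump saved) before running them as root:

```shell
#!/bin/sh
# Dry-run sketch, not a verified recovery procedure for this failure.
# Assumption: the wedged device can be recovered by detaching and
# re-attaching the xe driver through the generic PCI sysfs interface.
BDF="0000:03:00.0"                 # PCI address of the wedged device (from dmesg)
DRV="/sys/bus/pci/drivers/xe"      # xe driver's sysfs directory

# Print, rather than execute, the unbind/bind pair:
echo "echo $BDF > $DRV/unbind   # detach xe from the wedged device"
echo "echo $BDF > $DRV/bind     # re-probe the device, fresh driver init"
```

If a rebind does not recover the device, the linked documentation lists stronger measures (e.g. a bus-level reset or reboot); consult it rather than improvising, since IOCTLs stay blocked while the device is wedged.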