Result:

Artifacts: `i915_display_info16`, `igt_runner16`, `results16.json`, `results16-xe-load.json`, `guc_logs16.tar`, `i915_display_info_post_exec16`, `serial_data16`, `boot16`, `dmesg16`
| Detail | Value |
|---|---|
| Duration | 10.39 seconds |
| Hostname | shard-bmg-4 |
| Igt-Version | IGT-Version: 2.4-gd28bd0b9e (x86_64) (Linux: 7.1.0-rc3-lgci-xe-xe-pw-166104v2-debug+ x86_64) |
**Out**

```
Using IGT_SRANDOM=1778527576 for randomisation
Opened device: /dev/dri/card0
Starting subtest: long-spin-reuse-many-preempt
Stack trace:
  #0 ../lib/igt_core.c:2075 __igt_fail_assert()
  #1 [xe_exec+0x48]
  #2 ../lib/xe/xe_legacy.c:241 xe_legacy_test_mode()
  #3 ../tests/intel/xe_exec_reset.c:956 __igt_unique____real_main877()
  #4 ../tests/intel/xe_exec_reset.c:877 main()
  #5 [__libc_init_first+0x8a]
  #6 [__libc_start_main+0x8b]
  #7 [_start+0x25]
Subtest long-spin-reuse-many-preempt: FAIL (10.392s)
```
**Err**

```
Starting subtest: long-spin-reuse-many-preempt
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: Test assertion failure function xe_exec, file ../lib/xe/xe_ioctl.c:652:
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: Failed assertion: __xe_exec(fd, exec) == 0
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: error: -125 != 0
Subtest long-spin-reuse-many-preempt failed.
**** DEBUG ****
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: Test assertion failure function xe_exec, file ../lib/xe/xe_ioctl.c:652:
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: Failed assertion: __xe_exec(fd, exec) == 0
(xe_exec_reset:7105) xe/xe_ioctl-CRITICAL: error: -125 != 0
(xe_exec_reset:7105) igt_core-INFO: Stack trace:
(xe_exec_reset:7105) igt_core-INFO: #0 ../lib/igt_core.c:2075 __igt_fail_assert()
(xe_exec_reset:7105) igt_core-INFO: #1 [xe_exec+0x48]
(xe_exec_reset:7105) igt_core-INFO: #2 ../lib/xe/xe_legacy.c:241 xe_legacy_test_mode()
(xe_exec_reset:7105) igt_core-INFO: #3 ../tests/intel/xe_exec_reset.c:956 __igt_unique____real_main877()
(xe_exec_reset:7105) igt_core-INFO: #4 ../tests/intel/xe_exec_reset.c:877 main()
(xe_exec_reset:7105) igt_core-INFO: #5 [__libc_init_first+0x8a]
(xe_exec_reset:7105) igt_core-INFO: #6 [__libc_start_main+0x8b]
(xe_exec_reset:7105) igt_core-INFO: #7 [_start+0x25]
**** END ****
Subtest long-spin-reuse-many-preempt: FAIL (10.392s)
```
**Dmesg**

```
<6> [324.771591] Console: switching to colour dummy device 80x25
<6> [324.771873] [IGT] xe_exec_reset: executing
<6> [324.799228] [IGT] xe_exec_reset: starting subtest long-spin-reuse-many-preempt
<7> [326.277020] xe 0000:03:00.0: [drm:intel_power_well_enable [xe]] enabling AUX_TC2
<7> [326.389020] xe 0000:03:00.0: [drm:intel_power_well_disable [xe]] disabling AUX_TC2
<7> [330.052128] xe 0000:03:00.0: [drm:guc_exec_queue_timedout_job [xe]] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=5, running_time_ms=234880, timeout_ms=5000, diff=0xffffff5d
<5> [330.052907] xe 0000:03:00.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=5, flags=0x0 in xe_exec_reset [7105]
<7> [330.052923] xe 0000:03:00.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
<7> [330.053487] xe 0000:03:00.0: [drm:guc_exec_queue_timedout_job [xe]] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=4, running_time_ms=1372, timeout_ms=5000, diff=0x017ef15a
<7> [330.054708] xe 0000:03:00.0: [drm:guc_exec_queue_timedout_job [xe]] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=3, running_time_ms=1373, timeout_ms=5000, diff=0x017f5730
<7> [330.055323] xe 0000:03:00.0: [drm:guc_exec_queue_timedout_job [xe]] Tile0: GT0: Check job timeout: seqno=4294967169, lrc_seqno=4294967169, guc_id=2, running_time_ms=1373, timeout_ms=5000, diff=0x017f3e8c
<7> [335.149549] xe 0000:03:00.0: [drm:xe_exec_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_exec.c:153: q->ops->reset_status(q)
<6> [335.191380] [IGT] xe_exec_reset: finished subtest long-spin-reuse-many-preempt, FAIL
<6> [335.191939] [IGT] xe_exec_reset: exiting, ret=98
<6> [335.192387] Console: switching to colour frame buffer device 240x67
<7> [335.207724] xe 0000:03:00.0: [drm:drm_client_dev_restore] fbdev: ret=0
```