Result: 8 Warning(s)
Artifacts: git-log-oneline i915_display_info24 igt_runner24 runtimes24 results24.json results24-i915-load.json guc_logs24.tar i915_display_info_post_exec24 boot24 dmesg24 pstore-001-dmesg24 pstore-002-dmesg24

| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-mtlp-8 |
| Igt-Version | IGT-Version: 2.4-gb9a380969 (x86_64) (Linux: 7.0.0-rc6-CI_DRM_18270-g7b7e57c49dff+ x86_64) |
| Out | Using IGT_SRANDOM=1775208726 for randomisation Opened device: /dev/dri/card0 Starting subtest: clear-via-pagefault Starting dynamic subtest: smem0 [thread:2371] Checked 1544645 / 2265548 pages [thread:2376] Checked 694500 / 708057 pages [thread:2374] Checked 358914 / 561583 pages [thread:2378] Checked 731019 / 1280892 pages [thread:2377] Checked 562299 / 608501 pages [thread:2373] Checked 811252 / 1766020 pages [thread:2379] Checked 405944 / 918396 pages [thread:2380] Checked 597525 / 904815 pages [thread:2372] Checked 934555 / 1366187 pages |
| Err | Starting subtest: clear-via-pagefault Starting dynamic subtest: smem0 |
| Dmesg | (kernel log below) |
<6> [60.402910] Console: switching to colour dummy device 80x25
<6> [60.402968] [IGT] gem_mmap_offset: executing
<7> [60.431274] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [60.433670] i915 0000:00:02.0: [drm:i915_drop_caches_set [i915]] Dropping caches: 0x000001dc [0x000001dc]
<7> [60.445370] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<6> [60.446482] [IGT] gem_mmap_offset: starting subtest clear-via-pagefault
<6> [60.446714] [IGT] gem_mmap_offset: starting dynamic subtest smem0
<5> [61.253308] i915 0000:00:02.0: Using 41-bit DMA addresses
<4> [61.255102] ------------[ cut here ]------------
<3> [61.255110] UBSAN: shift-out-of-bounds in drivers/iommu/intel/iommu.h:1097:31
<3> [61.255162] shift exponent 65 is too large for 64-bit type 'long long unsigned int'
<4> [61.255169] CPU: 4 UID: 0 PID: 2375 Comm: gem_mmap_offset Tainted: G S U 7.0.0-rc6-CI_DRM_18270-g7b7e57c49dff+ #1 PREEMPT(lazy)
<4> [61.255171] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER
<4> [61.255172] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [61.255173] Call Trace:
<4> [61.255173] <TASK>
<4> [61.255174] dump_stack_lvl+0x91/0xf0
<4> [61.255180] dump_stack+0x10/0x20
<4> [61.255181] __ubsan_handle_shift_out_of_bounds+0x133/0x2e0
<4> [61.255185] ? lock_acquire+0xc4/0x2f0
<4> [61.255188] ? cache_tag_flush_range+0x76/0x3d0
<4> [61.255193] cache_tag_flush_iotlb.cold+0x1f/0x5d
<4> [61.255198] cache_tag_flush_range+0x2fb/0x3d0
<4> [61.255204] intel_iommu_tlb_sync+0x26/0x50
<4> [61.255206] iommu_unmap+0x72/0xa0
<4> [61.255211] iommu_map_nosync+0x158/0x340
<4> [61.255216] iommu_map_sg+0xd9/0x1d0
<4> [61.255219] iommu_dma_map_sg+0x267/0x630
<4> [61.255225] __dma_map_sg_attrs+0x1c6/0x2a0
<4> [61.255230] dma_map_sg_attrs+0xe/0x30
<4> [61.255232] i915_gem_gtt_prepare_pages+0x59/0x100 [i915]
<4> [61.255405] shmem_get_pages+0x7a/0x2e0 [i915]
<4> [61.255549] ____i915_gem_object_get_pages+0x3f/0x120 [i915]
<4> [61.255687] __i915_gem_object_get_pages+0xa5/0x110 [i915]
<4> [61.255833] vm_fault_cpu+0x150/0x330 [i915]
<4> [61.255992] __do_fault+0x3d/0x1b0
<4> [61.255996] do_fault+0x329/0x650
<4> [61.255996] ? mt_find+0x27b/0x4a0
<4> [61.256000] __handle_mm_fault+0x99f/0x1030
<4> [61.256003] ? mt_find+0x285/0x4a0
<4> [61.256010] handle_mm_fault+0x12c/0x300
<4> [61.256011] ? lock_mm_and_find_vma+0x4b/0x940
<4> [61.256014] do_user_addr_fault+0x2d3/0x8e0
<4> [61.256019] exc_page_fault+0x96/0x2c0
<4> [61.256022] asm_exc_page_fault+0x27/0x30
<4> [61.256024] RIP: 0033:0x5d250e72e1a0
<4> [61.256030] Code: 0f 85 c7 00 00 00 48 83 c0 08 49 39 c7 75 eb 48 8d 45 01 4c 39 e5 74 36 48 89 c5 e9 5a ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 <f3> 0f 6f 0e 0f 29 0b f3 0f 6f 56 10 0f 29 53 10 f3 0f 6f 5e 20 0f
<4> [61.256032] RSP: 002b:00007e9b275cabf0 EFLAGS: 00010246
<4> [61.256034] RAX: 0000000000000000 RBX: 00007e9b275cac40 RCX: 0000000000000000
<4> [61.256035] RDX: 0000000000000000 RSI: 00007e995fce8000 RDI: 0000000000000000
<4> [61.256035] RBP: 0000000000000000 R08: 0000000000000005 R09: 0000000116afd000
<4> [61.256035] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000181820
<4> [61.256036] R13: 00007e995fce8000 R14: 0000000000181821 R15: 00007e9b275cac80
<4> [61.256043] </TASK>
<4> [61.256043] ---[ end trace ]---
<4> [61.256051] ------------[ cut here ]------------
<3> [61.256052] UBSAN: shift-out-of-bounds in drivers/iommu/intel/iommu.h:1070:17
<3> [61.256061] shift exponent 64 is too large for 64-bit type 'long long unsigned int'
<4> [61.256070] CPU: 4 UID: 0 PID: 2375 Comm: gem_mmap_offset Tainted: G S U 7.0.0-rc6-CI_DRM_18270-g7b7e57c49dff+ #1 PREEMPT(lazy)
<4> [61.256072] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER
<4> [61.256072] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [61.256073] Call Trace:
<4> [61.256073] <TASK>
<4> [61.256074] dump_stack_lvl+0x91/0xf0
<4> [61.256076] dump_stack+0x10/0x20
<4> [61.256077] __ubsan_handle_shift_out_of_bounds+0x133/0x2e0
<4> [61.256080] ? lock_acquire+0xc4/0x2f0
<4> [61.256082] ? cache_tag_flush_range+0x76/0x3d0
<4> [61.256087] qi_batch_add_dev_iotlb.part.0.cold+0x1e/0x35
<4> [61.256090] cache_tag_flush_range+0x236/0x3d0
<4> [61.256096] intel_iommu_tlb_sync+0x26/0x50
<4> [61.256098] iommu_unmap+0x72/0xa0
<4> [61.256103] iommu_map_nosync+0x158/0x340
<4> [61.256107] iommu_map_sg+0xd9/0x1d0
<4> [61.256111] iommu_dma_map_sg+0x267/0x630
<4> [61.256117] __dma_map_sg_attrs+0x1c6/0x2a0
<4> [61.256121] dma_map_sg_attrs+0xe/0x30
<4> [61.256123] i915_gem_gtt_prepare_pages+0x59/0x100 [i915]
<4> [61.256274] shmem_get_pages+0x7a/0x2e0 [i915]
<4> [61.256413] ____i915_gem_object_get_pages+0x3f/0x120 [i915]
<4> [61.256546] __i915_gem_object_get_pages+0xa5/0x110 [i915]
<4> [61.256676] vm_fault_cpu+0x150/0x330 [i915]
<4> [61.256806] __do_fault+0x3d/0x1b0
<4> [61.256808] do_fault+0x329/0x650
<4> [61.256809] ? mt_find+0x27b/0x4a0
<4> [61.256811] __handle_mm_fault+0x99f/0x1030
<4> [61.256814] ? mt_find+0x285/0x4a0
<4> [61.256820] handle_mm_fault+0x12c/0x300
<4> [61.256821] ? lock_mm_and_find_vma+0x4b/0x940
<4> [61.256824] do_user_addr_fault+0x2d3/0x8e0
<4> [61.256829] exc_page_fault+0x96/0x2c0
<4> [61.256831] asm_exc_page_fault+0x27/0x30
<4> [61.256832] RIP: 0033:0x5d250e72e1a0
<4> [61.256833] Code: 0f 85 c7 00 00 00 48 83 c0 08 49 39 c7 75 eb 48 8d 45 01 4c 39 e5 74 36 48 89 c5 e9 5a ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 <f3> 0f 6f 0e 0f 29 0b f3 0f 6f 56 10 0f 29 53 10 f3 0f 6f 5e 20 0f
<4> [61.256834] RSP: 002b:00007e9b275cabf0 EFLAGS: 00010246
<4> [61.256834] RAX: 0000000000000000 RBX: 00007e9b275cac40 RCX: 0000000000000000
<4> [61.256835] RDX: 0000000000000000 RSI: 00007e995fce8000 RDI: 0000000000000000
<4> [61.256835] RBP: 0000000000000000 R08: 0000000000000005 R09: 0000000116afd000
<4> [61.256835] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000181820
<4> [61.256836] R13: 00007e995fce8000 R14: 0000000000181821 R15: 00007e9b275cac80
<4> [61.256842] </TASK>
<4> [61.256842] ---[ end trace ]---
<4> [61.771070] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [61.784803] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [62.331728] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [62.345700] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [62.965893] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [62.982158] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [63.321862] i915 0000:00:02.0: Failed to DMA remap 1136452 pages
<4> [63.332127] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [63.717558] i915 0000:00:02.0: Failed to DMA remap 1136452 pages
<4> [63.728717] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [64.520935] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [64.535597] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [65.074175] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [65.088642] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [65.927275] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [65.941901] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [66.478089] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [66.492603] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [67.053596] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [67.069326] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [67.616069] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [67.678614] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [68.353788] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [68.380744] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [68.971411] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [68.985422] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [69.576511] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [69.591595] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [70.145461] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [70.161385] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [70.742071] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [70.757356] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [71.297991] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [71.313022] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [72.575521] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [72.589779] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [73.135135] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [73.149407] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [73.761974] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [73.764590] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [74.050336] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [74.063604] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [74.615669] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [74.631680] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [74.861422] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [74.864498] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [74.912141] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [74.918004] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [75.296419] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [75.311216] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [75.854419] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [75.869003] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [76.044574] i915 0000:00:02.0: Failed to DMA remap 154607 pages
<4> [76.046618] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [76.114420] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [76.169894] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [76.583824] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [77.149432] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [77.432996] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [77.800521] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [77.816357] pagefault_out_of_memory: 5 callbacks suppressed
<4> [77.816359] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [78.361079] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [78.376608] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [78.535371] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [78.538590] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [78.605810] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [78.612217] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [78.953339] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [78.967080] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [79.511262] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [79.526125] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [79.801307] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [79.804141] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [79.931612] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [79.935288] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [80.212825] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [80.227221] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [80.772319] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [80.786346] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [81.005929] i915 0000:00:02.0: Failed to DMA remap 353377 pages
<4> [81.066911] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [81.433521] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [82.002906] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [82.632049] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [82.983386] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [82.997140] pagefault_out_of_memory: 5 callbacks suppressed
<4> [82.997142] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [83.539704] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [83.552764] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [83.761685] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [83.766516] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [84.119232] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [84.134077] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [84.680159] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [84.695268] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [85.012431] i915 0000:00:02.0: Failed to DMA remap 216204 pages
<4> [85.014965] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [85.131125] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [85.134613] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [85.504906] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [85.519265] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [86.060548] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [86.075086] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [86.381464] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [86.388201] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [86.765540] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [87.325583] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [87.664191] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [88.022415] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [88.037569] pagefault_out_of_memory: 3 callbacks suppressed
<4> [88.037571] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [88.581219] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [88.596195] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [88.987993] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [88.993327] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [89.356492] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [89.372186] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [89.916911] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [89.933460] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [90.290290] i915 0000:00:02.0: Failed to DMA remap 506587 pages
<4> [90.295469] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [90.656733] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [90.671372] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [91.216312] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [91.230033] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [91.499407] i915 0000:00:02.0: Failed to DMA remap 336565 pages
<4> [91.502674] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [92.209899] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [92.226890] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [93.307804] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [93.321140] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [93.866995] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [93.880504] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [94.302978] i915 0000:00:02.0: Failed to DMA remap 487191 pages
<4> [94.306639] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [94.496582] i915 0000:00:02.0: Failed to DMA remap 487191 pages
<4> [94.500351] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [94.874558] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [94.892206] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [95.441738] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [95.459580] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [96.078984] i915 0000:00:02.0: Failed to DMA remap 487191 pages
<4> [96.085679] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [96.903909] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [96.926616] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [97.151029] i915 0000:00:02.0: Failed to DMA remap 294897 pages
<4> [97.158084] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [97.747007] i915 0000:00:02.0: Failed to DMA remap 487191 pages
<4> [97.755820] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [112.897755] i915 0000:00:02.0: Failed to DMA remap 1579041 pages
<4> [113.146220] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<6> [140.495331] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<4> [140.528692] igt_runner invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=-1000
<4> [140.528710] CPU: 7 UID: 0 PID: 2369 Comm: igt_runner Tainted: G S U 7.0.0-rc6-CI_DRM_18270-g7b7e57c49dff+ #1 PREEMPT(lazy)
<4> [140.528713] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER
<4> [140.528714] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [140.528716] Call Trace:
<4> [140.528717] <TASK>
<4> [140.528718] dump_stack_lvl+0xc1/0xf0
<4> [140.528728] dump_stack+0x10/0x20
<4> [140.528731] dump_header+0x4a/0x350
<4> [140.528739] oom_kill_process+0xe1/0x4e0
<4> [140.528742] ? __rcu_read_unlock+0x158/0x240
<4> [140.528748] out_of_memory+0x26e/0x6a0
<4> [140.528754] __alloc_frozen_pages_noprof+0x16c1/0x1820
<4> [140.528758] ? raw_spin_rq_unlock+0x14/0x50
<4> [140.528776] alloc_pages_mpol+0x7e/0x1a0
<4> [140.528782] folio_alloc_mpol_noprof+0x14/0x40
<4> [140.528784] swap_cache_alloc_folio+0x9a/0xf0
<4> [140.528791] swapin_readahead+0x590/0x7f0
<4> [140.528795] ? swapin_readahead+0x344/0x7f0
<4> [140.528804] do_swap_page+0xa27/0x1a30
<4> [140.528808] ? do_swap_page+0xa27/0x1a30
<4> [140.528813] ? __pte_offset_map+0x46/0x250
<4> [140.528817] ? __pte_offset_map+0x19c/0x250
<4> [140.528823] __handle_mm_fault+0xa10/0x1030
<4> [140.528833] handle_mm_fault+0x12c/0x300
<4> [140.528835] ? lock_mm_and_find_vma+0x4b/0x940
<4> [140.528840] do_user_addr_fault+0x2d3/0x8e0
<4> [140.528848] exc_page_fault+0x96/0x2c0
<4> [140.528853] asm_exc_page_fault+0x27/0x30
<4> [140.528856] RIP: 0010:core_sys_select+0x6e2/0x7c0
<4> [140.528861] Code: c2 1b 83 e8 20 c3 ec ff 66 90 ba 01 00 00 00 4c 89 f6 4c 89 ef e8 0e a0 fc ff 0f 01 cb 48 8b bd 98 fe ff ff 4c 89 f1 4c 89 ee <f3> a4 0f 1f 00 0f 01 ca 48 85 c9 0f 85 6f fd ff ff e9 0a fd ff ff
<4> [140.528862] RSP: 0000:ffffc900086039c0 EFLAGS: 00050246
<4> [140.528865] RAX: 0000000000000000 RBX: ffffc90008603a50 RCX: 0000000000000008
<4> [140.528866] RDX: 0000000000000000 RSI: ffffc90008603a68 RDI: 00007ffc09acfdc0
<4> [140.528866] RBP: ffffc90008603b80 R08: 0000000000000000 R09: 0000000000000000
<4> [140.528867] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
<4> [140.528868] R13: ffffc90008603a68 R14: 0000000000000008 R15: 00000000000000e0
<4> [140.528884] ? lock_acquire+0xc4/0x2f0
<4> [140.528890] ? find_held_lock+0x31/0x90
<4> [140.528892] ? ktime_get_ts64+0x24/0x160
<4> [140.528896] ? lock_release+0xd0/0x2b0
<4> [140.528899] ? seqcount_lockdep_reader_access.constprop.0+0x9f/0xc0
<4> [140.528902] ? ktime_get_ts64+0x24/0x160
<4> [140.528904] ? trace_hardirqs_on+0x22/0x100
<4> [140.528911] ? seqcount_lockdep_reader_access.constprop.0+0x9f/0xc0
<4> [140.528916] do_pselect.constprop.0+0xc4/0x170
<4> [140.528922] __x64_sys_pselect6+0x4c/0x80
<4> [140.528924] x64_sys_call+0x24cd/0x26e0
<4> [140.528929] do_syscall_64+0xdd/0x1440
<4> [140.528933] ? lock_release+0xd0/0x2b0
<4> [140.528938] ? rcu_read_unlock+0x26/0x80
<4> [140.528943] ? __lock_acquire+0x43e/0x2790
<4> [140.528948] ? __pte_offset_map+0x46/0x250
<4> [140.528950] ? __pte_offset_map+0x19c/0x250
<4> [140.528957] ? find_held_lock+0x31/0x90
<4> [140.528959] ? rcu_read_unlock+0x1c/0x80
<4> [140.528962] ? lock_release+0xd0/0x2b0
<4> [140.528966] ? find_held_lock+0x31/0x90
<4> [140.528969] ? do_user_addr_fault+0x2fd/0x8e0
<4> [140.528972] ? lock_release+0xd0/0x2b0
<4> [140.528977] ? up_read+0x22/0x30
<4> [140.528978] ? trace_hardirqs_on_prepare+0xe1/0x100
<4> [140.528982] ? irqentry_exit+0x17d/0x780
<4> [140.528986] ? exc_page_fault+0xbd/0x2c0
<4> [140.528989] entry_SYSCALL_64_after_hwframe+0x76/0x7e
<4> [140.528991] RIP: 0033:0x7cc3d7726c6e
<4> [140.528996] Code: 00 00 00 48 89 75 b0 4c 89 45 b8 4c 8d 45 b0 4d 89 c5 80 3d e3 43 0e 00 00 74 79 45 31 c9 4c 89 fe 89 df b8 0e 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 d6 00 00 00 89 c3 4d 85 e4 74 30 49 8b 4d
<4> [140.528997] RSP: 002b:00007ffc09acfad0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
<4> [140.528999] RAX: ffffffffffffffda RBX: 0000000000000015 RCX: 00007cc3d7726c6e
<4> [140.529000] RDX: 0000000000000000 RSI: 00007ffc09acfdc0 RDI: 0000000000000015
<4> [140.529000] RBP: 00007ffc09acfb40 R08: 00007ffc09acfaf0 R09: 0000000000000000
<4> [140.529001] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc09acfca0
<4> [140.529001] R13: 00007ffc09acfaf0 R14: 0000000000000000 R15: 00007ffc09acfdc0
<4> [140.529010] </TASK>
<4> [140.529010] Mem-Info:
<4> [140.529028] active_anon:522072 inactive_anon:6555258 isolated_anon:0
active_file:117 inactive_file:157 isolated_file:0
unevictable:769667 dirty:0 writeback:8
slab_reclaimable:11734 slab_unreclaimable:79544
mapped:2138 shmem:2829152 pagetables:2496
sec_pagetables:937 bounce:0
kernel_misc_reclaimable:0
free:51455 free_pcp:171 free_cma:0
<4> [140.529034] Node 0 active_anon:2092384kB inactive_anon:26214888kB active_file:468kB inactive_file:628kB unevictable:3078668kB isolated(anon):0kB isolated(file):0kB mapped:8552kB dirty:0kB writeback:32kB shmem:11316608kB shmem_thp:8986624kB shmem_pmdmapped:0kB anon_thp:0kB kernel_stack:7088kB pagetables:9984kB sec_pagetables:3748kB all_unreclaimable? yes Balloon:0kB
<4> [140.529039] Node 0 DMA free:11272kB boost:0kB min:32kB low:44kB high:56kB reserved_highatomic:0KB free_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB zspages:0kB present:15992kB managed:15368kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
<4> [140.529044] lowmem_reserve[]: 0 1360 31497 31497 31497
<4> [140.529052] Node 0 DMA32 free:122980kB boost:0kB min:2900kB low:4284kB high:5668kB reserved_highatomic:0KB free_highatomic:0KB active_anon:12288kB inactive_anon:268288kB active_file:64kB inactive_file:0kB unevictable:974020kB writepending:0kB zspages:0kB present:1566956kB managed:1393616kB mlocked:0kB bounce:0kB free_pcp:288kB local_pcp:0kB free_cma:0kB
<4> [140.529057] lowmem_reserve[]: 0 0 30136 30136 30136
<4> [140.529063] Node 0 Normal free:71568kB boost:16384kB min:81032kB low:111888kB high:142744kB reserved_highatomic:6144KB free_highatomic:0KB active_anon:2102624kB inactive_anon:25919976kB active_file:596kB inactive_file:436kB unevictable:2104648kB writepending:32kB zspages:0kB present:31457280kB managed:30859448kB mlocked:26980kB bounce:0kB free_pcp:396kB local_pcp:0kB free_cma:0kB
<4> [140.529068] lowmem_reserve[]: 0 0 0 0 0
<4> [140.529075] Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11272kB
<4> [140.529109] Node 0 DMA32: 163*4kB (UME) 71*8kB (UM) 54*16kB (UM) 74*32kB (UM) 164*64kB (UME) 382*128kB (UME) 231*256kB (UME) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 122980kB
<4> [140.529130] Node 0 Normal: 84*4kB (M) 2446*8kB (UM) 2095*16kB (UM) 567*32kB (UM) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 71568kB
<4> [140.529149] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
<4> [140.529152] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
<4> [140.529154] 2831546 total pagecache pages
<4> [140.529155] 8 pages in swap cache
<4> [140.529156] Free swap = 6840104kB
<4> [140.529158] Total swap = 8388604kB
<4> [140.529159] 8260057 pages RAM
<4> [140.529160] 0 pages HighMem/MovableOnly
<4> [140.529162] 192949 pages reserved
<4> [140.529163] 0 pages hwpoisoned
<4> [140.529165] Memory cgroup min protection 0kB -- low protection 0kB
<6> [140.529166] Tasks state (memory values in pages):
<6> [140.529168] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
<6> [140.529222] [ 308] 0 308 12724 815 0 815 0 106496 354 -250 systemd-journal
<6> [140.529245] [ 403] 0 403 72240 6771 4633 2138 0 106496 0 -1000 multipathd
<6> [140.529252] [ 409] 0 409 6948 567 0 567 0 81920 849 -1000 systemd-udevd
<6> [140.529264] [ 689] 998 689 4759 678 0 678 0 77824 294 0 systemd-network
<6> [140.529271] [ 701] 992 701 5400 664 0 664 0 86016 582 0 systemd-resolve
<6> [140.529278] [ 704] 997 704 22759 699 0 699 0 81920 243 0 systemd-timesyn
<6> [140.529286] [ 756] 0 756 77756 758 0 758 0 110592 199 0 boltd
<6> [140.529292] [ 757] 101 757 2443 808 0 808 0 65536 226 -900 dbus-daemon
<6> [140.529298] [ 775] 110 775 403134 407 0 407 0 184320 1991 0 prometheus-node
<6> [140.529305] [ 795] 0 795 3003 795 0 795 0 69632 271 0 smartd
<6> [140.529312] [ 808] 0 808 4542 710 0 710 0 77824 273 0 systemd-logind
<6> [140.529319] [ 814] 0 814 105579 751 0 751 0 143360 266 0 thermald
<6> [140.529326] [ 825] 0 825 117263 835 0 835 0 139264 557 0 udisksd
<6> [140.529333] [ 844] 103 844 55629 772 0 772 0 86016 470 0 rsyslogd
<6> [140.529339] [ 866] 0 866 1708 503 0 503 0 57344 61 0 cron
<6> [140.529345] [ 884] 991 884 77043 797 0 797 0 98304 224 0 polkitd
<6> [140.529350] [ 951] 0 951 27424 783 0 783 0 114688 2363 0 unattended-upgr
<6> [140.529356] [ 956] 0 956 78674 633 0 633 0 114688 565 0 upowerd
<6> [140.529362] [ 1041] 0 1041 1528 375 0 375 0 61440 34 0 agetty
<6> [140.529371] [ 1102] 0 1102 3008 535 0 535 0 65536 286 -1000 sshd
<6> [140.529377] [ 1107] 1000 1107 5024 662 0 662 0 90112 471 100 systemd
<6> [140.529383] [ 1108] 1000 1108 5291 224 0 224 0 81920 431 100 (sd-pam)
<6> [140.529389] [ 1194] 0 1194 3743 610 0 610 0 77824 427 0 sshd
<6> [140.529395] [ 1240] 1000 1240 3818 385 0 385 0 77824 588 0 sshd
<6> [140.529401] [ 1244] 1000 1244 3160100 653 2 651 0 704512 21361 -1000 java
<6> [140.529408] [ 2170] 1000 2170 1870 407 0 407 0 61440 97 0 bash
<6> [140.529413] [ 2352] 1000 2352 1494 359 0 359 0 53248 34 0 dmesg
<6> [140.529419] [ 2368] 1000 2368 4187 619 0 619 0 73728 298 0 sudo
<6> [140.529425] [ 2369] 0 2369 10049 771 0 771 0 122880 555 -1000 igt_runner
<6> [140.529431] [ 2370] 0 2370 3595816 705 0 705 0 5984256 605 1000 gem_mmap_offset
<6> [140.529433] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-3.scope,task=gem_mmap_offset,pid=2370,uid=0
<3> [140.530209] Out of memory: Killed process 2370 (gem_mmap_offset) total-vm:14383264kB, anon-rss:0kB, file-rss:2820kB, shmem-rss:0kB, UID:0 pgtables:5844kB oom_score_adj:1000
<6> [140.592408] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5637 pages left available.
<6> [141.022993] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<6> [141.120530] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<6> [141.217070] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<6> [141.679935] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<6> [142.248854] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<6> [142.651223] oom_reaper: reaped process 2370 (gem_mmap_offset), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<6> [143.402149] systemd-journald[308]: Under memory pressure, flushing caches.
<4> [143.524610] i915 0000:00:02.0: Failed to DMA remap 294897 pages
<6> [144.184819] systemd-journald[308]: Under memory pressure, flushing caches.
<7> [144.498972] i915 0000:00:02.0: [drm:intel_power_well_enable [i915]] enabling DC_off
<7> [144.500020] i915 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [i915]] Setting DC state from 02 to 00
<7> [144.511750] i915 0000:00:02.0: [drm:drm_client_dev_restore] fbdev: ret=0