Result: 8 Warning(s)
git-log-oneline i915_display_info3 igt_runner3 runtimes3 results3.json results3-i915-load.json guc_logs3.tar i915_display_info_post_exec3 boot3 dmesg3
| Detail | Value |
|---|---|
| Duration | unknown |
| Hostname | shard-mtlp-5 |
| Igt-Version | IGT-Version: 2.3-g6b597f008 (x86_64) (Linux: 7.0.0-rc3-CI_DRM_18137-g3ccc0e340459+ x86_64) |
| Out | Using IGT_SRANDOM=1773415592 for randomisation Opened device: /dev/dri/card0 Starting subtest: clear-via-pagefault Starting dynamic subtest: smem0 [thread:3022] Checked 1868316 / 2777729 pages [thread:3020] Checked 1961362 / 3331607 pages [thread:3021] Checked 364531 / 888958 pages [thread:3023] Checked 682717 / 685297 pages [thread:3019] Checked 851854 / 1300516 pages [thread:3024] Checked 1902315 / 3728306 pages [thread:3025] Checked 750259 / 1275859 pages [thread:3026] Checked 735038 / 1891976 pages [thread:3027] Checked 885152 / 1283516 pages [thread:3029] Checked 924990 / 1402869 pages |
| Err | Starting subtest: clear-via-pagefault Starting dynamic subtest: smem0 |
| Dmesg | (kernel log below) |
<6> [501.265153] Console: switching to colour dummy device 80x25
<6> [501.265673] [IGT] gem_mmap_offset: executing
<7> [501.276204] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<7> [501.278923] i915 0000:00:02.0: [drm:i915_drop_caches_set [i915]] Dropping caches: 0x000001dc [0x000001dc]
<7> [501.298641] i915 0000:00:02.0: [drm:i915_gem_open [i915]]
<6> [501.299976] [IGT] gem_mmap_offset: starting subtest clear-via-pagefault
<6> [501.300572] [IGT] gem_mmap_offset: starting dynamic subtest smem0
<4> [502.205823] ------------[ cut here ]------------
<3> [502.205833] UBSAN: shift-out-of-bounds in drivers/iommu/intel/iommu.h:1097:31
<3> [502.205865] shift exponent 65 is too large for 64-bit type 'long long unsigned int'
<4> [502.205874] CPU: 2 UID: 0 PID: 3024 Comm: gem_mmap_offset Tainted: G S U L 7.0.0-rc3-CI_DRM_18137-g3ccc0e340459+ #1 PREEMPT(lazy)
<4> [502.205876] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER, [L]=SOFTLOCKUP
<4> [502.205876] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [502.205878] Call Trace:
<4> [502.205878] <TASK>
<4> [502.205879] dump_stack_lvl+0x91/0xf0
<4> [502.205884] dump_stack+0x10/0x20
<4> [502.205885] __ubsan_handle_shift_out_of_bounds+0x133/0x2e0
<4> [502.205888] ? lock_acquire+0xc4/0x2f0
<4> [502.205891] ? cache_tag_flush_range+0x76/0x3d0
<4> [502.205896] cache_tag_flush_iotlb.cold+0x1f/0x5d
<4> [502.205899] cache_tag_flush_range+0x2fb/0x3d0
<4> [502.205904] intel_iommu_tlb_sync+0x26/0x50
<4> [502.205906] iommu_unmap+0x72/0xa0
<4> [502.205910] iommu_map_nosync+0x158/0x340
<4> [502.205914] iommu_map_sg+0xd9/0x1d0
<4> [502.205917] iommu_dma_map_sg+0x267/0x630
<4> [502.205922] __dma_map_sg_attrs+0x1c6/0x2a0
<4> [502.205925] dma_map_sg_attrs+0xe/0x30
<4> [502.205927] i915_gem_gtt_prepare_pages+0x59/0x100 [i915]
<4> [502.206089] shmem_get_pages+0x7a/0x2e0 [i915]
<4> [502.206213] ____i915_gem_object_get_pages+0x3f/0x120 [i915]
<4> [502.206334] __i915_gem_object_get_pages+0xa5/0x110 [i915]
<4> [502.206460] vm_fault_cpu+0x150/0x330 [i915]
<4> [502.206576] __do_fault+0x3d/0x1b0
<4> [502.206580] do_fault+0x339/0x680
<4> [502.206581] ? mt_find+0x27b/0x4a0
<4> [502.206584] __handle_mm_fault+0x99f/0x1030
<4> [502.206586] ? mt_find+0x285/0x4a0
<4> [502.206591] handle_mm_fault+0x12c/0x300
<4> [502.206592] ? lock_mm_and_find_vma+0x4b/0x940
<4> [502.206594] do_user_addr_fault+0x2d3/0x8e0
<4> [502.206598] exc_page_fault+0x96/0x2c0
<4> [502.206602] asm_exc_page_fault+0x27/0x30
<4> [502.206603] RIP: 0033:0x64f65512d1a0
<4> [502.206605] Code: 0f 85 c7 00 00 00 48 83 c0 08 49 39 c7 75 eb 48 8d 45 01 4c 39 e5 74 36 48 89 c5 e9 5a ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 <f3> 0f 6f 0e 0f 29 0b f3 0f 6f 56 10 0f 29 53 10 f3 0f 6f 5e 20 0f
<4> [502.206606] RSP: 002b:0000799e5a7fdbf0 EFLAGS: 00010246
<4> [502.206608] RAX: 0000000000000000 RBX: 0000799e5a7fdc40 RCX: 0000000000000000
<4> [502.206609] RDX: 0000000000000000 RSI: 0000799c73915000 RDI: 0000000000000000
<4> [502.206609] RBP: 0000000000000000 R08: 0000000000000005 R09: 00000001013cc000
<4> [502.206609] R10: 0000000000000001 R11: 0000000000000246 R12: 00000000001d06ea
<4> [502.206610] R13: 0000799c73915000 R14: 00000000001d06eb R15: 0000799e5a7fdc80
<4> [502.206614] </TASK>
<4> [502.206615] ---[ end trace ]---
<4> [502.206621] ------------[ cut here ]------------
<3> [502.206623] UBSAN: shift-out-of-bounds in drivers/iommu/intel/iommu.h:1070:17
<3> [502.206632] shift exponent 64 is too large for 64-bit type 'long long unsigned int'
<4> [502.206639] CPU: 2 UID: 0 PID: 3024 Comm: gem_mmap_offset Tainted: G S U L 7.0.0-rc3-CI_DRM_18137-g3ccc0e340459+ #1 PREEMPT(lazy)
<4> [502.206641] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER, [L]=SOFTLOCKUP
<4> [502.206641] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [502.206641] Call Trace:
<4> [502.206641] <TASK>
<4> [502.206642] dump_stack_lvl+0x91/0xf0
<4> [502.206643] dump_stack+0x10/0x20
<4> [502.206644] __ubsan_handle_shift_out_of_bounds+0x133/0x2e0
<4> [502.206646] ? lock_acquire+0xc4/0x2f0
<4> [502.206647] ? cache_tag_flush_range+0x76/0x3d0
<4> [502.206651] qi_batch_add_dev_iotlb.part.0.cold+0x1e/0x35
<4> [502.206655] cache_tag_flush_range+0x236/0x3d0
<4> [502.206660] intel_iommu_tlb_sync+0x26/0x50
<4> [502.206661] iommu_unmap+0x72/0xa0
<4> [502.206666] iommu_map_nosync+0x158/0x340
<4> [502.206671] iommu_map_sg+0xd9/0x1d0
<4> [502.206675] iommu_dma_map_sg+0x267/0x630
<4> [502.206681] __dma_map_sg_attrs+0x1c6/0x2a0
<4> [502.206684] dma_map_sg_attrs+0xe/0x30
<4> [502.206686] i915_gem_gtt_prepare_pages+0x59/0x100 [i915]
<4> [502.206819] shmem_get_pages+0x7a/0x2e0 [i915]
<4> [502.206950] ____i915_gem_object_get_pages+0x3f/0x120 [i915]
<4> [502.207076] __i915_gem_object_get_pages+0xa5/0x110 [i915]
<4> [502.207200] vm_fault_cpu+0x150/0x330 [i915]
<4> [502.207324] __do_fault+0x3d/0x1b0
<4> [502.207326] do_fault+0x339/0x680
<4> [502.207328] ? mt_find+0x27b/0x4a0
<4> [502.207330] __handle_mm_fault+0x99f/0x1030
<4> [502.207332] ? mt_find+0x285/0x4a0
<4> [502.207338] handle_mm_fault+0x12c/0x300
<4> [502.207339] ? lock_mm_and_find_vma+0x4b/0x940
<4> [502.207342] do_user_addr_fault+0x2d3/0x8e0
<4> [502.207346] exc_page_fault+0x96/0x2c0
<4> [502.207348] asm_exc_page_fault+0x27/0x30
<4> [502.207349] RIP: 0033:0x64f65512d1a0
<4> [502.207350] Code: 0f 85 c7 00 00 00 48 83 c0 08 49 39 c7 75 eb 48 8d 45 01 4c 39 e5 74 36 48 89 c5 e9 5a ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 <f3> 0f 6f 0e 0f 29 0b f3 0f 6f 56 10 0f 29 53 10 f3 0f 6f 5e 20 0f
<4> [502.207351] RSP: 002b:0000799e5a7fdbf0 EFLAGS: 00010246
<4> [502.207351] RAX: 0000000000000000 RBX: 0000799e5a7fdc40 RCX: 0000000000000000
<4> [502.207352] RDX: 0000000000000000 RSI: 0000799c73915000 RDI: 0000000000000000
<4> [502.207352] RBP: 0000000000000000 R08: 0000000000000005 R09: 00000001013cc000
<4> [502.207353] R10: 0000000000000001 R11: 0000000000000246 R12: 00000000001d06ea
<4> [502.207353] R13: 0000799c73915000 R14: 00000000001d06eb R15: 0000799e5a7fdc80
<4> [502.207359] </TASK>
<4> [502.207359] ---[ end trace ]---
<4> [502.802019] i915 0000:00:02.0: Failed to DMA remap 1902315 pages
<4> [502.820947] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [503.451788] i915 0000:00:02.0: Failed to DMA remap 1902315 pages
<4> [503.471156] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<7> [503.724781] i915 0000:00:02.0: [drm:intel_pps_vdd_off_sync_unlocked [i915]] [ENCODER:506:DDI A/PHY A] PPS 0 turning VDD off
<7> [503.725108] i915 0000:00:02.0: [drm:intel_pps_vdd_off_sync_unlocked [i915]] [ENCODER:506:DDI A/PHY A] PPS 0 PP_STATUS: 0x80000008 PP_CONTROL: 0x00000067
<7> [503.725234] i915 0000:00:02.0: [drm:intel_power_well_disable [i915]] disabling DC_off
<7> [503.725402] i915 0000:00:02.0: [drm:skl_enable_dc6 [i915]] Enabling DC6
<7> [503.725544] i915 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [i915]] Setting DC state from 00 to 02
<4> [504.116109] i915 0000:00:02.0: Failed to DMA remap 1902315 pages
<4> [504.137025] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [504.777569] i915 0000:00:02.0: Failed to DMA remap 1902315 pages
<4> [504.798202] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [505.436904] … [529.682519] (the "i915 0000:00:02.0: Failed to DMA remap 1902315 pages" / "Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF" pair repeats 32 more times; individual timestamps elided)
<4> [535.280860] i915 0000:00:02.0: Failed to DMA remap 679356 pages
<4> [535.286896] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [537.227272] i915 0000:00:02.0: Failed to DMA remap 679356 pages
<4> [537.233888] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<4> [538.534218] i915 0000:00:02.0: Failed to DMA remap 679356 pages
<4> [538.540492] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
<6> [554.911089] Purging GPU memory, 0 pages freed, 0 pages still pinned, 5639 pages left available.
<4> [554.911483] upowerd invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
<4> [554.911499] CPU: 1 UID: 0 PID: 953 Comm: upowerd Tainted: G S U L 7.0.0-rc3-CI_DRM_18137-g3ccc0e340459+ #1 PREEMPT(lazy)
<4> [554.911501] Tainted: [S]=CPU_OUT_OF_SPEC, [U]=USER, [L]=SOFTLOCKUP
<4> [554.911502] Hardware name: Intel Corporation Meteor Lake Client Platform/MTL-P LP5x T3 RVP, BIOS MTLPFWI1.R00.3471.D91.2401310918 01/31/2024
<4> [554.911503] Call Trace:
<4> [554.911504] <TASK>
<4> [554.911506] dump_stack_lvl+0xc1/0xf0
<4> [554.911515] dump_stack+0x10/0x20
<4> [554.911517] dump_header+0x4a/0x350
<4> [554.911523] oom_kill_process+0xe1/0x4e0
<4> [554.911527] out_of_memory+0x26e/0x6a0
<4> [554.911531] __alloc_frozen_pages_noprof+0x16c1/0x1820
<4> [554.911534] ? __lock_acquire+0x43e/0x2790
<4> [554.911539] ? policy_nodemask+0x138/0x190
<4> [554.911554] alloc_pages_mpol+0x7e/0x1a0
<4> [554.911557] folio_alloc_noprof+0x65/0xf0
<4> [554.911560] filemap_alloc_folio_noprof+0x61/0x80
<4> [554.911564] __filemap_get_folio_mpol+0x204/0x360
<4> [554.911568] filemap_fault+0x127/0x16a0
<4> [554.911575] __do_fault+0x3d/0x1b0
<4> [554.911579] do_fault+0x339/0x680
<4> [554.911583] __handle_mm_fault+0x99f/0x1030
<4> [554.911586] ? lock_vma_under_rcu+0xda/0x2f0
<4> [554.911593] handle_mm_fault+0x12c/0x300
<4> [554.911596] do_user_addr_fault+0x1d6/0x8e0
<4> [554.911602] exc_page_fault+0x96/0x2c0
<4> [554.911607] asm_exc_page_fault+0x27/0x30
<4> [554.911610] RIP: 0033:0x76a9bae181bb
<4> [554.911620] Code: Unable to access opcode bytes at 0x76a9bae18191.
<4> [554.911621] RSP: 002b:00007ffe4a8de628 EFLAGS: 00010206
<4> [554.911623] RAX: 0000000000000000 RBX: 000058c35b8d6100 RCX: 000000005b9193e0
<4> [554.911624] RDX: 000058c35b9193e0 RSI: 000058c32cd70d6b RDI: 00007ffe4a8de690
<4> [554.911625] RBP: 00007ffe4a8de670 R08: 00007ffe4a8de9b0 R09: 0000000000000000
<4> [554.911625] R10: 0000000000000000 R11: 0000000000000000 R12: 00007ffe4a8de690
<4> [554.911626] R13: 000058c32cd70d6b R14: 000058c35b9193e0 R15: 000058c35b8d6100
<4> [554.911632] </TASK>
<4> [554.911633] Mem-Info:
<4> [554.911642] active_anon:133120 inactive_anon:6549041 isolated_anon:0
active_file:124 inactive_file:98 isolated_file:0
unevictable:1153277 dirty:0 writeback:15
slab_reclaimable:13061 slab_unreclaimable:87334
mapped:2174 shmem:2849338 pagetables:5072
sec_pagetables:1173 bounce:0
kernel_misc_reclaimable:0
free:50077 free_pcp:289 free_cma:0
<4> [554.911646] Node 0 active_anon:536576kB inactive_anon:26187972kB active_file:496kB inactive_file:392kB unevictable:4613108kB isolated(anon):0kB isolated(file):0kB mapped:8696kB dirty:0kB writeback:60kB shmem:11397352kB shmem_thp:8468480kB shmem_pmdmapped:0kB anon_thp:0kB kernel_stack:7792kB pagetables:20288kB sec_pagetables:4692kB all_unreclaimable? yes Balloon:0kB
<4> [554.911650] Node 0 DMA free:11272kB boost:0kB min:32kB low:44kB high:56kB reserved_highatomic:0KB free_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB zspages:0kB present:15992kB managed:15368kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
<4> [554.911655] lowmem_reserve[]: 0 1360 31497 31497 31497
<4> [554.911661] Node 0 DMA32 free:122464kB boost:0kB min:2900kB low:4284kB high:5668kB reserved_highatomic:0KB free_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:1263756kB writepending:0kB zspages:0kB present:1566976kB managed:1393660kB mlocked:0kB bounce:0kB free_pcp:888kB local_pcp:0kB free_cma:0kB
<4> [554.911664] lowmem_reserve[]: 0 0 30136 30136 30136
<4> [554.911670] Node 0 Normal free:66572kB boost:10240kB min:74888kB low:105744kB high:136600kB reserved_highatomic:2048KB free_highatomic:0KB active_anon:561152kB inactive_anon:26171588kB active_file:496kB inactive_file:392kB unevictable:3349352kB writepending:60kB zspages:0kB present:31457280kB managed:30859448kB mlocked:27124kB bounce:0kB free_pcp:268kB local_pcp:0kB free_cma:0kB
<4> [554.911674] lowmem_reserve[]: 0 0 0 0 0
<4> [554.911679] Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11272kB
<4> [554.911713] Node 0 DMA32: 36*4kB (UME) 26*8kB (UME) 38*16kB (UM) 117*32kB (UM) 352*64kB (UME) 332*128kB (UM) 206*256kB (UME) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 122464kB
<4> [554.911730] Node 0 Normal: 81*4kB (UME) 449*8kB (UM) 600*16kB (UME) 1492*32kB (UM) 83*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 66572kB
<4> [554.911746] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
<4> [554.911748] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
<4> [554.911749] 2851723 total pagecache pages
<4> [554.911750] 15 pages in swap cache
<4> [554.911752] Free swap = 6332396kB
<4> [554.911753] Total swap = 8388604kB
<4> [554.911754] 8260062 pages RAM
<4> [554.911755] 0 pages HighMem/MovableOnly
<4> [554.911756] 192943 pages reserved
<4> [554.911757] 0 pages hwpoisoned
<4> [554.911759] Memory cgroup min protection 0kB -- low protection 0kB
<6> [554.911760] Tasks state (memory values in pages):
<6> [554.911761] [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
<6> [554.911807] [ 307] 0 307 16822 858 0 858 0 114688 357 -250 systemd-journal
<6> [554.911825] [ 381] 0 381 72240 6807 4633 2174 0 110592 0 -1000 multipathd
<6> [554.911832] [ 387] 0 387 6913 600 0 600 0 73728 813 -1000 systemd-udevd
<6> [554.911839] [ 664] 998 664 4759 726 0 726 0 77824 304 0 systemd-network
<6> [554.911844] [ 687] 992 687 5400 728 0 728 0 90112 584 0 systemd-resolve
<6> [554.911848] [ 689] 997 689 22740 764 0 764 0 81920 244 0 systemd-timesyn
<6> [554.911853] [ 722] 0 722 77756 805 0 805 0 110592 200 0 boltd
<6> [554.911858] [ 723] 101 723 2479 880 0 880 0 57344 250 -900 dbus-daemon
<6> [554.911894] [ 745] 110 745 421567 423 0 423 0 188416 2026 0 prometheus-node
<6> [554.911899] [ 766] 0 766 3003 853 0 853 0 69632 270 0 smartd
<6> [554.911905] [ 773] 0 773 4543 762 0 762 0 81920 275 0 systemd-logind
<6> [554.911909] [ 774] 0 774 105579 811 0 811 0 139264 269 0 thermald
<6> [554.911914] [ 785] 0 785 117262 887 0 887 0 139264 567 0 udisksd
<6> [554.911919] [ 820] 103 820 55629 830 0 830 0 81920 601 0 rsyslogd
<6> [554.911924] [ 831] 0 831 1708 519 0 519 0 61440 62 0 cron
<6> [554.911929] [ 848] 991 848 77043 859 0 859 0 98304 225 0 polkitd
<6> [554.911933] [ 945] 0 945 27424 854 0 854 0 114688 2363 0 unattended-upgr
<6> [554.911937] [ 953] 0 953 78701 640 0 640 0 122880 581 0 upowerd
<6> [554.911942] [ 1014] 0 1014 1528 407 0 407 0 57344 35 0 agetty
<6> [554.911951] [ 1063] 0 1063 3008 567 0 567 0 65536 286 -1000 sshd
<6> [554.911956] [ 1065] 0 1065 3743 667 0 667 0 73728 428 0 sshd
<6> [554.911961] [ 1069] 1000 1069 5024 726 0 726 0 86016 471 100 systemd
<6> [554.911965] [ 1070] 1000 1070 5290 248 0 248 0 77824 431 100 (sd-pam)
<6> [554.911970] [ 1153] 1000 1153 3865 417 0 417 0 73728 603 0 sshd
<6> [554.911976] [ 1157] 1000 1157 3160614 689 3 686 0 778240 29561 -1000 java
<6> [554.911983] [ 2128] 1000 2128 1870 439 0 439 0 53248 99 0 bash
<6> [554.911987] [ 2310] 1000 2310 1494 391 0 391 0 49152 34 0 dmesg
<6> [554.911992] [ 2326] 1000 2326 4186 642 0 642 0 73728 298 0 sudo
<6> [554.911997] [ 2327] 0 2327 10075 942 0 942 0 114688 587 -1000 igt_runner
<6> [554.912016] [ 2972] 989 2972 109715 839 0 839 0 184320 582 0 fwupdmgr
<6> [554.912021] [ 2978] 0 2978 117195 888 0 888 0 233472 2509 0 fwupd
<6> [554.912026] [ 3018] 0 3018 2934014 753 0 753 0 15646720 590 1000 gem_mmap_offset
<6> [554.912028] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=upower.service,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-1.scope,task=gem_mmap_offset,pid=3018,uid=0
<3> [554.913126] Out of memory: Killed process 3018 (gem_mmap_offset) total-vm:11736056kB, anon-rss:0kB, file-rss:3012kB, shmem-rss:0kB, UID:0 pgtables:15280kB oom_score_adj:1000
<7> [555.701475] i915 0000:00:02.0: [drm:intel_power_well_enable [i915]] enabling DC_off
<7> [555.702295] i915 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [i915]] Setting DC state from 02 to 00
<7> [555.713964] i915 0000:00:02.0: [drm:drm_client_dev_restore] fbdev: ret=0
<7> [555.731344] i915 0000:00:02.0: [drm:intel_power_well_disable [i915]] disabling DC_off
<7> [555.731734] i915 0000:00:02.0: [drm:skl_enable_dc6 [i915]] Enabling DC6
<7> [555.731957] i915 0000:00:02.0: [drm:gen9_set_dc_state.part.0 [i915]] Setting DC state from 00 to 02