inxi/pinxi - RAM/Memory + partitions, file systems, and drive use
A few more things came up based on some random user observations, plus things I've noticed but ignored. This gets fairly arcane, and I've hit issues here that I'm not solid on, so I've added debuggers etc. to make figuring some of these things out easier.
All these are running now in pinxi:
Code:
pinxi -U
# or, if not installed:
wget -O /usr/local/bin/pinxi smxi.org/pinxi && chmod +x /usr/local/bin/pinxi
The System RAM/Memory report: it appears in -m, in -tm (if -m is not given), and in -I (if neither -tm nor -m is given).
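For reference, the three placements (run against pinxi, same flags as inxi):
Code:
pinxi -m    # full RAM report
pinxi -tm   # top memory-using processes; carries the Memory line if -m is absent
pinxi -I    # Info line; carries the Memory line if neither -m nor -tm is given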
System RAM true total
Starting with the change to show Memory available: (that being the technical term for the RAM the system has available to it on boot, as I understand it), I started wondering if there was some way to get the actual physical total, and ended up with three methods: one is solid, based on dmidecode data; the second is a slight hack and requires superuser; and the third can be run as a regular user, assuming you have a kernel compiled with CONFIG_MEMORY_HOTPLUG enabled.
I believe Slackware kernels are compiled with this; mine has it. It's easy to check: if the directory /sys/devices/system/memory exists, your kernel supports it. That feature is for hot-plugging RAM, which nowadays is mostly useful in virtual machine use from what I gather, but also on big hardware.
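A quick way to check, assuming your kernel exposes its config (/proc/config.gz needs CONFIG_IKCONFIG_PROC; the /boot path varies by distro):
Code:
ls -d /sys/devices/system/memory 2>/dev/null || echo "no hotplug support"
zgrep CONFIG_MEMORY_HOTPLUG /proc/config.gz 2>/dev/null || grep CONFIG_MEMORY_HOTPLUG /boot/config*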
block_size_bytes is a file holding the hex number of bytes per block; the blocks themselves appear as memoryX directories. These can be online: memoryX/state == online, or memoryX/online == 1 (inxi uses the latter method because it's more efficient).
I have seen block sizes of 8000000 (128 MiB) and 80000000 (2 GiB) so far.
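To make the arithmetic concrete, here's a minimal shell sketch of that method; it is not inxi's actual code (inxi is Perl), just the same idea:
Code:
cd /sys/devices/system/memory
bs=$((16#$(cat block_size_bytes)))    # hex -> bytes per block
on=$(cat memory*/online | grep -c 1)  # count online blocks
echo "$((bs / 1024 / 1024)) MiB/block x $on blocks = $((bs * on / 1024 / 1024)) MiB"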
In ideal cases, this method returns the amount of physical RAM, but, sadly, this being the real world, I almost immediately found corner cases that broke the logic, which leads to my first question:
Why some systems have 1 or 2 extra blocks is a total mystery to me; I'm not sure what they would reference, since there is no physical RAM they can correspond to.
So that was method one: it's usually right, unless it's wrong, and when it's wrong, it's wrong by 1 or 2 blocks so far, which results in a 256 MiB to 4 GiB oversize.
To resolve this, pinxi shows note: check when the source is /sys/devices/system/memory.
--dbg 54 will show the actual sizes of what was found in /sys. --dbg 53,54 will show the raw KiB/block counts and the actual sizes.
This brought me to method 2, /proc/iomem, which I only recently became aware of as a possible data source.
This file requires superuser to read with real data; read as a regular user, its values all become 0 ranges.
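You can see the masking directly; the kernel zeroes the addresses for non-root readers:
Code:
grep 'System RAM' /proc/iomem        # as user: 00000000-00000000 : System RAM
sudo grep 'System RAM' /proc/iomem   # as root: the real ranges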
I thought this would be a usable source, but of course I immediately hit exceptions and oddities.
Code:
00000000-00000fff : Reserved
00001000-0009d3ff : System RAM
0009d400-0009ffff : Reserved
00000000-00000000 : PCI Bus 0000:00
000a0000-000dffff : PCI Bus 0000:00
000c0000-000cffff : Video ROM
000e0000-000fffff : Reserved
000f0000-000fffff : System ROM
00100000-09c3efff : System RAM
01000000-021fffff : Kernel code
02200000-02bf8fff : Kernel rodata
02c00000-02dc11bf : Kernel data
0358b000-039fffff : Kernel bss
09c3f000-09ffffff : Reserved
...
The docs say that the System RAM ranges conform to the actual physical RAM ranges, minus some reserved stuff, in particular VRAM for iGPUs (internal GPUs, that is, not standalone devices with their own RAM). So the method is to just add up the System RAM ranges, then add in any iGPU RAM found, and that should in theory be close to the total. Only it isn't always, so I've used a hack, which is not lovely: take the result, after adding in detected iGPU RAM, and round it up to the nearest integer if it's >~ 2 GiB. My first version did not add the iGPU RAM back in, but someone had a system with 1 GiB of GPU RAM reserved, yet oddly showing a total of only 544 MiB (512 + 32, we assume), which dropped the total down by 1 GiB; once I added the iGPU RAM back in, it worked again, though in that case it's rounding, say, 31.3 GiB of RAM up to 32.
As the engineers say: not ideal. As with the /sys method, I'm finding that this second method 'usually' works quite well; it now also shows iGPU RAM if present (mostly with APU/iGPU systems, I think), and that seems to be accurate.
To resolve this uncertainty, inxi uses note: est. to let the user know this isn't certainly right, but probably is.
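For reference, here's a rough gawk sketch of just the summing step (needs root; the real logic also folds in the iGPU RAM and the rounding described above):
Code:
sudo cat /proc/iomem | gawk -F'[-: ]+' '
  /^[0-9a-f]+-[0-9a-f]+ : System RAM$/ {
    sum += strtonum("0x" $2) - strtonum("0x" $1) + 1
  }
  END { printf "System RAM ranges: %.2f GiB\n", sum / 1024 ^ 3 }'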
To see what that data really is: sudo pinxi -S --dbg 54
I'm hoping this --dbg 54 will make it more obvious where the hidden RAM lives, or whether it's as simple as adding back in the Reserved RAM, though PCI and other devices also claim such space. Kernel memory is confusing and I won't pretend I understand it.
This led me to a final, absolute override, one that is right but required some refactoring to let the memory class get data from the RAM class: pinxi now just counts up all the RAM it finds with dmidecode (as root, with dmidecode installed, of course) and believes that number. This won't always work, because some systems, particularly ones with embedded RAM, won't show this info, but on systems with normal RAM sticks I believe it's always right. Hopefully.
Because this is 'real' data, no 'note:' item appears after the total:.
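The equivalent hand check, assuming dmidecode is installed (sizes print as MB or GB depending on the stick and dmidecode version):
Code:
sudo dmidecode -t memory | awk '
  /^\tSize: [0-9]+ MB/ { total += $2 }        # populated sticks, MB
  /^\tSize: [0-9]+ GB/ { total += $2 * 1024 } # newer output uses GB
  END { print total " MiB total" }'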
Obviously, the ideal is to get the amount of physical RAM present as a regular user, not superuser, but this is a significant improvement, because I hadn't ever really paid attention to the question of RAM reserved on boot (you can see this in dmesg, I believe, unless your dmesg is filled with error messages, with: dmesg | grep RAM. I think).
I don't however use dmesg as a data source in inxi because it's too unreliable in Linux.
To debug the /sys and /proc methods, you can use --dbg 53, which will show you the KiB totals for the data in each section, and, for /sys, how many online blocks were found.
This is a Liquorix/zen kernel, which is not compiled with that option, so only /proc/iomem is used.
Code:
# desktop, no igpu present, the 3rd 0 item is igpu if found
sudo pinxi -I --dbg 53
proc/iomem: $VAR1 = [
'33475437',
'71719',
'0'
];
Info:
Processes: 519 Uptime: 5d 22h 38m Memory: total: 32 GiB note: est.
available: 31.27 GiB used: 10.25 GiB (32.8%) Shell: Bash pinxi: 3.3.27-12
You'll note that: perl -E 'say 33475437/1024**2'
returns: 31.9246644973755, which is just a hair under 32 GiB (the debugger values are KiB).
One question I have: can I account for all the RAM by adding in some other line values from /proc/iomem besides reserved iGPU RAM? Perfect would be the math working out exactly and giving the actual physical RAM. _Most_ systems are working fine, but some aren't.
File systems, partitions, total disk used
I also did a big upgrade, whose specifics I'm not sure about yet, to how partitions exclude or include what to show for various file system types; that is, a full refactor of those items. So I'm still not solid on what to show and what not to show, although what to exclude from local disk used totals is obvious: anything that is an overlay type file system, an iso/archive type file system, or a distributed/remote type file system.
I hit this issue while testing some MTP stuff, i.e., Android phone mounting, and realized inxi did not know about mtp as a possible type, so I did a full redo of that, including the docs/inxi-partitions.txt file.
I basically just have a question here: do you think showing things like distributed file systems along with stuff like nfs or smb/cifs is worth it? inxi/pinxi already shows anything that uses a recognizable remote syntax, like machine:/data/dir, but there are many corner cases where the system won't get that syntax, at least I think there are; I have no access to clustered, distributed, etc. file systems.
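For reference, this is the kind of source-string test I mean; a hypothetical sketch, not inxi's actual regex:
Code:
# host:/path (nfs style) and //host/share (smb style) count as remote:
perl -E 'say shift =~ m{^[^/]+:/|^//} ? "remote" : "local"' server:/data/dir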
Overlay or stackable file systems are excluded from everything because they are just confusing.
But I decided other things might be worth revisiting. For example, inxi has never shown iso type file systems, like iso9660, since the notion was that they aren't partitions; but as I thought about it, I realized, well, inxi does show stuff like smb/cifs/nfs mounts, and I believe sshfs etc., so why should it not show all remote / iso / archive type file systems too?
This logic is largely fine and working; my only question is what inxi should or should not show under the 'Partitions:' section. It has always excluded kernel type file systems and RAM file systems, but the rest has always been slightly inconsistent.
The debuggers helped: I am now getting correct totals from /proc/iomem when I use System RAM + Reserved (including any video RAM used) + RAM buffer, with tweaks for some legacy syntaxes.
So I may be able to get rid of the note: est. once I can confirm this is correct on more hardware.
@h2-1, where should we place our findings of the output from 'pinxi -I --dbg 53' and the hardware info it was run on? The HW info can be a bit verbose.
Tia, JimL
After updating pinxi to latest, just put it here, that's easiest.
I improved the debugger, so this, run with sudo or as root, whichever:
Code:
sudo pinxi -I --dbg 53,54
will show the data.
I've gotten it to within a few thousand KiB of the actual number now, with a few legacy exceptions, but I grabbed those old /proc/iomem samples from the Red Hat website, and I don't even know if they are real values or just samples/examples. Sadly, the inxi debugger did not collect /proc/iomem data until this current pinxi, so I don't have any old real data to check and test against, which is usually how I figure these questions out.
There is one newer system someone gave me data for that unfortunately comes out a few thousand KiB OVER the real physical RAM amount, though because of how sprintf does rounding, it gets rounded down. I'm not sure which value is tipping it over the physical RAM amount.
It's really hard to get this stuff figured out because the documentation is opaque, but so far I have found these variants:
Older /proc/iomem did not use indentation as consistently, and did not always place video RAM or system ROM in Reserved blocks; that's now handled in the latest pinxi.
Reading up:
ACPI Tables are in RAM
System RAM is RAM, of course.
Video RAM/iGPU on modern systems is in a Reserved block, but not always on legacy systems; legacy should also be handled in pinxi now. That's for built-in GPUs/APUs, of course.
System ROM, like video RAM, is in a Reserved block on modern systems, but not consistently on legacy; also handled.
I realized that this goes through sprintf, which was doing some rounding, so I added a raw GiB RAM debugger.
babydr, yes, it helps. That's a good cross-section, including one that reproduces the extra memory block issue from /sys. The VM is interesting too; I hadn't thought to check those. They are different; note the almost total lack of Reserved RAM there.
I now have somewhat OK rounding running. First, anything that has more than 0.1 between the ceil and the sprintf-rounded value will show note: est. In other words, if it's too far away from the rounded result, we have to be skeptical of the result. It rounds if it's above 1.8 GiB, or if it's under 1 GiB and the MiB ceiling is divisible by 64, which means it's using memory sticks of those sizes, like 64, 128, 256, 512.
This leaves 1 GiB to 1.75 GiB unhandled, which is not very common anymore, so I'm not going to worry about it; it will just show the raw value, which, if it was a few thousand KiB off, will get rounded by sprintf anyway. There's probably some way to handle that; I'll test it.
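In rough pseudo-form, the current rule looks something like this sketch; my paraphrase, not the actual pinxi code:
Code:
perl -MPOSIX -E '
  my $gib  = shift;                    # raw GiB total from /proc/iomem
  my $ceil = POSIX::ceil($gib);
  my $mib  = POSIX::ceil($gib * 1024);
  if    ($ceil - $gib > 0.1)          { printf "%.2f GiB note: est.\n", $gib }
  elsif ($gib > 1.8)                  { say "$ceil GiB" }
  elsif ($gib < 1 && $mib % 64 == 0)  { say "$mib MiB" }
  else                                { printf "%.2f GiB\n", $gib }' 31.92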
I'm glad/sad to see your multiprocessor system manifests the issue I saw on another server, which is also multiprocessor now that I think about it:
Note the extra block found; that is the same as my test case, except it showed 130 GiB instead of 128.
Do you have any idea what can create this extra memoryX block? I've checked other servers with a lot of RAM, and they did not have this extra block or blocks in /sys/devices/system/memory/, so I don't know where they come from or what creates them, but your example shows that I was right not to trust that data and to replace it with the /proc/iomem totals.
I've now upped the accuracy of the /proc/iomem total RAM slightly, and it also rounds better.
Getting within a few KiB/MiB of the real total is decent; so far none of my test cases, or sample files, are returning wrong results.
So far this gives me the confidence to remove the note: est. in cases where the difference between the rounded and the raw sprintf number is less than 0.1.
I also tested this on some old /proc/iomem samples, and it's now working, I think, on all of them. I have one that does not add up right; it comes to 5.87 GiB, so it shows note: est., but only that one.
JayByrd, I wish I could think of some way to check whether the /sys data has that extra block, but I don't know what could predict it.
Note that these blocks can, in theory, if the system supports it, be disabled/taken offline, so you can't really deduce anything from the counts of blocks.
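A quick way to see whether any blocks are actually offline on a given box:
Code:
cat /sys/devices/system/memory/memory*/state | sort | uniq -c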
I was really hoping I could finally get a reliable no-root RAM total going, assuming the kernel is compiled to create that directory and those files, but given that I instantly found one case with an extra block, then, digging through my debugger data sets, found another, and babydr has a live one too, it clearly has something else going on.
The drag is, it's right in most cases. For /sys: 8 works too, but 4 is probably more conservative; that's assuming there are at least 4 blocks per physical stick, I think, but if that is wrong, it will just show note: check.
chrisretusn, oh, good, you found one that broke my latest assumptions, lol. That didn't take long.
I'll see if I can get that one fixed, thanks.
[update]
This is a puzzler, and may be the exception that kills this notion, or I can just live with it, I guess.
Not a bug, but an example of my assumptions not quite matching reality. This is the first example so far where the /sys total was right and the /proc/iomem total was wrong.
What's that
Code:
e0000000-febfffff : Reserved
size: 492 MiB
PCI block? I think I see the issue, roughly, maybe: those are all PCI devices and it shouldn't be showing Reserved, but then again, I don't fully understand what Reserved means; this example suggests it doesn't mean exactly what I thought.
Annoyingly, all samples are now working except this one, I believe. I'll have to look through to see if this could be filtered out; that seems to be something that normally would have shown as a PCI Bus item as the primary row container, but instead showed as Reserved.
Is one of those bus IDs an internal video device?
The issue is that it's showing as Reserved instead of PCI Bus as the primary container, but then even if I subtract the PCI bits inside of it, it still comes up some 50 MiB over.
What kind of system is that? Laptop, Intel CPU? Desktop?
To me it looks like it was supposed to say PCI Bus instead of Reserved, but this stuff is so arcane I really don't know what is supposed to be what, just what is generally what.
So this so far is the corner case.
The contents of that Reserved item don't add up to the 492 MiB, so I'll have to think on this one. In this case, the desired behavior would have been to keep the /sys total and dump this one, but that is really hard to determine. One test could be: if the /sys total is divisible by 2 and the /proc total isn't, /proc is probably wrong, but that is hackish again.
I suspected I needed more data, that's what my changelog had in it too as a note.
Either this is a bug in how the kernel is reporting that particular device, as Reserved instead of PCI Bus, or there's another layer completely.
I knew there was a risk in using ceil to boost up the RAM totals, in case one did just what yours did: be more than slightly over the real total.
I'll have to think about this.
The other oddity is that, like the VM example above, the System RAM entries were already complete at 8 GiB without adding anything to them, which makes this a new type of system, unless it's a VM.
Quote:
chrisretusn, oh, good, you found one that broke my latest assumptions, lol. That didn't take long.
Glad I could be of help.
Quote:
Not a bug, but an example of my assumptions not quite matching reality. This is the first example so far where the /sys total was right and the /proc/iomem total was wrong.
What's that
Code:
e0000000-febfffff : Reserved
size: 492 MiB
PCI block? I think I see the issue, roughly, maybe: those are all PCI devices and it shouldn't be showing Reserved, but then again, I don't fully understand what Reserved means; this example suggests it doesn't mean exactly what I thought.
Annoyingly, all samples are now working except this one, I believe. I'll have to look through to see if this could be filtered out; that seems to be something that normally would have shown as a PCI Bus item as the primary row container, but instead showed as Reserved.
Well, you're kind of over my head now. I ran this, maybe it will be useful.
Code:
# dmesg | grep -i reserved
[ 0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000cfef0000-0x00000000cfefffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000e7ffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000ffffffff] reserved
[ 0.002629] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.008590] e820: update [mem 0xcff00000-0xffffffff] usable ==> reserved
[ 0.109830] Memory: 8059868K/8387036K available (14345K kernel code, 2686K rwdata, 4476K rodata, 1936K init, 5160K bss, 326908K reserved, 0K cma-reserved)
[ 0.243569] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
[ 0.270099] Expanded resource Reserved due to conflict with PCI Bus 0000:00
[ 0.270133] hpet: 3 channels of 0 reserved for per-cpu timers
[ 0.272854] system 00:00: [io 0x1000-0x107f] has been reserved
[ 0.272860] system 00:00: [io 0x1080-0x10ff] has been reserved
[ 0.272863] system 00:00: [io 0x1400-0x147f] has been reserved
[ 0.272866] system 00:00: [io 0x1480-0x14ff] has been reserved
[ 0.272869] system 00:00: [io 0x1800-0x187f] has been reserved
[ 0.272872] system 00:00: [io 0x1880-0x18ff] has been reserved
[ 0.272876] system 00:00: [mem 0xfefe0000-0xfefe01ff] has been reserved
[ 0.272883] system 00:00: [mem 0xfefe1000-0xfefe10ff] has been reserved
[ 0.272955] system 00:01: [io 0x04d0-0x04d1] has been reserved
[ 0.272959] system 00:01: [io 0x0800-0x087f] has been reserved
[ 0.272963] system 00:01: [io 0x0295-0x0296] has been reserved
[ 0.272966] system 00:01: [io 0x0290-0x0294] has been reserved
[ 0.273889] system 00:06: [mem 0xe0000000-0xe7ffffff] could not be reserved
[ 0.274040] system 00:07: [mem 0x000f0000-0x000f7fff] could not be reserved
[ 0.274044] system 00:07: [mem 0x000f8000-0x000fbfff] could not be reserved
[ 0.274048] system 00:07: [mem 0x000fc000-0x000fffff] could not be reserved
[ 0.274051] system 00:07: [mem 0xcfee0000-0xcfeeffff] could not be reserved
[ 0.274055] system 00:07: [mem 0xffff0000-0xffffffff] has been reserved
[ 0.274058] system 00:07: [mem 0x00000000-0x0009ffff] could not be reserved
[ 0.274062] system 00:07: [mem 0x00100000-0xcfedffff] could not be reserved
[ 0.274065] system 00:07: [mem 0xcfef0000-0xcfefffff] has been reserved
[ 0.274068] system 00:07: [mem 0xcff00000-0xcfffffff] could not be reserved
[ 0.274072] system 00:07: [mem 0xfec00000-0xfec00fff] could not be reserved
[ 0.274075] system 00:07: [mem 0xfee00000-0xfee00fff] has been reserved
Quote:
Is one of those bus IDs an internal video device?
The issue is that it's showing as Reserved instead of PCI Bus as the primary container, but then even if I subtract the PCI bits inside of it, it still comes up some 50 MiB over.
Could be, I do have an internal video device (NVIDIA) on the motherboard. I don't use it.
Quote:
What kind of system is that? Laptop, Intel CPU? Desktop?
To me it looks like it was supposed to say PCI Bus instead of Reserved, but this stuff is so arcane I really don't know what is supposed to be what, just what is generally what.
So this so far is the corner case.
The contents of that Reserved item don't add up to the 492 MiB, so I'll have to think on this one. In this case, the desired behavior would have been to keep the /sys total and dump this one, but that is really hard to determine. One test could be: if the /sys total is divisible by 2 and the /proc total isn't, /proc is probably wrong, but that is hackish again.
I suspected I needed more data, that's what my changelog had in it too as a note.
Either this is a bug in how the kernel is reporting that particular device, as Reserved instead of PCI Bus, or there's another layer completely.
I knew there was a risk in using ceil to boost up the RAM totals, in case one did just what yours did: be more than slightly over the real total.
I'll have to think about this.
The other oddity is that, like the VM example above, the System RAM entries were already complete at 8 GiB without adding anything to them, which makes this a new type of system, unless it's a VM.