LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware This Forum is for the discussion of Slackware Linux.

Old 12-02-2021, 01:36 PM   #136
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320

I just forgot to update the other place the cpuinfo_data_grabber function was called, in the machine data section. That's corrected in 3.3.09-25; it wouldn't have impacted the main CPU section. Thanks for noting that. The function name was changed to be more consistent; I should have used a global search/replace, but forgot.

Given the amount of real topology data pinxi is able to get now, I'm leaning toward making the -Ca (--cpu --admin) report include a real 'topology:' item with sub field:value pairs, rather than globbing it all into one string. I'm not sure how to format that yet; I'll try some things out. Most CPUs would see little change; this applies more to complex topologies, like the 2 CPUs in the Pine64 running with both different core counts and different speeds, and maybe soon different caches as well.

Thanks for the Pine64 data, that will help, though I'm still waiting for an Alder Lake dataset, which I was supposed to get. I suspect the guy might have given up on Alder Lake, because it's going to take a while for the native Linux kernel scheduler to fully handle those E- and P-cores. However, any other asymmetric topology should also 'just work' out of the box, so the handler should really kick in any time the topology is not fully symmetric.

Basically, the problem is that with one CPU, or more than one physical CPU, the following variations can happen, at least in theory:

1. different number of cores per cpu
2. different number of e and p cores per cpu
3. different numbers of single- and multi-threaded cores per cpu
4. different number of dies per cpu
5. different min/max frequency per cpu, and possibly per E- and P-core type; the initial data I've seen shows Alder Lake apparently running E- and P-cores at the same speed, but I suspect that will change on future CPUs, and Zen may explore this type of thing as well.
6. different numbers of physical cpus
7. each core identifies as having a separate die ID (common on ARM); this seems to me like a bug in how they report to the kernel. The current solution to this oddity: if the CPU is RISC and the core count equals the die count, set the die count to 1.
8. Simple consumer-level ARM CPUs, like the Pine64 and many others, can actually have 2 or more dies reporting as separate CPUs; that was the hack inxi had used to detect dies in ARM SoC devices. But ARM servers can, and almost certainly will, have high-core-count systems with more than 1 physical CPU, so you can't actually tell those apart.
9. A virtualized machine can have only 1 thread of a multithreaded core. I have a server like that; it was one of the first bugs inxi 3.3.09 had an initial fix for, since the CPU was falsely reported as MT when it was running a virtualized set of single-threaded cores.
10. A virtualized machine can have 1 thread from the host CPU treated as a physical CPU; I have another server that does that with qemu. This is already handled automatically: inxi accepts that those are separate single-core CPUs.
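Most of the variations above reduce to comparing a few per-physical-CPU properties. A minimal sketch of such an asymmetry check (Python for illustration only; inxi/pinxi are Perl, and the field names and Pine64-style numbers here are made up):

```python
# Illustrative only: each dict models one physical CPU as assembled from
# /proc/cpuinfo and /sys; the field names are invented for this sketch.
def is_symmetric(cpus):
    """True when every physical CPU has an identical core/thread/freq layout."""
    keys = ("cores", "threads", "min_mhz", "max_mhz")
    first = cpus[0]
    return all(cpu[k] == first[k] for cpu in cpus[1:] for k in keys)

# Pine64-style board: two CPU blocks with different core counts and max
# speeds (numbers invented for the example).
pine64_like = [
    {"cores": 4, "threads": 4, "min_mhz": 480, "max_mhz": 1416},
    {"cores": 2, "threads": 2, "min_mhz": 480, "max_mhz": 1800},
]
print(is_symmetric(pine64_like))  # False -> asymmetric handler should kick in
```

Any of the ten variations that shows up as a differing per-CPU property fails this check, which is why the handler can trigger on asymmetry generically rather than on ARM specifically.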

The hack inxi used to handle, I think, specifically the Pine64 case is in a sense what created this latest problem: it assumed that any ARM CPU detected with more than one physical ID was actually a single physical CPU with more than one die, but that assumption only applies to a subset of ARM CPUs, mostly consumer ones.

I think the ideal is to have an actual topology report for -C --admin, and give string versions of varying complexity for -C, -b, and pinxi short form.

I suspect the coming CPU eras are going to keep morphing away from symmetric toward asymmetric topologies, if those prove more efficient or better suited to the tasks.

One thinks of Tesla's Dojo extreme machine-learning CPU architecture at the outer limit of what is happening with heavy-duty CPUs (not that inxi can run on that, though in theory it could, since I'm sure they run Linux), the variety of architectures we're already finding on consumer ARM CPUs, and now, with Alder Lake, desktop/server CPUs.

So I suspect the more dynamic this logic and output can be, the less bug fixing I'll have to do in the future, and the better prepared inxi will be for that future.

Last edited by h2-1; 12-02-2021 at 03:56 PM.
 
3 members found this post helpful.
Old 12-02-2021, 07:18 PM   #137
JayByrd
Member
 
Registered: Aug 2021
Location: Seattle, WA
Distribution: Slackware
Posts: 302

Rep: Reputation: 310
Dusted off another oldie. Output looks good--at least, to my untrained eye.
Code:
guest@deskpro:~$ ./pinxi --version | grep ^pinxi
pinxi 3.3.09-26 (2021-12-02)

guest@deskpro:~$ ./pinxi -MCazy1
Machine:
  Type: Desktop
  System: Compaq
    product: Deskpro
    v: N/A
    serial: <superuser required>
  Chassis:
    type: 3
    serial: <superuser required>
  Mobo: Compaq
    model: 0684h
    serial: <superuser required>
  BIOS: Compaq
    v: 686P2 v2.04
    date: 08/25/2000

CPU:
  Info: Single core
    model: Pentium III (Coppermine)
    bits: 32
    type: MCP
    arch: P6 III Coppermine
    family: 6
    model-id: 8
    stepping: 6
    microcode: 8
    cache: 256 KiB
      note: check
    flags: pae sse
    bogomips: 1860
  Speed (MHz): 930
    min/max: N/A
    core:
      1: 930
  Vulnerabilities: No CPU vulnerability/bugs data available.
 
Old 12-02-2021, 07:58 PM   #138
fourtysixandtwo
Member
 
Registered: Jun 2021
Location: Alberta
Distribution: Slackware...mostly
Posts: 328

Rep: Reputation: 217
Looks like I found a bug: the E7400 should be a Wolfdale, socket 775.

Edit: dmidecode does say Socket 423, while the PDF description of the box is correct with 775.

https://www.cpu-world.com/CPUs/Core_...o%20E7400.html

Code:
# ./inxi --version | grep ^inxi && ./pinxi --version | grep ^pinxi
inxi 3.3.09-00 (2021-11-22)
pinxi 3.3.09-26 (2021-12-02)

# ./inxi -MCazy
Machine:
  Type: Desktop System: Wincomm product: WPE-793 v: N/A serial: N/A
  Mobo: Wincomm model: WEB-8892 serial: N/A BIOS: Phoenix
  v: 1.20/636G00WPE79300 date: 10/28/2009
CPU:
  Info: Dual Core model: Intel Core2 Duo E7400 socket: 423 bits: 64 type: MCP
  arch: Penryn family: 6 model-id: 17 (23) stepping: A (10) microcode: A0B
  cache: L1: 32 KiB L2: 3 MiB
  flags: ht lm nx pae sse sse2 sse3 sse4_1 ssse3 bogomips: 11199
  Speed: 1600 MHz min/max: 1600/2800 MHz base/boost: 2793/2793 volts: 1.2 V
  ext-clock: 266 MHz Core speeds (MHz): 1: 1600 2: 1600
  Vulnerabilities: Type: itlb_multihit status: KVM: Vulnerable
  Type: l1tf mitigation: PTE Inversion
  Type: mds
  status: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
  Type: meltdown mitigation: PTI
  Type: spec_store_bypass status: Vulnerable
  Type: spectre_v1
  mitigation: usercopy/swapgs barriers and __user pointer sanitization
  Type: spectre_v2
  mitigation: Full generic retpoline, STIBP: disabled, RSB filling
  Type: srbds status: Not affected
  Type: tsx_async_abort status: Not affected

# ./pinxi -MCazy1
Machine:
  Type: Desktop
  System: Wincomm
    product: WPE-793
    v: N/A
    serial: N/A
  Mobo: Wincomm
    model: WEB-8892
    serial: N/A
  BIOS: Phoenix
    v: 1.20/636G00WPE79300
    date: 10/28/2009

CPU:
  Info: Dual core
    model: Intel Core2 Duo E7400
    socket: 423
    bits: 64
    type: MCP
    arch: Core Penryn
    family: 6
    model-id: 17 (23)
    stepping: A (10)
    microcode: A0B
    cache:
      L1: 128 KiB
        desc: d-2x32 KiB; i-2x32 KiB
      L2: 3 MiB
        desc: 1x3 MiB
    flags: ht lm nx pae sse sse2 sse3 sse4_1 ssse3
    bogomips: 5599
  Speed (MHz):
    avg: 1600
    min/max: 1600/2800
    base/boost: 2793/2793
    volts: 1.2 V
    ext-clock: 266 MHz
    cores:
      1: 1600
      2: 1600
  Vulnerabilities:
    Type: itlb_multihit
      status: KVM: Vulnerable
    Type: l1tf
      mitigation: PTE Inversion
    Type: mds
      status: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
    Type: meltdown
      mitigation: PTI
    Type: spec_store_bypass
      status: Vulnerable
    Type: spectre_v1
      mitigation: usercopy/swapgs barriers and __user pointer sanitization
    Type: spectre_v2
      mitigation: Full generic retpoline, STIBP: disabled, RSB filling
    Type: srbds
      status: Not affected
    Type: tsx_async_abort
      status: Not affected

# cat /proc/cpuinfo 
processor  : 0
vendor_id  : GenuineIntel
cpu family : 6
model   : 23
model name : Intel(R) Core(TM)2 Duo CPU     E7400  @ 2.80GHz
stepping : 10
microcode  : 0xa0b
cpu MHz  : 1600.018
cache size : 3072 KB
physical id : 0
siblings : 2
core id  : 0
cpu cores  : 2
apicid  : 0
initial apicid : 0
fpu     : yes
fpu_exception  : yes
cpuid level : 13
wp     : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs    : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 5599.56
clflush size   : 64
cache_alignment : 64
address sizes  : 36 bits physical, 48 bits virtual
power management:

processor  : 1
vendor_id  : GenuineIntel
cpu family : 6
model   : 23
model name : Intel(R) Core(TM)2 Duo CPU     E7400  @ 2.80GHz
stepping : 10
microcode  : 0xa0b
cpu MHz  : 1600.023
cache size : 3072 KB
physical id : 0
siblings : 2
core id  : 1
cpu cores  : 2
apicid  : 1
initial apicid : 1
fpu     : yes
fpu_exception  : yes
cpuid level : 13
wp     : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs    : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 5599.56
clflush size   : 64
cache_alignment : 64
address sizes  : 36 bits physical, 48 bits virtual
power management:

Last edited by fourtysixandtwo; 12-02-2021 at 08:04 PM.
 
Old 12-02-2021, 08:04 PM   #139
TheTKS
Member
 
Registered: Sep 2017
Location: Ontario, Canada
Distribution: Slackware, X/ubuntu, OpenBSD, OpenWRT
Posts: 361

Rep: Reputation: 243
h2-1, hope this helps. Let me know if you would like me to post anything from cpu-world.com.

Just updated Slackware64 -current to kernel 5.15.6 and packages to Dec 1.

Code:
# inxi --version | grep ^inxi
inxi 3.3.09-00 (2021-11-22)
Code:
# pinxi --version | grep ^pinxi
pinxi 3.3.09-25 (2021-12-02)
Code:
# inxi -MCazy
Machine:
  Type: Desktop System: ASUSTeK product: K30BF_M32BF_A_F_K31BF_6 v: N/A
  serial: <filter>
  Mobo: ASUSTeK model: K30BF_M32BF_A_F_K31BF_6 v: Rev X.0x serial: <filter>
  UEFI: American Megatrends v: 0703 date: 02/15/2017
CPU:
  Info: Quad Core model: AMD A10-7800 Radeon R7 12 Compute Cores 4C+8G
  socket: FM2 (FM2+) note: check bits: 64 type: MCP arch: Steamroller
  family: 15 (21) model-id: 30 (48) stepping: 1 microcode: 6003106 cache:
  L1: 448 KiB L2: 8 MiB
  flags: avx ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
  bogomips: 27930
  Speed: 1394 MHz min/max: 1400/3500 MHz base/boost: 3500/3500 boost: enabled
  volts: 1.3 V ext-clock: 100 MHz Core speeds (MHz): 1: 1480 2: 1625 3: 2044
  4: 1831
  Vulnerabilities: Type: itlb_multihit status: Not affected
  Type: l1tf status: Not affected
  Type: mds status: Not affected
  Type: meltdown status: Not affected
  Type: spec_store_bypass
  mitigation: Speculative Store Bypass disabled via prctl and seccomp
  Type: spectre_v1
  mitigation: usercopy/swapgs barriers and __user pointer sanitization
  Type: spectre_v2
  mitigation: Full AMD retpoline, STIBP: disabled, RSB filling
Code:
# pinxi -MCazy1
Machine:
  Type: Desktop
  System: ASUSTeK
    product: K30BF_M32BF_A_F_K31BF_6
    v: N/A
    serial: <filter>
  Mobo: ASUSTeK
    model: K30BF_M32BF_A_F_K31BF_6
    v: Rev X.0x
    serial: <filter>
  UEFI: American Megatrends
    v: 0703
    date: 02/15/2017

CPU:
  Info: 4-Core
    model: AMD A10-7800 Radeon R7 12 Compute Cores 4C+8G
    socket: FM2 (FM2+)
      note: check
    bits: 64
    type: MCP
    arch: Steamroller
    family: 15 (21)
    model-id: 30 (48)
    stepping: 1
    microcode: 6003106
    cache:
      L1: 256 KiB
        desc: d-4x16 KiB; i-2x96 KiB
      L2: 4 MiB
        desc: 2x2 MiB
    flags: avx ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
    bogomips: 6982
  Speed (MHz):
    avg: 1435
    high: 1494
    min/max: 1400/3500
    base/boost: 3500/3500
    boost: enabled
    volts: 1.3 V
    ext-clock: 100 MHz
    cores:
      1: 1455
      2: 1391
      3: 1402
      4: 1494
  Vulnerabilities:
    Type: itlb_multihit
      status: Not affected
    Type: l1tf
      status: Not affected
    Type: mds
      status: Not affected
    Type: meltdown
      status: Not affected
    Type: spec_store_bypass
      mitigation: Speculative Store Bypass disabled via prctl and seccomp
    Type: spectre_v1
      mitigation: usercopy/swapgs barriers and __user pointer sanitization
    Type: spectre_v2
      mitigation: Full AMD retpoline, STIBP: disabled, RSB filling
    Type: srbds
      status: Not affected
    Type: tsx_async_abort
      status: Not affected

Last edited by TheTKS; 12-02-2021 at 08:18 PM.
 
Old 12-02-2021, 08:13 PM   #140
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124
Did you get the debug file which "pinxi" uploaded to your fileserver (I think on Nov 30)?

Quote:
Originally Posted by h2-1 View Post
By the way, something that does not appear to be working in general is the die handler, I forgot about that one.

I know the core duo quad above is two dies, for example.

Can you paste: pinxi -C --debug 39 --dbg 8
on pastebin.com or somewhere and I'll check if that data is just not there. I know the core duo quadro had two dies, for sure, with 1 L2 cache per die, they stuck together two core duos basically.

Last edited by baumei; 12-02-2021 at 08:14 PM.
 
Old 12-02-2021, 09:47 PM   #141
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
baumei, yes, I've been grabbing those as they appear, thanks.

TheTKS, that AMD A10-7800 Radeon R7 12 Compute Cores 4C+8G shows something I hadn't even realized inxi was doing oddly:

Code:
inxi: Speed: 1394 MHz min/max: 1400/3500 MHz base/boost: 3500/3500 boost: enabled
  volts: 1.3 V ext-clock: 100 MHz Core speeds (MHz): 1: 1480 2: 1625 3: 2044
  4: 1831

pinxi: Speed (MHz):
    avg: 1435
    high: 1494
    min/max: 1400/3500
    base/boost: 3500/3500
    boost: enabled
    volts: 1.3 V
    ext-clock: 100 MHz
    cores:
      1: 1455
      2: 1391
      3: 1402
      4: 1494
It's subtle until you actually look at what inxi is saying about CPU speeds: it's listing a speed below any of the core speeds listed, at which point one wonders where it got that speed from in the first place. I thought those were the highest speed found, but here you can see it's not related to any of the listed speeds at all.

It also looks like the optimizations I did to get the most accurate possible core frequencies are working quite well. You can see in the inxi output that the speeds are all over the place, but with pinxi they had just enough time to settle back down to the system baseline, or closer to it, which is what I was hoping to see. The idea, of course, is not to see what the system does while it runs this much data parsing and grabbing internally with pinxi, but what it was doing before you ran pinxi/inxi, to give you an idea of the actual state of the system. I'd used a variety of function calls and internal tools to read that data in inxi, but in pinxi it's all coded to use the absolute minimum of system work to get the speed data; all further processing happens after the data is grabbed. Your case really highlights that this seems to be working as intended, actually a bit better than hoped for. How well this works is, I think, CPU dependent, and also depends on how the CPU's frequency scaling works and how quickly it changes speeds to handle lighter or heavier workloads.

I like averaging, and showing the highest if you want to know that; it seems much more useful. It always slightly bugged me that inxi would default to showing, usually (but not in your sample case, which is even odder), the highest speed found, but I never really thought about it. That's a side effect of inxi having been born in the days of single-core, then dual-core, CPUs, where there wasn't much difference involved.
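The avg/high pair is simple to derive once the per-core speeds are read. A sketch (Python for illustration, not pinxi's Perl; integer truncation of the mean is an assumption, though it matches the avg: 1435 shown for the sample above):

```python
# Summarize per-core MHz readings as the new pinxi output does: an average
# plus the highest core, rather than one ambiguous "Speed:" number.
def speed_summary(core_mhz):
    return {
        "avg": int(sum(core_mhz) / len(core_mhz)),  # truncated mean
        "high": max(core_mhz),
    }

# Core speeds from the A10-7800 pinxi sample above.
print(speed_summary([1455, 1391, 1402, 1494]))  # {'avg': 1435, 'high': 1494}
```

Both reported values are then traceable to the listed core speeds, unlike the old single value.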

I've hit a kind of mental roadblock in terms of how to present the data in complex scenarios. pinxi now has the data internally, and there are several potential ways to present it in very verbose forms, but for some reason I'm having difficulty deciding how to situate the data on the line. I think that might be because this is such an old feature that, try as I might, I'm super used to seeing it a certain way.

But internally it can now show:

* cores per cpu
* threads per cpu
* threads per core
* min frequency per cpu, actually 1 or more minimums should the cpu have more than one possible min freq
* max frequency per cpu, same deal as min
* number of dies per cpu
* number of multi and single threaded cores per cpu
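Most of those per-CPU counts fall out of grouping the /proc/cpuinfo processor blocks by their 'physical id' field. An illustrative parser (Python, not pinxi's actual code; the sample text is a trimmed two-processor cpuinfo):

```python
# Group /proc/cpuinfo processor blocks by "physical id" to count distinct
# cores and total threads per physical CPU.
SAMPLE = """\
processor : 0
physical id : 0
core id : 0
siblings : 2
cpu cores : 2

processor : 1
physical id : 0
core id : 1
siblings : 2
cpu cores : 2
"""

def per_cpu_topology(cpuinfo_text):
    cpus = {}
    for block in cpuinfo_text.strip().split("\n\n"):
        fields = dict(
            (k.strip(), v.strip())
            for k, v in (line.split(":", 1) for line in block.splitlines())
        )
        phys = fields.get("physical id", "0")
        entry = cpus.setdefault(phys, {"cores": set(), "threads": 0})
        entry["cores"].add(fields.get("core id", fields["processor"]))
        entry["threads"] += 1  # one block per logical processor
    return {p: {"cores": len(e["cores"]), "threads": e["threads"]}
            for p, e in cpus.items()}

print(per_cpu_topology(SAMPLE))  # {'0': {'cores': 2, 'threads': 2}}
```

Min/max frequencies and die counts come from /sys rather than cpuinfo, so they would be merged in per physical id afterward.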

This is a ton more information than ever before. I wonder if --admin shouldn't just get its own 'Topology:' line when enough data is found to warrant it; that would require some tests.

If that option were selected for admin output, there would be the further question of how to use some of that data for -C, -Cx, -Cxx, -Cxxx, -b, and short form (the last two I think are solid: they will show a subset of the data and call it a day).

Just thinking aloud here. It's a confusing variety of output to deal with cleanly and clearly. Maybe a 'Topology:' line is the way to go once a certain amount of data is actually available, but there are so many possible variations that it's hard to know how to handle it really cleanly.

Last edited by h2-1; 12-02-2021 at 09:58 PM.
 
1 members found this post helpful.
Old 12-03-2021, 12:47 AM   #142
fourtysixandtwo
Member
 
Registered: Jun 2021
Location: Alberta
Distribution: Slackware...mostly
Posts: 328

Rep: Reputation: 217Reputation: 217Reputation: 217
Here's the P90.

Code:
Slackware 11.0.0 
Linux p90 2.4.33.3 #1 Fri Sep 1 01:48:52 CDT 2006 i586 pentium i386 GNU/Linux

# ./inxi --version | grep ^inxi && ./pinxi --version | grep ^pinxi
inxi 3.3.09-00 (2021-11-22)
pinxi 3.3.09-26 (2021-12-02)

# ./inxi -MCazy
Machine:
  Message: No machine data: try newer kernel. Is dmidecode installed? Try -M
  --dmidecode.
Can't use an undefined value as a HASH reference at ./inxi line 9595.

# ./pinxi -MCazy1
Machine:
  Message: No machine data: try newer kernel. Is dmidecode installed? Try -M --dmidecode.

CPU:
  Info: Single core
    model: Pentium 75 - 200
    bits: 32
    type: UP
    arch: P5
    family: 5
    model-id: 2
    stepping: 4
    microcode: N/A
    cache: N/A
    flags: N/A
    bogomips: 179
  Speed (MHz): 90
    min/max: N/A
    core:
      1: 90
  Vulnerabilities: No CPU vulnerability/bugs data available.


# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 5
model           : 2
model name      : Pentium 75 - 200
stepping        : 4
cpu MHz         : 90.208
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : yes
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr mce cx8
bogomips        : 179.81
 
Old 12-03-2021, 05:44 AM   #143
carriunix
Member
 
Registered: Feb 2020
Location: Brazil
Distribution: Slackware
Posts: 45
Blog Entries: 6

Rep: Reputation: Disabled
Quote:
Originally Posted by h2-1 View Post
I'm actually curious, if you run on that system (root or sudo so dmidecode can run):
Code:
pinxi -Ca --dmidecode
What is the cache data then?
Code:
#./pinxi -Ca --dmidecode
CPU:
  Info: Dual Core model: Intel Core i5 M 430 bits: 64 type: MT MCP
  arch: Westmere family: 6 model-id: 25 (37) stepping: 2 microcode: 9 cache:
  L1: 64 KiB desc: d-2x32 KiB; i-2x32 KiB L2: 512 KiB desc: 2x256 KiB
  L3: 3 MiB desc: 1x3 MiB
  flags: ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 4521
  Speed (MHz): avg: 1463 high: 2262 min/max: 1197/2262 base/boost: 2261/2527
  boost: enabled volts: 1.3 V ext-clock: 533 MHz cores: 1: 2262 2: 1197
  3: 1197 4: 1197
  Vulnerabilities:
  Type: itlb_multihit status: Processor vulnerable
  Type: l1tf mitigation: PTE Inversion
  Type: mds status: Vulnerable: Clear CPU buffers attempted, no microcode;
  SMT vulnerable
  Type: meltdown mitigation: PTI
  Type: spec_store_bypass status: Vulnerable
  Type: spectre_v1
  mitigation: usercopy/swapgs barriers and __user pointer sanitization
  Type: spectre_v2
  mitigation: Full generic retpoline, STIBP: disabled, RSB filling
  Type: srbds status: Not affected
  Type: tsx_async_abort status: Not affected
 
Old 12-03-2021, 02:03 PM   #144
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
carriunix, that's not current pinxi. I had forgotten to get rid of the /sys desc: data for L1/L2/L3 when dmidecode is forced; that's corrected, and 3.3.09-26 should not show the desc:. I think I fixed that one yesterday sometime: I was testing --dmidecode and realized I'd forgotten to unset all the values from /sys for the dmidecode cache data.

REVIEW SO FAR

Note that forcing myself to lay all this out in words will hopefully let me figure out the last remaining step, topology, as well as make sure I'm not missing anything. If you think something is missing here, or spot a mistake, etc., now is the time to let me know!

Caches
A lot of highly unpredictable results were happening due to accepting the cpuinfo 'cache size:' as L2 cache (which, by the way, it almost always was in the past, which is why inxi had used it as the L2 cache equivalent; but that changed, as we saw in this thread already):
  1. Very old systems had only L1, and cpuinfo 'cache size:' was the L1.
  2. Most if not all AMDs that had L2 cache, whether or not they had L3, continued to use the L2 cache size (per core or per core-complex block) for the cpuinfo 'cache size:' value. This is why most AMDs tended to show the right L2 cache size with the old inxi guessing method.
  3. Intel started putting L3 cache data in cpuinfo 'cache size:', which led to almost all Intel L2 cache sizes in inxi being wrong, often dramatically wrong. The only time they were right was when the CPU had no L3 cache.
  4. Most of the time when the CPU had 1 L2 per core complex, like the old Intel Core Duos and some of the pre-Ryzen AMD generations, inxi got it wrong unless it guessed, based on the CPU family, that it was 1 L2 per 2 cores. That was always very sketchy, but it tended to be closer for AMDs than Intels, simply because AMD tends to be more consistent than Intel in how it uses family/model to identify a CPU series.
  5. It's now apparent to me that my assumption that the kernel developers create all the data in /proc/cpuinfo from real values was not correct. There are several ways CPUs present cache for cpuinfo output, and that must be left up to the CPU vendors, not the kernel programmers, or we would never have had L1 or L2 or L3 cache showing in 'cache size:' depending on the vendor. That means it was what I call 'garbage data': data provided by OEMs, which is almost always, by definition, random and unreliable. The rule there is: let them pick randomly how to fill out data fields, and they will.
  6. I believe the data in /proc/cpuinfo is very similar to the data in dmidecode: some is real data from the kernel, dynamic, live, and very trustworthy, and some is vendor strings containing logical gibberish and randomness. As with dmidecode, it's important to know which type it is before trusting it.
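The /sys-based summing that replaces the cpuinfo guess can be sketched as follows (Python for illustration only; the tuples stand in for the per-cpu cache/indexN directories, and deduplicating shared caches by their shared_cpu_list is a simplification of what a real implementation must do):

```python
# Sum per-level cache sizes from per-cpu cache entries, counting a cache
# shared between cores (same level/type/shared_cpu_list) only once.
# (level, type, size_kib, shared_cpu_list), E7400-style sample data:
caches = [
    (1, "Data", 32, "0"), (1, "Instruction", 32, "0"),
    (1, "Data", 32, "1"), (1, "Instruction", 32, "1"),
    (2, "Unified", 3072, "0-1"),  # one L2 shared by both cores...
    (2, "Unified", 3072, "0-1"),  # ...reported again from the second core
]

def cache_totals(entries):
    seen, totals = set(), {}
    for level, ctype, size, shared in entries:
        key = (level, ctype, shared)
        if key in seen:  # same physical cache, listed once per cpu dir
            continue
        seen.add(key)
        totals[f"L{level}"] = totals.get(f"L{level}", 0) + size
    return totals

print(cache_totals(caches))  # {'L1': 128, 'L2': 3072}
```

This matches the E7400 sample above (L1: 128 KiB as d-2x32 + i-2x32, L2: 3 MiB as one shared cache), where the old cpuinfo-based guess could not distinguish levels at all.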

Dies
The 'dies:' item was almost always wrong, or unreliable, because it was synthesized completely for Epyc, Ryzen, and consumer ARM SoCs like the Pine64; that data was largely useless. Almost no CPU actually identifies its dies to the Linux kernel as far as I can tell; so far the only one that does is Epyc using the first-generation Zen microarchitecture. No other CPU seems to have that data at all. I don't believe I've seen it once during this entire thread, which led me to believe the handler was broken, but no, the fact is CPUs are just not reporting their dies to the kernel. I knew, for example, that:
  • first-generation Ryzen had 2 cores per die (very briefly), subsequent generations 4 cores, then later 8 cores.
  • first-generation Epyc had 4 cores per die; unknown for subsequent Zen versions, like Zen 2, Zen 3, and the coming Zen 4.
  • the Core 2 Quad, the 4-core Intel Core 2, was more or less two Core 2 Duo dies stuck together, yet it does not report itself as having two dies, which surprised me. I was glad to see that confirmed in the previous data samples.
  • the Pine64 now reports itself as having 2 physical CPUs; previously inxi had used another weak hack to capture that specific situation and then say it had 2 dies, even though it was using the physical CPU ID, not a die ID. As far as I can tell, /proc/cpuinfo has actually never had any way to identify the CPU die, unless maybe it's via the siblings count: in theory a multithreaded Intel quad-core made of two dies should show 4, not 8, siblings in /proc/cpuinfo, but I don't know if it actually does. I think it presented itself as a single body, even though it wasn't.
  • the ARM SoC die hack would, as a side effect, have made ALL true multi-physical-CPU ARM systems report as having > 1 dies, not > 1 physical CPUs.
  • the only hack retained from the inxi die logic is this test: if a RISC CPU (all RISC types now, not just ARM) has > 1 cores and > 1 dies, and the core count matches the die count per physical CPU, set the die count to 1. That appears to be an oddity of RISC/ARM CPU reporting, so inxi will always reject that die count as probably invalid and not worth believing.
So I've moved the die output to -Cxx, because it's almost never there anyway, is largely useless, and CPUs in general are not actually telling the kernel that information, with one known exception, AMD Epyc. It should probably even be -Cxxx, since that level is for data that is rarely there, largely useless to know, or not reliable across hardware.
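The one retained die hack described above is tiny; as a sketch (Python for illustration, not the actual Perl):

```python
# RISC die-count sanity check: some ARM SoCs report a separate die id per
# core, so a die count equal to the core count is treated as bogus.
def sane_die_count(dies, cores, is_risc):
    if is_risc and dies > 1 and dies == cores:
        return 1  # almost certainly each core reporting its own "die"
    return dies

print(sane_die_count(dies=4, cores=4, is_risc=True))   # 1 (rejected)
print(sane_die_count(dies=2, cores=8, is_risc=False))  # 2 (kept, Epyc-style)
```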

PROGRESS AND TODO

Progress
  • Speed data is pretty solid with the latest redo of that logic, and seems very reliable and decent, definitely better than previous inxi.
  • Cache data is extremely good and quite reliable; if it's wrong, the CPU lied to the kernel and pinxi cannot do anything about that. Far better than previous inxi. Modern CPUs (post 2005-2008 or so) appear to have complete and full data, with desc: always available and correct. Basically, if you see the 'desc:' values now, they're going to be right, or as right as the CPU-to-kernel data itself is.
    • Almost all the legacy hacks for cache calculations are removed and inactive. Legacy mode (or testing with --force cpuinfo) will now just show 'cache: [size] note: check' and call it a day, since it can never really get that right.
    • As far as I know, almost no CPUs showed L1/L2/L3 cache data in /proc/cpuinfo. Elbrus did, and does, and I think a small handful of others showed at least L2 cache over the years, but mostly it was only Elbrus that had granular cache data in /proc/cpuinfo. The latest CPU logic still needs empirical confirmation for Elbrus CPUs.
    • The --dmidecode flag continues to completely override all previous CPU cache data if dmidecode has CPU cache data in it.
    • Everything that supported granular L1-d/L1-i data now reports it if available. For Linux, that would be fallback mode for Elbrus; for BSD, at least FreeBSD may now show L1-d and L1-i data along with the L1 total summary. Maybe. Not tested, but it might work in theory, so I figured why not. Previously, for BSD, L1-d was ignored (why? no idea, that was a mistake by me), and for Elbrus, L1-d and L1-i were added together and treated as a single L1.
    • Not sure, but I don't believe OpenBSD currently has L1/L3 cache data available; my install doesn't. It does have L2, but not in a very nice or reliable-to-parse format. BSD granular CPU data remains weak, to put it mildly, to non-existent.
  • Identifying asymmetric CPUs in the topology 'Info:' string is currently working in theory, but I still need an Alder Lake dataset so I can tweak and test that (with E-cores enabled in the BIOS and with them disabled). Again, though, I don't like the output, which would show:

    Code:
    Info: 16-core (8-mt, 8-st) model: Intel 12th Generation...
    Which isn't that bad really, and might be 'good enough' for now. As far as I can tell, if E-cores are disabled, it would just show as a regular 8-core multithreaded CPU.
  • CPU type: data is enhanced slightly and now handles two new types: MST (multi + single threads in the same physical CPU, aka Alder Lake) and AMCP (Asymmetric Multi Core Processor, aka Alder Lake). As we saw, however, the Pine64 identifies itself as two CPUs, so it does not get that type.
  • Now that I think of it, I need to add another type: AMP (Asymmetric Multi Processing: more than 1 physical CPU, but the two do not match). This is for cases like ARM SoCs with two CPUs that have different min/max speeds and/or core counts.
  • The CPU 'type:' value will actually become more important than before, since neither pinxi short form, nor pinxi -b, nor pinxi -C through -Cxxx will really show super-granular data about why an MST, AMCP, or AMP CPU is typed that way, unless it's in --admin mode or showing the Alder Lake style mt/st threads, which are not fully decided. But the 'type:' value always shows at all inxi verbosity levels by default.
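The mt/st split behind a string like '16-core (8-mt, 8-st)' can be derived from per-thread sibling lists, as exposed in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list. An illustrative sketch (Python, with an invented Alder-Lake-like layout):

```python
# Classify cores as multi- or single-threaded from per-thread sibling sets:
# threads sharing a sibling set belong to one core.
def count_mt_st(sibling_lists):
    cores = {frozenset(s) for s in sibling_lists}  # one entry per core
    mt = sum(1 for c in cores if len(c) > 1)
    return {"cores": len(cores), "mt": mt, "st": len(cores) - mt}

# Alder-Lake-like layout: 8 two-thread P-cores plus 8 single-thread E-cores
# (logical cpu numbering invented for the example).
alder = [(i, i + 8) for i in range(8)] + [(i,) for i in range(16, 24)]
print(count_mt_st(alder))  # {'cores': 16, 'mt': 8, 'st': 8}
```

With E-cores disabled, the single-thread entries disappear and the same logic yields a plain 8-core MT result, matching the expectation above.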

Topology
This leaves the somewhat thorny problem of topology data, which is a different type of problem from the others. Getting the data internally is pretty easy, and is largely complete now with one small exception, but how to arrange the output is something I have to really think about; I keep drawing a total blank on how to use this data in output. There are quite a few data types that can be used there, including:
  • number of physical cpus, including the following situations:
    • how to show if there are 2 or more physical cpus, but they are asymmetrical in core counts, like Pine64. That can include the following scenario as well.
    • how to show if there are 2 or more physical cpus, but they are asymmetrical in min/max frequencies, like many ARM SOCs, particularly on phones, that use Performance/Efficiency cores in a similar way that Alder Lake does, except they seem to have each type on a separate cpu IP block, which reports itself as a physical cpu.
    • That is, an eg. 8 core overall ARM SOC, but 2 CPU types, one slower, the other faster for max frequency, which can have the same core count per physical ID, or different. I have one of these on a phone, which makes testing easy (inxi and pinxi run on termux, use curl to grab pinxi, inxi is packaged by termux and tends to be current).
    • I believe that virtually all asymmetric physical cpus will be asymmetric specifically to leverage power saving by having different core counts and different min/max speeds, I don't see much point in having different core counts but not different speeds, why bother in that case creating two physical cpus for a low power RISC device in the first place?
    • Use a fairly neutral but short term for each set of 1 or more physical cpu structures, like 'cpus:'. Note the very unfortunate ambiguity between 'core' and 'cpu' internally in Linux, which has gotten even worse in the latest kernels, by the way. 'complexes:' could be used, but a core complex actually refers to something similar to what die referred to: a complex of cpus on the same die. For example, new Ryzen will have complexes that share 1 L3 per complex, but are all made on one mega die.
  • number of dies per physical cpu
  • number of cores per physical cpu
  • threads per core (made more complex if st/mt alder lake type topology)
    • number of multi threaded cores per physical cpu
      • How many threads per MT core in case MT+ST
    • number of single threaded cores per physical cpu
  • threads per physical cpu
  • min/max frequencies per physical cpu. It is unknown whether 1 physical cpu can have different min/max speeds, like Alder Lake performance and efficiency cores; pinxi assumes internally that it can.
  • Found during some data checks an awkward but parseable data structure in FreeBSD sysctl that 'may' finally let inxi deduce > 1 physical CPUs, and how many cores per CPU. Not confirmed whether the cores listed include hyperthreading, unfortunately. BSD cpu data remains anemic at best.
  • The long-standing issue of the BSDs not showing per-core frequency data in any reliable way remains, but that ball is in their court: if they want to provide the data in a reliable, robust, easy-to-grab way, inxi will use it; otherwise it won't try, since there is currently nothing there. That means standard unix 'everything is a file [or should be]' file-based data, like /proc/cpuinfo, /sys, etc., not some unreliable small application whose output changes every major release and probably isn't even in the core packages anyway (yes, you read some frustration there, lol).
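The per-physical-cpu counts above can all be derived from per-thread (package id, core id) pairs, which on Linux come from /sys/devices/system/cpu/cpu*/topology/. A minimal Python sketch of that derivation (pinxi itself is Perl; the sample data here is invented to model a Pine64-style asymmetric 2-package SOC, and the code is purely illustrative):

```python
from collections import defaultdict

# Hypothetical per-thread records, mirroring what
# /sys/devices/system/cpu/cpu*/topology/{physical_package_id,core_id}
# provides; values model a 2-package asymmetric ARM SOC (4 + 2 cores).
threads = [
    # (logical cpu, package id, core id)
    (0, 0, 0), (1, 0, 1), (2, 0, 2), (3, 0, 3),  # package 0: 4 cores
    (4, 1, 0), (5, 1, 1),                        # package 1: 2 cores
]

packages = defaultdict(lambda: defaultdict(int))  # pkg -> core -> thread count
for cpu, pkg, core in threads:
    packages[pkg][core] += 1

# Per-package summary: cores, total threads, and threads per core.
topo = []
for pkg in sorted(packages):
    cores = packages[pkg]
    topo.append({
        'cores': len(cores),
        'threads': sum(cores.values()),
        'tpc': max(cores.values()),
    })

# The topology is asymmetric if per-package core counts differ
# (min/max speeds would be checked the same way).
core_counts = {t['cores'] for t in topo}
print(topo)
print('symmetric' if len(core_counts) == 1 else 'asymmetric')
```

The same grouping also answers the MT/ST question per package: any core whose thread count exceeds 1 is an MT core.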

Note that the 'threads:' item will be removed in pinxi; that was just to test a concept, but I don't like how it is used. It violates the core inxi rule of having all data as key:value pairs, by packing too many types of data into one string.

If I can't come up with an elegant solution, I may just do 3.3.10 after checking all the latest system data sets above; Topology is the last item. After laying out the possible variations above, I can see why this is giving me headaches, LOL. This will be, I believe, one of the most complex data types in inxi (RAID/LOGICAL is more complex, however). Generally, once the problem can be laid out in words, a solution tends to follow, I find; I just have not discovered it yet.

I don't want to let 'the perfect be the enemy of the good' here, however, so I don't want to delay the release for too long if I can't come up with an elegant, full-featured topology output solution. I'm leaning towards highly dynamic output, based on the complexity of the available data and of the actual topology, which would range from what you see now, basically, to an entire dedicated Topology: line of data (for --admin; that's roughly how RAID/LOGICAL do it), to dedicated multiple lines for very complicated situations. But -C, -Cx, -Cxx, -Cxxx, -b, pinxi with no options, and the short form would always show something very similar to what you see now.

Possible -Ca syntax:

Code:
# ARM SOC 2x 4core phone
# so if different core counts, and/or different min/max speeds
CPU: 
 Topology: 
  cpus: 1
   cores: 4 
   freq: 300/1800
  cpus: 1
   cores: 4
   freq: 300/2200
 Info:
  model: N/A
  ...
  type: MCP AMP

# short form, -C/-b etc
CPU:
 Info: 2x 4-core 
  model: N/A
  ...
  type: MCP AMP

## PowerPC:
CPU:
 Topology:
  cpus: 1
   cores: 8
    t-core: 4
   threads: 32
 Info:
  model: POWER9 altivec
  ...
  type: MT MCP

# short form:
CPU:
 Info: 8-core
  model: POWER9 altivec
  ...
  type: MT MCP

# AMD EPYC
CPU:
 Topology:
   cpus: 2
    cores: 16
     t-core: 2
    threads: 32
    dies: 4
 Info:
  Model: AMD Epyc...
  ...
  type: MT MCP MCM SMP

# short form
CPU:
 Info: 2x 16-core
  model: AMD Ryzen...
  ...
  type: MT MCP MCM SMP

# Alder Lake
CPU:
 Topology: 
  cpus: 1
   cores: 16
    mt: 8
     t-core: 2
    st: 8
   threads: 24
 Info:
  model: Intel 12th Gen. ....
  ...
  type: MST AMCP

# possible future variant if > 1 min/max are found per physical cpu:
CPU:
 Topology: 
  cpus: 1
   cores: 16
    mt-cores: 8
     t-core: 2
     freq: 1400/4500 
    st-cores: 8
     freq: 1000/2300
   threads: 24
 Info:
  model: Intel 12th Gen. ....
  ...
  type: MST AMCP

# short form:
CPU:
 Info: 16-core (8-mt,8-st) 
  model: Intel 12th Gen..
  ...
  type: MST AMCP

# MT core duo
CPU:
 Topology:
  cpus: 1
   cores: 2
    t-core: 2
   threads: 4
 Info:
  model: Intel Core...
  ...
  type: MT MCP

# short form:
CPU:
 Info: dual core
  model: Intel Core...
 ...
  type: MT MCP 

# non mt cpu, like early core duo
CPU:
 Info: dual core
  model: Intel Core Duo
  ...
  type: MCP

# Legacy cpu with no advanced data
CPU: 
 Info: single core
  model: Intel Pentium D
  ...
  type: UP

# short form same
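Internally, mockups like the ones above are just an ordered tree of key:value pairs, indented one extra space per nesting level. A minimal Python sketch of rendering such a tree (pinxi itself is Perl; both the renderer and the data literal, taken from the Alder Lake mockup above, are purely illustrative):

```python
# Ordered (key, value, children) triples; children nest one level deeper.
# Field names follow the proposed -Ca mockups; this is not pinxi's code.
topology = [
    ('cpus', 1, [
        ('cores', 16, [
            ('mt', 8, [('t-core', 2, [])]),
            ('st', 8, []),
        ]),
        ('threads', 24, []),
    ]),
]

def render(pairs, indent=0):
    """Render nested key:value pairs, one space deeper per level."""
    lines = []
    for key, value, children in pairs:
        lines.append('%s%s: %s' % (' ' * indent, key, value))
        lines.extend(render(children, indent + 1))
    return lines

print('\n'.join(render(topology, indent=2)))
```

Run as-is, this reproduces the indented Topology: body of the Alder Lake example, starting at "  cpus: 1"; a wide (non -y1) layout would simply join the same pairs onto one line instead of indenting.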

Last edited by h2-1; 12-03-2021 at 06:24 PM.
 
Old 12-03-2021, 03:38 PM   #145
carriunix
Member
 
Registered: Feb 2020
Location: Brazil
Distribution: Slackware
Posts: 45
Blog Entries: 6

Rep: Reputation: Disabled
Quote:
Originally Posted by h2-1 View Post
carriunix, thats not current pinxi...
Oh, man... I'm sorry. I forgot that you were working on it "as we speak". I will upgrade pinxi and post again... some day, next week... hehehe. Sorry about that!
 
Old 12-03-2021, 03:49 PM   #146
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
No worries, it gets frenetic during this stuff; I'm doing almost non-stop upgrades. The only time you might see a lag is when I am shooting the test pinxi version to my own server and grabbing it from there, not committing it to github pinxi. The true 'bleeding edge' branch of pinxi is fetched using -U 3, not -U; that grabs it from my server, which I update with a small script as I develop. But -U 3 is NOT meant for normal use, it's 100% development. Sometimes, if you see a long delay and try -U 3, you might find stuff has changed quite a bit, or a bunch of debugger output suddenly sprays out on screen, since that's what I'm testing on many machines at that moment.
 
Old 12-04-2021, 12:53 AM   #147
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
First working prototype of something that might be the final logic/output:

pinxi 3.3.09-27

Code:
# 2 physical cpu server
pinxi -Cay1 
CPU:
  Topology:
    cpus: 2
      cores: 16
        tpc: 2
      threads: 32
      dies: 4
  Info:
    model: AMD EPYC 7281
    bits: 64
    type: MT MCP MCM SMP
    arch: Zen

# standard desktop ryzen
CPU:
  Topology:
    cpus: 1
      cores: 6
        tpc: 2
      threads: 12
  Info:
    model: AMD Ryzen 5 2600
    bits: 64
    type: MT MCP
    arch: Zen+
    family: 17 (23)

# PPC 4 thread per core system
Topology:
    cpus: 1
      cores: 8
        tpc: 4
      threads: 32
  Info:
    model: POWER9 altivec supported
    bits: 64
    type: MT MCP
    arch: ARM

# complex arm cpu group:
CPU:
  Topology:
    variant:
      cpus: 1
        cores: 4
        min/max: 300/1805
    variant:
      cpus: 1
        cores: 2
        min/max: 300/2016
  Info:
    model: AArch64
    bits: 64
    type: MCP AMP
    arch: aarch64

# nothing complex has been detected, no MT, no different physical cpu types or structures
CPU:
  Info: 2x 6-core
    model: Intel Xeon E5-2630 v2
    bits: 64
    type: MCP SMP
    arch: Ivy Bridge
    family: 6
I'll let this kick around a bit, this is my first draft of the final results.

No Topology data shows for non-MT or basic cpu setups, because there would be no added information: the thread and core counts would be the same, and the rest is the same.

This will only show items where they're worth noting; for example, it has to detect > 1 die to show dies, and has to detect variations in min/max frequencies or core counts per physical cpu when there is > 1 physical cpu.
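That "only show when it adds information" rule can be sketched as a simple filter. This is a guess at the logic based on the examples in this post, not pinxi's actual implementation (the function name and parameters are hypothetical):

```python
def topology_fields(cpus, cores, tpc, dies, variants_differ):
    """Return only the topology items worth printing; an empty dict
    means no Topology: line at all. Hypothetical logic modeled on the
    behavior described above, not pinxi's real code."""
    fields = {}
    # Symmetric, single-threaded setups add nothing over '2x 6-core'.
    if tpc > 1 or dies > 1 or variants_differ:
        fields['cpus'] = cpus
        fields['cores'] = cores
        if tpc > 1:
            fields['tpc'] = tpc
            fields['threads'] = cores * tpc
        if dies > 1:
            fields['dies'] = dies
    return fields

# Symmetric 2x 6-core Xeon without MT: nothing worth showing.
print(topology_fields(cpus=2, cores=6, tpc=1, dies=1, variants_differ=False))
# EPYC 7281-style server: full topology.
print(topology_fields(cpus=2, cores=16, tpc=2, dies=4, variants_differ=False))
```

The first call prints an empty dict, matching the Xeon E5-2630 example above that falls back to the plain "Info: 2x 6-core" form.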

This seems ok, roughly.

Have to document it but will let it settle for a day or so before finalizing it, unless something else comes up.

Note the thread count per core key name, 'tpc:'. That stands for 'threads per core'. It's an abbreviation that makes some sense; everything else I tried looked odd. The admin rule is RTFM, that is, the manual will explain things; --admin options don't have to be crystal clear or user friendly, but they do have to be accurate.

Not bad for a first attempt I think, might even be good enough.

Note that for cpus with different topologies, I tried a few different layouts, but without the 'variant:' header for each variant, it was very unclear and hard to read. Very few cpus in the x86/amd64 world will have variants; that's going to be a RISC thing, ARM mostly, and I think mostly SOC cpus where they want to cram as much energy saving as possible into one place.

Unknown: what happens if an arm cpu has 2 dies but reports as 1 physical cpu but has 2 different frequency min/max ranges?

Forcing legacy mode, minus a few features:

Code:
# desktop ryzen
pinxi -Ca --force cpuinfo
CPU:
  Info: 6-core model: AMD Ryzen 5 2600 bits: 64 type: MT MCP arch: Zen+
  family: 17 (23) model-id: 8 stepping: 2 microcode: 8008204 cache: 512 KiB
  note: check

# power9 powerpc workstation
pinxi -Cay  --fake cpu --arm --force cpuinfo
CPU:
  Info: 32-core model: POWER9 altivec supported bits: 64 type: MCP arch: ARM
  family: N/A model-id: N/A

# 2 cpu epyc server
pinxi -Cay --force cpuinfo
CPU:
  Info: 2x 16-core model: AMD EPYC 7281 bits: 64 type: MT MCP MCM SMP
  arch: Zen family: 17 (23) model-id: 1 stepping: 2 microcode: 8001250
  cache: 2x 512 KiB (1024 KiB) note: check
With --force cpuinfo, all use of /sys is removed, so it's not exactly what legacy mode would show in all cases, but it's fairly close. It's good for finding bugs and behavior on older systems, the BSDs, etc., since those tend to have similar types of data, give or take.

One non-intuitive thing with -Ca is that it's not clear that the cpus: item owns the following sub-items; that is, if there are 2 cpus in the group, the cores:, dies:, threads:, etc. values refer to each cpu, not to all of them combined.

In that sense, the '2x 6-core' form is clearer. I'll see if a better idea for -Ca comes to me over the next days or nights. However, all the per-type data should be per cpu, not the total; that is, cores per cpu, dies per cpu, threads per cpu, etc. That's mostly a question of semantics, not of the data, however.

Topology with -Cay1 is very clear, however; the goal would be to get it as clear in the wide output format as well.

Tentatively adding an 'x' when it is not -y1 output format, like so:
Code:
Topology: cpus: 2x cores: 6 tpc: 2 threads: 12
That makes it similar to the short form, and to the L caches for > 1 cpu.

Last edited by h2-1; 12-04-2021 at 02:16 AM.
 
Old 12-04-2021, 08:38 AM   #148
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
h2-1 --

Nice !

Very tidy output.

This is my Slackware 14.2 BackUp Box:

-- kjh

Code:
# uname -a

Linux bupbox 4.4.276 #2 SMP Tue Jul 20 23:24:31 CDT 2021 x86_64 Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz GenuineIntel GNU/Linux

# ./pinxi --version ; ./pinxi -Cay1

pinxi 3.3.09-27 (2021-12-03)

Copyright (C) 2008-2021 Harald Hope aka h2
Forked from Infobash 3.02: 
Copyright (C) 2005-2007 Michiel de Boer aka locsmif. 
Using Perl version: 5.022002
Program Location: /home/dld/pinxi/pinxi

Website: https://github.com/smxi/inxi or https://smxi.org/
IRC: irc.oftc.net channel: #smxi
Forums: https://techpatterns.com/forums/forum-33.html

This program is free software; you can redistribute it and/or modify it under 
the terms of the GNU General Public License as published by the Free Software 
Foundation; either version 3 of the License, or (at your option) any later 
version. (https://www.gnu.org/licenses/gpl.html) 
CPU:
  Topology:
    cpus: 1
      cores: 2
        tpc: 2
      threads: 4
  Info:
    model: Intel Core i3-4150
    socket: BGA1155 (Proc 1)
      note: check
    bits: 64
    type: MT MCP
    arch: Haswell
    family: 6
    model-id: 3C (60)
    stepping: 3
    microcode: 27
    cache:
      L1: 128 KiB
        desc: d-2x32 KiB; i-2x32 KiB
      L2: 512 KiB
        desc: 2x256 KiB
      L3: 3 MiB
        desc: 1x3 MiB
    flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
    bogomips: 6983
  Speed (MHz):
    avg: 1118
    high: 1186
    min/max: 1064/3500
    base/boost: 3500/4800
    volts: 1.4 V
    ext-clock: 100 MHz
    cores:
      1: 1064
      2: 1064
      3: 1161
      4: 1186
  Vulnerabilities:
    Type: itlb_multihit
      status: Processor vulnerable
    Type: l1tf
      mitigation: PTE Inversion
    Type: mds
      mitigation: Clear CPU buffers; SMT vulnerable
    Type: meltdown
      mitigation: PTI
    Type: spec_store_bypass
      mitigation: Speculative Store Bypass disabled via prctl and seccomp
    Type: spectre_v1
      mitigation: usercopy/swapgs barriers and __user pointer sanitization
    Type: spectre_v2
      mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling
    Type: srbds
      status: Vulnerable: No microcode
    Type: tsx_async_abort
      status: Not affected
 
Old 12-04-2021, 01:14 PM   #149
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
I'm debating moving the 'cache:' data to the 'Topology:' line, since cache is part of the topology.

Code:
CPU:
  Topology: 
    cpus: 1
      cores: 2
        tpc: 2
      threads: 4
      cache:
        L1: 128 KiB
          desc: d-2x32 KiB; i-2x32 KiB
        L2: 512 KiB
          desc: 2x256 KiB
        L3: 3 MiB
          desc: 1x3 MiB
  Info:
    model: Intel Core i3-4150
    socket: BGA1155 (Proc 1)
      note: check
    bits: 64
    type: MT MCP
    arch: Haswell
    family: 6
    model-id: 3C (60)
    stepping: 3
    microcode: 27
    flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
    bogomips: 6983
  Speed (MHz):
    avg: 1118
And/or using the same '1x 2-cores' style as the short form, like so:

Code:
CPU:
  Topology: 1x 2-cores
    tpc: 2
    threads: 4
    cache:
      L1: 128 KiB
        desc: d-2x32 KiB; i-2x32 KiB
      L2: 512 KiB
        desc: 2x256 KiB
      L3: 3 MiB
        desc: 1x3 MiB
  Info:
    model: Intel Core i3-4150
For wide output like so:

Code:
CPU:
  Topology: 1x 2-cores tpc: 2 threads: 4 cache: L1: 128 KiB
  desc: d-2x32 KiB; i-2x32 KiB L2: 512 KiB desc: 2x256 KiB
  L3: 3 MiB desc: 1x3 MiB
  Info: model: Intel Core i3-4150 ...
versus the present long form (this adds, for the long form, the 'x' to make it clearer that the items following 'cpus:' are per physical cpu and multiplied by that count, not totals; at least I'm trying to make that somewhat clearer for the long form, since -y1 already makes it clear via the indentation structure):

Code:
pinxi -Ca
CPU:
  Topology: cpus: 1x cores: 6 tpc: 2 threads: 12
  Info: model: AMD Ryzen 5 2600 bits: 64 type: MT MCP arch: Zen+
  family: 17 (23) model-id: 8 stepping: 2 microcode: 8008204 cache:
  L1: 576 KiB desc: d-6x32 KiB; i-6x64 KiB L2: 3 MiB desc: 6x512 KiB
  L3: 16 MiB desc: 2x8 MiB
However, there is value in having cpus: [physical count] cores: [core count per cpu] in terms of keeping the data as clean key:value pairs, particularly for xml/json export.
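As a quick illustration of that export point: clean key:value pairs serialize directly, whereas a globbed string like "2x 16-core" would force every consumer to re-parse it. A hedged sketch (the structure models the EPYC example above; this is not pinxi's real export code):

```python
import json

# Hypothetical topology as clean key:value pairs, modeling the
# 2x 16-core EPYC example; each value is per physical cpu.
topology = {'cpus': 2, 'cores': 16, 'tpc': 2, 'threads': 32, 'dies': 4}

# Key:value data exports to JSON (or XML) with no string parsing needed.
print(json.dumps({'Topology': topology}, indent=2))
```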

But this is all just re-arranging existing data, and doesn't pose any real challenges.

Any feedback or thoughts on these various -C --admin presentation options would be appreciated. Technically the caches are part of the topology as I understand the term.

Yesterday I found it very helpful to just create the sample outputs so I could visually see how each one looked, so maybe these will help you see if you have any preferences; it's all the same data, just a question of how to arrange it.

The downside of adding cache is that it moves the cpu model name further from the start. The upside is that it is logically more correct, and arranges the various data types according to what they actually are. Also, if more data were on the 'Topology:' line (the cache stuff, that is), then all cpu types that have advanced /sys type data could be moved to the Topology: item; currently only the ones that warrant it do that, that is, if there's no MT and no complex architecture, it shows as before.

Sample with and without that change (a virtualized instance that uses only 1 thread per core):

Code:
# as is now
CPU:
  Info: 2x 6-core model: Intel Xeon E5-2620 0 bits: 64 type: MCP SMP
  arch: Sandy Bridge family: 6 model-id: 2D (45) stepping: 7 microcode: 71A
  cache: L1: 2x 384 KiB (768 KiB) desc: d-6x32 KiB; i-6x32 KiB
  L2: 2x 1.5 MiB (3 MiB) desc: 6x256 KiB L3: 2x 15 MiB (30 MiB)

# with the change:
CPU:
  Topology: cpus: 2x cores: 6 cache: L1: 2x 384 KiB (768 KiB) 
  desc: d-6x32 KiB; i-6x32 KiB L2: 2x 1.5 MiB (3 MiB) desc: 6x256 KiB 
  L3: 2x 15 MiB (30 MiB)
  Info: model: Intel Xeon E5-2620 0 bits: 64 type: MCP SMP arch: Sandy Bridge 
  family: 6 model-id: 2D (45) stepping: 7 microcode: 71A
There was a small bug that made cache data not appear on some virtualized server instances, corrected in 3.3.09-28.

Last edited by h2-1; 12-04-2021 at 01:40 PM.
 
Old 12-04-2021, 04:46 PM   #150
fourtysixandtwo
Member
 
Registered: Jun 2021
Location: Alberta
Distribution: Slackware...mostly
Posts: 328

Rep: Reputation: 217
I think it looks odd not having the Info first; it doesn't seem logical. I agree with keeping the topology and cache together, but maybe move them below, like this?

Code:
CPU:
  Info: model: Intel Core i7-3770K socket: BGA1155 bits: 64 type: MT MCP
  arch: Ivy Bridge family: 6 model-id: 3A (58) stepping: 9 microcode: 21
  flags: avx ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
  bogomips: 6999
  Topology: cpus: 1x cores: 4 tpc: 2 threads: 8
  cache: L1: 256 KiB desc: d-4x32 KiB; i-4x32 KiB L2: 1024 KiB
  desc: 4x256 KiB L3: 8 MiB desc: 1x8 MiB
 
  

