Old 12-12-2021, 03:09 PM   #226
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124

Hi "h2-1",

In the category "Chassis", are the two single-quotes intended?

Code:
user1@darkstar:~$ /tmp/pinxi -V | grep ^pin
pinxi 3.3.09-46 (2021-12-12)
user1@darkstar:~$ /tmp/pinxi -Mazy1
Machine:
  Type: Desktop
  System: Dell
    product: Studio Slim 540s
      v: N/A
      serial: <superuser required>
  Chassis:
    type: 3
    v: '01'
    serial: <superuser required>
  Mobo: Dell
    model: 0M017G
      v: A00
      serial: <superuser required>
  BIOS: Dell
    v: 1.1.3
    date: 08/25/2009

Last edited by baumei; 12-12-2021 at 06:04 PM.
 
Old 12-12-2021, 03:09 PM   #227
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
3.3.09-46 - in response to some good points raised re less -R, which I'd simply never been aware of, and to complete some features that either were not implemented or which I forgot to document. Also, I had briefly thought an INDENTS/--indents default of 1 was a good idea, which it wasn't, so I made the default the saner 2, but let users decide for themselves what the multiple-indent width will be.
  • -Y -2: retains color codes on redirected or piped output, but does not apply max line counts. That is:
    Code:
    pinxi -v8Y -2 | less -R
    This changed the very short-lived override for LINES_MAX from -Y 2 to -Y 3. -Y -1|unset|0|1-xxx remain the same.
  • To allow for easy testing of indent widths:
    • --indent [11-20]: change the wide-mode indent level. I don't know how I forgot to add the --indent switch. Set it permanently with INDENT in the configuration file; that has always been supported, but I forgot to document it in the man page's configuration-values section.
    • --indents [0-10]: change the level-1 main indent in wrapped mode, and the multilevel indents. This is new. Make it permanent with INDENTS, a new configuration item. These are also what apply to -y1 indents.
    • --max-wrap [70-xxx]: change the width at which the line starter stops being wrapped to its own line; the current working default is 110, though that may change based on user feedback and how forum/bug/issue reports look in the future. Since I could never remember whether it was --wrap-max or --max-wrap, both work now. Same for the permanent configuration item, which can be MAX_WRAP or WRAP_MAX, depending on how your brain likes it. Any time I find I can't remember which order a two-term long argument goes in, I tend to just make it accept either one, since it's irrelevant in any larger context. (See the example configuration snippet just after this list.)
  • Because there are a lot of new and enhanced output and filter options, those are no longer mixed in with the other sections of the help menu; they now have their own sections, Filter Options: and Output Control Options:. That makes them much easier to scan for and find, particularly if you're using -Y or less.
  • The man page was tweaked a little, and some options got alternate versions of the option name, with placeholders left pointing to the alternate name, so it's fairly easy to find the stuff no matter how you think of it.
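
To make these permanent, the configuration entries would look something like this (example values only; the user configuration file is typically ~/.config/inxi/inxi.conf, or /etc/inxi.conf system-wide, though the path may vary):
Code:
# example values only -- pick whatever widths you prefer
INDENT=12
INDENTS=2
MAX_WRAP=110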

It seems like a reasonable method to add new features, see what the immediate feedback is, and then tweak them to take those other views into account. That wasn't my plan or intention going into this process here, but it seems to be working very well. I really like the way other eyes filter and process these things and then report what they see or experience very quickly; that helps avoid a lot of out-of-the-box complaints and valid concerns, since a larger set of views than just my own is already handled here.

These are some of the biggest changes ever in the default output of inxi, so they do make me nervous, I'll be honest.

Last edited by h2-1; 12-12-2021 at 03:46 PM.
 
1 members found this post helpful.
Old 12-12-2021, 03:11 PM   #228
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
baumei, that's a good eye. Those quotes would be in the original value, but I believe leading/trailing ' could be added to the default filter tools; I'll check whether that's practical to do. Most dmi values in particular are passed through a special filter to remove the vast amounts of cr@p O.E.M.s throw in there, and that looks like a variant that would be good to filter against.
 
Old 12-12-2021, 03:20 PM   #229
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
Quote:
Does Perl have output control characters which have an effect similar to what these four "\r\n\t\t" would do in C/C++? If so, when pinxi gets "y1" on the command line, would it be reasonably possible for "pinxi" to process long lines, either on input or output, to: read over about 40 characters and then find the next space, and insert the Perl equivalent to "\r\n\t\t"?
I missed this one. The internal pinxi/inxi output generator uses a fairly complex bit of logic to create line wraps. Basically, at each key: value pair it takes the length of the previous key: value pairs in the line (minus the color-code characters, which are tricky to handle), adds the total length of the indent/indents, the key plus its ':' separator, the ' ' after it, and the value, and if the total is greater than the max width that has been set or selected, it wraps to the next line.

A subset of that logic takes a single key: value pair, and if that in itself is longer than the max width, and the value contains two or more 'words', it will collect and wrap them to the next line until it's done. That's how the full -f cpu flags report is done, and it's also how -Sa parameters is done. To avoid certain common pointless verbosity in device names, it will also strip out some common redundancies and duplicated 'words', which almost always brings the length back down to decent levels. Those are empirically known, overly verbose product names, for example, or things that commonly contain repeated terms, like product: AMD .... AMD...
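
Roughly speaking, the width accounting works something like this simplified Perl sketch (not the actual inxi code; the sub name and data layout are just illustrative):
Code:
# Simplified sketch only -- not the real pinxi/inxi wrapper. Decide, pair
# by pair, whether the next "key: value" chunk still fits on the current
# line or has to wrap to a new indented line.
use strict;
use warnings;

sub wrap_pairs {
    my ($pairs, $indent, $max_width) = @_;   # $pairs = [ ['key', 'value'], ... ]
    my @lines;
    my $line = ' ' x $indent;
    my $used = $indent;
    for my $pair (@$pairs) {
        my ($key, $value) = @$pair;
        my $chunk  = "$key: $value";
        my $length = length($chunk) + 1;     # +1 for the space before the chunk
        if ($used + $length > $max_width && $used > $indent) {
            push @lines, $line;              # line is full, start a wrapped one
            $line = ' ' x $indent;
            $used = $indent;
        }
        $line .= ' ' . $chunk;
        $used += $length;
    }
    push @lines, $line;
    return \@lines;
}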

That single-pair wrapping was the feature with the bug that was hidden by a long-since-forgotten mistake of mine, always putting parameters on its own line, something I only discovered, I think, two days ago when testing the output variations.

The only thing that does not wrap too-long lines is repos, and that's honestly just me being lazy. I may correct that today, though wrapping those long strings can create strange results, which is why I've avoided it. I did find and correct another long-standing failure, which you almost never see, but I did, luckily, on a remote system where repos are being abused grossly: they had long comments AFTER the repo value, which of course looked terrible. Those are now removed when detected, which made a fairly astonishing difference in how the output looked, given they had 50 or more repos enabled.

But re wrapping long lines in -y1: no, that to me is against the principle of having one key: value pair per line. Call it the result of too many years of trying to parse and process awful output from various tools that do just that, i.e. fail to give a single key: value pair per line when you request it. Basically, -y1 is the screen version of --output json / xml: it creates a set of key: value pairs with which the end user can do whatever they want, and since it has driven me absolutely bonkers over the years when programs decide to wrap a key's value onto the next line in what is supposed to be machine-parseable output, that's not one I'd be into changing.

I'm not going to name names, but this is a really big problem in the repos section, and in many other tools as well. This is also, by the way, why I try to avoid cramming too much into one value: if I feel it's actually starting to contain more key: value types than just the primary value, then I'll add child values (like a v: for something before it).
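
To show why one pair per line matters, here's a rough sketch of the kind of naive consumer such output enables (illustrative only, not a pinxi/inxi tool); a value wrapped onto a second line would break it immediately:
Code:
# Sketch only: read "key: value" lines (as -y1 style output provides) and
# build a hash from them. A wrapped value would simply be lost.
use strict;
use warnings;

my %data;
while (my $line = <STDIN>) {
    chomp $line;
    next unless $line =~ /^\s*([^:]+):\s*(.*)$/;
    $data{$1} = $2;
}
printf "%s => %s\n", $_, $data{$_} for sort keys %data;
Usage would be something like pinxi -Cy1 | perl parse-pairs.pl (the script name is arbitrary).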

One exception to that child-value approach is the Desktop info; there the keys were mainly not added, to keep the line short, since adding all the keys for the combined values can push the -S Desktop: line to be quite long, and desktops are really difficult to process consistently anyway.

This was also one of the core rewrite requirements: make inxi usable as a data source, which meant being much stricter about key: value pairings (except in short/basic output modes). To my understanding it is used that way now by some distros (I know of one that used it in their installer, for example, though I don't know if they still do), people, and companies, which means that effort paid off. I know, for example, that a prominent gaming company uses it to debug their Linux users' issues.

Last edited by h2-1; 12-12-2021 at 03:58 PM.
 
Old 12-12-2021, 03:42 PM   #230
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
fourtysixandtwo,
Quote:
>No need to add the kitchen sink. <insert systemd joke here>
By the way, this was something that actually really worried me for quite a while when the rewrite to Perl enabled a flood of new and enhanced features in inxi (--usb/-J and --slots, for example, were both added immediately in 2.9 > 3.0, primarily to test that the main new data generator/output handler logic was working in terms of the abstractions and classes). It was a really big philosophical change I had to deal with, from the early years' extreme brevity requirement, both in lines produced and in the maximum amount of data per line, to something much looser, longer, and more free-flowing.

The bash/gawk inxi had basically hit the language limits of that unfortunate but sadly unavoidable choice of languages many years before the rewrite, and in fact I'm still, to this day, fixing subtle and not-so-subtle errors directly caused by the restrictions and limits of those shell tools; I find them all the time. Many were simply translated to Perl during the rewrite and remain in place, waiting for refactoring (CPU was, I think, one of the biggest remaining examples of those, with the old logic preventing the solution, logic created by the limits of the original odd language combination, which was back then the only way to satisfy the 'works on everything, from any reasonable era' requirement).

But the rewrite, fortunately or unfortunately depending on your perspective, made extending and enhancing inxi not only possible but actually kind of entertaining, since I'm no longer fighting against the limits imposed by the language, just my own lack of understanding and knowledge, which can sometimes be improved if I work at it.

However, this is one reason I make sure to test inxi on old hardware (my 1998 Pentium MMX laptop, in my case) and on legacy OSes in VMs; it's too easy to get complacent and forget to make sure all the core requirements of always working, on everything, are maintained. On Linux that means at least a 2.4 kernel and Perl 5.8.0 or newer. On other operating systems the support isn't as complete, either because the data isn't there, or because the data or tools are just so unpleasant to work with that I won't do it unless I get paid, and/or because they aren't free operating systems (OSX comes to mind there, but not only them).

Last edited by h2-1; 12-12-2021 at 04:01 PM.
 
Old 12-12-2021, 04:27 PM   #231
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
baumei, 3.3.09-47 should now, in theory, strip off any leading or trailing ' or ", along with the stripping of trailing spaces the main cleaners already did. Thanks for pointing that one out; it fit in well with the just-completed internal refactor of all the cleaner/filter tools, and of how/where/when they are used.

So
Code:
'01'
becomes 01. There may be some corner-case undesired outputs from that, like:

Code:
vendor: Bob's Big Boys "best pci devices in the west"
would become
Code:
vendor: Bob's Big Boys "best pci devices in the west
But that's a very rare case; your example is much more likely. Leading/trailing spaces are also a very common bug in vendor-supplied values, I assume due to sloppy copy/pasting, so just adding ' and " to those filter rules takes care of all of them at once.

Not all data is run through the cleaners, just the stuff that experience has shown to be most likely to contain junk or unwanted characters/words, etc.

This was easier to do because I just redid/refactored the cleaner and filter utility functions to be much more consistent and predictable in terms of their naming, use, and functionalities.
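
The quote/space stripping itself boils down to something like this (a sketch only; the function name is made up, it is not pinxi's actual cleaner):
Code:
# Sketch only, not the real cleaner: trim surrounding whitespace, then a
# single leading/trailing ' or ", as described above.
use strict;
use warnings;

sub clean_value {
    my ($value) = @_;
    return $value unless defined $value;
    $value =~ s/^\s+|\s+$//g;      # strip leading/trailing spaces
    $value =~ s/^['"]|['"]$//g;    # strip one leading/trailing quote
    return $value;
}

print clean_value(q('01')), "\n";   # prints: 01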

Last edited by h2-1; 12-12-2021 at 04:35 PM.
 
Old 12-12-2021, 04:54 PM   #232
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
Wanted to make sure I didn't miss anything:

JayByrd, the latest pinxi has fixes for the secondary indentation issues, and you can play with them yourself now with the new --indent and --indents options. If you find settings you like, you can make them permanent with the configuration values INDENT and INDENTS.

mlangdn, thank you all for the massive help and energy provided; it's almost impossible, if not impossible, to succeed at stuff this complicated without it. Very motivating. Priceless, at least to me. And greatly appreciated on my end.

baumei, learning more: re the framebuffer soft-scroll removal, that was something part of my brain did register, but since I started using framebuffer console mode years ago, I had just plain forgotten that it isn't the only mode, and that this kernel change applies only to that mode.

In fact, I was just reading an article explaining how you could increase the size of your console display without using a framebuffer:
https://fvue.nl/wiki/Linux:_Terminal...umns_x_24_rows

Don't know if that method still works. But thanks for the reminder on that one; I'll check the man page to make sure that is clear. It makes obvious sense not to use framebuffers on servers, to minimize attack surface, etc.
 
Old 12-12-2021, 05:39 PM   #233
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124
Hi "h2-1",

According to my understanding, the fastest 386 Intel ever made was 33 MHz. Well --- AMD one-upped Intel and made a 40 MHz version: the Am386-40 (which is what my old computer has).

Quote:
Originally Posted by h2-1 View Post
baumei, interesting stuff, I haven't thought about a 386 since I owned one, I think it was a 386, checked release date, 1985, sounds about right.

[snip]

Since the CPU is listed at running at around 20 MHz, aka, 10x slower than my Pentium MMX, yet takes about 650 seconds to generate the --version, vs about 8 seconds on the MMX at 200MHz, you'd expect 10x slower, aka, 80 seconds, so it must be swapping, along with totally different processor architecture.

The specs say 32 MiB RAM for 386, and since that's almost exactly what pinxi/Perl need to run (usually seems to come in at around 29-32 MiB while its running), there was almost certainly heavy swapping going on there. But great extreme corner case.
This old 386 computer has no swap partition, and also has swapping disabled. So the long run time of pinxi is not caused by swapping; my guess is that it is because the processor has only 7.9 bogoMIPS. I measured the amount of memory which pinxi used and came up with about 12.3 MB --- not enough to use up all the available memory.

I notice that for running "pinxi -MCazy1":
"pinxi 3.3.09-36" took 9.3 minutes; and
"pinxi 3.3.09-46" took 8.3 minutes.
You must have made something more efficient. :-)
 
Old 12-12-2021, 05:58 PM   #234
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
baumei, lol, yes, I did. I've been optimizing and fine-tuning fairly consistently; what's funny is that those gains mainly disappear into background cpu 'noise' on modern multicore systems in terms of visible improvement, but they make a big difference on old hardware.

Lately, however, I've had to pull back slightly from my initial 'always optimize radically first' approach, toward a slight compromise: 'optimize as much as practical, but not at the expense of maintainable code'. For example, using a bunch of global scalar booleans is far more efficient than using one global hash that contains those booleans as keys, since every time I test one, Perl has to dig through the hash keys to find the one being tested; but keeping track of a herd of global booleans is a pain, so I finally started getting rid of those. That makes developing and testing way easier, but there is a definite loss in cpu cycles required to query that boolean value. Sometimes, though, I just hide ultra-efficient but totally unreadable Perl code in utility functions, after verifying that it is in fact worth implementing in terms of execution times.
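
For anyone curious about that trade-off, here's a rough, illustrative comparison using Perl's core Benchmark module (the flag names are made up; this is not inxi's code):
Code:
# Rough illustration only: compare testing a plain global scalar flag
# against looking the same flag up in a global hash, via core Benchmark.
use strict;
use warnings;
use Benchmark qw(cmpthese);

our $b_force = 1;
our %flags   = (force => 1);

cmpthese(-1, {
    scalar_flag => sub { $b_force      ? 1 : 0 },
    hash_flag   => sub { $flags{force} ? 1 : 0 },
});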

Makes me wonder if I can knock another minute off your 386 system though, lol. I suspect I could if I had a better idea of what is slowing it down.

You can see that for yourself by running:

Code:
time pinxi -MCazy1 --debug 3
which shows what takes time AFTER the options run. If you really want to see what is taking time as it happens, in slow motion, edit pinxi and uncomment lines 170 and 171 ($debugger{'level'} = 3; and set_debugger();), save pinxi, and run pinxi -MCazy1; then you'll see what takes time per function, and how long each main function takes to execute. I don't time every function, because some I want to run super fast without having to worry about whether the debugger is running or not, but inxi has timers in most of them.

"time pinxi ..." will show you, if you uncomment those debugger lines, how long it took the 386 to actually compile the Perl code from pinxi versus how long it took pinxi to run after it started.

Optimizing Perl is actually pretty fun; there are great tools for doing it, and that's what has determined the choice of methods or logic in many cases. Those are somewhat time-consuming to run, so I tend to only run them when I'm testing new core concepts or methods, or trying to resolve bottlenecks.

But on modern hardware, except for this irksome scaling_cur_freq kernel/cpu delay in generating the current thread speed, something like 90% of inxi's run time is caused by subshells; everything else is very fast. Though every time I have made a serious effort at optimization I've usually pulled in another 10% or so, which always amazes me; that's on -v8 full. The CPU refactor, I think, actually gained more than 10%, but then promptly lost most of it because it's doing so much more than before.

One gain I suspect matters on that very old hardware is being very careful to avoid creating/copying new hashes or arrays, and instead always using references. I could do that a bit more, but it makes the logic a lot harder to read in some cases, so I don't. But I got a lot of those corrected in the cpu refactor.

There are a few places where, if I replaced copied arrays/hashes with fully by-reference arrays/hashes, it would almost certainly gain noticeable time on your 386. I'll probably get to those one day, but I have to do them all at once, globally. I probably should at some point, since those are the biggest remaining low-hanging fruit in that area, and they are used heavily across the entire program.
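
To picture the copy-versus-reference difference, a small standalone example (not taken from inxi):
Code:
# Illustration only: passing an array copies every element into the sub's
# argument list; passing a reference hands over a single scalar, however
# big the array is. On slow hardware that difference adds up.
use strict;
use warnings;

sub by_copy      { my (@rows) = @_;  return scalar @rows;  }   # full copy
sub by_reference { my ($rows) = @_;  return scalar @$rows; }   # one reference

my @data = (1 .. 100_000);
by_copy(@data);          # copies 100,000 elements
by_reference(\@data);    # copies only the reference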

Last edited by h2-1; 12-12-2021 at 06:04 PM.
 
Old 12-12-2021, 06:58 PM   #235
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124
Hi "h2-1",

On the old Am386 I ran:
Code:
time pinxi -MCazy1 --debug 3
If I understood the output, then the algorithm of pinxi had an elapsed time of only 51.255272 seconds.

So, I guess about seven minutes was used in converting the Perl code into an executable for Linux.

Is this the way Perl does things: it is first 'compiled', and subsequently the executable is run? If so, then I wonder whether the executable could reasonably be cached?

----

Running "pinxi 3.3.09-47" on the Dell Studio 540s confirms that the single-quotes are gone. :-)
Code:
Machine:
  Chassis:
    type: 3
    v: 01
    serial: <superuser required>

Last edited by baumei; 12-12-2021 at 08:33 PM.
 
Old 12-12-2021, 08:46 PM   #236
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
Yes, my understanding is that Perl essentially compiles the file or files at runtime via the interpreter, and then that compiled form is what runs. I faintly remember reading about an attempt to create a Perl compiler that would create binary executables, but I don't know if that ever became a real thing or worked.

You can see that the interpreter is basically a compiler, because when you get compile errors it acts like any other compiler that failed to compile the code it was given: undeclared variables, compile errors, warnings, lots of things.

That would put your roughly 60-second time from start (the timer starts right at the beginning of the file, at the top) well within range of the 10-ish seconds my Pentium MMX 200 MHz laptop takes to run the same command. It's entirely possible that the Perl interpreter was written to take advantage of CPU/RAM functionality that did not exist at the time of the 386 CPUs, and, those not being there, it takes a long time to do things the non-optimized way. I think it takes my Pentium MMX 200 MHz about 6 seconds to compile the code before it runs, which, if it were only a matter of CPU speeds, should have taken about 30 seconds on the 386, so there has to be something else going on there.

I'm wondering if the ongoing switch to references and various other small optimizations really helps the compiler in your case since it doesn't have to track as many arrays, hashes, and variables while compiling.

There's an area here, however, that I have never looked into, and that's optimizing the Perl for the compiler itself; that is, there are probably ways of doing things that are easy for the compiler to figure out, and other ways that are not, and that could be an area that might yield some positive results, but it's also one I don't know anything about.

https://www.marcbilodeau.com/compiling-perl/
That talks about compiling a Perl program into a binary using PAR. It's a much bigger binary because it includes the various dependency modules used, and I think the interpreter itself? But I guess it can be done. It instantly loses the portability, though, and has to be compiled for each OS type/variant, so I don't see the benefit there, though it would be interesting to time the execution of the compiled version.

Unfortunately that article is more concerned with making the code inaccessible than with performance or any actual technical questions about how the compiled program performs.

https://perl.mines-albi.fr/perl5.8.5...5/PAR/FAQ.html
So yeah, it could be tested; it's actually easier to test on very old hardware, which is much more likely to show meaningful time differences.

Checking on my Ryzen, it looks like compiling takes about 130 ms, roughly. Execution takes 260 ms (subtracting the 300 ms sleep time if --sleep 0 is not used) for -MCa. Then subtract about 130 ms for the 12 cpu scaling_cur_freq reads at ~10 ms/core, and it takes roughly 130 ms to compile pinxi and 130 ms to run it for those options; that makes it apples to apples, since your cpu wouldn't have any of the cpufreq stuff to read.

Disk I/O is another area that will radically change compile times; IDE will be much slower, and the oldest generation of IDE is, I guess, from 1986:
https://en.wikipedia.org/wiki/Parallel_ATA, 8.3 MB/s, then 33. So that's another area where real slowdowns will occur, roughly the same sort of difference as between an IDE hdd, a SATA 1 hdd, and a SATA 3 or NVMe SSD.

If it had to write back and forth to disk a lot while compiling pinxi, that would account for the very large slowdown in compiling.

Here's my Pentium MMX 200 MHz:
https://www.cpu-world.com/CPUs/Penti...V80503200.html

There's one possible source of significant speed-up: it has L1 cache! I did not know that.
Code:
Floating Point Unit     Integrated
Level 1 cache size      16 KB 4-way set associative code cache
                        16 KB 4-way set associative write-back data cache
Physical memory         4 GB
Multiprocessing         Up to 2 processors
Extensions and Technologies:
MMX instructions
Quite the powerhouse. I think the L1 cache may make a large difference, and also the 192 MiB of RAM, which inxi is heavily optimized to favor for storing the data it's working on until it exits. Plus this is running an ancient IBM server SSD, 1 GiB, top of the line back in its day. Someone donated that to me a long time ago; ultra durable, I'll be sad if it ever dies. Runs 24/7.

Last edited by h2-1; 12-12-2021 at 09:21 PM.
 
Old 12-12-2021, 10:12 PM   #237
fourtysixandtwo
Member
 
Registered: Jun 2021
Location: Alberta
Distribution: Slackware...mostly
Posts: 328

Rep: Reputation: 217
h2-1,

Code:
pinxi -zv8Y -2 | less -R
That works nicely. FYI, and I should have included this earlier, other utilities seem to have standardized on "--color=always", but I wish there were a shorter, less verbose option. Although bash argument completion makes it less of a hassle... if it's installed.

This is how I most often use less -R.
Code:
dmesg -T --color=always|less -R
 
Old 12-12-2021, 10:15 PM   #238
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
More reading: this slowdown in cpufreq/scaling_cur_freq is documented and known, but it does not appear to manifest if the root-only cpufreq/cpuinfo_cur_freq is read instead.

This looks very much like a kernel bug to me.
https://github.com/htop-dev/htop/issues/471
https://stackoverflow.com/questions/...android-device

And it appears to be getting worse, not better; the issue above noted it was only happening on Epyc CPUs, not Ryzens, but now it is happening on all the cpus I'm testing on, as far as I can see. And it seems worse on Xeons, ~20 ms per thread.

The fix is to do a quick check for a readable cpuinfo_cur_freq file and, if it is readable, use that, which means that with root/sudo, pinxi will be much faster on high-core-count systems:

3.3.09-50 with and without sudo; the difference is clear, the 10 ms per-core delay vanishes with sudo and this new switching.
Code:
time sudo pinxi -C --sleep 0
CPU:
  Info: 6-core model: AMD Ryzen 5 2600 bits: 64 type: MT MCP cache: L2: 3 MiB
  Speed (MHz): avg: 1962 min/max: 1550/3400 cores: 1: 1550 2: 1550 3: 1550
    4: 1550 5: 3400 6: 1550 7: 2800 8: 1550 9: 3400 10: 1550 11: 1550 12: 1550

real	0m0.211s
user	0m0.151s
sys	0m0.042s
and here you can see the slowdown with scaling_cur_freq:
Code:
time pinxi -C --sleep 0
CPU:
  Info: 6-core model: AMD Ryzen 5 2600 bits: 64 type: MT MCP cache: L2: 3 MiB
  Speed (MHz): avg: 2119 min/max: 1550/3400 cores: 1: 1697 2: 2253 3: 3317
    4: 2006 5: 3813 6: 2680 7: 1321 8: 1377 9: 1378 10: 1335 11: 2087 12: 2173

real	0m0.328s
user	0m0.148s
sys	0m0.038s
64 thread system with sudo:
Code:
sudo pinxi -Cy --sleep 0
CPU:
  Info: 2x 16-core model: AMD EPYC 7281 bits: 64 type: MT MCP MCM SMP cache:
    L2: 2x 8 MiB (16 MiB)
  Speed (MHz): avg: 1242 min/max: 1200/2100 cores: 1: 1200 2: 1200 3: 1200
    4: 1200 5: 1200 6: 1200 7: 1200 8: 1200 9: 1200 10: 1200 11: 2100 12: 1200
    13: 1200 14: 1200 15: 1200 16: 1200 17: 1200 18: 1200 19: 1200 20: 1200
    21: 1200 22: 1200 23: 1200 24: 1200 25: 1200 26: 1200 27: 1200 28: 1200
    29: 2100 30: 1200 31: 1200 32: 1200 33: 1200 34: 1200 35: 1200 36: 1200
    37: 1200 38: 1200 39: 1200 40: 1200 41: 1200 42: 1200 43: 2100 44: 1200
    45: 1200 46: 1200 47: 1200 48: 1200 49: 1200 50: 1200 51: 1200 52: 1200
    53: 1200 54: 1200 55: 1200 56: 1200 57: 1200 58: 1200 59: 1200 60: 1200
    61: 1200 62: 1200 63: 1200 64: 1200

real	0m0.338s
user	0m0.224s
sys	0m0.100s
and without:
Code:
pinxi -Cy --sleep 0
CPU:
  Info: 2x 16-core model: AMD EPYC 7281 bits: 64 type: MT MCP MCM SMP cache:
    L2: 2x 8 MiB (16 MiB)
  Speed (MHz): avg: 1218 min/max: 1200/2100 cores: 1: 1195 2: 1195 3: 1196
    4: 1196 5: 1196 6: 1195 7: 1196 8: 1195 9: 1197 10: 1196 11: 1196 12: 1196
    13: 1196 14: 1196 15: 1196 16: 1196 17: 1196 18: 1195 19: 1195 20: 1196
    21: 1196 22: 1195 23: 1195 24: 1196 25: 1195 26: 1197 27: 1195 28: 2687
    29: 1196 30: 1195 31: 1196 32: 1196 33: 1196 34: 1195 35: 1195 36: 1196
    37: 1196 38: 1196 39: 1195 40: 1195 41: 1195 42: 1196 43: 1196 44: 1195
    45: 1195 46: 1196 47: 1195 48: 1195 49: 1195 50: 1195 51: 1196 52: 1196
    53: 1196 54: 1195 55: 1195 56: 1196 57: 1196 58: 1195 59: 1196 60: 1197
    61: 1196 62: 1196 63: 1196 64: 1195

real	0m1.096s
user	0m0.235s
sys	0m0.102s
As you can see, a 750 ms difference, i.e. about 11 ms per core. It seems very consistent; it must be a setting or something.
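
The switching itself is conceptually as simple as checking readability first; a minimal sketch of that kind of fallback (the sub name is invented, this is not the actual pinxi change):
Code:
# Sketch only: prefer the root-readable cpuinfo_cur_freq when it can be
# read, otherwise fall back to the slower scaling_cur_freq, per cpu
# directory under sysfs.
sub speed_source {
    my ($cpu_dir) = @_;   # e.g. /sys/devices/system/cpu/cpu0/cpufreq
    my $root_only = "$cpu_dir/cpuinfo_cur_freq";
    my $fallback  = "$cpu_dir/scaling_cur_freq";
    return -r $root_only ? $root_only : $fallback;
}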

Note this useful comment from the stackoverflow thread:
Quote:
One thing that I found in investigating this was that scaling_cur_freq is not necessarily the current CPU frequency, but rather what the kernel thinks the frequency is. To get the real frequency, you need root access to read cpuinfo_cur_freq. Also, gaining root access allows you to set the cpu speed, which is quite useful for profiling under best/worst case conditions.
The question there, of course, is: if scaling_cur_freq is merely "what the kernel thinks the speed is", then why does it take a relative CPU-cycle eternity for the cpu to return that answer to us? I think this is a bug somewhere in either the frequency driver or the kernel. Nothing should take this long at this low a level.

My assumption that /proc/cpuinfo contains bogus speeds appears confirmed:
https://bugzilla.kernel.org/show_bug.cgi?id=197009
This appears to be, in the kernel developers' minds, 'a feature, not a bug'.

Which leaves the very noticeable high-core-count scaling_cur_freq delay as a significant bug; judging from the bug reports I've found, I think it might be fairly new.

Last edited by h2-1; 12-12-2021 at 10:32 PM.
 
Old 12-12-2021, 10:27 PM   #239
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 562

Original Poster
Rep: Reputation: 320
fourtysixandtwo, you can save 3 characters!: pinxi -zv8Y-2|less -R

Last edited by h2-1; 12-12-2021 at 10:40 PM.
 
1 members found this post helpful.
Old 12-12-2021, 10:41 PM   #240
fourtysixandtwo
Member
 
Registered: Jun 2021
Location: Alberta
Distribution: Slackware...mostly
Posts: 328

Rep: Reputation: 217
Quote:
Originally Posted by h2-1 View Post
fourtysixandtwo,
But the rewrite, fortunately or unfortunately depending on your perspective, made extending and enhancing inxi not only possible but actually kind of entertaining, since I'm no longer fighting against the limits imposed by the language, just my own lack of understanding and knowledge, which can sometimes be improved if I work at it.
I would just add, and you've previously touched on it, that forgetfulness can be a huge factor too.

Quote:
However, this is one reason I make sure to test inxi on old hardware (my 1998 Pentium MMX laptop, in my case) and on legacy OSes in VMs; it's too easy to get complacent and forget to make sure all the core requirements of always working, on everything, are maintained. On Linux that means at least a 2.4 kernel and Perl 5.8.0 or newer. On other operating systems the support isn't as complete, either because the data isn't there, or because the data or tools are just so unpleasant to work with that I won't do it unless I get paid, and/or because they aren't free operating systems (OSX comes to mind there, but not only them).
Here's some output from OSX I collected yesterday with 3.3.09-41, and then with -45 under Ubuntu (the Slackware USB image wouldn't boot).

Code:
# cat mbp-Cazy1.txt
CPU:
 Info: dual core
  model: Intel Core2 Duo T9900
  bits: 64
  type: MCP
  arch: N/A
  family: N/A
  model-id: N/A
  stepping: N/A
  microcode: N/A
  cache: N/A
 Speed (MHz): 3060
  min/max: 1064/1064
  cores: No OS support for core speeds.
 Features: pae sse sse2 sse3 ssse3 vmx
 Vulnerabilities: No CPU vulnerability/bugs data available.
Code:
# cat mbp-Cazy1.txt-ubuntu 
Machine:
  Type: Portable
  System: Apple
    product: MacBookPro5,2
      v: 1.0
      serial: <filter>
  Chassis:
    type: 8
    v: Mac-F2268EC8
    serial: <filter>
  Mobo: Apple
    model: Mac-F2268EC8
      serial: N/A
  UEFI: Apple
    v: MBP52.88Z.008E.B05.0905042202
    date: 05/04/09

CPU:
  Info:
    model: Intel Core2 Duo T9900
    socket: U2E1
    bits: 64
    type: MCP SMP
    arch: Core Penryn
    family: 6
    model-id: 0x17 (23)
    stepping: 0xA (10)
    microcode: 0xA0B
  Topology:
    cpus: 1
      cores: 2
    smt: <unsupported>
    cache:
      L1: 2x 128 KiB (256 KiB)
        desc: d-2x32 KiB; i-2x32 KiB
      L2: 2x 6 MiB (12 MiB)
        desc: 1x6 MiB
  Speed (MHz):
    avg: 1592
    min/max: 1596/3059
    base/boost: 3060/3060
    governor: schedutil
    volts: 1.6 V
    ext-clock: 266 MHz
    cores:
      1: 1592
      2: 1592
    bogomips: 12204
  Flags: ht lm nx pae sse sse2 sse3 sse4_1 ssse3 vmx
  Vulnerabilities:
    Type: itlb_multihit
      status: KVM: VMX disabled
    Type: l1tf
      mitigation: PTE Inversion; VMX: EPT disabled
    Type: mds
      status: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
    Type: meltdown
      mitigation: PTI
    Type: spec_store_bypass
      status: Vulnerable
    Type: spectre_v1
      mitigation: usercopy/swapgs barriers and __user pointer sanitization
    Type: spectre_v2
      mitigation: Full generic retpoline, STIBP: disabled, RSB filling
    Type: srbds
      status: Not affected
    Type: tsx_async_abort
      status: Not affected
 
  

