[SOLVED] Suggestion: use "performance" CPU frequency governor
rc.cpufreq: for CPUs that use intel_pstate, default to the performance governor. The performance governor provides power savings while avoiding the ramp-up lag caused by using "ondemand", which defaults to "powersave" on these systems.
Do I understand correctly that, if the default performance option is used, the CPU still conserves energy despite always running close to top speed?
Spurred by this thread, I am now testing the default performance option. According to conky and cpufreq-info, my CPU is now mostly running at max frequency. Fan speeds are a tad higher now too.
Previously I was using the ondemand option, which, under intel_pstate, fell back to the powersave option.
Edit: What is meant by ramp-up lag? Are we talking micro or milliseconds or several seconds? BTW, I am using a Sky Lake quad core Intel i5-6400.
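For reference, the values conky and cpufreq-info report can also be read straight from sysfs. A minimal sketch; the function name and the optional base-directory argument are mine, added so the same code can be pointed at a fake tree for testing:

```shell
#!/bin/sh
# show_governors [BASEDIR] - print each core's active governor and current
# frequency from the standard cpufreq sysfs layout. BASEDIR defaults to
# the real sysfs path; pass another directory to test against a fake tree.
show_governors() {
  base=${1:-/sys/devices/system/cpu}
  for cpu in "$base"/cpu[0-9]*; do
    if [ -r "$cpu/cpufreq/scaling_governor" ]; then
      gov=$(cat "$cpu/cpufreq/scaling_governor")
      khz=$(cat "$cpu/cpufreq/scaling_cur_freq")
      printf '%s: %s @ %s kHz\n' "${cpu##*/}" "$gov" "$khz"
    fi
  done
}

show_governors
```

On a machine with cpufreq support this prints one line per core, e.g. governor name and the instantaneous frequency in kHz; on kernels without cpufreq sysfs it prints nothing.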
Decided to test the performance governor also. My Haswell-E now runs at max speed about 80% of the time, but does idle most cores when not in use. Temperatures went up a few degrees Celsius.
Quote: What is meant by ramp-up lag? Are we talking micro or milliseconds or several seconds? BTW, I am using a Sky Lake quad core Intel i5-6400.
It's definitely fractions of a second. The exact delay would have to be measured, and I don't know how to do that without causing too much load myself and influencing the results; maybe Intel knows better. The problem with this delay, or ramp-up lag, is that it's especially apparent in a resource-hungry DE, KDE for instance, where you can clearly notice the sluggishness. It may be less apparent on some high-end Core i7s and the like.
In my post #3 there's a link pointing to an older Phoronix benchmark of the governors (algorithms); on page 6 you can find a nice and conclusive time graph where they are compared.
And then there could be other factors too: BIOS power management, the CPU's own power management (if any), and the quality of the intel_pstate driver.
The only more detailed info I could find about these CPU states is: https://software.intel.com/en-us/art...ckage-c-states
It covers the Xeon CPUs, but further down the page it states:
"We discussed the different types of power management states. Though the concepts are general, we concentrated on a specific platform, the Intel® Xeon Phi™ coprocessor. Most modern processors, be they Intel Corporation, AMD* or embedded, have such states with some variation."
@Daedra
These newer CPUs, starting with Haswell, are very efficient, and under normal usage they don't produce much heat, so the fan doesn't even start spinning with the powersave governor (talking about laptops). Although I'm using the performance governor myself, I do have an issue with it: it's actually not optimal. The fans spin constantly at low speed, and these are mechanical components with a somewhat limited lifetime that I'd like to protect. That's why I'll try to "blacklist" the intel_pstate driver and observe how the old acpi-cpufreq with the ondemand governor actually behaves. This is not recommended, as intel_pstate is now chosen by default for Intel CPUs in the Linux kernel, but it's still worth trying. I hope I'll get some time during the weekend to play with it and run some benchmarks.
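For what it's worth, intel_pstate is normally built into the kernel rather than being a loadable module, so it can't be blacklisted from /etc/modprobe.d; disabling it is done with the intel_pstate=disable boot parameter, after which the kernel falls back to acpi-cpufreq. A sketch for lilo.conf (Slackware's stock boot loader; the image and root values here are hypothetical placeholders for your existing entry):

```
# /etc/lilo.conf fragment - disable intel_pstate at boot
image = /boot/vmlinuz
  root = /dev/sda1                  # hypothetical root device
  label = Linux
  append = "intel_pstate=disable"
```

Remember to rerun lilo after editing, and verify the result after reboot by reading /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver.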
Last edited by abga; 08-29-2018 at 06:01 PM.
Reason: typo
Quote: Do I understand that if the default performance option is used that the CPU still conserves energy, despite always running close to top speed?
Modern CPUs have fine-grained gating on both the clock and the power supply. When idle, large parts of the CPU core are shut off, which makes the input clock frequency mostly irrelevant.
In principle, the powersave governor reduces power (and energy!) by not going to maximum frequency/voltage right away when a CPU core becomes busy. I measured the effects some time ago, and as I recall, the performance loss occurred over a timescale of tens to hundreds of milliseconds.
Ed
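A crude way to watch the ramp-up yourself is to sample scaling_cur_freq repeatedly while a busy loop runs. A sketch under stated assumptions: sample_freq is a hypothetical helper of mine, the sysfs path is the usual layout, and fractional sleep intervals assume GNU sleep:

```shell
#!/bin/sh
# sample_freq FILE COUNT INTERVAL - print FILE's contents COUNT times,
# sleeping INTERVAL seconds between reads. Point it at scaling_cur_freq
# while a synthetic load runs to watch the governor raise the clock.
sample_freq() {
  file=$1
  count=$2
  interval=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    cat "$file"
    sleep "$interval"
    i=$((i + 1))
  done
}

# Usage on a real system (fractional sleep assumes GNU coreutils):
#   ( while :; do :; done ) & load=$!
#   sample_freq /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 10 0.05
#   kill $load
```

The successive samples show how quickly the frequency climbs from the idle value after the load starts.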
I just hope the new kernel setting does not screw up things for us AMD users.
I haven't seen the new rc.cpufreq (part of a/sysvinit-scripts-2.1-noarch-18.txz), but if it looks in /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors and picks ondemand if it's available and performance if it's not, then you'll be good with your AMD, ARM users will be fine, and users of older Intel CPUs (pre Sandy Bridge) will be happy too. AFAIK, intel_pstate is activated starting with Intel's Sandy Bridge.
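If the script does work that way, the selection logic would look roughly like this (pick_governor is an illustrative name of mine, not what rc.cpufreq actually contains):

```shell
#!/bin/sh
# pick_governor [AVAIL_FILE] - echo "ondemand" when the kernel offers it
# in the available-governors list, otherwise fall back to "performance"
# (the intel_pstate case). AVAIL_FILE defaults to cpu0's sysfs entry.
pick_governor() {
  file=${1:-/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors}
  avail=$(cat "$file" 2>/dev/null)
  case " $avail " in
    *" ondemand "*) echo ondemand ;;
    *)              echo performance ;;
  esac
}
```

On an AMD or pre Sandy Bridge Intel box the list includes ondemand, so that's what gets picked; under intel_pstate the list is only performance and powersave, so performance wins.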
How about rc.cpufreq sourcing a user-defined /etc/default/cpufreq file?
A person might not want to use the performance option and might prefer powersave.
As is, the new script requires 1) editing rc.cpufreq to comment out the new pstate test, 2) editing rc.M to change the start parameter, or 3) editing rc.local to relaunch rc.cpufreq with the desired powersave parameter.
Sourcing a user-defined file in /etc/default avoids editing the rc.d scripts.
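The sourcing hook being proposed could look roughly like this; /etc/default/cpufreq and the choose_governor wrapper are assumptions for illustration, not what rc.cpufreq actually ships:

```shell
#!/bin/sh
# choose_governor DEFAULT [CONFFILE] - echo DEFAULT unless CONFFILE (the
# hypothetical /etc/default/cpufreq) sets SCALING_GOVERNOR, in which case
# the user's value wins.
choose_governor() {
  SCALING_GOVERNOR=$1
  conf=${2:-/etc/default/cpufreq}
  if [ -r "$conf" ]; then
    . "$conf"                 # may redefine SCALING_GOVERNOR
  fi
  echo "$SCALING_GOVERNOR"
}
```

A user who wants powersave would then just create /etc/default/cpufreq containing the single line SCALING_GOVERNOR=powersave, and the rc.d scripts would survive package updates untouched.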
Creating a new conf file for a simple value might be a little overkill, IMHO.
I'm not sure I can follow your logic; I don't understand why 1) is necessary. What do you want to comment out: establishing the best default governor depending on the system? That will be a variable that may or may not be used in the next stage of the script, depending on how rc.cpufreq was called, with or without a parameter. Simple and elegant.
I also cannot follow why 3) is necessary; step 2) alone is sufficient.
Given that rc.cpufreq already contains the instructions needed to manually set the preferred governor, a better approach would be to keep it all in one file: leave SCALING_GOVERNOR= empty, use a different variable to choose the appropriate governor from /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors, and then expand the conditional statement so that it first checks whether the "override" SCALING_GOVERNOR has been set by the user, and if so uses it to set the governor and breaks the loop.
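The precedence being described, boiled down to a testable sketch (the function and variable names are illustrative, not the actual rc.cpufreq variables):

```shell
#!/bin/sh
# effective_governor OVERRIDE AVAIL - a non-empty user-set override wins
# outright; otherwise the best governor is chosen from the space-separated
# list of available governors (ondemand when offered, else performance).
effective_governor() {
  override=$1
  avail=$2
  case " $avail " in
    *" ondemand "*) default=ondemand ;;
    *)              default=performance ;;
  esac
  echo "${override:-$default}"
}
```

With SCALING_GOVERNOR left empty the computed default applies; as soon as the admin fills it in, that value is used unconditionally, all within rc.cpufreq itself.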
I think the point is that the file in /etc/default would not be overwritten if the RC scripts get an update?
M
Quote: In principle, the powersave governor reduces power (and energy!) by not going to maximum frequency/voltage right away when a CPU core becomes busy. I measured the effects some time ago, and as I recall, the performance loss occurred over a timescale of tens to hundreds of milliseconds.
Ed
Which is irrelevant when you are on a notebook and travelling, as it is for most other common user tasks. So I hope the change did not affect notebook installations?
I even doubt that it makes sense on most workstations.
If you have use cases where tens to hundreds of milliseconds matter, which is not an everyday scenario for every user, then this might be a better candidate for special settings, IMHO.