Linux - Kernel: This forum is for all discussion relating to the Linux kernel.
I want to disable the caches for a research project where we want to measure execution times and want to be sure they are not influenced by the internal caches.
Realistically speaking ... and certainly from the point-of-view of any pragmatically useful research project, "internal caches" are the nature of the beast, and therefore any research results must pragmatically consider their influence.
Your handling of this variation should be "appropriate." If you judge that the variation would be significant to your target audience (or to those who seek to validate your results), then you should account for it. (The "execution time" of any algorithm, under real-world conditions, is a confidence interval that is governed by some sort of probability distribution.) If, on the other hand, you judge that it is not, then you should merely report a nominal figure ... "if microseconds actually matter" ... or just the one figure that you have. The presence of caches in all CPUs is well understood by everyone.
Never report "certainty" where none exists. My brother was once penalized on a test for reporting four digits of precision on a result that only justified three. But then again, that was a situation where the difference was well known to be an important difference. My brother's error was properly penalized because it was, in fact, an error.
Last edited by sundialsvcs; 08-01-2012 at 02:41 PM.
Thank you for your input, sundialsvcs, although I'm not sure whether I got you right. Actually we wanted to justify our results by showing that a specific timing channel persists both with enabled and with disabled caches. Specific execution times were not important for the result, just the fact that a side-channel attack was possible.
But since there was too much noise with globally disabled caches, we switched to disabling caches for specific processes only. I planned to use /proc/mtrr to mark specific memory regions as uncachable but failed at resolving virtual addresses into physical ones. Anyway, the project is finished, so I don't know whether I will continue this work some day.
hi ulmo,
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
in the above, can you be specific about the "$(shell uname -r)" part and the rest, and also what it means? Thanks!
I used the files exactly as I posted them; normally you should not need to replace the $(...) parts (they are expanded by the shell). I used bash - I don't know if the behavior of other shells differs in this case. Also, I used the commands exactly as posted. Have you already tried, rajisekar, or are you asking in advance? It's easier to deal with specific error messages.
hi again,
I'm trying to disable the level 1 and level 2 cache as you did, in Ubuntu 11.04!
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
After running this at the command prompt in the bash shell, I got the following:
The program 'shell' is currently not installed. You can install it by typing:
sudo apt-get install byobu
PWD: command not found
make: *** /lib/modules//build: No such file or directory. Stop.
P.S.: After installing byobu I also got the same. Can you help me with this? Thanks!
That line belongs in the Makefile; you should just run make without arguments. If this is a misunderstanding because of my statement about the shell, I'm sorry. As I said, I'm no expert with make; I guess $(...) is expanded directly by make, not by the shell.
For the problem with "Nothing to be done for `all'.", I think the cause is space characters instead of a tab character at the beginning of the line with "make -C". For me it only works with a tab.
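To clear up the make-versus-shell confusion above: $(shell uname -r) is a GNU make function that make itself expands before running any recipe, so the line belongs in a Makefile rather than on the command line. A minimal out-of-tree kbuild Makefile might look like this (the module name cachedisable.o is a placeholder, not from the thread; substitute your own source file):

```makefile
# Hypothetical module name; replace with your actual source file name (minus .c).
obj-m := cachedisable.o

# The recipe lines below MUST begin with a tab character, not spaces.
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```

Then run plain "make" with no arguments in the directory containing this Makefile and the module source.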
The presence of a side-channel attack would have to be demonstrated in the presence of known CPU caching behavior, not to the exclusion of it. The exploitable characteristic of the system must be sufficiently pronounced that any and all cache-induced variances (CPU or otherwise) do not prevent a useful exploit. Disabling caches would perhaps further demonstrate the flaw, but in doing so it creates conditions that are no longer quite real-world. If a side channel is there, then presumably the caches would not very seriously affect it, because the exploited behavior would lie well outside the zone of uncertainty produced by the cache's known hit-probability distributions.
Last edited by sundialsvcs; 08-13-2012 at 10:02 AM.
hi ulmo,
Thanks a lot! You were right! At last it worked, with the "make" command and your "tab" tip. Thanks a lot again, and as you said, I could see my system freeze.
Disabling caches would perhaps further demonstrate the flaw, but in doing so they create conditions that are no longer quite real-world.
You are absolutely right. That's why our experiments without caches were meant to strengthen our assumption that the timing channel was not caused by caches. This was important to us because we wanted to distinguish our channel from cache-based timing attacks.
Quote:
Originally Posted by sundialsvcs
If a side-channel is there, then presumably the caches would not very seriously affect it, because the exploited behavior would lie well outside the zone of uncertainty produced by the cache's known hit-probability distributions.
Disabling the CPU caches seemed to us the most straightforward way. I have no experience with predicting cache behavior, but I would assume it to be difficult because we were analyzing a Java virtual machine (where I would assume some code runs in between that I cannot predict). But maybe that's an option if the project gets resumed.
hi ulmo,
Could you tell me how it works - is setting bit 30 of the control register enough to disable the cache? If so, in what way? As you mentioned earlier, the system freezes after executing the line mov cr0,eax - is that so?
Could you tell me how it works - is setting bit 30 of the control register enough to disable the cache?
For me it seemed to be enough, but Intel's documentation says you also have to disable the MTRRs. See the Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 3, Section 11.5.3 (http://www.intel.com/content/www/us/...-manuals.html/). In Linux, disabling MTRRs is also possible by executing, as root, something like
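The command itself did not survive in this post; for reference, the /proc/mtrr interface described in the kernel's MTRR documentation works roughly as follows (the register number 2 below is only an example - take the index from your own listing):

```shell
# List the current MTRR ranges (register index, base, size, type):
cat /proc/mtrr

# As root, disable one range by its register number, e.g. register 2:
echo "disable=2" > /proc/mtrr
```

This requires MTRR support compiled into the kernel and root privileges, and the effect is hardware-specific.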
As you mentioned earlier, the system freezes after executing the line mov cr0,eax - is that so?
With an X server running, it seemed to me the system was frozen, but it was not. Without X it was possible to work with the system, although the console prints only a few lines (something like three) per second.
hi ulmo,
How can you prove that the L1 and L2 caches are disabled? We cannot say that because the system freezes, the caches are disabled! Let me know, is there any test to prove the above, or any other idea? By the way, do you have any idea for disabling hyper-threading? Thanks.
How can you prove that the L1 and L2 caches are disabled?
Unfortunately I don't know how to do this. As I described, I experienced a significant performance drop, even without X (which is of course no proof). I would rely on the Intel specification to derive the state of the internal caches from cr0.
Quote:
Originally Posted by rajisekar
By the way, do you have any idea for disabling hyper-threading?
I'm afraid I have no experience with hyper-threading, so I don't know about it.
Hi folks,
I also have to disable/enable the CPU caches for a research project. Disabling the caches by setting bit 30 in %cr0 works fine. One can verify that the caches are off with a small test program like this (cachetest.c):
Code:
int main(int argc, char **argv) {
    int i, j;
    char memory[1024];            /* small enough to fit in the L1 cache */

    /* Repeatedly walk the array; with caches on, almost every access hits. */
    for (i = 0; i < 1000; ++i) {
        for (j = 0; j < 1023; ++j) {
            memory[j] = memory[j+1] + 1;
        }
    }
    return 0;
}
and profile it using the linux perf tool like:
Code:
perf stat -e L1-dcache-load-misses ./cachetest
The memory array fits into the cache, so you should get a low number of L1 cache misses. For me it is about 10,000 misses, most of which come from the dynamic loader.
When I disable the caches the number goes up to something like 8 million misses. The execution time is about 1000x higher with the caches disabled.
When I reset bit 30 in %cr0 the number of misses drops back to about 10000 and the test program runs at the original execution time.
However, the system is still horribly slow. Much faster than having the caches disabled, but still very slow until I restart the system.
Could there be some side effects I'm missing here? I checked the contents of %cr0 and it is the same as after startup. Any ideas?
I used the code from the first post and it seems to disable caches only on one CPU core.
I have an Intel i5-2320 (4 cores), and after inserting the module, some executions of programs run slowly (about 10 times slower than normal), but some equally fast as before. (That is, the same program run several times sometimes runs as fast as before, sometimes 10 times slower.)
The system monitor (I use Ubuntu 14.04) shows one CPU core almost always at 100% load, while the others are much less loaded.
Can anyone shed some light onto this?
Also, the code apparently works only in 32-bit mode. It won't even compile with the -m64 option.
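Both symptoms are consistent with CR0 being a per-core register: writing it affects only the CPU the code happens to run on, and a 32-bit mov through eax does not build on x86-64. An untested kernel-side sketch (function and module names are made up, not from the thread) that uses unsigned long for the native register width and on_each_cpu() to reach every core:

```c
#include <linux/module.h>
#include <linux/smp.h>

/* Runs on one CPU: set CR0.CD (bit 30), then flush the caches. */
static void cache_off(void *unused)
{
    unsigned long cr0;   /* native width: works in 32- and 64-bit mode */

    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= 1UL << 30;                        /* CD = 1: caching disabled */
    asm volatile("mov %0, %%cr0" : : "r"(cr0));
    asm volatile("wbinvd" ::: "memory");     /* write back and invalidate */
}

static int __init cachedis_init(void)
{
    /* CR0 is per-core, so run the routine on every CPU, not just one. */
    on_each_cpu(cache_off, NULL, 1);
    return 0;
}
module_init(cachedis_init);
MODULE_LICENSE("GPL");
```

Re-enabling would mirror this with the bit cleared, and per the Intel manual cited earlier in the thread, the MTRRs would need to be handled as well.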