Does lock_page set some flag on the PTE that triggers a TLB fault if someone tries to write to that page?
Hi everyone, I'm a Linux user-space developer.
My application uses memory-mapped files backed by ext4 on a spinning disk, and it sees latency spikes of up to hundreds of milliseconds on memory writes. I suspect this has something to do with the writeback process.
From the kernel code I see there is a call to lock_page, and the locked page is then passed to the underlying filesystem's (ext4's) writepage, which looks heavy.
So my question is about lock_page.
Does it set something on the PTE so that a write to the mapped memory would generate a TLB fault?
If so, will the TLB entry be fixed up only after the page is unlocked?
The problem is much more likely a design issue by someone sitting in your chair. If writes to previously allocated pages spike like that, it probably indicates you are thrashing: your target page(s) have been migrated out to disk, and now some other pages need to be paged out so your target pages can be paged back in. That all takes time - a lot of time these days if you're still on spinning rust.
Buying some RAM, or reducing the working set of that mapped area, would be simple solutions. If you can't, buy a few SSDs - depending on the bus architecture of the box, you should be able to get much faster parallel transfers going. That would "hide" rather than fix the issue, but it might be sufficient.
Great, thanks. Moreover, with tmpfs, for which writepage is effectively a no-op (aside from swap), I don't see the latency spikes. I have 256 GB of RAM, but it looks like Linux starts writing the page cache back out much earlier than that. Anyway, I will put SSDs in place shortly.
Categorically speaking, when you design for a "memory-mapped file," you have to approach it as a file, not as memory. Pay very close attention to things like "locality of reference." (For example, accumulate a list of memory addresses that you intend to reference, then sort that list before traversing it.) If you instead treat the resource as "real memory," you can encounter thrashing much sooner than you expect. And, yes, you will also be able to observe the internal OS caches being flushed, as you are now seeing.
Swap is of course subject to the same effect once pages start being pushed out.
And tmpfs is non-persistent. But you know all that if you've been in the code. Good luck with it.