Very Brief Notes on Windows Memory etc

I have no time nowadays to update this blog, but I wanted to dump these notes I made for myself today – just so the blog has some update and I know I can find the notes here again when needed.

These notes are on the various types of memory in Windows (and other OSes in general). I was reading up on this in the context of Memory Mapped Files.

Virtual Memory

  • Amount of memory the OS can address. Not related to the physical memory in the machine; it depends only on whether the processor and OS are 32-bit or 64-bit.
  • 32-bit means 2^32 = 4GB; 64-bit means 2^64 = a lot! :)
  • On a default install of 32-bit Windows the kernel reserves 2GB of this for itself, and applications can use the remaining 2GB. Each application gets its own 2GB – this is possible because virtual memory doesn’t really exist and is not limited by the physical memory in the machine. The OS just lies to each application that it has 2GB of virtual memory to itself. (The sketch below shows how to query this range.)
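You can see this user-mode address range programmatically. Here’s a minimal Win32 C sketch of my own (not from any source; error handling omitted) using the documented GetSystemInfo API:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        /* On default 32-bit Windows this prints roughly 0x00010000 to
           0x7FFEFFFF, i.e. the ~2GB user-mode half of the address space */
        printf("Application address range: %p to %p\n",
               si.lpMinimumApplicationAddress,
               si.lpMaximumApplicationAddress);
        return 0;
    }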

Physical Memory

  • This is physical, but not limited to the RAM modules. It is the RAM modules plus the paging file(s) on disk.

Committed Memory

  • When a virtual memory page is touched (read from or written to after being committed) it becomes “real” – i.e. backed by Physical Memory. This is Committed Memory. It is a mix of RAM modules and disk.

Commit Limit

  • The total amount of Committed Memory is obviously limited by your Physical Memory – i.e. the RAM modules plus the paging file space on disk. This is the Commit Limit. (See the sketch below for how to query it.)
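The commit limit and the current commit charge can be queried with the documented GlobalMemoryStatusEx API. A minimal Win32 C sketch of my own (error handling kept to a bare minimum):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms = { 0 };
        ms.dwLength = sizeof(ms);   /* must be set before the call */

        if (GlobalMemoryStatusEx(&ms)) {
            /* ullTotalPageFile is the commit limit – RAM plus paging
               files – not the paging file size alone */
            printf("Commit limit : %llu MB\n",
                   ms.ullTotalPageFile / (1024 * 1024));
            printf("Committed    : %llu MB\n",
                   (ms.ullTotalPageFile - ms.ullAvailPageFile) / (1024 * 1024));
            printf("Physical RAM : %llu MB\n",
                   ms.ullTotalPhys / (1024 * 1024));
        }
        return 0;
    }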

Working Set

  • Set of virtual memory pages that are committed and fully belong to that process.
  • These are memory pages that exist. They are backed by Physical Memory (RAM plus paging files). They are real, not virtual.
  • So a working set can be thought of as the subset of a process’s Virtual Memory space that is valid; i.e. can be referenced without a page fault.
    • A page fault happens when the process references a virtual page that is not currently in its working set. If the page is still somewhere in RAM (on the standby or modified list, say) the OS can fix this up without touching the disk – that is a “soft” page fault. If the page has to be read in from disk (from the paging file, or from the file it is mapped to) the OS puts the thread on hold and does the I/O behind the scenes – that is a “hard” page fault.
    • Hard faults obviously cause a performance impact, so you want to avoid them; lots of hard faults are usually a sign of insufficient RAM. (The sketch below shows how to read a process’s working set and fault count.)
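A process’s working set size and cumulative fault count can be read with the documented GetProcessMemoryInfo API. A minimal Win32 C sketch of my own:

    #include <windows.h>
    #include <psapi.h>   /* GetProcessMemoryInfo; link with psapi.lib */
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS pmc = { 0 };
        pmc.cb = sizeof(pmc);

        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            printf("Working set : %zu KB\n", pmc.WorkingSetSize / 1024);
            /* PageFaultCount lumps soft and hard faults together; use
               Performance Monitor (Memory\Pages Input/sec) for hard faults */
            printf("Page faults : %lu\n", pmc.PageFaultCount);
        }
        return 0;
    }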

Life cycle of a Page

  • Pages in Working Set -> Modified Pages -> Standby Pages -> Free Pages -> Zeroed Pages.
  • All of these are still in Physical RAM, just different lists on the way out.

From http://stackoverflow.com/a/22174816:

Memory can be reserved, committed, first accessed, and be part of the working set. When memory is reserved, a portion of address space is set aside, nothing else happens.

When memory is committed, the operating system guarantees that the corresponding pages could in principle exist either in physical RAM or on the page file. In other words, it counts toward its hard limit of total available pages on the system, and it formally creates pages. That is, it creates pages and pretends that they exist (when in reality they don’t exist yet).

When memory is accessed for the first time, the pages that formally exist are created so they truly exist. Either a zero page is supplied to the process, or data is read into a page from a mapping. The page is moved into the working set of the process (but will not necessarily remain in there forever).
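Those three stages – reserve, commit, first access – map directly onto the Win32 VirtualAlloc API. A minimal C sketch of my own to illustrate (error handling omitted):

    #include <windows.h>

    int main(void)
    {
        SIZE_T size = 64 * 1024;   /* 64 KB */

        /* 1. Reserve: address space is set aside; nothing else happens */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

        /* 2. Commit: the pages now count against the commit limit and
              formally exist, but no physical page is in use yet */
        VirtualAlloc(p, size, MEM_COMMIT, PAGE_READWRITE);

        /* 3. First access: this write demand-zeroes a real page and
              brings it into the process's working set */
        p[0] = 42;

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }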

Memory Mapped Files

Windows (like other OSes) has a feature called memory mapped files.

Typically your files are on a physical disk and there’s an I/O cost involved in using them. To improve performance, Windows can map a part of the virtual memory allocated to a process to the file(s) on disk.

This doesn’t copy the entire file(s) into RAM. Instead, a part of the virtual memory address range allocated to the process is set aside as a mapping to these files on disk. When the process reads/ writes these files, only the parts actually touched get paged into memory. The changes happen in memory, the process continues to access the data via ordinary memory reads and writes (for better performance), and behind the scenes the data is read from/ written back to disk as needed. This is what is known as memory mapped files.

My understanding is that even though I say “virtual memory” above, the mapping is actually backed by Physical RAM and does not involve the page files (because obviously there’s no advantage to paging the data out to a page file when the file it came from is already on disk). So memory mapped files are mapped to Physical RAM. Memory mapped files are commonly used by Windows with binary images (EXE & DLL files).
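Here’s a minimal Win32 C sketch of my own showing the API side of this, using the documented CreateFileMapping/MapViewOfFile calls (“example.txt” is just a placeholder name):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        /* The view is a range of virtual addresses backed directly by the
           file; pages are faulted in from disk only as they are touched */
        const char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view) {
            putchar(view[0]);   /* this first read triggers the page-in */
            UnmapViewOfFile(view);
        }

        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }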

In Task Manager the “Memory (Private Working Set)” column does not show memory mapped files; for those, look at the “Commit Size” column.

Also, use tools like RAMMap (from SysInternals) or Performance Monitor.


[Aside] Understanding SSD performance

Was reading up on SSD performance degradation and came across this (old) article from AnandTech. Good one! It explains why SSD performance degrades over time and what TRIM (sort of) does to improve things. The unfortunate truth seems to be that SSD performance will slowly degrade over time, and the only way to restore performance then is to do a secure erase (see this PCWorld article too).

Update: I don’t want to make a new post just to mention this, so I’ll update this older post of mine. Here’s a post from Scott Hanselman on why Windows defrags your SSD drives and how that’s not such a bad idea. Upshot of the matter is this: fragmentation affects SSDs too, though not as much as HDDs (because SSDs have no seek penalty, unlike HDDs). With SSDs, fragmentation affects performance in that (1) there’s a limit to the number of fragments a file can have, and once that limit is reached it can cause errors when writing/ updating; and (2) more fragments means more time spent putting the fragments together when a file is read. To avoid these issues Windows automatically defrags SSDs once a month.

iPhone 4S under iOS 7 is slow…?

A few days ago I had remarked that I don’t find the iPhone 4S under iOS 7 slower than before. Now I am wondering whether that comment was made in haste.

For the past 5 days, while on vacation in India, I have been using the iPhone 4S exclusively. And it does feel slow. Not extremely slow like my Galaxy Nexus, but definitely slower compared to the 5S, and possibly also compared to how the 4S was before.

I have to qualify my statement with a “possibly” because I am not really sure. I feel the 4S is slower under iOS 7 than iOS 6.x, but I am aware that could partly be perception too. After using the iPhone 5S for a month I am used to its faster speeds, and so the bar is set higher in my mind. Now I expect apps to open with the speed of the iPhone 5S and animations to be as smooth as the 5S, but the 4S, being a slower phone, definitely cannot live up to those expectations.

A few days ago, for instance, I took out my old iPod Touch 2nd generation and installed whited00r, a custom firmware for such old iDevices that includes enhancements from newer Apple firmware. I expected this firmware to be fast on the iPod Touch, but it was not. I disabled all the extra animations and tools, and while I can see the device is not sluggish, it just does not match up to my expectations. Which is where the idea first cropped up in my head that maybe the problem is in my head. I know the iPod Touch was a great device up to the time I last used it, so if it feels slow now with the same version of firmware it was on (whited00r is based on the 3.x series) it can’t be an issue of using a more demanding firmware on an older device – it has to be my expectations. This is why I qualify my statement about the iPhone 4S and iOS 7 with a “possibly” above.

Apart from that, the iPhone 4S probably is actually slower on iOS 7 than on iOS 6, due to the extra features and gloss and animation. Even the iPhone 5S has a bit of slowness opening folders and some apps, so one can expect it to be worse on the iPhone 4S. I feel this slowness is worse than bearable, and I don’t like it much; but I also feel that if it’s fixed on the iPhone 5S (I hear iOS 7.1 fixes all these) then it will probably improve on the iPhone 4S too. These are quirks with iOS 7 itself – manifesting in both devices – and fixing them will make the iPhone 4S behave as it should: slow, yes, partly due to perception and partly due to slower hardware, but not as noticeably slow as I have been finding it the past few days.

Hope iOS 7.1 indeed makes the 4S faster. I hate being unable to use the 4S as happily as I used to before.

I believe Apple does not sell you a phone; it sells you an experience, which is why Apple tightly controls the hardware, software, selling channels, etc. A sub-par performance on the 4S is therefore something Apple can’t ignore the way other manufacturers might, because experience is central to Apple. Even on older hardware, users expect bearable performance from newer firmware as long as Apple officially supports it. Some performance loss would be there due to perception, but apart from that it should work smoothly. If Apple feels certain features might not work well on the older devices they are welcome to disable them on such hardware, but everything else should work as expected.