SoftRAM 95 (1996)
The slogan "Double click. Double Memory" was at the heart of one of the most successful advertising campaigns in the history of PC software. In May of 1995, Syncronys Softcorp rolled out a software package called "SoftRAM for Windows 3.1" for users frustrated by "out of memory" errors and other resource limitations. Six months and a Windows 95 version later, Syncronys reported net revenues of $11 million for the third quarter of 1995, based almost entirely on SoftRAM sales of 650,000 units. SoftRAM coasted into the new year as one of the top three best-selling software products of 1995, up there with the likes of the Windows 95 Upgrade.
After a summer of extremely strong sales, however, speculation began building that SoftRAM wasn't doing what Syncronys claimed. Toward the end of October, several independent sources (including one of the authors, Mark Russinovich) examined the internals of the software to find out how much RAM doubling was actually taking place underneath the pretty dials that SoftRAM presents to both Windows 3.x and Windows 95 users. Around the same time, Syncronys issued a press release announcing that in the Windows 95 version "RAM compression is not being delivered to the operating system."
Around that time, the mainstream media began to pick up the controversy, with articles appearing in PC Magazine, PC Week, the Houston Chronicle, the Washington Post, and Time magazine. In early 1996, Syncronys stock dropped to a fraction of its earlier highs, and the company faced numerous class-action and shareholder lawsuits, as well as rumored inquiries by the Securities and Exchange Commission and the Federal Trade Commission. In addition, while Syncronys still states that the Windows 3.x version delivers everything it claims, an upgrade that promises to correct the admittedly broken Windows 95 version is overdue.
In this article, we'll present an introduction to the RAM-doubling technology claimed by, and actually used in, SoftRAM, followed by a brief tour of SoftRAM 95 internals. We'll also include a program (available electronically) that can detect whether a popular RAM-doubling technique, called "RAM compression," is taking place.
RAM-Doubling Technology
Syncronys claims that SoftRAM uses two techniques to increase usable memory. The first involves working around a weakness in how Windows 3.1 allocates memory. When users encounter an "out of memory" message, it is usually because the system has allocated most, or all, of the critical memory that resides in the first 1 MB of the PC's address space. This memory, known as "conventional memory," is where Windows allocates a program segment prefix (PSP) data structure for every Windows program it starts; when it runs out, no more programs can be launched. Unfortunately, Windows also uses this region to load device drivers and fixed DLLs (fixed meaning the code is RAM resident at all times and never paged to disk), even though they could just as easily be loaded above the 1-MB boundary, so conventional memory can needlessly disappear in a hurry. By fragmenting the free conventional memory into pieces too small to hold DLLs and drivers (or by using other methods to keep them out of the low region), a utility can prevent Windows from placing them below 1 MB, leaving more space for user applications. Every memory-enhancement product sold for Windows 3.1 does some form of this.
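As a rough illustration of the fragmentation approach (and not of how any particular product implements it), a Win16 utility could reserve a run of small blocks of conventional memory at startup and then free alternating blocks, leaving free holes big enough for a 256-byte PSP but too small for a fixed driver or DLL segment. The block size and count below are illustrative assumptions; the calls are the documented Win16 GlobalDosAlloc and GlobalDosFree functions.

#include <windows.h>   /* Win16 SDK */

#define HOLE_SIZE   1024   /* illustrative: roomy enough for a PSP, too small for a driver segment */
#define MAX_BLOCKS  128    /* illustrative upper bound on the number of blocks */

/* Carve free conventional memory into small holes: allocate a run of blocks
   below 1 MB, then free every other one.  Fixed drivers and DLLs no longer
   fit in the low region, but PSPs still do. */
void FragmentConventionalMemory(void)
{
    DWORD blocks[MAX_BLOCKS];   /* GlobalDosAlloc returns selector (low word), segment (high word) */
    int   i, count = 0;

    while (count < MAX_BLOCKS &&
           (blocks[count] = GlobalDosAlloc(HOLE_SIZE)) != 0)
        count++;

    for (i = 0; i < count; i += 2)                 /* free alternating blocks */
        GlobalDosFree((UINT)LOWORD(blocks[i]));    /* GlobalDosFree takes the selector */

    /* The remaining blocks are deliberately left allocated while Windows runs. */
}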
The second technique for increasing usable memory is RAM compression. To understand the goal of this performance-enhancement method, you must understand the behavior of demand-paged virtual-memory systems such as those in Windows 3.x and Windows 95. Modern operating systems provide applications with the illusion that they are running on computers with much more physical RAM than actually exists. When physical memory is exhausted, the OS moves information that's unlikely to be used soon to a disk file known as a "paging" file (also called the swap file or pagefile). The system can then load additional programs or data into the space that has been freed in physical memory. Whenever a program attempts to access code or data that has been moved to disk, it triggers a page fault, an exception condition that the operating system transparently handles by exchanging the requested data on disk with other data in memory. The act of moving data from the paging file into memory is called a "swap-in" or "pagefile read," while writing data to the paging file is a "swap-out" or "pagefile write." The name "demand-paged virtual memory" thus reflects the fact that memory that has been swapped out is swapped back in when an application demands it, and that the data stored in the swap file is part of the application's virtual memory.
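To make the mechanism concrete, the toy C model below simulates a handful of RAM frames backed by a pagefile array, with a trivial round-robin replacement policy. Everything about it (the sizes, the replacement policy, the names) is an illustrative simplification, not a description of the Windows pager.

/* Toy model of demand paging: a fixed number of RAM frames backed by a
   "pagefile" array.  On a fault, a victim frame is written out (swap-out)
   and the requested page is read in (swap-in). */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define RAM_FRAMES  4          /* pretend the machine has 4 frames of RAM */
#define NUM_PAGES   16         /* virtual pages an application can touch  */

static char ram[RAM_FRAMES][PAGE_SIZE];
static char pagefile[NUM_PAGES][PAGE_SIZE];
static int  frame_to_page[RAM_FRAMES];
static int  page_to_frame[NUM_PAGES];
static int  next_victim;       /* trivial round-robin replacement */

static void init(void)
{
    for (int p = 0; p < NUM_PAGES; p++)  page_to_frame[p] = -1;
    for (int f = 0; f < RAM_FRAMES; f++) frame_to_page[f] = -1;
}

/* Return the frame holding `page`, faulting it in from the pagefile if needed. */
static int touch_page(int page)
{
    if (page_to_frame[page] >= 0)
        return page_to_frame[page];            /* hit: no disk I/O */

    int frame   = next_victim;                 /* page fault: pick a victim frame */
    next_victim = (next_victim + 1) % RAM_FRAMES;

    int victim = frame_to_page[frame];
    if (victim >= 0) {                         /* swap-out (pagefile write) */
        memcpy(pagefile[victim], ram[frame], PAGE_SIZE);
        page_to_frame[victim] = -1;
    }
    memcpy(ram[frame], pagefile[page], PAGE_SIZE);  /* swap-in (pagefile read) */
    frame_to_page[frame] = page;
    page_to_frame[page]  = frame;
    printf("fault on page %d -> frame %d\n", page, frame);
    return frame;
}

int main(void)
{
    init();
    for (int i = 0; i < 8; i++)
        touch_page(i % 6);     /* a working set larger than RAM forces repeated faults */
    return 0;
}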
RAM compression rests on the premise that pagefile data can be cached in compressed form in physical memory in order to reduce disk accesses and thereby increase performance. The presence of this type of cache has several effects that combine to determine whether it yields an improvement in overall performance. Figure 1 shows the memory-access pattern of a typical Windows application. The graph shows how many page faults occur when the application is given varying amounts of memory. The value of the curve at 8 MB, for instance, indicates how many page faults (and therefore disk accesses) are generated when running the application on an 8-MB machine. When memory is set aside for a compressed pagefile cache, applications executing on the machine have less memory to work with and, therefore, generate page faults as if they were executing on a smaller machine.
Figure 1: Typical Windows application memory-access profile.
Figure 2 shows that, for a 2-MB cache, the number of page faults is determined by the value of the curve at 6 MB. Whenever there is a page fault, Windows determines where the page should be fetched from. This results in a disruption of the currently executing application and is the first effect of a compressed pagefile cache: the overhead of cache-induced page faults.
Figure 2: Effect of compressed RAM cache.
A typical RAM compressor compresses all or most of the data that is sent from Windows to the swap file, so the page-fault rate at the effective (reduced) memory size determines the number of pages that will be compressed and decompressed. This is the second effect of a compressed pagefile cache: the overhead of compression and decompression, which increases overall execution time.
The compressed pagefile cache creates an intermediate region on the graph that extends from the effective memory size (6 MB in Figure 2) to a point to the right of the physical-memory point that is related to the compression ratio obtained on the data being paged. In the example, the compression ratio is an optimistic 3:1, so the 2 MB allocated for the cache creates a cache region that is 6 MB in size and that extends to the 12-MB point on the graph.
The third effect comes from accesses that land in the region between the physical-memory size of the machine and the right side of the cache region. In the absence of the cache, these accesses would have resulted in disk I/O; because they now land in the cache, that I/O is avoided. The third effect, then, is avoided pagefile reads and writes, which decreases overall execution time.
This description is simplified since there are other secondary effects, such as the effect on the size of the disk cache, but the general approach should be clear. A basic rule must be satisfied for RAM compression to be effective: Time saved due to avoided disk I/O must be greater than overhead incurred by cache-induced page faults, plus overhead of compression and decompression.
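The back-of-envelope C model below plugs numbers into this rule for the 8-MB example above: fault counts read off the curve at the 6-MB effective size, the 8-MB physical size, and the 12-MB cache edge, plus assumed per-fault costs. None of these numbers are measurements of any product; they only show how the three effects trade off.

#include <stdio.h>

int main(void)
{
    /* Page-fault counts read off an application's fault curve (Figure 2);
       all values are illustrative assumptions. */
    double faults_6mb  = 2500.0;   /* effective size: 8 MB minus the 2-MB cache */
    double faults_8mb  = 1000.0;   /* physical memory size                      */
    double faults_12mb =  600.0;   /* right edge of the 3:1 compressed cache    */

    /* Assumed per-operation costs, in milliseconds. */
    double disk_io_cost  = 12.0;   /* one pagefile read or write */
    double fault_cost    =  0.5;   /* trap and dispatch overhead */
    double compress_cost =  2.0;   /* compress plus decompress   */

    /* Effects 1 and 2: extra faults induced by giving 2 MB to the cache,
       plus compression work on every page the cache handles. */
    double overhead = (faults_6mb - faults_8mb) * fault_cost
                    + faults_6mb * compress_cost;

    /* Effect 3: faults between the 8-MB and 12-MB points now hit the cache
       instead of going to disk. */
    double avoided_io = (faults_8mb - faults_12mb) * disk_io_cost;

    printf("overhead %.0f ms, avoided disk I/O %.0f ms -> %s\n",
           overhead, avoided_io,
           avoided_io > overhead ? "net win" : "net loss");
    return 0;
}

With these particular (and deliberately rough) numbers, the induced faults and compression work cost more time than the avoided disk I/O saves, which is consistent with the benchmark results discussed next.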
While this seems an easy relationship to satisfy, more-complex models based on real-world performance parameters and access patterns indicate that it is, in reality, very difficult to obtain a performance boost with this method. In fact, benchmarks of popular memory-enhancement software (those that actually perform RAM compression) show major performance degradations when running typical Windows applications.
The final technique, which Syncronys does not advertise, is really just a trick users can perform from the Control Panel: increasing the size of the paging file. Since the total memory that can be allocated at any given time equals the size of physical RAM plus the size of the swap file, a larger swap file means more memory can be allocated. Windows 95 dynamically resizes the swap file to accommodate the needs of the applications that are executing, but Windows 3.x does not have this level of sophistication. Under Windows 3.1 you have two choices for the type of pagefile: a permanent swap file, which is created with a fixed size, or a temporary swap file, which is created anew each time Windows runs but can grow during a session. The advantage of a permanent swap file is that Windows can access it very efficiently using low-level disk I/O, since it knows exactly where the file lies on disk; the drawback is that, because it is of fixed size, large applications can potentially run out of memory. The advantage of a temporary swap file is that it can grow to some extent, but the drawback is that, under Windows 3.1, accessing it requires going through the slow DOS disk I/O facilities. (Under Windows for Workgroups 3.11, DOS disk I/O is faster, so there is no disadvantage to having a temporary swap file.)
When a temporary swap file is in effect, Windows 3.x by default allows it to grow only to a size equal to four times the amount of physical memory present on the machine (for example, on a 4-MB machine the maximum swap-file size would be 16 MB). This default can be overridden by any user, however, by placing the line PageFileOverCommit=X in the [386Enh] section of the system.ini file, where the value X is the multiplier that replaces the default value of 4. Since many users are unaware of the size restriction for temporary swap files, most RAM doublers modify the overcommit value for them in order to allow larger applications to run.
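For example, on a 4-MB machine, raising the multiplier from the default of 4 to 8 raises the maximum temporary swap-file size from 16 MB to 32 MB:

[386Enh]
PageFileOverCommit=8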
The three techniques for enhancing memory are controlling 1-MB memory usage, RAM compression, and resizing the swap file. SoftRAM's most vocal claim is that it performs RAM compression.
Inside SoftRAM 95: The Windows 95 Version
We'll start by describing the functionality of the Windows 95 version of SoftRAM, despite the fact that, in October 1995, the company admitted it doesn't work. It may seem unnecessary to go to this trouble, but it turns out that the Windows 95 version's failure to deliver compression is not just the result of a minor bug. Analysis indicates that SoftRAM 95 does try to create a pagefile cache, though one that is uncompressed.
The first thing to note about the Windows 95 version of SoftRAM is that, under Windows 95, the pagefile-sizing limitations and the conventional-memory allocation bottleneck are effectively gone. Further, the resource-heap size limitations of Windows 3.x are also a thing of the past. Of the RAM-doubling techniques claimed by SoftRAM, that leaves only RAM compression as a possible Windows 95 memory enhancement.
The Windows 95 version of SoftRAM is based on the Dynapage pagefile-device example from the Microsoft Device Driver Kit (DDK) (see \BASE\SAMPLES\DYNAPAGE in the DDK). This device driver takes pagefile read and write requests and turns them into the appropriate disk I/O requests. When SoftRAM is loaded during Windows start-up, the first thing it attempts to do is allocate a buffer for its cache. The default size of this buffer, as determined by the SoftRAM MaxPhys parameter created in the system.ini file, is 100 percent of the memory that Windows reports to it as free. Because Windows will not let all of this memory be allocated in one try at that point, the allocation specified by the default always fails. From then on, SoftRAM disables its own code, and only the Microsoft Dynapage code within it remains active. This is the behavior users get unless they modify SoftRAM's default configuration.
If users lower the SoftRAM MaxPhys parameter to a point where the allocation succeeds, SoftRAM actually does manage a pagefile cache. Data that would normally go to the swap file stops in this cache, and conversely, data read from the swap file also ends up there. However, the data in the cache is not compressed. Moreover, thorough disassembly reveals the presence of code that manages pseudocompressed pages. At idle time, SoftRAM goes through the area of the cache containing non-pseudocompressed pages and copies them to another area of the cache, attaching a 3-byte header containing only one piece of information that is ever used: the size of the header itself! It is clear that while the author of the code may at some point have intended to put in a compression routine in place of this pseudocompression, no such routine exists.
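The sketch below shows, in C, what such pseudocompression amounts to. The layout and names are ours, based only on the behavior described above, not on SoftRAM's actual data structures: the "compressed" page is a verbatim copy of the original, preceded by a 3-byte header whose only consulted field is its own size, so every page actually grows by three bytes.

#include <stddef.h>
#include <string.h>

#define PAGE_SIZE   4096
#define HEADER_SIZE 3      /* the 3-byte header described above */

/* "Compress" a page: write the header, then copy the page unchanged.
   The result is always PAGE_SIZE + HEADER_SIZE bytes long. */
static size_t pseudo_compress(const unsigned char *src, unsigned char *dst)
{
    dst[0] = HEADER_SIZE;  /* the one field that is ever used */
    dst[1] = 0;
    dst[2] = 0;
    memcpy(dst + HEADER_SIZE, src, PAGE_SIZE);
    return PAGE_SIZE + HEADER_SIZE;
}

/* "Decompress" a page: skip the header by the size it records, then copy. */
static void pseudo_decompress(const unsigned char *src, unsigned char *dst)
{
    memcpy(dst, src + src[0], PAGE_SIZE);
}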