
Hard drive cache size, cache segmentation and drive performance. Other HDD specifications

Let me remind you that the Seagate SeaTools Enterprise utility allows the user to manage the caching policy and, in particular, to switch the latest Seagate SCSI drives between two different caching models: Desktop Mode and Server Mode. The corresponding item in the SeaTools menu is called Performance Mode (PM) and can take two values: On (Desktop Mode) and Off (Server Mode). The difference between the two modes is purely a matter of firmware: in Desktop Mode the hard disk cache is divided into a fixed number of segments of equal size, which are then used to cache read and write accesses. Moreover, in a separate menu item the user can even control the cache segmentation directly, that is, set the number of segments: for example, instead of the default 32 segments, choose a different value (the size of each segment then changes proportionally).

In Server Mode, the buffer (disk cache) segments can be dynamically (re)assigned, changing both their size and number: the drive's own microprocessor and firmware optimize the number (and capacity) of cache segments on the fly, depending on the commands the drive receives.

Earlier we found that running the new Seagate Cheetah drives in Desktop Mode (with fixed segmentation, 32 segments by default) instead of the default Server Mode with dynamic segmentation can noticeably increase disk performance in a number of tasks more typical of a desktop computer or media server. This gain can sometimes reach 30-100% (!) depending on the type of task and the drive model, although on average it is around 30%, which is not bad either. Such tasks include routine desktop PC work (WinBench, PCMark, H2bench tests), reading and copying files, and defragmentation. At the same time, in purely server applications the performance of the drives hardly drops at all (and where it does, not significantly). However, we were able to observe a noticeable gain from Desktop Mode only on the Cheetah 10K.7, while for its older sibling, the Cheetah 15K.4, it made almost no difference which mode it ran in for desktop applications.

In an attempt to understand further how the cache segmentation of these hard drives affects performance in various applications, and which segmentation settings (how many cache segments) are more beneficial for which tasks, I investigated the effect of the number of cache segments on the performance of the Seagate Cheetah 15K.4 over a wide range of values, from 4 to 128 segments (4, 8, 16, 32, 64 and 128). The results of this study are offered to your attention in this part of the review. Let me emphasize that these results are interesting not only for this drive model (or for Seagate SCSI drives in general): cache segmentation and the choice of the number of segments is one of the main areas of firmware optimization, including for desktop drives with an ATA interface, which are now also predominantly equipped with an 8 MB buffer. Therefore, the performance results in various tasks depending on cache segmentation described in this article are also relevant to desktop ATA drives. And since the test methodology was described in the first part, we proceed directly to the results.

However, before discussing the results, let's take a closer look at how the Seagate Cheetah 15K.4 cache segments are organized and operate, to better understand what is at stake. Of the eight megabytes of buffer, 7077 KB are available for actual caching (the rest is the service area). This area is divided into logical segments (Mode Select Page 08h, byte 13), which are used for reading and writing data (to implement read-ahead from the platters and lazy writing to the disk surface). To access data on the magnetic platters, the segments use logical block addressing, and each segment holds an integer number of disk sectors. Drives in this series support a maximum of 64 cache segments. The available cache memory appears to be distributed equally between the segments: with, say, 32 segments, each segment holds approximately 220 KB. With dynamic segmentation (PM=off), the number of segments can be changed automatically by the drive depending on the command flow from the host.
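
To make the arithmetic concrete, here is a minimal sketch (plain Python, unrelated to any Seagate tool) that splits the 7077 KB of usable cache quoted above into equal segments for the segment counts discussed in this review; the 512-byte sector size is an assumption of the sketch.

```python
# Illustration only: split the usable cache into equal segments.
# The 7077 KB figure is the one quoted above for the Cheetah 15K.4.
USABLE_CACHE_KB = 7077
SECTOR_BYTES = 512

for segments in (4, 8, 16, 32, 64):
    size_kb = USABLE_CACHE_KB / segments
    sectors = int(size_kb * 1024 // SECTOR_BYTES)   # whole sectors per segment
    print(f"{segments:3d} segments -> ~{size_kb:5.0f} KB (~{sectors} sectors) each")
```

With 32 segments this gives roughly 220 KB per segment, matching the figure above; with 64 segments each segment shrinks to about 111 KB.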

Server and desktop applications require different caching behavior from a drive for optimal performance, so it is difficult to provide a single configuration that performs both kinds of task best. According to Seagate, desktop applications need the cache configured to respond quickly to repeated requests for a large number of small data segments without waiting for read-ahead of adjacent segments. In contrast, server tasks require the cache to be configured to accommodate large amounts of sequential data in non-repeating requests; here the ability of the cache to hold more data from adjacent segments during read-ahead matters more. Therefore, for Desktop Mode the manufacturer recommends using 32 segments (earlier Cheetah generations used 16), while for Server Mode the adaptive number of segments starts at only three for the entire cache, although it may grow during operation. In our experiments on the effect of the number of segments on performance we will limit ourselves to the range from 4 to 64 segments, and as a trial we will also "run" the disk with 128 segments set in the SeaTools Enterprise program (the program does not report that this number of segments is invalid for this disk).

Results of physical parameter tests

It makes no sense to present linear read speed graphs for different numbers of cache segments: they are identical. But the Ultra320 SCSI interface speed measured by the tests shows a rather curious picture: with 64 segments, some programs begin to report the interface speed incorrectly, understating it by more than an order of magnitude.

The differences between the segment counts become more noticeable in the measured average access time: as segmentation decreases, the average read access time measured under Windows increases slightly, while noticeably better readings are observed in PM=off mode. However, it is hard to tell from these data alone whether that mode uses very few or, on the contrary, very many segments. It is possible that in this case the disk simply starts to skip read prefetching in order to eliminate the extra delays.

We can try to judge the efficiency of the drive firmware's lazy-write and write-caching algorithms by how much the average access time measured by the operating system for writes drops relative to reads with the drive's write-back caching enabled (it was always enabled in our tests). For this we usually use the results of the c't H2BenchW test, but this time we supplement the picture with a test in IOmeter, whose read and write patterns used 100% random access in 512-byte blocks with a request queue depth of one. (Of course, you should not think that the average write access time in the two diagrams below really reflects the physical characteristics of the drive! It is just a parameter measured programmatically by the test, from which we can judge how effectively writes are cached in the disk buffer. The actual manufacturer-claimed average write access time for the Cheetah 15K.4 is 4.0 + 2.0 = 6.0 ms.) By the way, anticipating questions, I note that in this case (that is, with lazy writing enabled) the drive reports successful completion of write commands to the host (GOOD status) as soon as the data is written to the cache memory, not to the magnetic media. This is why the externally measured average write access time is lower than the corresponding figure for reading.
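
As a quick sanity check, the 4.0 + 2.0 = 6.0 ms figure decomposes into the claimed average seek time plus the average rotational latency (half a revolution at 15,000 rpm); a few lines of Python, assuming exactly that decomposition:

```python
# Sanity check of the claimed figure: average seek time plus average
# rotational latency (half a revolution at 15,000 rpm).
rpm = 15_000
avg_seek_ms = 4.0                              # manufacturer's average seek time
rotational_latency_ms = 60_000 / rpm / 2       # ms per minute / rpm / 2
print(f"rotational latency: {rotational_latency_ms:.1f} ms")                 # 2.0 ms
print(f"average access:     {avg_seek_ms + rotational_latency_ms:.1f} ms")   # 6.0 ms
```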

According to the results of these tests, there is a clear dependence of the efficiency of caching random writes of small blocks on the number of cache segments: the more segments, the better. With four segments the efficiency drops sharply, and the average write access time rises almost to the read values. In "server mode" the number of segments in this case is evidently close to 32. The cases of 64 and "128" segments are completely identical, which confirms the firmware's upper limit of 64 segments.

Interestingly, in the simplest random-access patterns with 512-byte blocks, IOmeter gives exactly the same write values as the c't H2BenchW test (to within literally hundredths of a millisecond), while for reads IOmeter showed a slightly higher result across the whole segmentation range. The 0.1-0.19 ms difference from the other random read access time tests may be due to some internal peculiarities of IOmeter (or to the 512-byte block size instead of the zero-byte requests that would be ideal for such measurements). However, IOmeter's read results practically coincide with those of the AIDA32 disk test.

Application performance

Let's move on to the drive performance tests in applications. First of all, let's find out how well the disks are optimized for multithreaded operation. For this I traditionally use the NBench 2.4 tests, in which 100 MB files are written to and read from the disk by several simultaneous threads.

This diagram allows us to judge the effectiveness of the hard disks' multithreaded lazy-write algorithms in real conditions (not synthetic ones, as in the diagram with average access time), when the operating system works with files. The leadership of both Maxtor SCSI drives in multithreaded writing is beyond doubt; as for the Cheetah, we observe an optimum in the region between 8 and 16 segments, while at higher and lower values the disk's speed in these tasks drops. For Server Mode, the number of segments is evidently 32 (to good accuracy :)), and "128" segments is in fact 64.

With multithreaded reading, the situation for the Seagate drives is clearly better than for the Maxtor drives. As for the effect of segmentation, just as with writing, we observe an optimum, here closer to 8 segments (with writing it was closer to 16), and with very fine segmentation (64) the disk's speed drops significantly (as it does with writing). It is gratifying that Server Mode here "follows the market" of host requests and changes segmentation from 32 when writing to roughly 8 when reading.

Now let's see how the drives behave in the aging but still popular Disk WinMark 99 tests from the WinBench 99 package. Let me remind you that we run these tests not only at the "beginning" but also at the "middle" (by volume) of the physical media, for two file systems, and the diagrams show the averaged results. These tests are certainly not a core workload for SCSI drives, and by presenting their results here we rather pay tribute to the test itself and to those used to judging disk speed by WinBench 99. As a "consolation", note that these tests show, with a certain degree of confidence, how these enterprise drives perform in tasks more typical of a desktop computer.

Obviously, there is an optimum of segmentation here too: with a small number of segments the disk looks unimpressive, while with 32 segments it looks best (perhaps that is why the Seagate developers "shifted" the default Desktop Mode setting from 16 to 32 segments). However, in office (Business) tasks the segmentation chosen by Server Mode is not entirely optimal, while in the professional (High-End) test the adaptive segmentation works more than well, significantly outperforming even the best fixed segmentation. Apparently it changes during test execution depending on the command flow, and this yields a gain in overall performance.

Unfortunately, no such optimization "in the course of the test" is observed in the more recent "track-based" complex tests of desktop disk performance, PCMark04 and c't H2BenchW.

On both (more precisely, on 10 different) "activity tracks", the Server Mode intelligence is noticeably inferior to the optimal constant segmentation, which for PCMark04 is about 8 segments and for H2BenchW about 16.

For both of these tests, 4 cache segments turn out to be very undesirable, and so do 64, and it is hard to say which setting Server Mode gravitates towards in this case.

In contrast to these admittedly synthetic (though quite lifelike) tests, here is a completely "real" test of disk speed with an Adobe Photoshop temporary file. The situation here is much more transparent: the more segments, the better! And Server Mode almost "caught" this, using 32 segments for its work (although 64 would have been even a little better).

Tests in Intel IOmeter

Let's move on to tasks more typical of SCSI drive profiles: the operation of various servers (database, file server, web server) and of a workstation, according to the corresponding patterns in Intel IOmeter version 2003.5.10.

Maxtor is the most successful at imitating a database server, while for the Seagate it is most profitable to use Server Mode, although in fact the latter behaves very much like 32 fixed segments (about 220 KB each). Smaller or larger segmentation is worse in this case. However, this pattern is too simple in terms of request types; let's see what happens in more complex patterns.

When simulating a file server, adaptive segmentation again leads, although 16 fixed segments lag behind it only negligibly (32 segments are slightly worse here, though still quite decent). With a small number of segments, deterioration is observed at large command queues, and with too many (64) any queue at all is contraindicated: apparently, in this case the cache segments are too small (less than 111 KB, that is, only about 220 blocks of the media) to cache acceptable amounts of data effectively.

Finally, for the web server we see an even more interesting picture: with a non-unit command queue, Server Mode is equivalent to any fixed segmentation level except 64, although with a queue depth of one it is slightly better.

Geometric averaging of the server loads shown above across patterns and request queues (without weighting coefficients) shows that adaptive segmentation is best for such tasks, although 32 fixed segments lag only slightly behind, and 16 segments also look good overall. In general, Seagate's choice is quite understandable.
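
For reference, this is the kind of unweighted geometric averaging meant here; the throughput numbers in the sketch are placeholders, not results from the review.

```python
import math

def geometric_mean(values):
    """Unweighted geometric mean, as used to average the server patterns."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Placeholder throughput figures (IO/s) across patterns and queue depths;
# illustrative values only, not measurements from this review.
results = [180.0, 220.0, 260.0, 310.0]
print(f"geometric mean: {geometric_mean(results):.1f} IO/s")
```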

As for the “workstation” pattern, Server Mode is clearly the best here.

And the optimum for fixed segmentation is at 16 segments.

Now for our own IOmeter patterns, closer in purpose to a desktop PC, although still indicative for enterprise drives, since even in "deeply professional" systems hard drives spend the lion's share of their time reading and writing large and small files, and occasionally copying them. And since the nature of accesses in these patterns (random addresses within the entire disk volume) is more typical of server-class systems, the relevance of these patterns for the disks under study is all the higher.

Reading large files is again better in Server Mode, with the exception of an inexplicable dip at QD=4. In general, a small number of large segments is clearly preferable for the disk in these operations (which is predictable in principle and agrees well with the multithreaded file reading results above).

Random writing of large files, on the contrary, is too much for the Server Mode intellect; here it is more profitable to have fixed segmentation at 8-16 segments, as with multithreaded file writing (see above). Separately, note that very fine cache segmentation, at the 64-segment level, is extremely harmful in these operations. However, it turns out to be useful for small-file reads with a large request queue:

I think this is what Server Mode keys on when choosing its adaptive settings: their graphs are very similar.

At the same time, when writing small files to random addresses, 64 segments fail again, and Server Mode is inferior here to fixed segmentation at 8-16 segments per cache, although it is clearly trying to use optimal settings (only at a queue of 64 with 32-64 segments did it come to grief ;)).

Copying large files is a clear failure for Server Mode! Here segmentation at 16 is clearly more profitable (this is the optimum, since 8 and 32 are worse at a queue of 4).

As for copying small files, 8, 16 and 32 segments are practically equivalent here, ahead of 64 segments (oddly enough), while Server Mode is a little "flaky".

According to the geometric averaging of the results for random reading, writing and copying of large and small files, the best average result is given by fixed segmentation at only 4 segments per cache (that is, segments larger than 1.5 MB!), while 8 and 16 segments are approximately equal and barely behind 4 segments, and 64 segments are clearly contraindicated. Adaptive Server Mode on average yielded only slightly to fixed segmentation: a loss of one percent can hardly be considered noticeable.

It remains to note that when simulating defragmentation, we observe approximate equality of all levels of fixed segmentation and a slight advantage for Server Mode (by the same 1%).

And in the streaming read-write pattern with large and small blocks, it is slightly more profitable to use a small number of segments, although here again, oddly enough, the differences between the cache configurations are homeopathic.

Conclusions

Having conducted, in this second part of our review, a more detailed study of the effect of cache segmentation on the performance of the Seagate Cheetah 15K.4 in various tasks, I would like to note that the developers did not name the caching modes the way they did for nothing: in Server Mode, cache segmentation indeed often adapts to the task at hand, and this sometimes leads to very good results, especially in "heavy" tasks, including the server patterns in Intel IOmeter, the High-End Disk WinMark 99 test, and random reading of small blocks across the disk. At the same time, the segmentation level chosen by Server Mode often turns out to be suboptimal (and requires further work on the criteria for analyzing the host command stream), and then Desktop Mode with fixed segmentation at 8, 16 or 32 segments per cache comes to the fore. Moreover, depending on the type of task, it is sometimes more profitable to use 16 or 32, and sometimes only 8 or even 4 cache segments! Among the latter are multithreaded reads and writes (both random and sequential), "track" tests like PCMark04, and streaming tasks with simultaneous reads and writes. On the other hand, the "synthetic" random write tests clearly show that the efficiency of lazy writing (to arbitrary addresses) decreases significantly as the number of segments goes down. That is, two tendencies pull in opposite directions, which is why, on average, it is more efficient to use 16 or 32 segments per 8 MB buffer. If the buffer size were doubled, one can predict that it would still be more profitable to keep the number of segments at 16-32, but thanks to the proportional increase in the capacity of each segment, the average performance of the drive could increase significantly. Apparently, even 64-segment caching, which is now inefficient in most tasks, could become quite useful with a doubled buffer, while using 4 or even 8 segments would then become inefficient. However, these conclusions also depend strongly on the block sizes the operating system and applications prefer to use with the drive, and on the size of the files involved. It is possible that, as the environment changes, the optimal cache segmentation may shift in one direction or another. Well, we wish Seagate success in refining the "intelligence" of Server Mode, which can, to a certain extent, smooth out this dependence on the system and the task by learning to select the most appropriate segmentation based on the host command flow.

Cache memory, or, as it is also called, the hard disk buffer. If you do not know what it is, we will be happy to answer this question and tell you about its features. It is a special type of RAM that acts as a buffer: it stores data that has already been read but not yet passed on for further processing, as well as information that the system accesses most often.

The need for such transit storage arose because of the significant difference between the throughput of the PC as a whole and the speed of reading data from the drive. Cache memory can also be found in other devices, namely video cards, processors, network cards and others.

What the cache size is and what it affects

The buffer size deserves special attention. HDDs typically come with 8, 16, 32 or 64 MB caches. When copying large files, a noticeable performance difference can be seen between 8 and 16 MB, but between 16 and 32 MB it is already less noticeable, and between 32 and 64 MB there is almost none at all. Still, the buffer often comes under heavy load, and in that case the larger it is, the better.

Modern hard drives use 32 or 64 MB; anything less is now hard to find. For a normal user either value will suffice. Besides, performance is also affected by the operating system's own disk cache, which increases hard drive performance, especially when there is enough RAM.

That is, in theory, the larger the cache, the better the performance and the more data can sit in the buffer without loading the hard drive; in practice, however, things are a little different, and the average user, except in rare cases, will not notice much difference. Of course, it is tempting to choose and buy devices with the largest cache, which can improve PC performance, but this is worth doing only if finances allow.

Purpose

The cache is used for both reading and writing data; on SCSI drives, however, write caching may need to be enabled explicitly, since it is often disabled by default. As we have already said, size is not the decisive factor for efficiency. For increasing hard drive performance, how the exchange of data with the buffer is organized matters more; in addition, performance also depends on the operation of the drive's control electronics, and so on.

The most frequently used data is stored in the buffer memory, and the buffer size determines how much of this data it can hold. A large cache can noticeably increase hard drive performance, since data is served directly from the cache and does not require a physical read.

A physical read is a direct access by the system to the hard disk and its sectors. This process is measured in milliseconds and takes a comparatively long time. Data served from the cache, by contrast, arrives many times faster (more than 100 times, according to this comparison) than data requested by physically accessing the platters. The cache also allows the drive to keep working even when the host bus is busy.

Main advantages

Buffer memory has a number of advantages, the main one being fast data delivery, which takes minimal time, whereas physical access to the drive's sectors requires a certain time while the disk head finds the required data area and starts reading it. In addition, hard drives with a large buffer can noticeably offload the computer's processor, so the processor is used only minimally.

The cache can also be called a full-fledged accelerator, since buffering makes the hard drive noticeably more efficient and faster. But today, with the rapid development of technology, it is losing its former importance: most modern models have 32 or 64 MB, which is enough for the drive to function normally. As mentioned above, paying extra makes sense only when the difference in cost corresponds to the difference in efficiency.

In conclusion, I would like to say that buffer memory, whatever its size, improves the performance of a particular program or device only when the same data is accessed repeatedly and its size is no larger than the cache. If your work at the computer involves programs that actively interact with small files, then you need an HDD with the largest possible cache.

How to find out the current cache size

All you need to do is download and install the free HDTune program. After launching it, go to the "Information" section, and at the bottom of the window you will see all the necessary parameters.


If you are buying a new device, then all the necessary characteristics can be found on the box or in the attached instructions. Another option is to look online.

Choosing a hard drive for a PC is a very responsible task. After all, it is the main repository of both your work and your personal information. In this article we will talk about the key characteristics of an HDD that you should pay attention to when buying a magnetic drive.

Introduction

When buying a computer, many users focus on the characteristics of components such as the monitor, processor and video card. Yet such an integral component of any PC as the hard drive is often bought with only its capacity in mind, practically neglecting other important parameters. Nevertheless, it should be remembered that a competent approach to choosing a hard drive is one of the guarantees of comfortable work at the computer, as well as of financial savings, in which we are so often constrained.

A hard disk drive (HDD) is the main storage device in most modern computers. It stores not only the user's data, including movies, games, photos and music, but also the operating system and all installed programs. Therefore the choice of a hard drive should be treated with due attention. Remember that if any other element of the PC fails, it can simply be replaced; the only downside is the additional cost of repairs or of buying a new part. But a hard drive failure, in addition to unforeseen costs, can lead to the loss of all your information, as well as the need to reinstall the operating system and all required programs. The main purpose of this article is to help novice PC users choose a hard drive model that best meets their specific requirements.

First of all, you should decide clearly in which computer the hard drive will be installed and for what purposes that machine will be used. Based on the most common tasks, we can conditionally divide them into several groups:

  • A mobile computer for general tasks (working with documents, "surfing" the World Wide Web, data processing and working with programs);
  • A powerful mobile computer for gaming and resource-intensive tasks;
  • A desktop computer for office tasks;
  • A productive desktop computer (working with multimedia, games, audio, video and image processing);
  • A multimedia player or data storage device;
  • An external (portable) drive assembly.

In accordance with one of the listed options for operating a computer, you can begin to select a suitable hard drive model according to its characteristics.

Form factor

Form factor is the physical size of a hard drive. Today, most drives for home computers are 2.5 or 3.5 inches wide. The former, being smaller, are designed for installation in laptops, the latter for desktop system units. Of course, if desired, a 2.5-inch drive can also be installed in a desktop PC.

There are also smaller magnetic drives with sizes of 1.8", 1" and even 0.85". But these hard drives are much less common and are focused on specific devices, such as ultra-compact computers (UMPC), digital cameras, PDAs and other equipment, where small dimensions and weight of components are very important. We will not talk about them in this material.

The smaller the drive, the lighter it is and the less power it needs. That is why 2.5" hard drives have almost completely replaced 3.5" models in external drives. Indeed, large external drives require additional power from a wall outlet, while their smaller siblings are content with power from USB ports alone. So if you decide to assemble a portable drive yourself, it is better to use a 2.5-inch HDD: it will be lighter and more compact, and you will not have to carry a power supply with you.

As for the installation of 2.5-inch drives in a stationary system unit, such a decision looks ambiguous. Why? Read on.

Capacity

One of the main characteristics of any drive (and the hard drive is no exception) is its capacity (or volume), which in some models today reaches four terabytes (1 TB = 1024 GB). Some five years ago such a volume might have seemed fantastic, but current OS builds, modern software, high-resolution video and photos, and three-dimensional computer games all have a fairly solid "weight" and need a lot of hard drive space. For example, some modern games need 12 or more gigabytes of free disk space to run, and a ninety-minute HD movie can take more than 20 GB to store.

To date, the capacity of 2.5-inch magnetic drives ranges from 160 GB to 1.5 TB (the most common volumes are 250 GB, 320 GB, 500 GB, 750 GB and 1 TB). 3.5" desktop drives are more capacious and can store from 160 GB to 4 TB of data (the most common sizes are 320 GB, 500 GB, 1 TB, 2 TB and 3 TB).

When choosing HDD capacity, consider one important detail: the larger the capacity, the lower the price per gigabyte of storage. For example, a 320 GB desktop hard drive costs 1600 rubles, a 500 GB one 1650 rubles, and a 1 TB one 1950 rubles. Let's count: in the first case a gigabyte of storage costs 5 rubles (1600 / 320 = 5), in the second 3.3 rubles, and in the third 1.95 rubles. Of course, such statistics do not mean you must buy a very large disk, but this example makes it very clear that buying a 320-gigabyte disk is not advisable.
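
The same per-gigabyte arithmetic, spelled out (the prices are the ones quoted above and will of course vary from store to store):

```python
# Price per gigabyte for the example prices quoted above (rubles).
drives = {320: 1600, 500: 1650, 1000: 1950}    # capacity in GB -> price
for capacity_gb, price in drives.items():
    print(f"{capacity_gb:5d} GB: {price / capacity_gb:.2f} rub per GB")
# -> 5.00, 3.30 and 1.95 rubles per gigabyte, as in the text.
```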

If you plan to use your computer mainly for office tasks, a hard drive with a capacity of 250-320 GB, or even less, will be more than enough, unless, of course, you need to store huge archives of documentation. At the same time, as noted above, buying a hard drive smaller than 500 GB is unprofitable: having saved 50 to 200 rubles, you end up with a very high cost per gigabyte of storage. This applies to disks of both form factors.

Do you want to build a gaming or multimedia PC to work with graphics and video, plan to download new movies and music albums to your hard drive in large quantities? Then it is better to choose a hard drive with a capacity of at least 1 TB for a desktop PC and at least 750 GB for a mobile one. But, of course, the final calculation of the hard drive capacity must meet the specific needs of the user, and in this case we only give recommendations.

Separately, it is worth mentioning the increasingly popular network storage systems (NAS) and multimedia players. As a rule, such equipment uses large 3.5" disks, preferably with a capacity of at least 2 TB. After all, these devices are meant for storing large amounts of data, which means the hard drives installed in them must be capacious and offer the lowest price per gigabyte of storage.

Disk geometry, platters and recording density

When choosing a hard disk, you should not blindly focus only on its total capacity, following the principle "the more, the better". There are other important characteristics, including recording density and the number of platters used. After all, not only the volume of the hard drive but also its read and write speed depends directly on these factors.

Let's make a small digression and say a few words about the design of modern hard disk drives. Data is recorded on aluminum or glass disks, called platters, which are coated with a ferromagnetic film. Read/write heads, mounted on rotary positioner arms (sometimes called "rocker arms"), are responsible for writing and reading data on the thousands of concentric tracks located on the surface of the platters. This happens without direct (mechanical) contact between the platter and the head (they are separated by about 7-10 nm), which protects against possible damage and ensures a long service life. Each platter has two working surfaces and is served by two heads (one per side).

To create an address space, the surface of the magnetic platters is divided into many circular areas called tracks. In turn, the tracks are divided into equal segments called sectors. Because of this ring structure, the geometry of the platters, or rather their diameter, affects the read and write speed.

Closer to the outer edge of the platter, the tracks have a larger radius (greater length) and contain more sectors, and hence more data that can be read in one revolution. Therefore the data transfer rate is higher on the outer tracks: in the same period of time the head passes over a greater length of track there than on the inner tracks closer to the center. Thus, 3.5-inch disks perform better than 2.5-inch ones.

Several platters can be located inside a hard disk at once, and each can hold a certain maximum amount of data. This is what determines the recording density, measured in gigabits per square inch (Gbit/in²) or in gigabytes per platter. The larger this value, the more information fits on one track of the platter, and the faster data is written and subsequently read (regardless of the spindle speed).

The total volume of the hard drive is the sum of the capacities of the platters inside it. For example, the first commercial 1000 GB (1 TB) drive, which appeared in 2007, had as many as five platters of 200 GB each. But technological progress does not stand still, and in 2011, thanks to improvements in perpendicular recording technology, Hitachi introduced the first 1 TB platter, which is now ubiquitous in today's large-capacity hard drives.
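
A tiny illustration of how per-platter capacity and platter count add up to the totals mentioned above:

```python
# Total capacity is simply the sum of the per-platter capacities.
def total_capacity_gb(platters: int, gb_per_platter: int) -> int:
    return platters * gb_per_platter

print(total_capacity_gb(5, 200))    # the 2007 example: five 200 GB platters -> 1000 GB
print(total_capacity_gb(1, 1000))   # a single modern 1 TB platter -> the same 1000 GB
```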

Reducing the number of platters in hard drives has a number of important benefits:

  • Decrease in data reading time;
  • Reducing energy consumption and heat generation;
  • Increasing reliability and fault tolerance;
  • Reducing weight and thickness;
  • Cost reduction.

Today the computer market simultaneously offers hard drive models that use platters with different recording densities. This means that hard drives of the same volume can have a completely different number of platters. If you are looking for the most efficient solution, it is better to choose an HDD with the fewest magnetic platters and the highest recording density. The problem is that in almost no computer store will you find these parameters listed in the disk descriptions, and this information is often missing even on the manufacturers' official websites. As a result, for ordinary users these characteristics are not always decisive when choosing a hard drive, simply because they are hard to obtain. Nevertheless, before buying, we recommend that you do find out the values of these parameters, which will allow you to choose a hard drive with the most advanced and modern design.

Spindle speed

The performance of a hard disk depends directly not only on the recording density, but also on the rotation speed of the magnetic platters inside it. All platters are rigidly attached to the drive's internal axis, called the spindle, and rotate with it as a single unit. The faster the platters rotate, the sooner the sector that needs to be read passes under the head.

Desktop home computers use hard drive models with spindle speeds of 5400, 5900, 7200 or 10,000 rpm. Drives with a 5400 rpm spindle are generally quieter than their faster competitors and generate less heat. Hard drives with higher speeds, in turn, offer better performance, but are also more power-hungry.
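
The practical difference between these spindle speeds is easiest to see as average rotational latency, i.e. the average wait of half a revolution before the needed sector passes under the head; a quick calculation:

```python
# Average rotational latency = time of half a revolution at a given spindle speed.
for rpm in (5400, 5900, 7200, 10_000):
    latency_ms = 60_000 / rpm / 2
    print(f"{rpm:6d} rpm -> {latency_ms:.2f} ms average rotational latency")
# 5400 rpm ~5.6 ms, 7200 rpm ~4.2 ms, 10,000 rpm ~3.0 ms
```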

For a typical office PC, a drive with a spindle speed of 5400 rpm will suffice. Such disks are also well suited for multimedia players or data storage devices, where low power consumption and heat dissipation matter more than transfer speed.

In other cases, the vast majority of systems use disks with a platter rotation speed of 7200 rpm. This applies to both mid-range and high-end computers. HDDs with a rotation speed of 10,000 rpm are used relatively rarely, since such models are very noisy and have a rather high cost per gigabyte of storage. Moreover, in recent years users increasingly prefer solid-state drives over high-performance magnetic disks.

In the mobile sector, where 2.5-inch drives reign, the most common spindle speed is 5400 rpm. This is not surprising, since low power consumption and low heat are important for portable devices. But owners of powerful laptops are not forgotten either: there is a large selection of models with a rotation speed of 7200 rpm on the market, and even a few members of the VelociRaptor family at 10,000 rpm. The expediency of using the latter even in the most powerful mobile PCs is, however, highly questionable. In our opinion, if you need a very fast disk subsystem, it is better to look at solid-state drives.

Connection interface

Almost all modern hard drives, both small and large, are connected to personal computer motherboards via the SATA (Serial ATA) serial interface. If you have a very old computer, the connection may be via the parallel PATA (IDE) interface. But keep in mind that the assortment of such hard drives in stores today is very scarce, since their production has almost completely ceased.

As for the SATA interface, there are two disk options on the market: connection via the SATA II or SATA III bus. In the first case, the maximum data transfer rate between the disk and RAM is 300 MB/s (bus bandwidth up to 3 Gbit/s), in the second 600 MB/s (bus bandwidth up to 6 Gbit/s). The SATA III interface also has slightly improved power management.

In practice, the bandwidth of the SATA II interface is more than enough for any classic hard drive: even in the most productive HDD models, the speed of reading data from the platters barely exceeds 200 MB/s. Solid-state drives are another matter: there data is stored not on magnetic platters but in flash memory, whose read speed is many times higher and can exceed 500 MB/s.

It should be noted that all versions of the SATA interface are mutually compatible at the level of protocols, connectors and cables. That is, a hard drive with a SATA III interface can safely be connected to a motherboard's SATA I connector, although the maximum throughput will then be limited by the older revision's capabilities, namely 150 MB/s.

Buffer Memory (Cache)

Buffer memory is fast intermediate memory (usually a standard type of RAM) that is used to smooth out the difference between the speed of reading and writing data and the speed of transferring it over the interface during disk operation. The hard drive cache can store the most recently read data that has not yet been passed on for processing, or data that is likely to be requested again.

In the previous section we already noted the gap between hard drive performance and interface bandwidth. It is this gap that makes transit storage necessary in modern hard drives. Thus, while data is being written to or read from the magnetic platters, the system can use the information already in the cache without waiting.

The buffer size for modern 2.5" hard drives can be 8, 16, 32 or 64 MB. Their larger 3.5-inch brothers go up to 128 MB of buffer memory. In the mobile sector, disks with 8 and 16 MB of cache are the most common, while among hard drives for desktop PCs the most common buffer sizes are 32 and 64 MB.

In theory, a larger cache should give a disk better performance, but in practice this is not always the case. There are disk operations in which the buffer has practically no effect on hard drive performance, for example sequential reading of data from the platters or working with large files. In addition, cache efficiency depends on the algorithms that manage the buffer and prevent errors in its operation. A disk with a smaller cache but more advanced caching algorithms may turn out to be faster than a competitor with a larger buffer.

Thus, chasing the maximum amount of buffer memory is not worth it, especially if you have to pay significantly more for a larger cache. Besides, manufacturers themselves try to equip their products with the most appropriate cache size for the class and characteristics of each disk model.

Other characteristics

In conclusion, let's take a quick look at some of the remaining characteristics that you may come across in hard drive descriptions.

Reliability, or mean time between failures (MTBF), is the average operating time of a hard drive before its first failure or need for repair, usually measured in hours. This parameter is very important for disks used in servers or file storage, as well as in RAID arrays. As a rule, specialized magnetic drives have an MTBF of 800,000 to 1,000,000 hours (for example, WD's RED series or Seagate's Constellation series).

Noise level - the noise generated by the elements of the hard drive during its operation. Measured in decibels (dB). It mainly consists of the noise that occurs during the positioning of the heads (crackling) and the noise from the rotation of the spindle (rustling). As a rule, the lower the spindle speed, the quieter the hard drive works. A hard drive can be called quiet if its noise level is below 26 dB.

Power consumption - an important parameter for drives installed in mobile devices, where long battery life is valued. The drive's heat dissipation also depends directly on its power consumption, which again matters for portable PCs. As a rule, the power consumption level is indicated by the manufacturer on the drive's cover, but you should not trust these figures blindly: very often they are far from reality, so if you really want to know the power consumption of a particular drive model, it is better to look for independent test results on the Internet.

Random access time - the average time it takes to position the read head over an arbitrary area of the magnetic platter, measured in milliseconds. It is a very important parameter that affects overall hard drive performance: the shorter the positioning time, the faster data is written to or read from the disk. It can range from 2.5 ms (for some server disk models) to 14 ms. On average, for modern personal computer disks this parameter ranges from 7 to 11 ms, although there are also very fast models, for example the WD VelociRaptor with an average random access time of 3.6 ms.

Conclusion

In conclusion, I would like to say a few words about the increasingly popular hybrid drives (SSHD). Devices of this type combine a conventional hard disk drive (HDD) with a small solid-state drive (SSD) that acts as an additional cache. In this way, developers try to combine the main advantages of the two technologies: the large capacity of magnetic platters and the speed of flash memory. At the same time, the cost of hybrid drives is much lower than that of newfangled SSDs and only slightly higher than that of conventional HDDs.

Despite the promise of this technology, so far SSHD drives on the hard drive market are very poorly represented by only a small number of models in the 2.5-inch form factor. Seagate is the most active in this segment, although competitors Western Digital (WD) and Toshiba have also already presented their hybrid solutions. All this leaves hope that the market for SSHD hard drives will develop, and in the near future we will see new models of such devices on sale not only for mobile computers, but also for desktop PCs.

This concludes our review, where we looked at all the main characteristics of computer hard drives. We hope that based on this material, you will be able to choose a hard drive for any purpose with the appropriate optimal parameters.

A hard drive (hard disk, HDD) is one of the most important parts of a computer. After all, if the processor, video card or another component breaks down, you only regret the money spent on a replacement; if the hard drive breaks down, you risk irretrievably losing important data. The overall speed of the computer also depends on the hard drive. Let's figure out how to choose the right one.

Hard disk tasks

The job of a hard drive inside a computer is to store and retrieve information very quickly. The hard drive is an amazing invention of the computer industry: using the laws of physics, this small device stores an enormous amount of information.

Hard disk type

IDE - outdated hard drives intended for connection to older motherboards.

SATA - replaced IDE hard drives, have a higher data transfer rate.

SATA interfaces come in different versions, which differ from each other in data transfer speed and in the technologies they support:

  • SATA - transfer rate of up to 150 MB/s;
  • SATA II - transfer rate of up to 300 MB/s;
  • SATA III - transfer rate of up to 600 MB/s.

SATA III began to be produced fairly recently, since the beginning of 2010. When buying such a hard drive, pay attention to the year of manufacture of your computer (if it has not been upgraded): if it is older than that date, this hard drive will not work for you! SATA and SATA II HDDs have the same connectors and are compatible with each other.

Hard disk capacity

The most common hard drives used at home have capacities of 250, 320 and 500 gigabytes. Smaller drives of 80 or 120 gigabytes are becoming rarer and are hardly on sale any more. For storing very large amounts of data there are hard drives of 1, 2 and 4 terabytes.

Hard drive speed and cache

When choosing a hard drive, it is important to pay attention to its speed (spindle rotation speed), since the speed of the entire computer will depend on it. The usual drive speeds are 5400 and 7200 rpm.

The amount of buffer memory (cache) is the hard disk's own fast memory; it comes in sizes of 8, 16, 32 and 64 megabytes. The faster and larger this memory, the higher the drive's data transfer rate.

In conclusion

Before buying, check which hard drive suits your motherboard: IDE, SATA or SATA III. Then look at the spindle rotation speed and the amount of buffer memory: these are the main indicators you need to pay attention to. Also consider the manufacturer and the capacity that suits you.

We wish you successful shopping!

Share your choice in the comments, it will help other users make the right choice!





Using a cache increases the performance of any hard drive by reducing the number of physical disk accesses, and also allows the hard drive to work even when the host bus is busy. Most modern drives have a cache of 8 to 64 megabytes. That is even more than the entire hard drive capacity of an average computer at the beginning of the nineties.

Despite the fact that the cache increases the speed of the drive in the system, it also has its drawbacks. For a start, the cache does not speed up the drive at all for random requests to data located at opposite ends of the platter, since prefetching makes no sense for such requests. Nor does the cache help much when reading large amounts of data, because it is usually quite small: for example, when copying an 80-megabyte file with a 16-megabyte buffer, which is typical nowadays, only a little less than 20% of the copied file fits into the cache.
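
The 20% figure is simple arithmetic; a quick check with the file and buffer sizes from the example above:

```python
# What fraction of the copied file can sit in the drive cache at once?
file_mb, cache_mb = 80, 16
print(f"{cache_mb / file_mb:.0%} of the file fits into the cache")   # -> 20%
# Part of the buffer is reserved for firmware housekeeping, which is why
# the figure in practice is "a little less than 20%".
```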


In recent years, hard drive manufacturers have greatly increased the cache capacity of their products. In the late 90s, 256 kilobytes was the standard for all drives, and only high-end devices had 512 kilobytes of cache. Now an 8-megabyte cache has become the de facto standard for all drives, while the most productive models carry 32 or even 64 megabytes. There are two reasons why the drive buffer has grown so rapidly. One is the sharp decline in prices for synchronous memory chips. The other is users' belief that doubling or even quadrupling the cache size will greatly affect the speed of the drive.

The size of the hard disk cache does, of course, affect the speed of the drive in the operating system, but not as much as users imagine. Manufacturers take advantage of users' faith in cache size and make loud claims in their brochures about a cache four times larger than the standard model's. However, comparing the same hard drive with 16 and 64 megabyte buffers shows that the speedup amounts to a few percent. What does this mean in practice? Only a very large difference in cache size (say, between 512 kilobytes and 64 megabytes) will noticeably affect drive speed. It should also be remembered that the hard drive buffer is quite small compared to the computer's memory, and the "soft" cache, that is, the intermediate buffer the operating system keeps in the computer's RAM for caching file system operations, often contributes more to drive performance.

Fortunately, there is a way to make writes appear faster: the computer writes data to the drive, the data goes into the cache, and the drive immediately reports to the system that the write has been completed; the computer carries on, believing that the drive managed to write the data very quickly, while in fact the drive has only placed the data in the cache and will write it to the platters later. This technology is called write-back caching.
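
A minimal sketch of the write-back idea described above (a toy Python model, not real drive firmware): writes are acknowledged as soon as they reach the buffer and are flushed to the platters later.

```python
from collections import deque

class WriteBackCache:
    """Toy model of write-back caching: acknowledge at once, flush to platters later."""

    def __init__(self):
        self.pending = deque()            # blocks accepted but not yet on the platters

    def write(self, block):
        self.pending.append(block)        # data only reaches the buffer here...
        return "GOOD"                     # ...but the host is told the write is done

    def flush(self):
        """The 'flush write cache' idea: force everything onto the platters."""
        while self.pending:
            self._write_to_platter(self.pending.popleft())

    def _write_to_platter(self, block):
        pass                              # stand-in for the slow physical write

drive = WriteBackCache()
print(drive.write(b"16 KB of data"))      # host sees GOOD before any physical write
drive.flush()                             # only now does the data reach the platters
```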

Because of this risk (data still sitting in the cache is lost if power fails before it reaches the platters), some workstations do not use write caching at all, and modern drives allow the write cache to be disabled. This is especially important in applications where data integrity is critical. Since this type of caching greatly increases drive speed, however, other methods are usually employed instead to reduce the risk of data loss from a power outage; the most common is connecting the computer to an uninterruptible power supply. In addition, all modern drives have a "flush write cache" command, which forces the drive to write the data from the cache to the platters; the system has to issue this command blindly, because it does not know whether there is data in the cache or not. Every time the power is turned off, modern operating systems send this command to the hard drive, then the command to park the heads (although the latter could be omitted, because every modern drive automatically parks its heads when the voltage drops below the minimum permissible level), and only then does the computer turn off. This ensures the safety of user data and the correct shutdown of the hard drive.


Hard drive cache

05.09.2005

All modern drives have a built-in cache, also called a buffer. Its purpose differs from that of a CPU cache: its function is to buffer between fast and slow devices. In the case of hard disks, the cache is used to temporarily store the results of recent reads from the disk, as well as to prefetch information that may be requested a little later, for example the sectors immediately following the one currently requested.

Using a cache increases the performance of any hard drive by reducing the number of physical disk accesses, and also allows the hard drive to work even when the host bus is busy. Most modern drives have a cache of 2 to 8 megabytes, and the most advanced SCSI drives have caches of up to 16 megabytes, which is even more than the memory of an average computer of the nineties.

It should be noted that when someone talks about a disk cache, most often they mean not the hard disk's own cache but a buffer allocated by the operating system to speed up read and write operations in that particular operating system.

The reason the hard drive cache is so important is the big difference between the speed of the drive mechanism itself and the speed of the drive interface. Finding the sector we need takes whole milliseconds, because time is spent moving the head and waiting for the desired sector, and in a modern personal computer even one millisecond is a long time. On a typical IDE/ATA drive, transferring a 16 KB block of data from the cache to the computer is about a hundred times faster than finding it and reading it from the surface. This is why all hard drives have an internal cache.
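
To put rough numbers on that "about a hundred times" claim, here is a back-of-the-envelope comparison; the ~100 MB/s interface burst rate and the ~14 ms physical access time are assumptions chosen for illustration, not measured values.

```python
# Back-of-the-envelope comparison for a 16 KB block (illustrative numbers only).
block_kb = 16
interface_mb_s = 100        # assumed burst rate of an IDE/ATA interface of that era
physical_access_ms = 14.0   # assumed seek + rotational latency + media read

transfer_ms = block_kb / 1024 / interface_mb_s * 1000
print(f"from the cache:    ~{transfer_ms:.2f} ms")
print(f"from the platters: ~{physical_access_ms:.1f} ms")
print(f"ratio:             ~{physical_access_ms / transfer_ms:.0f}x")
# With these assumptions a cache hit is roughly a hundred times faster.
```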

Writing data to disk is another matter. Suppose we need to write the same 16-kilobyte block on a drive with a cache. The drive instantly transfers this block to its internal cache and reports to the system that it is ready for new requests, while simultaneously writing the data to the surface of the magnetic platters. With sequential reading of sectors from the surface, on the other hand, the cache no longer plays a big role, because sequential read speed and interface speed are about the same in that case.

General Concepts of Hard Drive Cache Operation

The simplest principle of cache operation is to store not only the requested sector but also several sectors after it. As a rule, reads from a hard disk occur not in 512-byte blocks but in blocks of 4096 bytes (a cluster, although the cluster size may vary). The cache is divided into segments, each of which can store one block of data. When a request reaches the hard drive, the drive controller first checks whether the requested data is already in the cache and, if so, returns it to the computer immediately, without physically accessing the surface. If the data is not in the cache, it is first read from the platters into the cache and only then transferred to the computer. Because the cache size is limited, its contents are constantly being replaced; typically the oldest piece is replaced by a new one. This is called a circular buffer, or circular cache.

To raise drive performance further, manufacturers have come up with several ways of speeding up work with the help of the cache:

  1. Adaptive segmentation. Usually the cache is divided into segments of equal size. Since requests can have different sizes, this wastes cache space, because each request has to be mapped onto fixed-length segments. Many modern drives therefore change the segment size dynamically: they estimate the size of a request and adjust the segment size (and sometimes the number of segments) to fit it, which uses the cache more efficiently. This is more complex than working with fixed-length segments and can lead to data fragmentation inside the cache, increasing the load on the hard disk's microprocessor.
  2. Prefetching (read-ahead). Based on the data requested at the moment and on requests made earlier, the drive's microprocessor loads into the cache data that has not yet been requested but is likely to be. The simplest case is to read a little beyond the currently requested data, because statistically those sectors are the most likely to be asked for next. If the prefetch algorithm is implemented well in the drive's firmware, it speeds up operation across different file systems and data types; a sketch of this idea follows after the list.
  3. User control. High-end drives expose a set of commands that let the user control cache operation precisely: enabling and disabling the cache, managing segment sizes, enabling and disabling adaptive segmentation and prefetching, and so on.
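
A possible sketch of the read-ahead idea from item 2, assuming a naive policy that waits for a couple of back-to-back sequential requests before prefetching; the thresholds, window size and class name are invented for illustration and do not describe any real firmware.

    class ReadAheadPrefetcher:
        """Toy read-ahead policy: once the last few requests form a sequential
        stream, speculatively read the next `window` sectors into the cache."""

        def __init__(self, window=64, sequential_threshold=2):
            self.window = window
            self.threshold = sequential_threshold
            self.last_lba = None
            self.sequential_run = 0

        def on_request(self, lba, length):
            # Detect a sequential stream: each request starts where the last ended.
            if self.last_lba is not None and lba == self.last_lba:
                self.sequential_run += 1
            else:
                self.sequential_run = 0
            self.last_lba = lba + length

            if self.sequential_run >= self.threshold:
                # Sectors the firmware would fetch speculatively after this request.
                return list(range(lba + length, lba + length + self.window))
            return []

    p = ReadAheadPrefetcher(window=16)
    for lba in (0, 8, 16, 24):          # a sequential stream of 8-sector reads
        prefetch = p.on_request(lba, 8)
    print(prefetch[:4])                  # [32, 33, 34, 35] -> read-ahead kicked in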

Although the cache increases the speed of the drive in the system, it also has its drawbacks. For a start, the cache does not speed up the drive at all for random requests to data scattered across the platters, since prefetching makes no sense for such requests. Nor does it help much when reading large amounts of data, because it is quite small: when copying a 10-megabyte file through a typical 2-megabyte buffer, only a little less than 20% of the file fits into the cache at any moment.

Because of these and other peculiarities, the cache does not speed up the drive as much as we would like. The gain it provides depends not only on the size of the buffer, but also on the caching algorithms of the drive's microprocessor and on the type of files being worked with at the moment. And, as a rule, it is very hard to find out which caching algorithms a particular drive uses.

The figure shows the cache chip of a Seagate Barracuda drive; its capacity is 4 megabits, that is, 512 kilobytes.

Read-Write Caching

In recent years, hard drive manufacturers have greatly increased the cache capacity of their products. In the late 90s, 256 kilobytes was standard for most drives and only high-end devices had 512 kilobytes of cache. Today a 2-megabyte cache has become the de facto standard, while the most productive models carry 8 or even 16 megabytes; as a rule, 16 megabytes is found only on SCSI drives. There are two reasons the buffer has grown so quickly. One is the sharp decline in prices for synchronous memory chips. The other is users' belief that doubling or even quadrupling the cache will greatly affect the speed of the drive.

The size of the hard disk cache does, of course, affect the speed of the drive in the operating system, but not as much as users imagine. Manufacturers take advantage of this belief and make bold claims in their brochures about a cache four times larger than in the standard model. Yet comparing the same hard drive with 2- and 8-megabyte buffers shows that the speedup amounts to only a few percent. Only a very large difference in cache size (for example, between 512 kilobytes and 8 megabytes) noticeably affects the speed of the drive. It should also be remembered that the hard drive's buffer is tiny compared to the computer's main memory, and the "soft" cache, the intermediate buffer that the operating system sets up in RAM for caching file system operations, often contributes more to the drive's apparent performance.

Read caching and write caching are similar in some respects, but they also differ in many ways. Both are intended to increase the overall performance of the drive: they act as buffers between a fast computer and slow drive mechanics. The main difference is that one of them does not change the data on the drive, while the other does.

Without caching, every write operation would mean an agonizing wait while the heads move to the right place and the data is written to the surface. Working with such a computer would be painful: as mentioned earlier, on most hard drives this takes at least 10 milliseconds, which is a long time from the computer's point of view, since the processor would have to wait out those 10 milliseconds on every write to the drive. Strikingly, exactly such a caching mode exists: the data is written to the cache and to the surface at the same time, and the system waits for both operations to finish. This is called write-through caching. It speeds things up only when the data just written soon needs to be read back, so it can be served from the cache rather than from the surface.

Fortunately, there is a faster variant: the computer writes data to the drive, the data lands in the cache, and the drive immediately reports to the system that the write is complete. The computer carries on, believing the drive managed to write the data very quickly, while in reality the drive has "deceived" it, has only placed the data in the cache, and only then begins writing it to the platters. This technology is called write-back caching.
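
A toy comparison of the two policies, with invented latencies, just to show why write-back feels so much faster to the host:

    # Toy model of the two write policies described above. The latencies are
    # invented round numbers, purely for illustration.

    CACHE_WRITE_MS = 0.05     # time to accept a block into the drive's cache (assumed)
    SURFACE_WRITE_MS = 10.0   # time to position the heads and write to the platters (assumed)

    def write_through(blocks):
        """The host waits for every block to reach the platters."""
        return sum(CACHE_WRITE_MS + SURFACE_WRITE_MS for _ in blocks)

    def write_back(blocks):
        """The host only waits for the cache; the surface writes happen later,
        overlapped with other work (and are lost if power fails first)."""
        return sum(CACHE_WRITE_MS for _ in blocks)

    blocks = range(100)
    print(f"write-through: host blocked for ~{write_through(blocks):.0f} ms")  # ~1005 ms
    print(f"write-back:    host blocked for ~{write_back(blocks):.0f} ms")     # ~5 ms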

Write-back caching does increase performance, but it has its drawbacks. The drive tells the computer that the write is already done while the data is still only in the cache, and only then starts writing it to the surface, which takes some time. That is not a problem as long as the computer has power. But cache memory is volatile, so the moment power is lost, the entire contents of the cache are gone for good. If data was sitting in the cache waiting to be written to the surface when the power went out, it is lost forever. Worse, the system does not know whether the data actually reached the disk, because the drive has already reported success. So we not only lose the data itself, we also do not know which data failed to be written, and we may not even realize that a failure has occurred. The result can be a partially written file, broken file integrity, an operating system that no longer works correctly, and so on. Read caching, of course, is not affected by this problem.

Because of this risk, some workstations do not use write caching at all, and modern drives allow the write cache to be disabled, which is especially important in applications where data integrity is critical. Since this type of caching greatly increases the speed of the drive, however, people usually prefer other ways of reducing the risk of data loss from a power outage. The most common is to connect the computer to an uninterruptible power supply. In addition, all modern drives support a "flush write cache" command, which forces the drive to write the contents of its cache to the surface; the system issues this command blindly, since it does not know whether anything is actually sitting in the cache. When shutting down, modern operating systems send this command to the hard drive, then send the command to park the heads (although the latter could be omitted, because every modern drive parks its heads automatically when the supply voltage drops below the permissible level), and only after that does the computer power off. This ensures the safety of user data and a correct shutdown of the hard drive.
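
From the application side, the usual way to reach this "flush" behaviour is to ask the operating system to commit the data, for example with os.fsync() in Python; on modern systems the kernel will then typically flush the drive's write cache as well, if that cache is enabled. The file name below is illustrative.

    import os

    # Append a record and explicitly push it toward stable storage.
    # flush() empties Python's userspace buffer into the kernel; os.fsync() asks
    # the kernel to commit the data to the device, which on modern systems also
    # typically triggers a flush of the drive's own write cache when it is enabled.
    with open("important.log", "a") as f:   # illustrative file name
        f.write("transaction committed\n")
        f.flush()
        os.fsync(f.fileno())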

spas-info.ru

What is a hard disk buffer and why is it needed

Today the most common storage medium is the magnetic hard drive. It has a certain amount of capacity dedicated to storing the user's data, and it also has a buffer memory intended for holding intermediate data. Professionals call the hard disk buffer "cache memory", or simply the "cache". Let's see why the HDD buffer is needed, what it affects, and how large it usually is.

The hard disk buffer temporarily holds data that has been read from the drive's platters but has not yet been handed over for processing. This transit storage is needed because the speed of reading information from the HDD and the throughput of the rest of the system differ significantly, so the data is first parked in the "cache" and only then used for its intended purpose.

The hard disk buffer is not a set of special sectors on the platters, as inexperienced users sometimes believe. It is a dedicated memory chip located on the drive's internal circuit board. Such chips work much faster than the drive mechanics themselves and therefore give a noticeable (a few percent) boost to computer performance in everyday use.

It is worth noting that the size of the "cache memory" depends on the specific disk model. It used to be around 8 megabytes, and that figure was considered satisfactory. With the development of technology, however, manufacturers have been able to fit chips with more memory, so most modern hard drives have a buffer of 32 to 128 megabytes. Naturally, the largest "cache" goes into the more expensive models.

What impact does a hard disk buffer have on performance

Now let's look at why the size of the hard drive buffer affects computer performance. In theory, the more information fits into the "cache memory", the less often the operating system has to access the hard drive itself. This matters most in workloads where the user handles a large number of small files: they simply move into the hard disk buffer and wait there for their turn.

If, however, the PC is used to process large files, the "cache" loses its relevance: the data simply cannot fit into chips of such modest size. In that case the user will not notice any performance gain, since the buffer is barely used. This happens, for example, when video-editing software and similar programs are run.

Thus, when buying a new hard drive, it makes sense to pay close attention to the size of the "cache" only if you plan to work mostly with small files; then you really will notice an increase in your computer's performance. If the PC will be used for ordinary everyday tasks or for processing large files, the buffer size is not worth worrying about.

A personal collection of digital data tends to grow rapidly over time. Over the years, thousands of songs, films, photographs, documents and video courses accumulate, and of course they all have to be stored somewhere. The hard drive of a computer, no matter how big it is, will still someday run out of free space.

The obvious solution to the shortage of storage space is to buy DVDs, USB flash drives or an external hard drive (HDD). Flash drives usually offer only a few gigabytes, they are hardly suitable for long-term storage, and their price-to-capacity ratio is, to put it mildly, not the best. DVDs are a good option in terms of price, but inconvenient when it comes to burning, rewriting and deleting data, and they are slowly dying out as an obsolete technology. An external HDD provides plenty of space, is portable and convenient to use, and is well suited for long-term data storage.

When buying an external HDD, to make the right choice, you need to know what to look for first. In this article, we will tell you what criteria should be followed when choosing and buying an external hard drive.

What to look for when buying an external hard drive

Let's start with the brand; among the best are Maxtor, Seagate, Iomega, LaCie, Toshiba and Western Digital.
The most important characteristics to pay attention to when buying are:

Capacity

The amount of disk space is the first thing to consider. A useful rule of thumb: take the capacity you think you need and multiply it by three. For example, if you believe 250 GB of additional space will be enough, buy a model of 750 GB or more. Keep in mind that high-capacity drives tend to be bulky, which hurts their portability and matters to anyone who often carries an external drive around. For desktop computers, models with several terabytes of disk space are commercially available.

Form factor

The form factor determines the size of the device. External HDDs currently come in the 2.5-inch and 3.5-inch form factors.
2.5-inch drives are smaller, lighter and more compact, are powered from the port, and are easy to carry around.
3.5-inch drives are larger, require additional mains power, are fairly heavy (often more than 1 kg) and offer a large amount of disk space. Pay attention to the mains power requirement: if you plan to connect the device to a laptop with weak ports, the port alone may not be able to spin up the disk, and the drive simply will not work.

Rotation speed (RPM)

The second important factor is the rotational speed of the platters, specified in RPM (revolutions per minute). Higher speed means faster reading and writing. Any HDD spinning at 7200 RPM or more is a good choice; if speed is not critical for you, a 5400 RPM model will be quieter and run cooler.

Cache size

Each external HDD has a buffer, or cache, where data is temporarily placed before it goes to disk. Drives with large caches transfer data faster than those with smaller caches. Choose a model that has at least 16 MB of cache, preferably more.

Interface

In addition to the factors above, another important characteristic is the interface used for data transfer. The most common is USB 2.0; USB 3.0 is gaining popularity, and the new generation offers significantly higher transfer speeds. Models with FireWire and eSATA interfaces are also available. We recommend choosing a USB 3.0 or eSATA model with a high data transfer rate, provided your computer has the appropriate ports. If being able to connect the external drive to as many devices as possible matters most to you, choose a model with a USB 2.0 interface.