Friday, April 29, 2005

I wonder?!

I'm just wondering if data cache management can be done in a simple yet appealing way. Usually the cache maintains data in fixed-size pages so that the cache hit ratio is increased. When a request for even 1 byte comes in, the whole page containing that byte is brought into the cache. This is done keeping in mind the principle of locality of reference: there's a good chance that forthcoming requests also want data within that page. Could this be made any simpler from an implementation point of view?

Let's say that we're really not bothered about the exact data. What we maintain in the cache is just the addresses that are accessed: a request whose address is in the cache results in a cache hit, and one whose address isn't results in a cache miss. So, to implement this paging of the cache data, I use only the address range of a page in the cache. For example, if I assume the page size to be N KB, then for any request that wants data from an address X, I just round X up to the next N KB aligned address. This is the higher end of my range; subtract N KB from it, and now I have the lower end of the page range. The table itself can be implemented for fast lookup. I think that even though it eventually does the same thing as any other cache would, this is a lot simpler from an implementation point of view.
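A minimal sketch of the idea in Python (the 4 KB page size, the class name, and the use of a set as the "fast lookup" table are my own illustrative choices, not anything the post prescribes):

```python
PAGE_SIZE = 4 * 1024  # assume N = 4 KB pages (hypothetical choice)

def page_range(addr, page_size=PAGE_SIZE):
    """Round addr up to the next page_size-aligned address (the high
    end of the range), then subtract page_size for the low end."""
    high = (addr // page_size + 1) * page_size
    low = high - page_size
    return low, high

class AddressCache:
    """Tracks only the address ranges of pages, never the data itself."""
    def __init__(self, page_size=PAGE_SIZE):
        self.page_size = page_size
        self.pages = set()  # low ends of cached page ranges

    def access(self, addr):
        """Return True on a cache hit, False on a miss.
        On a miss, the page range containing addr is recorded."""
        low, _ = page_range(addr, self.page_size)
        if low in self.pages:
            return True
        self.pages.add(low)
        return False
```

For example, after an access to address 100 (a miss that records the range 0..4096), a later access to address 4000 is a hit, while address 5000 falls in the next page and misses.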
