> **The project is renamed to Caches, please see crate [caches](https://crates.io/crates/caches)**
This is a Rust implementation of [HashiCorp's golang-lru](https://github.com/hashicorp/golang-lru).
This crate contains three LRU-based caches: `LRUCache`, `TwoQueueCache`, and `AdaptiveCache`.
See [Introduction](#introduction), [Trade-Off](#trade-off) and [Usages](#usages) for more details.
English | [简体中文](README-zh_CN.md)
## Introduction
The MSRV for this crate is 1.55.0.
LRU-based caches are `O(1)` for reads, writes, and deletes.
- `LRUCache` or `RawLRU` is a fixed size LRU cache.
- `AdaptiveCache` is a fixed size Adaptive Replacement Cache (ARC).
ARC is an enhancement over the standard LRU cache in that it tracks both
frequency and recency of use. This prevents a burst of accesses to new
entries from evicting frequently used older entries.
- `TwoQueueCache` is a fixed size 2Q cache. 2Q is an enhancement over the standard LRU cache in that it tracks both frequently and recently used entries separately.
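To make the plain LRU policy concrete, here is a minimal self-contained sketch of recency-based eviction. This is *not* this crate's implementation (which uses an intrusive doubly-linked list for `O(1)` operations); the `MiniLru` type below is purely illustrative and its `touch` is `O(n)`:

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Conceptual sketch of LRU eviction. Illustrative only: this crate's
/// real implementation is O(1); this sketch's order maintenance is O(n).
struct MiniLru<K: Hash + Eq + Clone, V> {
    cap: usize,
    map: HashMap<K, V>,
    order: Vec<K>, // front = least recently used, back = most recently used
}

impl<K: Hash + Eq + Clone, V> MiniLru<K, V> {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: Vec::new() }
    }

    fn put(&mut self, k: K, v: V) {
        if self.map.insert(k.clone(), v).is_some() {
            // Key already present: just refresh its recency below.
            self.order.retain(|x| x != &k);
        } else if self.map.len() > self.cap {
            // Over capacity: evict the least recently used key.
            let lru = self.order.remove(0);
            self.map.remove(&lru);
        }
        self.order.push(k);
    }

    fn get(&mut self, k: &K) -> Option<&V> {
        if self.map.contains_key(k) {
            // A read refreshes recency, too.
            self.order.retain(|x| x != k);
            self.order.push(k.clone());
        }
        self.map.get(k)
    }
}

fn main() {
    let mut cache = MiniLru::new(2);
    cache.put(1, "a");
    cache.put(2, "b");
    cache.get(&1);     // touch 1, so 2 becomes the least recently used
    cache.put(3, "c"); // capacity exceeded: 2 is evicted
    assert!(cache.get(&2).is_none());
    assert_eq!(cache.get(&1), Some(&"a"));
    println!("evicted key 2, kept key 1");
}
```

2Q and ARC refine exactly this policy: instead of a single recency-ordered list, they keep separate structures for recently used and frequently used entries so that one scan-like burst cannot flush the whole cache.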
## Trade-Off
In theory, `AdaptiveCache` and `TwoQueueCache` add some tracking overhead on top of `LRUCache`: computationally they are roughly **2x** the cost, and the extra memory overhead is linear in the size of the cache. `AdaptiveCache` and `TwoQueueCache` have similar computational costs, but ARC (the algorithm behind `AdaptiveCache`) has been patented by IBM, while `TwoQueueCache` (2Q) needs reasonable parameters to be set.
However, the implementation of `RawLRU` uses [`Box`] and a raw pointer for each entry to work around limitations of the Rust language (it does not use [`Rc`], because [`Rc`] is slower). Thus, in practice, `TwoQueueCache` is about **2.5x** and `AdaptiveCache` about **3x** computationally slower than `LRUCache`, because they have to do more boxing and re-boxing than `LRUCache`, even though the implementation avoids boxing and re-boxing where it can and uses [`mem::swap`] to avoid memory allocation and deallocation.
Hence, it is best to understand your situation (whether your project needs a cache with a higher hit ratio or with faster computational performance), and then choose the appropriate cache for your project.
(For more performance details, you can clone the project and run `cargo bench`. The benchmark source code is in [benches](benches); contributions of more representative benchmark cases are welcome.)
## Usages
- [**`RawLRU`** and **`LRUCache`**](#rawlru-and-lrucache)
- [Default](#default-no-op-callback)
- [Custom OnEvictCallback](#custom-callback)
- [**`AdaptiveCache`**](#adaptivecache-adaptive-replacement-cache)
- [**`TwoQueueCache`**](#twoqueuecache)
### RawLRU and LRUCache
`RawLRU` is the basic data structure of this crate. It has a generic type parameter `E: OnEvictCallback`, which lets users write and apply their own eviction-callback policy.
`LRUCache` is a type alias for `RawLRU` with the default no-op eviction callback.