What is Redis?
Redis is an open-source database written in C. Unlike traditional databases, Redis keeps its data in memory (it is an in-memory database), so reads and writes are very fast, which is why it is widely used for caching. Redis stores data as key-value (KV) pairs. To meet the needs of different business scenarios, Redis ships with several built-in data type implementations. In addition, Redis supports transactions, persistence, and a variety of out-of-the-box cluster solutions.
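As a minimal illustration of the key-value model described above, the sketch below assumes the Jedis Java client and a Redis server running on localhost:6379; both the client choice and the key name are assumptions made for this example.

import redis.clients.jedis.Jedis;

public class RedisHello {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Redis stores key-value pairs; this one is a plain string value.
            jedis.set("greeting", "hello redis");
            // The read is served from memory, not from disk.
            System.out.println(jedis.get("greeting")); // prints "hello redis"
        }
    }
}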
Why use Redis?
1. Performance.
When a user accesses some data for the first time, the process is relatively slow because it has to be read from the database on disk. If that data is accessed frequently and does not change often, we can safely store it in the cache, so that the next time the user requests it, it is served directly from the cache. Operating on the cache means operating on memory, which is very fast (see the cache-aside sketch after this list).
2. High concurrency.
A database like MySQL handles roughly 10K QPS (on a 4-core, 8 GB machine), while a Redis cache easily reaches 100K+ QPS, and even 300K+ (for standalone Redis; a Redis cluster can go higher).
QPS (queries per second): the number of queries the server can execute per second.
As you can see, operating directly on the cache sustains far more requests than hitting the database directly. So we can move some of the data from the database into the cache, so that part of the user's requests go straight to the cache without touching the database at all, which improves the overall concurrency of the system.
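To make the read path in point 1 concrete, here is a minimal cache-aside sketch in Java. It assumes the Jedis client, a Redis server on localhost:6379, and a hypothetical loadUserFromDb() method standing in for the slow, disk-backed database query; none of these names come from the text above.

import redis.clients.jedis.Jedis;

public class UserCache {
    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis

    public String getUser(String userId) {
        String cacheKey = "user:" + userId;
        String cached = jedis.get(cacheKey);     // 1. try the in-memory cache first
        if (cached != null) {
            return cached;                       // cache hit: no database access at all
        }
        String fromDb = loadUserFromDb(userId);  // 2. cache miss: fall back to the database
        jedis.setex(cacheKey, 300, fromDb);      // 3. populate the cache with a 5-minute TTL
        return fromDb;
    }

    // Hypothetical placeholder for the slow, disk-backed database query.
    private String loadUserFromDb(String userId) {
        return "user-data-for-" + userId;
    }
}

Only the first request for a given user pays the database cost; subsequent requests within the TTL are answered from memory.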
What else can Redis do besides caching?
Distributed locks: implementing distributed locks with Redis (typically through a client such as Redisson) is a common approach; see the sketch after this list.
Rate limiting: generally implemented with Redis + Lua scripts.
Message queue: Redis's built-in list data structure can serve as a simple queue. The stream type added in Redis 5.0 is better suited for message queues: it is similar to Kafka, with the concepts of topics and consumer groups, and it supports message persistence and an ACK mechanism.
Complex business scenarios: with the data structures provided by Redis and Redis-based libraries (such as Redisson), we can easily implement many complex business requirements, such as counting active users with bitmaps and maintaining leaderboards with sorted sets.
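For the distributed-lock case mentioned in the first item, here is a minimal sketch using Redisson. The single-node address and the lock name "order:1001:lock" are assumptions made for illustration.

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class LockExample {
    public static void main(String[] args) {
        // Assumed single-node Redis at 127.0.0.1:6379.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("order:1001:lock"); // hypothetical lock name
        lock.lock();               // blocks until the lock is acquired
        try {
            // critical section: only one process/instance executes this at a time
        } finally {
            lock.unlock();         // always release the lock
        }
        redisson.shutdown();
    }
}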
What are the commonly used data structures for Redis?
5 basic data structures: string, list, set, hash, and zset (sorted set).
3 special data structures: HyperLogLog, bitmap, and geospatial.
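A quick tour of the five basic structures, as a sketch assuming the Jedis client and a local Redis server; all key names are made up for illustration.

import redis.clients.jedis.Jedis;

public class DataStructures {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("page:title", "home");               // string
            jedis.lpush("recent:logins", "alice", "bob");  // list (queue/stack-like)
            jedis.sadd("tags:post:1", "redis", "cache");   // set (unordered, unique members)
            jedis.hset("user:1", "name", "Alice");         // hash (field -> value map)
            jedis.zadd("leaderboard", 99.5, "alice");      // zset (sorted set with scores)

            System.out.println(jedis.lrange("recent:logins", 0, -1));           // [bob, alice]
            System.out.println(jedis.zrevrangeWithScores("leaderboard", 0, 9)); // top 10 by score
        }
    }
}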
What is the point of setting an expiration time for cached data in Redis?
In general, we set an expiration time whenever we cache data. Why?
Because memory is limited: if all cached data were kept forever, we would run out of memory very quickly.
Is the expiration time useful for anything other than helping to alleviate memory consumption?
In many business scenarios, a piece of data should only exist for a certain period of time. For example, an SMS verification code may be valid for only 1 minute, and a user's login token may be valid for only 1 day.
If you handled this with a traditional database, you would generally have to check for expiration yourself, which is more cumbersome and performs much worse.
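As a sketch of both examples, assuming the Jedis client; the phone number, token value, and key names are made up.

import redis.clients.jedis.Jedis;

public class ExpirationExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // SMS verification code, valid for 1 minute (60 seconds).
            jedis.setex("sms:code:13800000000", 60, "493027");

            // Login token, valid for 1 day (86400 seconds).
            jedis.set("token:user:42", "opaque-token-value");
            jedis.expire("token:user:42", 86400);

            System.out.println(jedis.ttl("sms:code:13800000000")); // remaining seconds, e.g. 60
            // Once the TTL reaches 0, GET returns null and Redis reclaims the key on its own.
        }
    }
}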
Do you know the deletion policy for expired data?
If you set a batch of keys to survive only for 1 minute, how does Redis delete the keys after 1 minute?
There are two commonly used deletion policies for expired data.
1. Lazy deletion: a key is checked for expiration only when it is read. This is the most CPU-friendly approach, but it may leave many expired keys undeleted.
2. Periodic deletion: at regular intervals, Redis samples a batch of keys and deletes the expired ones among them. Redis also limits the duration and frequency of this operation to reduce its impact on CPU time.
Periodic deletion is more memory-friendly, while lazy deletion is more CPU-friendly. Each has its own merits, so Redis combines periodic deletion with lazy deletion.
However, setting an expiration time on keys alone is still not enough, because periodic deletion and lazy deletion may still miss many expired keys. Those expired keys then accumulate in memory until memory runs out.
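To make the two strategies concrete, here is a deliberately simplified Java sketch of the idea. It only illustrates the concept of lazy + periodic deletion; it is not Redis's actual C implementation.

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExpirySketch {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    // key -> absolute expiry timestamp in milliseconds
    private final Map<String, Long> expiries = new ConcurrentHashMap<>();

    // Lazy deletion: the expiry is only checked when the key is actually read.
    public String get(String key) {
        Long expireAt = expiries.get(key);
        if (expireAt != null && expireAt <= System.currentTimeMillis()) {
            store.remove(key);
            expiries.remove(key);
            return null; // an expired key behaves as if it were already gone
        }
        return store.get(key);
    }

    // Periodic deletion: sample a bounded batch of keys with TTLs and delete the
    // expired ones, so a single sweep never hogs the CPU.
    public void periodicSweep(int sampleSize) {
        long now = System.currentTimeMillis();
        int checked = 0;
        Iterator<Map.Entry<String, Long>> it = expiries.entrySet().iterator();
        while (it.hasNext() && checked < sampleSize) {
            Map.Entry<String, Long> entry = it.next();
            if (entry.getValue() <= now) {
                store.remove(entry.getKey());
                it.remove();
            }
            checked++;
        }
    }
}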
Do you understand the Redis memory eviction mechanism?
Related question: MySQL holds 20 million rows of data but Redis only holds 200,000 of them; how do you ensure that the data in Redis is hot data?
Redis provides six data eviction policies:
1. volatile-lru: from the dataset with an expiration time set, evict the least recently used data.
2. volatile-ttl: from the dataset with an expiration time set, evict the data that is about to expire.
3. volatile-random: from the dataset with an expiration time set, evict data at random.
4. allkeys-lru: when memory is insufficient to hold newly written data, evict the least recently used key from the whole keyspace (this is the most commonly used policy).
5. allkeys-random: evict data at random from the whole keyspace.
6. no-eviction: never evict data; write operations return an error when the memory limit is reached.
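The eviction policy is chosen by configuration. As a small sketch assuming the Jedis client, it can be inspected or changed at runtime with CONFIG GET/SET; in production it is more commonly set in redis.conf.

import redis.clients.jedis.Jedis;

public class EvictionConfig {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Cap memory at 100 MB and evict the least recently used key from the
            // whole keyspace once that limit is reached (allkeys-lru).
            jedis.configSet("maxmemory", "100mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");

            System.out.println(jedis.configGet("maxmemory-policy")); // prints the current policy
        }
    }
}

With allkeys-lru, Redis keeps the keys that were used most recently under memory pressure, which is how the "hot data" requirement in the related question above is usually met.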
What is the difference between cache penetration and cache breakdown?
In cache penetration, the requested key does not exist in either the cache or the database.
In cache breakdown, the requested key corresponds to hot data, which exists in the database but not in the cache (usually because the data in the cache has expired).
What are the solutions? For cases where the Redis service is unavailable:
1. Use a Redis cluster so that a single-machine failure does not make the entire cache service unusable.
2. Apply rate limiting to avoid handling too many requests at the same time.
For hotspot cache invalidation:
1. Set different expiration times, for example by adding a random offset to each cache entry's TTL (see the sketch after this list).
2. Make the cache never expire (not recommended; not very practical).
3. Set up a second-level cache.
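For solution 1 (randomized expiration), a minimal sketch assuming the Jedis client; the key prefix and TTL values are made up for illustration.

import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCache {
    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis

    public void cacheProduct(String productId, String json) {
        // Base TTL of 30 minutes plus a random 0-5 minute jitter, so that hot keys
        // written at the same time do not all expire at the same moment.
        int baseSeconds = 30 * 60;
        int jitter = ThreadLocalRandom.current().nextInt(0, 5 * 60);
        jedis.setex("product:" + productId, baseSeconds + jitter, json);
    }
}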