
Redis 2020: The Latest Illustrated Tutorial (Part 2)

Spring Data Redis

Create project

Add dependency

<dependencies>
    <!-- Spring Data Redis component -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <!-- commons-pool2 object pool dependency -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-pool2</artifactId>
    </dependency>
    <!-- web component -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- test component -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Add the configuration to application.yml

spring:
  redis:
    # Redis server address
    host: 192.168.10.100
    # Redis server port
    port: 6379
    # Redis server password
    password: root
    # Redis database index
    database: 0
    # Connection timeout
    timeout: 10000ms
    jedis:
      pool:
        # Maximum number of connections, default 8
        max-active: 1024
        # Maximum blocking wait time for a connection, in milliseconds, default -1ms (no limit)
        max-wait: 10000ms
        # Maximum number of idle connections, default 8
        max-idle: 200
        # Minimum number of idle connections, default 0
        min-idle: 5

The difference between Lettuce and Jedis

Jedis is an excellent, long-established Java client for Redis, but its drawback is obvious: a Jedis instance talks to the Redis server over a direct connection, and sharing one instance across multiple threads is not thread-safe. To use Jedis in a multithreaded application you need a connection pool so that each thread gets its own instance, and as the number of connections grows, so does the physical resource cost.

Lettuce avoids this thread-safety problem entirely: its connection (StatefulRedisConnection) is built on Netty.

Lettuce is a scalable, thread-safe Redis client that supports synchronous, asynchronous, and reactive usage. Multiple threads can share a single connection instance without worrying about thread safety. It is built on the excellent Netty NIO framework and supports advanced Redis features such as Sentinel, Cluster, pipelining, and automatic reconnection.

Verify that the test environment is set up correctly

@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringDataRedisApplication.class)
public class SpringDataRedisApplicationTests {

    @Autowired
    private RedisTemplate redisTemplate;
    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Test
    public void initconn() {
        ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
        ops.set("username", "lisi");
        ValueOperations<String, String> value = redisTemplate.opsForValue();
        value.set("name", "wangwu");
        // "name" was written through redisTemplate, whose default JDK serializer
        // produces a binary key, so reading it back as a plain string key misses:
        System.out.println(ops.get("name"));     // null
        System.out.println(ops.get("username")); // lisi
    }
}

A custom template solves the serialization problem

  The default template is RedisTemplate<Object, Object>, whose default serializer is JdkSerializationRedisSerializer, which stores binary bytecode, so you normally need to customize the template. Even with a custom template, you can still use StringRedisTemplate whenever you store plain strings; the two do not conflict.

The serialization problem:

To store a domain object as a key-value pair in Redis, we must solve the problem of object serialization. Spring Data Redis gives us several ready-made options:

JdkSerializationRedisSerializer uses the serialization built into the JDK. The advantage is that no type information (class) needs to be provided when deserializing; the disadvantage is that the serialized result is very large, roughly five times the size of the equivalent JSON, which wastes a lot of memory on the Redis server.

Jackson2JsonRedisSerializer uses the Jackson library to serialize objects to JSON strings. It is fast and produces short, concise strings, but it has a fatal drawback: its constructor takes a type parameter, so you must supply the type information (a .class object) of the object being serialized. A look at the source code shows that this type information is only used during deserialization.

GenericJackson2JsonRedisSerializer is a generic serializer that does not require you to specify the object's Class manually.

@Configuration
public class RedisConfig {
    @Bean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        // Serializer for string-type keys
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        // Serializer for string-type values
        redisTemplate.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        // Serializer for hash keys
        redisTemplate.setHashKeySerializer(new StringRedisSerializer());
        // Serializer for hash values
        redisTemplate.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        return redisTemplate;
    }
}
// Serialization test
@Test
public void testSerial() {
    User user = new User();
    user.setId(1);
    user.setUsername("zhangsan");
    user.setPassword("111");
    ValueOperations<String, Object> value = redisTemplate.opsForValue();
    value.set("userInfo", user);
    System.out.println(value.get("userInfo"));
}

Operating on strings

// 1. String operations
@Test
public void testString() {
    ValueOperations<String, Object> valueOperations = redisTemplate.opsForValue();

    // Add a single entry
    valueOperations.set("username", "zhangsan");
    valueOperations.set("age", "18");

    // In Redis, ":" in a key expresses hierarchy, so data can be organized like directories
    valueOperations.set("user:01", "lisi");
    valueOperations.set("user:02", "wangwu");

    // Add multiple entries
    Map<String, String> userMap = new HashMap<>();
    userMap.put("address", "bj");
    userMap.put("sex", "1");
    valueOperations.multiSet(userMap);

    // Get a single entry
    Object username = valueOperations.get("username");
    System.out.println(username);

    // Get multiple entries
    List<String> keys = new ArrayList<>();
    keys.add("username");
    keys.add("age");
    keys.add("address");
    keys.add("sex");
    List<Object> resultList = valueOperations.multiGet(keys);
    for (Object str : resultList) {
        System.out.println(str);
    }

    // Delete
    redisTemplate.delete("username");
}

Operating on hashes

// 2. Hash operations
@Test
public void testHash() {
    HashOperations<String, String, String> hashOperations = redisTemplate.opsForHash();

    /*
     * Add a single entry
     *      Parameter 1: the Redis key
     *      Parameter 2: the hash field
     *      Parameter 3: the hash value
     */
    hashOperations.put("userInfo", "name", "lisi");

    // Add multiple entries
    Map<String, String> map = new HashMap<>();
    map.put("age", "20");
    map.put("sex", "1");
    hashOperations.putAll("userInfo", map);

    // Get a single entry
    String name = hashOperations.get("userInfo", "name");
    System.out.println(name);

    // Get multiple entries
    List<String> keys = new ArrayList<>();
    keys.add("age");
    keys.add("sex");
    List<String> resultlist = hashOperations.multiGet("userInfo", keys);
    for (String str : resultlist) {
        System.out.println(str);
    }

    // Get all entries of the hash
    Map<String, String> userMap = hashOperations.entries("userInfo");
    for (Entry<String, String> userInfo : userMap.entrySet()) {
        System.out.println(userInfo.getKey() + "--" + userInfo.getValue());
    }

    // Delete a field from the hash
    hashOperations.delete("userInfo", "name");
}

Operating on lists

// 3. List operations
@Test
public void testList() {
    ListOperations<String, Object> listOperations = redisTemplate.opsForList();

    // Push on the left (head)
    // listOperations.leftPush("students", "Wang Wu");
    // listOperations.leftPush("students", "Li Si");

    // Push to the left of a pivot: inserts the value just before the first
    // occurrence of the pivot in the list, if the pivot exists
    // listOperations.leftPush("students", "Wang Wu", "Li Si");

    // Push on the right (tail)
    // listOperations.rightPush("students", "Zhao Liu");

    // Range: start index to end index, both inclusive
    List<Object> students = listOperations.range("students", 0, 2);
    for (Object stu : students) {
        System.out.println(stu);
    }

    // Get by index
    Object stu = listOperations.index("students", 1);
    System.out.println(stu);

    // Get the total count
    Long total = listOperations.size("students");
    System.out.println("Total count: " + total);

    // Remove elements: deletes the first `count` occurrences of "Li Si" from the list
    listOperations.remove("students", 1, "Li Si");

    // Delete the whole list
    redisTemplate.delete("students");
}

Operating on sets

// 4. Set operations (unordered)
@Test
public void testSet() {
    SetOperations<String, Object> setOperations = redisTemplate.opsForSet();
    // Add data
    String[] letters = new String[]{"aaa", "bbb", "ccc", "ddd", "eee"};
    // setOperations.add("letters", "aaa", "bbb", "ccc", "ddd", "eee");
    setOperations.add("letters", letters);

    // Get data
    Set<Object> let = setOperations.members("letters");
    for (Object letter : let) {
        System.out.println(letter);
    }

    // Delete
    setOperations.remove("letters", "aaa", "bbb");
}

Operating on sorted sets

// 5. Sorted set operations (ordered)
@Test
public void testSortedSet() {
    ZSetOperations<String, Object> zSetOperations = redisTemplate.opsForZSet();

    ZSetOperations.TypedTuple<Object> objectTypedTuple1 =
            new DefaultTypedTuple<Object>("zhangsan", 7D);
    ZSetOperations.TypedTuple<Object> objectTypedTuple2 =
            new DefaultTypedTuple<Object>("lisi", 3D);
    ZSetOperations.TypedTuple<Object> objectTypedTuple3 =
            new DefaultTypedTuple<Object>("wangwu", 5D);
    ZSetOperations.TypedTuple<Object> objectTypedTuple4 =
            new DefaultTypedTuple<Object>("zhaoliu", 6D);
    ZSetOperations.TypedTuple<Object> objectTypedTuple5 =
            new DefaultTypedTuple<Object>("tianqi", 2D);
    Set<ZSetOperations.TypedTuple<Object>> tuples = new HashSet<ZSetOperations.TypedTuple<Object>>();
    tuples.add(objectTypedTuple1);
    tuples.add(objectTypedTuple2);
    tuples.add(objectTypedTuple3);
    tuples.add(objectTypedTuple4);
    tuples.add(objectTypedTuple5);

    //  Add data 
    zSetOperations.add("score", tuples);

    //  get data 
    Set<Object> scores = zSetOperations.range("score", 0, 4);
    for (Object score: scores) {
        System.out.println(score);
    }

    // Get the total count
    Long total = zSetOperations.size("score");
    System.out.println("Total count: " + total);

    //  Delete 
    zSetOperations.remove("score", "zhangsan", "lisi");
}

Get all keys & delete

// Get all keys
@Test
public void testAllKeys() {
    // Names of all keys in the current database
    Set<String> keys = redisTemplate.keys("*");
    for (String key : keys) {
        System.out.println(key);
    }
}

// Delete
@Test
public void testDelete() {
    // delete() is generic: it works for all data types
    redisTemplate.delete("score");
}

Setting a key's expiration time

@Test
public void testEx() {
    ValueOperations<String, Object> valueOperations = redisTemplate.opsForValue();
    // Method 1: insert an entry and set its expiration time in one call
    valueOperations.set("code", "abcd", 180, TimeUnit.SECONDS);
    // Method 2: set an expiration time on an existing key
    Boolean flag = redisTemplate.expire("code", 180, TimeUnit.SECONDS);
    // Get the remaining time to live of the given key
    Long l = redisTemplate.getExpire("code");
}

Using the Sentinel mechanism with Spring Data Redis

application.yml

spring:
    redis:
        # Redis server address
        host: 192.168.10.100
        # Redis server port
        port: 6379
        # Redis server password
        password: root
        # Redis database index
        database: 0
        # Connection timeout
        timeout: 10000ms
        lettuce:
            pool:
                # Maximum number of connections, default 8
                max-active: 1024
                # Maximum blocking wait time for a connection, in milliseconds, default -1ms (no limit)
                max-wait: 10000ms
                # Maximum number of idle connections, default 8
                max-idle: 200
                # Minimum number of idle connections, default 0
                min-idle: 5
        # Sentinel mode
        sentinel:
            # Name of the master node
            master: mymaster
            # Sentinel nodes
            nodes: 192.168.10.100:26379,192.168.10.100:26380,192.168.10.100:26381

@Bean configuration

@Bean
public RedisSentinelConfiguration redisSentinelConfiguration() {
    RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            // Name of the master node
            .master("mymaster")
            // Sentinel addresses
            .sentinel("192.168.10.100", 26379)
            .sentinel("192.168.10.100", 26380)
            .sentinel("192.168.10.100", 26381);
    // Set the password
    sentinelConfig.setPassword("root");
    return sentinelConfig;
}

Dealing with cache penetration, cache breakdown, and cache avalanche

The key expiration mechanism

  Redis allows you to set an expiration time on cached data. For example, an SMS verification code usually expires in ten minutes, so when we store the code in Redis we attach an expiration time to its key. One point deserves extra attention, though: a key is not necessarily deleted from Redis at the exact moment its expiration time is reached.

Periodic deletion

  By default, Redis randomly samples some keys every 100ms, checks whether they have expired, and deletes the ones that have. Why sample randomly instead of checking every key? Because if tens of thousands of keys are set, scanning all of them every 100 milliseconds would put heavy pressure on the CPU.
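The sampling idea can be sketched in plain Java. This is a simplified in-memory model of the strategy, not Redis's actual implementation; the class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PeriodicExpiry {
    // key -> absolute expiry time in millis
    static Map<String, Long> expiries = new ConcurrentHashMap<>();

    // One sweep: sample up to `sampleSize` random keys and delete the expired
    // ones, instead of scanning every key (which would hog the CPU at scale)
    static int sweep(int sampleSize, long now) {
        List<String> keys = new ArrayList<>(expiries.keySet());
        Collections.shuffle(keys);
        int removed = 0;
        for (String key : keys.subList(0, Math.min(sampleSize, keys.size()))) {
            if (expiries.get(key) <= now) {
                expiries.remove(key);
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        expiries.put("expired", 0L);           // already past its expiry
        expiries.put("alive", Long.MAX_VALUE); // effectively never expires
        sweep(2, System.currentTimeMillis());
        System.out.println(expiries.containsKey("alive"));   // true
        System.out.println(expiries.containsKey("expired")); // false
    }
}
```

Because the sweep only samples, an expired key can survive several sweeps, which is exactly why the lazy deletion described next is also needed.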

Lazy deletion

  Because periodic deletion only samples randomly, many expired keys can outlive their expiration time. So whenever a client reads a key, Redis also checks whether that key has expired, and deletes it if so. In this way, querying an expired key clears it from the cache.
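The on-access check can be modeled in a few lines of plain Java (a toy store, not Redis's implementation; the names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class LazyExpiry {
    static class Entry {
        final String value;
        final long expireAt; // absolute expiry time in millis
        Entry(String value, long expireAt) { this.value = value; this.expireAt = expireAt; }
    }

    static Map<String, Entry> store = new HashMap<>();

    // On every read, check the key's expiry first; if it has passed,
    // remove the key and report a miss -- this is lazy (on-access) deletion
    static String get(String key, long now) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (e.expireAt <= now) {
            store.remove(key);
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        store.put("code", new Entry("abcd", now - 1)); // already expired
        System.out.println(get("code", now));          // null: expired on access
        System.out.println(store.containsKey("code")); // false: removed lazily
    }
}
```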

The memory eviction mechanism

  Periodic deletion plus lazy deletion still leaves a serious loophole: if periodic deletion misses many expired keys, and clients never read those keys again, lazy deletion never triggers either. Expired keys then pile up in memory and eventually exhaust Redis's memory. How is this solved? This is where Redis's memory eviction mechanism comes in, which offers six eviction policies:

  • volatile-lru: from the keys with an expiration time set, evict the least recently used.
  • volatile-ttl: from the keys with an expiration time set, evict the key closest to expiring.
  • volatile-random: from the keys with an expiration time set, evict a random key.
  • allkeys-lru: when memory is insufficient for a new write, evict the least recently used key from the whole keyspace.
  • allkeys-random: evict a random key from the whole keyspace.
  • noeviction (default): when memory is insufficient to hold a new write, the write operation returns an error.

  In general, the volatile-lru policy is recommended. For important data such as configuration, do not set an expiration time at all, so Redis will never evict it. For ordinary data, set a cache TTL; when the data expires, the request fetches it from the DB again and writes it back to Redis.
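To make the LRU idea concrete, here is a toy model of the allkeys-lru policy built on Java's LinkedHashMap in access order. It illustrates the eviction behavior only; Redis itself uses an approximate LRU based on sampling, not a linked map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy allkeys-lru: when the cache is full, the least recently
// *accessed* key is evicted to make room for the new write
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true: reads refresh recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the LRU entry once capacity is exceeded
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes least recently used
        cache.put("c", "3"); // capacity exceeded: "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```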

Cache breakdown

  First, recall how a request obtains data: when a user request arrives, we first try to read from the Redis cache; if the data is there, we return it directly. If the cache misses, we query the DB; if the database returns the data, we update Redis and then return the result.

  Definition: under high concurrency, a hot key suddenly expires, so a flood of requests miss the Redis cache and all go to the DB at once, causing a sudden spike in DB load.

  Solution: cache breakdown rarely crashes the DB outright; it just subjects the DB to periodic spikes of pressure. It can be addressed as follows:

  • Do not set an expiration time on the data in Redis; instead, add a field to the cached object that records its logical expiration time. On every read, check that field, and if the data is about to expire, asynchronously start a thread to refresh the cached value. The drawback is that some requests may receive stale values; whether that is acceptable depends on the business.
  • If the data must always be fresh, the best approach is to make the hot data never expire in Redis and use a mutex so that only one thread at a time writes the cache.
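The mutex approach in the last bullet can be sketched as follows. A ConcurrentHashMap stands in for Redis and a counter stands in for the database, so only the locking pattern is the point here; all class and method names are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class MutexRebuild {
    static Map<String, String> cache = new ConcurrentHashMap<>(); // stands in for Redis
    static ReentrantLock lock = new ReentrantLock();
    static AtomicInteger dbHits = new AtomicInteger();            // counts DB queries

    static String loadFromDb(String key) {
        dbHits.incrementAndGet();
        return "value-of-" + key;
    }

    // When a hot key has expired, only the thread holding the lock rebuilds it;
    // every other thread re-checks the cache after the lock is released
    static String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;
        lock.lock();
        try {
            v = cache.get(key); // double-check: another thread may have rebuilt it
            if (v == null) {
                v = loadFromDb(key);
                cache.put(key, v);
            }
            return v;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> get("hotKey"));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(dbHits.get()); // 1: only one thread hit the DB
    }
}
```

The double-check inside the lock is what keeps the DB hit count at one even when many threads miss the cache simultaneously.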

Cache penetration

  Definition: cache penetration means querying data that exists neither in the cache nor in the DB. For example, when querying product information by id, ids are normally greater than 0, but an attacker can deliberately query with id = -1. The cache always misses, so the request goes to the DB; the data does not exist there either, so nothing is ever cached, and every such request hits the DB. This is cache penetration.

  Solution

  • Use a mutex: on a cache miss, acquire the lock first; only the thread holding the lock queries the database. Threads that fail to acquire it sleep briefly and retry.
  • Use an asynchronous update strategy: return immediately whether or not the key has a value. Each cached value maintains its own expiration time; if it has expired, asynchronously start a thread to read the database and refresh the cache. This requires cache warming (loading the cache before the service starts).
  • Provide an interception mechanism that can quickly judge whether a request is valid, for example a Bloom filter that maintains the set of legal, valid keys, so each requested key can be checked quickly and illegal ones rejected immediately.
  • If the object queried from the database is null, cache it anyway, but with a short expiration time, for example 60 seconds.
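The Bloom-filter idea can be sketched with a BitSet. This toy version uses only two hash positions (production filters, such as Guava's BloomFilter, use more, tuned to a target false-positive rate); all names here are invented for illustration:

```java
import java.util.BitSet;

// Minimal Bloom filter: the set of valid keys is loaded up front; a key the
// filter rejects is definitely absent, so the request never reaches the DB
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;

    public SimpleBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Two cheap hash positions per key
    private int h1(String key) { return Math.floorMod(key.hashCode(), size); }
    private int h2(String key) { return Math.floorMod(key.hashCode() * 31 + 7, size); }

    public void add(String key) {
        bits.set(h1(key));
        bits.set(h2(key));
    }

    // false => key is certainly absent; true => key is *probably* present,
    // since Bloom filters allow false positives but never false negatives
    public boolean mightContain(String key) {
        return bits.get(h1(key)) && bits.get(h2(key));
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1024);
        filter.add("product:1");
        filter.add("product:2");
        // An added key is always reported present:
        System.out.println(filter.mightContain("product:1")); // true
        // An unknown key is very likely (not guaranteed) rejected,
        // and a rejection means the DB query can be skipped entirely:
        System.out.println(filter.mightContain("product:-999"));
    }
}
```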

Cache avalanche

  Definition: if a large number of cached keys expire within a short window, massive cache misses occur at the same time and every request falls on the DB. The sheer query volume can overload the DB or even bring it down.

  Solution

  • Add a random offset to each cache expiration time to avoid collective expiration. If Redis is deployed as a cluster, spreading hot data evenly across the Redis nodes also prevents everything from failing at once.
  • Use a mutex, although this noticeably reduces throughput.
  • Set hot data to never expire.
  • Double caching: keep two caches, cache A and cache B. Cache A has a 20-minute expiration time; cache B has no expiration time. Warm both caches up front, then proceed as follows:

    1. Read from cache A; on a hit, return directly.
    2. If A has no data, read from B, return directly, and asynchronously start an update thread.
    3. The update thread refreshes both cache A and cache B.
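The first bullet, adding a random offset to the TTL, is the simplest to implement. A minimal sketch (the helper name is invented; in real code you would pass the result to redisTemplate's expire or set):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Base TTL plus a random jitter, so keys written in the same batch
    // do not all expire at the same moment
    static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long ttl = ttlWithJitter(600, 300); // 10 min base, up to 5 min extra
            System.out.println(ttl);            // always within [600, 900]
            // e.g. valueOperations.set(key, value, ttl, TimeUnit.SECONDS);
        }
    }
}
```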


Copyright notice
This article was written by [Technical house Xiaobai]. Please include the original link when reposting. Thanks.
https://cdmana.com/2020/12/20201224170632576c.html
