
Offer Wind Fire Wheel: What is the difference between a Redis distributed lock and a ZooKeeper distributed lock?

A welfare post for fellow coders. The Offer Wind Fire Wheel series is now open: xjjdog will pick typical, high-value interview topics to share. Copyright notice: this article is the product of group discussion; unauthorized reproduction on other platforms is refused.

Tags: [Senior] [Redis] [ZooKeeper]

1. The Question

What is the difference between a Redis distributed lock and a ZooKeeper distributed lock?

2. Analysis

This question is demanding for the candidate: it is not just about knowing the implementations, but about grasping the underlying principles. So the answer can be given at several levels.

As everyone knows, Redis is lightweight, so intuitively a distributed lock seems easier to implement there, for example with SETNX. But once you add the high-availability requirement, the difficulty of implementing a Redis lock explodes.

Add a few other lock properties on top, such as optimistic versus pessimistic locking and read-write locks, and it gets even more complicated.

If you tried to cover everything, you could not finish in a whole day of chatting.

3. The Answer

Let's start with a simpler, introductory answer:

  • A Redis distributed lock can be implemented on the SETNX command (though it is better to use the SET command with the NX option)
  • A ZooKeeper distributed lock is based on the ordering of ephemeral sequential nodes plus the node watch mechanism
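To make the first bullet concrete: the Redis approach boils down to "set this key only if it does not already exist, with an expiry". Below is a minimal in-memory sketch of those SET NX PX semantics in plain Java, using a ConcurrentHashMap as a stand-in for Redis. The class and method names are invented for illustration; a real client would send the actual SET command, and the owner-checked release would have to run as an atomic Lua script on the server.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory model of Redis "SET key value NX PX millis".
class SetNxSketch {
    private static final class Entry {
        final String value;
        final long expiresAt; // epoch millis
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();

    // Returns true if the key was set, i.e. we acquired the lock.
    boolean setNxPx(String key, String value, long ttlMillis) {
        long now = System.currentTimeMillis();
        Entry fresh = new Entry(value, now + ttlMillis);
        // compute() is atomic: keep the current entry unless it is absent or expired.
        Entry winner = store.compute(key, (k, cur) ->
                (cur == null || cur.expiresAt <= now) ? fresh : cur);
        return winner == fresh;
    }

    // Delete only if we still own the key (the check-value-then-DEL pattern,
    // which on real Redis must be a Lua script to stay atomic).
    void releaseIfOwner(String key, String value) {
        store.computeIfPresent(key, (k, cur) -> cur.value.equals(value) ? null : cur);
    }
}
```

Even this toy version shows why the release must check ownership: if client A's lease expires and client B takes the lock, a blind DEL from A would destroy B's lock.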

Answering this way, you have dug yourself into a hole, because it drags in a lot of detail. The interviewer only asked for the difference; why dive down to the source-code level?

The suggested answer:

  • For Redis, use the RedLock packaged by redisson
  • For ZooKeeper, use the InterProcessMutex packaged by curator

Comparison:

  • Ease of implementation: ZooKeeper >= Redis
  • Server-side performance: Redis > ZooKeeper
  • Client-side performance: ZooKeeper > Redis
  • Reliability: ZooKeeper > Redis

Let's chat through each point.

3.1 Ease of Implementation

If you program directly against the low-level APIs, the difficulty is roughly the same for both: there are many boundary scenarios to consider. But because ZooKeeper's ZNodes naturally have lock-like properties, going the direct route is simpler there.
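The ZNode recipe works like this: each client creates an ephemeral sequential node under the lock path; whoever holds the smallest sequence number owns the lock, and every other client watches only the node immediately before its own, so a release wakes exactly one waiter instead of the whole herd. Below is a minimal in-memory simulation of that queueing rule, with invented names, purely to show the algorithm; a real implementation would call the ZooKeeper API and rely on session death to delete ephemeral nodes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical simulation of ZooKeeper's sequential-node lock queue.
class ZnodeQueueSketch {
    private final AtomicLong sequence = new AtomicLong();  // like the -0000000042 suffix
    private final ConcurrentSkipListMap<Long, String> children = new ConcurrentSkipListMap<>();

    // "create -e -s /lock/guid-lock-": enqueue and receive our sequence number.
    long createNode(String clientId) {
        long seq = sequence.getAndIncrement();
        children.put(seq, clientId);
        return seq;
    }

    // The lock is held by whoever owns the smallest surviving sequence number.
    boolean holdsLock(long seq) {
        return children.firstKey() == seq;
    }

    // A waiter watches only its immediate predecessor, never the whole set:
    // this is what avoids the thundering herd.
    Long predecessorToWatch(long seq) {
        Map.Entry<Long, String> e = children.lowerEntry(seq);
        return e == null ? null : e.getKey();
    }

    // Releasing the lock (or session death) deletes the node, which fires
    // the watch of exactly one successor.
    void deleteNode(long seq) {
        children.remove(seq);
    }
}
```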

With Redis there are too many exception scenarios to consider, such as lock expiry and keeping the lock highly available, so it is hard to get right.

3.2 Server-Side Performance

ZooKeeper is based on the Zab protocol: a write only succeeds after a majority of nodes ACK it, so throughput is low. If locks are acquired and released frequently, the server cluster comes under heavy pressure.
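The "half the nodes ACK" rule is really a strict majority quorum: a Zab proposal commits once acks > n/2. A one-line sketch of that check (class and method names invented):

```java
// Hypothetical majority-quorum check in the style of Zab-like commits:
// a proposal commits only when acks form a strict majority of the ensemble.
class QuorumSketch {
    static boolean committed(int acks, int ensembleSize) {
        return acks > ensembleSize / 2;
    }
}
```

So a 5-node ensemble needs 3 ACKs per lock write, and every acquire/release pays that round of network traffic, which is where the throughput cost comes from.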

Redis is memory-based: a write to the master alone counts as success, so throughput is high and the pressure on the Redis servers is low.

3.3 Client-Side Performance

Because ZooKeeper has a notification mechanism, acquiring the lock just means adding a watcher and waiting. This avoids polling and costs little performance.

Redis itself has no notification mechanism for locks, so clients can only contend for the lock by polling, CAS-style. All that spinning puts pressure on the client. (Redisson softens this with a pub/sub channel, as the publish calls in the unlock script later in this article show, but contention is still the base model.)
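Concretely, a polling client idles in a try/sleep/retry loop until it grabs the key or times out. The sketch below shows that loop against an in-memory stand-in; the names are invented, and a real client would issue SET NX PX over the network instead of putIfAbsent, which is exactly the repeated round-trip cost the text describes.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical polling acquirer: the busy loop here is the client-side
// overhead that a ZooKeeper watcher avoids.
class PollingLockSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>(); // stand-in for Redis

    boolean tryAcquire(String key, String owner, long timeoutMillis, long backoffMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (store.putIfAbsent(key, owner) == null) {
                return true;                 // grabbed the lock
            }
            Thread.sleep(backoffMillis);     // idle spin: pure overhead
        }
        return false;                        // gave up at the deadline
    }

    void release(String key, String owner) {
        store.remove(key, owner);            // delete only if we still own it
    }
}
```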

3.4 Reliability

This one is obvious. ZooKeeper was built for coordination: the strict Zab protocol controls data consistency, and the lock model is robust.

Redis pursues throughput and is a little weaker on reliability. Even RedLock cannot guarantee 100% robustness, but ordinary applications rarely hit the extreme scenarios, so it is still in common use.

4. Going Further

Sample code for a ZooKeeper distributed lock, using Curator's InterProcessMutex:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import java.util.concurrent.TimeUnit;

public class ExampleClientThatLocks
{
    private final InterProcessMutex lock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientThatLocks(CuratorFramework client, String lockPath, FakeLimitedResource resource, String clientName)
    {
        this.resource = resource;
        this.clientName = clientName;
        lock = new InterProcessMutex(client, lockPath);
    }

    public void doWork(long time, TimeUnit unit) throws Exception
    {
        if ( !lock.acquire(time, unit) )
        {
            throw new IllegalStateException(clientName + " could not acquire the lock");
        }
        try
        {
            System.out.println(clientName + " has the lock");
            resource.use();
        }
        finally
        {
            System.out.println(clientName + " releasing the lock");
            lock.release(); // always release the lock in a finally block
        }
    }
}

An example of acquiring a distributed lock through Redisson:

String resourceKey = "goodgirl";
RLock lock = redisson.getLock(resourceKey);
try {
    // Acquire with a 5-second lease; the lock auto-expires if we crash.
    lock.lock(5, TimeUnit.SECONDS);
    // The real business logic
    Thread.sleep(100);
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    // Only unlock if this thread still holds the lock; unlocking a lock
    // we no longer hold (e.g. after lease expiry) would throw.
    if (lock.isHeldByCurrentThread()) {
        lock.unlock();
    }
}

Here is the internal lock and unlock implementation from Redisson (these particular scripts come from its read-write lock, note the 'mode' field), so you get a feel for the complexity:

    @Override
    <T> RFuture<T> tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
        internalLockLeaseTime = unit.toMillis(leaseTime);

        return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command,
                                "local mode = redis.call('hget', KEYS[1], 'mode'); " +
                                "if (mode == false) then " +
                                  "redis.call('hset', KEYS[1], 'mode', 'read'); " +
                                  "redis.call('hset', KEYS[1], ARGV[2], 1); " +
                                  "redis.call('set', KEYS[2] .. ':1', 1); " +
                                  "redis.call('pexpire', KEYS[2] .. ':1', ARGV[1]); " +
                                  "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                                  "return nil; " +
                                "end; " +
                                "if (mode == 'read') or (mode == 'write' and redis.call('hexists', KEYS[1], ARGV[3]) == 1) then " +
                                  "local ind = redis.call('hincrby', KEYS[1], ARGV[2], 1); " + 
                                  "local key = KEYS[2] .. ':' .. ind;" +
                                  "redis.call('set', key, 1); " +
                                  "redis.call('pexpire', key, ARGV[1]); " +
                                  "local remainTime = redis.call('pttl', KEYS[1]); " +
                                  "redis.call('pexpire', KEYS[1], math.max(remainTime, ARGV[1])); " +
                                  "return nil; " +
                                "end;" +
                                "return redis.call('pttl', KEYS[1]);",
                        Arrays.<Object>asList(getName(), getReadWriteTimeoutNamePrefix(threadId)), 
                        internalLockLeaseTime, getLockName(threadId), getWriteLockName(threadId));
    }

    @Override
    protected RFuture<Boolean> unlockInnerAsync(long threadId) {
        String timeoutPrefix = getReadWriteTimeoutNamePrefix(threadId);
        String keyPrefix = getKeyPrefix(threadId, timeoutPrefix);

        return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
                "local mode = redis.call('hget', KEYS[1], 'mode'); " +
                "if (mode == false) then " +
                    "redis.call('publish', KEYS[2], ARGV[1]); " +
                    "return 1; " +
                "end; " +
                "local lockExists = redis.call('hexists', KEYS[1], ARGV[2]); " +
                "if (lockExists == 0) then " +
                    "return nil;" +
                "end; " +
                    
                "local counter = redis.call('hincrby', KEYS[1], ARGV[2], -1); " + 
                "if (counter == 0) then " +
                    "redis.call('hdel', KEYS[1], ARGV[2]); " + 
                "end;" +
                "redis.call('del', KEYS[3] .. ':' .. (counter+1)); " +
                
                "if (redis.call('hlen', KEYS[1]) > 1) then " +
                    "local maxRemainTime = -3; " + 
                    "local keys = redis.call('hkeys', KEYS[1]); " + 
                    "for n, key in ipairs(keys) do " + 
                        "counter = tonumber(redis.call('hget', KEYS[1], key)); " + 
                        "if type(counter) == 'number' then " + 
                            "for i=counter, 1, -1 do " + 
                                "local remainTime = redis.call('pttl', KEYS[4] .. ':' .. key .. ':rwlock_timeout:' .. i); " + 
                                "maxRemainTime = math.max(remainTime, maxRemainTime);" + 
                            "end; " + 
                        "end; " + 
                    "end; " +
                            
                    "if maxRemainTime > 0 then " +
                        "redis.call('pexpire', KEYS[1], maxRemainTime); " +
                        "return 0; " +
                    "end;" + 
                        
                    "if mode == 'write' then " + 
                        "return 0;" + 
                    "end; " +
                "end; " +
                    
                "redis.call('del', KEYS[1]); " +
                "redis.call('publish', KEYS[2], ARGV[1]); " +
                "return 1; ",
                Arrays.<Object>asList(getName(), getChannelName(), timeoutPrefix, keyPrefix), 
                LockPubSub.UNLOCK_MESSAGE, getLockName(threadId));
    }

So, once more: use the packaged components. If you insist on building all of this yourself on SETNX or SET, xjjdog can only assume you enjoy being abused. The basic principles are easy enough to understand, but you will not get these details right without real effort.

In the long run, how should you choose? It depends on your infrastructure. If your application already uses ZooKeeper and the cluster performs well, prefer ZooKeeper. If all you have is Redis, do not introduce a bulky ZooKeeper just for a distributed lock; use Redis.

About the author: Taste of Little Sister (xjjdog), a WeChat official account that does not let programmers take detours. Focused on infrastructure and Linux. Ten years of architecture work, ten billion requests of daily traffic, exploring the world of high concurrency with you, with a different flavor. My personal WeChat is xjjdog0; friend requests are welcome for further discussion.

http://xjjdog.cn has 200+ original articles, carefully categorized for smoother reading. Bookmarks welcome.

This article comes from the WeChat official account Taste of Little Sister (xjjdog).


Originally published: 2020-12-13


