
Redis: "ERR max number of clients reached" error

Explanation:

When Redis reports the "max number of clients reached" error, either too many clients are connected, or the system's limit on open file descriptors is too low.
 

Solution:

1. If the error is caused by too many client connections, raise `maxclients XXXX` in redis.conf, which sets the maximum number of simultaneous client connections. (Since Redis 2.6 the default is 10000; in older versions a value of 0 meant unlimited.) First check how many connections there are:

# netstat -an|grep 6380|wc -l
4602
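Besides editing redis.conf and restarting, `maxclients` can also be inspected and changed on a live instance via `CONFIG GET`/`CONFIG SET` (a sketch assuming redis-cli can reach the server on port 6380, matching the netstat check above):

```shell
# Check the current connection ceiling on the running instance:
redis-cli -p 6380 CONFIG GET maxclients

# Raise it at runtime, without a restart (remember to persist the
# same value in redis.conf so it survives the next restart):
redis-cli -p 6380 CONFIG SET maxclients 10000
```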

When the number of connections reaches the limit, Redis refuses new connections and returns the "max number of clients reached" error to the client. Before raising the limit, though, it is worth finding out who is connecting to Redis and whether those connections are legitimate: Redis is usually used as a cache, and it is uncommon for that many clients to connect to it.
The number of client connections Redis can hold open simultaneously is also bounded by the maximum number of file descriptors the Redis process may open.
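To see who is actually connected, Redis exposes per-connection details through `INFO clients` and `CLIENT LIST` (shown here against port 6380 from the example above; adjust to your setup):

```shell
# Summary counters: connected_clients, blocked_clients, etc.
redis-cli -p 6380 INFO clients

# One line per connection: client address, age, idle time, last command.
# Long-idle connections here often point at clients leaking connections.
redis-cli -p 6380 CLIENT LIST
```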

2. If the system's limit on open file descriptors is too low, raise it as follows.

# ps -ef|grep redis
root     23427 21683  0 19:08 pts/0    00:00:00 grep redis
root     31886     1  0 11:30 ?        00:00:00 /usr/local/bin/redis-server /data/redis/6379/redis6379.conf

The second column shows that the PID of the redis-server process is 31886.

# cat /proc/31886/limits 
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             65535                65535                processes 
Max open files            65535                65535                files     
Max locked memory         32768                32768                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       35840                35840                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    

"Max open files" shows that this redis process can open at most 65535 file descriptors (not counting its child processes or threads; 65535 had already been raised here, the original value was 1024). The /proc/31886/task/ directory lists the process's tasks (threads), and each task subdirectory contains its own limits file describing that task's limits.
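Instead of reading the whole limits table, you can pull out just the open-files line with grep (here using the current shell's own PID, `$$`, as a stand-in for the redis PID 31886 above):

```shell
# Print only the "Max open files" line from a process's limits file.
# Substitute the redis-server PID (31886 in this article) for $$.
grep "Max open files" /proc/$$/limits
```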

How many file descriptors has a process actually opened?

# ll /proc/31886/fd/

 
The fd subdirectory lists every file descriptor the process currently has open; likewise, /proc/31886/task/XXXX/fd lists the descriptors opened by each task. To count them, run:

# ll /proc/31886/fd/ | wc -l

 
How do you find out which files a process and its children are associated with? lsof does the job. Note that "associated files" and "open file descriptors" are two different concepts; the number of associated files can be much larger than the number of open file descriptors.

# lsof | grep redis | wc -l   # or filter by the parent process's PID
# lsof | grep 31886 | wc -l   # 9525 here

 
If the culprit is indeed the system's limit on file descriptors, how do you change it?

# ulimit -SHn 2048
ulimit changes the limit only temporarily: it affects just the current session and is lost when the terminal reconnects or the user logs out. For a permanent change, edit the /etc/security/limits.conf file and add the following two lines:
* hard nofile 2048
* soft nofile 2048
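After editing limits.conf, the new values apply to sessions started afterwards; you can confirm they took effect from a fresh login shell:

```shell
# Open a new session after editing /etc/security/limits.conf, then:
ulimit -Sn   # soft limit on the number of open files
ulimit -Hn   # hard limit on the number of open files
```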
What if the process is already running, especially for an online service, and you do not want to restart it? On some kernels the limit can be changed dynamically:
# echo -n 'Max open files=65535:65535' > /proc/31886/limits
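Note that writing to /proc/PID/limits is only accepted by some patched distribution kernels; mainline Linux does not support it. A portable alternative is the `prlimit` utility from util-linux, which changes the limits of an already-running process (sketched here against the example PID 31886; adjust to your own redis-server PID):

```shell
pid=31886   # redis-server PID from the ps output above

# Raise the open-files limit of the running process (soft:hard):
prlimit --pid "$pid" --nofile=65535:65535

# Verify the new values without touching the process:
prlimit --pid "$pid" --nofile
```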

 
Kernel parameters also cap the number of file descriptors; setting a per-process value larger than the kernel limit will not work:

# sysctl -a | grep file-max     # look up the file-max kernel parameter
# sysctl -w fs.file-max=65535   # change the file-max kernel parameter
sysctl -w is also temporary. To make the change permanent, edit the /etc/sysctl.conf file, add or modify the following line, then run sysctl -p to apply it:
fs.file-max=65535

 
Summary:
Note that limits on file descriptors are not confined to the ones described here; they may also depend on the process's startup parameters and the user's environment settings. And of course, if the real cause is a bug in the process that fails to close and recycle file descriptors, raising the limit only postpones the problem; the bug itself has to be fixed.
Also, lsof lists all resources held by processes, but not all of them occupy open file descriptors (shared memory, semaphores, message queues, memory mappings and so on are held without consuming an open file descriptor), so it is possible for the value of `cat /proc/sys/fs/file-max` to be smaller than `lsof | wc -l`.

Copyright notice
This article was written by [PddZyy]. Please include a link to the original when reposting. Thanks.
https://cdmana.com/2020/12/20201224102147096K.html
