Apache stops responding but is running

Possible Causes:
1) MaxClients set to a value too low
2) If you have over roughly 800 VirtualHost entries (domains, subdomains, SSL, etc.), the ErrorLog directives open too many file descriptors, Apache can no longer log errors, and it may stop responding. The exact number varies per box (lower or higher) depending on the system's file descriptor limits and setup; a rough way to count your VirtualHost entries is shown below.
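
To see how many VirtualHost entries a box has, a rough count like this may help (a sketch; paths assume a stock DirectAdmin layout):

# total <VirtualHost entries across the per-user Apache configs
cat /usr/local/directadmin/data/users/*/httpd.conf | grep -c '<VirtualHost'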

Solutions:
1) Edit /etc/httpd/conf/httpd.conf and increase the MaxClients setting to something like 200 or 300.
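
For example, the prefork section might end up looking like this (a sketch for Apache 2.x with the prefork MPM; ServerLimit must also be raised when going above 256, and Apache 2.4 renames MaxClients to MaxRequestWorkers):

<IfModule prefork.c>
ServerLimit      300
MaxClients       300
</IfModule>
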
2)

cd /usr/local/directadmin/data/templates
cp virtual_host*.conf custom
cd custom

# remove all the ErrorLog lines (or comment them out) from the 4 virtual_host*.conf files that are in the custom directory.

perl -pi -e 's/Error/#Error/' virtual_host*.conf
echo "action=rewrite&value=httpd" >> /usr/local/directadmin/data/task.queue

Apache should be restarted automatically a few minutes later (the rewrite can take a while with over 800 sites).
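
If you don't want to wait for the next cron run, the task queue can usually be processed by hand (assumes a standard DirectAdmin install; the d flag enables debug output):

/usr/local/directadmin/dataskq d
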
If that doesn't free up enough file descriptors and the problem persists, also try commenting out the CustomLog entries:

perl -pi -e 's/CustomLog/#CustomLog/' virtual_host*.conf

and repeat the above echo command to rewrite the httpd.conf files.
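
To verify that the change actually lowered Apache's file descriptor usage, a rough total can be taken like this (assumes Linux with /proc mounted; the process name may differ on your build):

# count open file descriptors across all httpd processes
for pid in $(pgrep httpd); do ls /proc/$pid/fd; done | wc -l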

3) Raise the FD_SETSIZE limit in the system headers:
Edit /usr/include/bits/typesizes.h and /usr/include/linux/posix_types.h and set
#define __FD_SETSIZE 32768
and then recompile with customapache or custombuild.

On FreeBSD, it's /usr/include/sys/select.h or /usr/include/sys/types.h
Change:
#define FD_SETSIZE      1024U
to
#define FD_SETSIZE      32768U

then recompile apache/php
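
With custombuild, the rebuild would typically be (a sketch; customapache users would run ./build apache_mod_ssl from /usr/local/directadmin/customapache instead):

cd /usr/local/directadmin/custombuild
./build apache
./build php n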

4) Edit /etc/sysctl.conf and add (fs.file-max is the Linux name; the FreeBSD equivalents are listed further below):
fs.file-max = 32768

and run:

/sbin/sysctl -w fs.file-max=32768             # Linux
/sbin/sysctl -w kern.maxfiles=32768           # FreeBSD
/sbin/sysctl -w kern.maxfilesperproc=32768    # FreeBSD

then recompile apache/php

Other possible entries for sysctl.conf (these are the FreeBSD equivalents):
kern.maxfiles = 32768
kern.maxfilesperproc = 32768
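
To confirm the new limits took effect, query them back (Linux and FreeBSD respectively):

/sbin/sysctl fs.file-max
sysctl kern.maxfiles kern.maxfilesperproc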


5) Another way to free up file descriptors (FDs) is to disable SSL on any domain that does not require it.
A quick way to check is to type:

ls -la /home/*/domains/*/private_html/index.html

Quickly scan the list for any index.html that isn't between 200-300 bytes in size. Any outside that range have been edited, and those users are probably using SSL, so take note of their usernames and domains (an automated version of this check is sketched at the end of this step). Now, the quick way to do a mass SSL shutoff for all domains is to type:

perl -pi -e 's/ssl=ON/ssl=OFF/' /usr/local/directadmin/data/users/*/domains/*.conf

Then turn ssl=ON back on for any users who need it. Note that this is an end-user level setting, so they can turn it back on themselves via Domain Setup. Then run the action=rewrite&value=httpd command as mentioned in step 2 above.
This reduces the number of FDs by roughly 50%. Since many people rarely use SSL, disabling it removes half of all VirtualHosts: every domain, subdomain, etc. gets 2 VirtualHosts each with SSL enabled, and only 1 each without.
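
The byte-size check above can be automated; a sketch, assuming the stock DirectAdmin placeholder index.html is roughly 200-300 bytes:

# list index.html files outside the 200-300 byte range (likely edited, so SSL is probably in use)
find /home/*/domains/*/private_html -maxdepth 1 -name index.html \( -size -200c -o -size +300c \)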

6) OpenSSL bug. Either update OpenSSL and recompile Apache, or patch Apache 2:
http://issues.apache.org/bugzilla/show_bug.cgi?id=43717


Related error messages:

[error] System: Too many open files in system (errno: 23)

host: isc_socket_create: not enough free resources socket.c:2117: REQUIRE(maxfd <= (int)1024) failed.
host: isc_socket_create: not enough free resources


Also, exim may throw the following error if it is called through a PHP script:

R=lookuphost defer (-1): host lookup did not complete

Lowering the number of file descriptors Apache uses will help, if the file descriptor limit is the reason for the error.

