About Elvis Ciotti

Founder and Software Engineer at Software Engineering Solutions (UK), currently working as a Senior PHP Developer for the UK Ministry of Justice.

Toggle Cloudflare security level in case of high load

I’ve recently had trouble with some stubborn spiders ignoring the robots.txt settings, crawling the site at high speed and slowing the server down.

In a recent post I talked about how to limit nginx requests per client, but that might not be enough in the case of a distributed attack with many IPs, or simply high load preventing users from reaching the site. If that happens, one action you can take is switching the Cloudflare security setting to “under attack” (start using Cloudflare if you aren’t already): Cloudflare will display an interstitial page instead of the site, verify that visitors are real users, and then redirect them to the real site.

I’ve created a simple bash script to toggle the site security level automatically using the Cloudflare API when the server is under high load. Click on the gist name and read the first comment for the instructions to install it.
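The gist itself isn’t embedded here, but a minimal sketch of such a script might look like the following (the security_level endpoint is Cloudflare’s v4 API; the argument handling and the load check are my assumptions, not necessarily the gist’s exact code):

#!/bin/bash
# usage: cloudflareSecurityLevel <email> <apiKey> <zoneId> <level> <condition>
# e.g.:  cloudflareSecurityLevel me@example.com myApiKey myZoneId under_attack ">7"
EMAIL="$1"; API_KEY="$2"; ZONE_ID="$3"; LEVEL="$4"; CONDITION="$5"

# 1-minute load average (Linux)
LOAD=$(cut -d' ' -f1 /proc/loadavg)

# change the security level only when the load matches the condition (e.g. ">7")
if awk -v l="$LOAD" "BEGIN { exit !(l $CONDITION) }"; then
    curl -s -X PATCH \
        "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/security_level" \
        -H "X-Auth-Email: $EMAIL" \
        -H "X-Auth-Key: $API_KEY" \
        -H "Content-Type: application/json" \
        --data "{\"value\":\"$LEVEL\"}"
fi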

The way I use it is in a cron job that launches the script every 5 minutes and sets the site to “under attack” when the server load is over 7. I’m pasting the Ansible cron template here. Replace the variables with the Cloudflare user/email, API key and zone ID (different for each site). Of course, keep it all on a single line.

*/5 *   *   *   *   root    /usr/local/bin/cloudflareSecurityLevel {{ cloudflare.user }} {{ cloudflare.apiKey }} {{ cloudflare.zoneId }} under_attack ">7"

You can also add another line to remove the under attack mode, e.g. when the load is under 1:

19 *   *   *   *   root    /usr/local/bin/cloudflareSecurityLevel {{ cloudflare.user }} {{ cloudflare.apiKey }} {{ cloudflare.zoneId }} medium "<1" > /dev/null 2>&1
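To check the current security level at any time, you can query the same v4 API setting directly (reading instead of writing, with the same placeholders as the cron template):

curl -s "https://api.cloudflare.com/client/v4/zones/{{ cloudflare.zoneId }}/settings/security_level" \
    -H "X-Auth-Email: {{ cloudflare.user }}" \
    -H "X-Auth-Key: {{ cloudflare.apiKey }}"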

How to limit nginx requests per client

Nginx has an interesting and powerful module, ngx_http_limit_req_module.

This module allows limiting the number of requests per client (e.g. at most one request per second).

To use it, define the zones (rules) just once at the nginx level (e.g. place them into `/etc/nginx/conf.d/zones.conf`). See the following example:

limit_req_zone $binary_remote_addr zone=myZone:10m rate=30r/m;

This rule defines a zone called “myZone” that limits each client to a maximum of 30 requests per minute (1 request every 2 seconds). 10m is the amount of memory nginx can use to keep track of client states (the nginx docs estimate roughly 16,000 states per megabyte); more clients require more memory, of course.

To use this rule, place the following inside a “location” directive in the nginx site config

location ~ ^/index\.php(/|$) {
    # …
    limit_req zone=myZone burst=10 nodelay;
}

This adds the rule for that specific location. The burst=10 setting allows up to 10 requests in quick succession, but, if all of them are performed immediately, it then takes another 20 seconds (at 1 request every 2 seconds) before the full burst allowance is available again.
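A quick way to see the limit in action is hammering the endpoint with curl (the URL is a placeholder; by default nginx answers rejected requests with a 503):

for i in $(seq 1 15); do
    curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/index.php"
done
# expect roughly the first 10 requests to return 200, then 503 until the burst refills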


PHPStorm cache on a virtual RAM disk [macOS]

When using an SSD and having enough RAM in the system, I always prefer to move software cache files onto a virtual RAM disk, in order to extend the disk’s life and also gain some speed.

The command on macOS to create a 512MB virtual volume (under /Volumes/RAMDisk) is

# 1048576 × 512-byte sectors = 512MB
diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://1048576`;

After that, you can create your symlink, and put that command into a startup script.

PHPStorm script

Here is the script I’m using to create the virtual disk and the index and cache directories for PHPStorm. Of course, you need to symlink them the first time you use them (see the comments in the gist).
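The gist isn’t embedded here, but a sketch of what it does might look like this (the 512MB size and the directory names are illustrative; see the gist comments for the exact paths):

#!/bin/bash
# create a 512MB RAM disk (1048576 × 512-byte sectors)
diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://1048576`
# recreate the directories that the PHPStorm cache and index symlinks point to
mkdir -p /Volumes/RAMDisk/phpstorm-cache /Volumes/RAMDisk/phpstorm-index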

You’ll lose the cache and index the next time you restart your computer or unmount the RAM disk, but that’s something I actually prefer, as it keeps the cache clean of old projects and libraries.

Speed up PHPStorm: cache into a RAM disk (Mac)

On macOS, PHPStorm indexes all your project files and writes the cache under

/Users/<yourAccount>/Library/Caches/PhpStorm<version>

Thousands of files are stored there. If you want your Mac to use your hard disk less (to extend its life), and also to be faster in general, you can create a RAM disk (a virtual disk that uses RAM instead of your physical disk) and symlink the cache directory there.

I’ve been using this for a while and it seems to work well. The only downside is losing the cache when you restart the machine, but it takes only a few minutes to recreate (and comes back clean), so I’m overall happy with this approach.

Instructions

Add this script to your ~/.bash_profile:
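The gist isn’t embedded here; a minimal version of the function it defines might be (size and directory name are assumptions, mirroring the script from the previous post):

# in ~/.bash_profile
ramDiskCreate() {
    diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://1048576`
    mkdir -p /Volumes/RAMDisk/phpstorm-cache
}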

The bash command ramDiskCreate will now create a volume called RAMDisk with a phpstorm-cache directory inside it. You need to run this command after each restart, before launching PHPStorm (I don’t restart often, so I didn’t bother making it run automatically at startup).

The only thing left to do (just once) is symlinking the cache directory to the newly created directory (update and test with your path, don’t just copy and paste):

mv ~/Library/Caches/PhpStorm2017.3  /Volumes/RAMDisk/phpstorm-cache
ln -s  /Volumes/RAMDisk/phpstorm-cache ~/Library/Caches/PhpStorm2017.3
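You can verify the symlink is in place with something like:

ls -l ~/Library/Caches/ | grep PhpStorm
# expect: PhpStorm2017.3 -> /Volumes/RAMDisk/phpstorm-cache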

MySQL ibtmp1 taking too much space: solutions

I recently had one of my AWS instances running out of disk space. The MySQL server (version 14.14, running a single database of a few hundred MB) had created a temporary file of over 11GB at /var/lib/mysql/ibtmp1 and saturated the 16GB disk.
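You can check how big the file currently is with:

ls -lh /var/lib/mysql/ibtmp1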

I solved that with this setting

innodb_temp_data_file_path = ibtmp1:100M:autoextend:max:1G

And also with the following commands, which disable fast shutdown (innodb_fast_shutdown = 0, so InnoDB does a full cleanup on stop), stop MySQL, delete that temp file, and start MySQL again:

mysql -u root -e "SET GLOBAL innodb_fast_shutdown = 0;"; 
service mysql stop; 
rm /var/lib/mysql/ibtmp1; 
service mysql start

If you use Ansible, you can just add this task:

- name: mysql custom config
  copy:
    src: files/mysqld-custom.cnf
    dest: /etc/mysql/mysql.conf.d/mysqld-custom.cnf
    mode: "744"

where files/mysqld-custom.cnf contains the following

[mysqld]

# limits /var/lib/mysql/ibtmp1 to 100MB initial size, autoextending up to 1GB
innodb_temp_data_file_path = ibtmp1:100M:autoextend:max:1G

301 redirect in nginx

A permanent server redirect is useful when you move to a new domain, or when you want to redirect users to the www version.

Example for nginx: add the following into /etc/nginx/conf.d/redirect.conf

server {
    server_name oldDomain.com;
    return 301 $scheme://www.newDomain.com$request_uri;
}
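After reloading nginx (nginx -s reload), you can verify it with curl (using the placeholder domains from the example above):

curl -sI "http://oldDomain.com/some/page"
# expect: HTTP/1.1 301 Moved Permanently
# expect: Location: http://www.newDomain.com/some/page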


How to fix docker slowness (volume mounting) with docker-sync + PHPStorm file watchers

I’ve experienced lots of slowness using Docker for Mac (xhyve), due to volume mounting.

A solution I’ve been using successfully for a few weeks combines docker-sync (rsync) and a PHPStorm file watcher.

docker-sync (rsync)

I’ve used the rsync strategy for docker-sync. Simple, but the downside is that files newly created inside the container are not shared back outside. That’s not generally a problem, except when some code is generated from inside the container (e.g. Doctrine migrations). When needed, a normal mount can still be used for those directories, or (more complicated) the files can be copied to the host with a docker cp command.
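For example, to pull a generated Doctrine migration out of the container (container name and paths are illustrative):

docker cp <CONTAINER_NAME>:/app/src/Migrations/Version20180101120000.php src/Migrations/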

The other downside is the latency of the rsync watch (a few seconds, which can be annoying in some cases), which leads me to the next point:

PHPStorm file watcher

PHPStorm supports the execution of custom scripts on save (file watchers). The idea is to have a file watcher perform an immediate docker cp of the modified file into the container.

The screenshot should be clear enough. Replace the docker bin with the real path of your docker executable (not the symlink) and <CONTAINER_NAME> with your docker container name, and adjust /app/ so that docker cp copies files where your docker container expects them.
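In case the screenshot isn’t visible, this is roughly the command the watcher ends up running on each save ($FilePath$ and $FileRelativePath$ are PHPStorm watcher macros; the docker binary path and the /app/ prefix are examples to adapt):

/Applications/Docker.app/Contents/Resources/bin/docker cp "$FilePath$" <CONTAINER_NAME>:/app/$FileRelativePath$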

Depending on your needs, you might want to adjust your shares and decide whether the vendor directory needs to be watched or not.

Note: you can’t use only the file watcher, or you’ll have to rebuild (or manually copy all the files) any time you switch branch.

[Screenshot] PHPStorm file watcher to copy files inside the docker container – example