Toggle Cloudflare security level in case of high load

I’ve recently had trouble with some stubborn spiders ignoring the robots.txt settings, crawling the site at high speed and slowing the server down.

In a recent post, I talked about how to limit nginx requests per client, but that might not be enough against a distributed attack with many IPs, or simply against high load preventing users from reaching the site. If that happens, one action that can be taken is switching the Cloudflare security settings (use Cloudflare if you aren’t already) to “under attack”: instead of the site, Cloudflare displays a page that verifies visitors are real users, and then redirects them to the real site.

I’ve created a simple bash script to toggle the site security level automatically through the Cloudflare API when the server is under high load. Click on the gist name and read the first comment for the installation instructions.

The way I use it: a cron launches the script every 5 minutes and sets the site to “under attack” when the server load is over 7. I’m pasting the ansible cron template here. Replace the variables with your Cloudflare user/email, API key and zone id (different for each site). Of course, keep it on a single line.

*/5 *   *   *   *   root    /usr/local/bin/cloudflareSecurityLevel 
{{ cloudflare.user }} {{ cloudflare.apiKey }} {{ cloudflare.zoneId }} 
under_attack ">7"
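7"">
For context, the core of such a script (a hedged sketch of my own, not the exact gist) boils down to reading the load average and calling the Cloudflare v4 zone-settings API; EMAIL, API_KEY and ZONE_ID are placeholders to fill in:

```shell
# Sketch: switch the zone to "under attack" when the 1-minute load is too high.
EMAIL="you@example.com"   # placeholder
API_KEY="yourApiKey"      # placeholder
ZONE_ID="yourZoneId"      # placeholder
THRESHOLD=7

# First field of /proc/loadavg is the 1-minute load average; keep the integer part.
LOAD=$(cut -d ' ' -f 1 /proc/loadavg | cut -d '.' -f 1)

if [ "$LOAD" -gt "$THRESHOLD" ]; then
  # Endpoint and payload as per the Cloudflare v4 zone-settings API.
  curl -s -X PATCH \
    "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/settings/security_level" \
    -H "X-Auth-Email: ${EMAIL}" \
    -H "X-Auth-Key: ${API_KEY}" \
    -H "Content-Type: application/json" \
    --data '{"value":"under_attack"}'
fi
```

The real script additionally takes the credentials, the target level and the load condition as arguments, as the cron lines above show.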

You can also add another line to remove the under-attack mode, e.g. when the load is under 1:

19 *   *   *   *   root    /usr/local/bin/cloudflareSecurityLevel 
{{ cloudflare.user }} {{ cloudflare.apiKey }} {{ cloudflare.zoneId }} 
medium "<1" > /dev/null 2>&1

How to limit nginx requests per client

Nginx has an interesting and powerful module, ngx_http_limit_req_module.

This module allows limiting the number of requests per client (e.g. max 1 request every 2 seconds).

To use it, define the zones (rules) just once at the nginx level (e.g. place them into `/etc/nginx/conf.d/zones.conf`). See the following example:

limit_req_zone $binary_remote_addr zone=myZone:10m rate=30r/m;

This rule defines a zone called “myZone” that limits each client to a max of 30 requests a minute (1 request every 2 seconds). 10m is the amount of memory nginx can use to keep track of clients; more clients require more memory, of course.

To use this rule, place the following inside a “location” directive in the nginx site config

location ~ ^/index\.php(/|$) {
    # …
    limit_req zone=myZone burst=10 nodelay;
}

This adds the rule for the specific location matching. The burst=10 setting allows 10 consecutive requests, but – if all of them are performed immediately – it’ll then take another 20 seconds before the client can perform another request.
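Putting the zone definition and the location rule together, a minimal illustrative site config (server name and the PHP handling are placeholders, not from the original post) could look like this:

```nginx
# in /etc/nginx/conf.d/zones.conf (http context)
limit_req_zone $binary_remote_addr zone=myZone:10m rate=30r/m;

# in the site config
server {
    listen 80;
    server_name example.com;                       # placeholder
    location ~ ^/index\.php(/|$) {
        limit_req zone=myZone burst=10 nodelay;
        # fastcgi_pass / proxy_pass configuration goes here
    }
}
```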

An interesting article about it here

301 redirect in nginx

A permanent server redirect is useful when you move domains, or when you want to redirect users to the WWW version.

Example for nginx: add the following into /etc/nginx/conf.d/redirect.conf (using example.com as a placeholder)

server {
    listen 80;
    server_name example.com;
    return 301 $scheme://www.example.com$request_uri;
}


How to fix docker slowness (volume mounting) with docker-sync + PHPStorm file watchers

I’ve experienced lots of slowness using docker for Mac (xhyve), due to volume mounting.

A solution I’ve been using successfully for a few weeks combines docker-sync (rsync) and a PHPStorm file watcher.

docker-sync (rsync)

I’ve used the rsync strategy for docker-sync. Simple, but the downside is that files newly created inside the container are not shared back to the host. That’s generally not a problem, except when some code is generated from inside the container (e.g. Doctrine migrations). When needed, a normal mount can still be used for those directories, or (more complicated) the files can be copied to the host with a docker cp command.

The other downside is the latency of the rsync watch (a few seconds, which can be annoying in some cases), and that leads me to the next point:

PHPStorm file watcher

PHPStorm supports the execution of custom scripts on save (file watchers). The idea is to have a file watcher perform an immediate docker cp of the modified file into the container.

The screenshot should be clear enough. Replace the docker bin with the real path of your docker executable (not the symlink), replace <CONTAINER_NAME> with your docker container name, and adjust /app/ so that docker cp copies files where your docker container expects them.
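For reference, a file watcher configured this way boils down to a command like the following (PHPStorm file-watcher macros in $…$; the exact macro names and the /app/ prefix are my assumptions about what the screenshot shows, not taken from it):

```
Program:   /usr/local/bin/docker
Arguments: cp $FilePath$ <CONTAINER_NAME>:/app/$FilePathRelativeToProjectRoot$
```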

Depending on your needs, you might want to adjust your shares, and decide whether vendor directory needs to be watched or not.

Note: you can’t use only the file watcher, or you’ll have to rebuild (or manually copy all the files) any time you switch branches.

[Screenshot: PHPStorm File Watcher to copy files inside docker container – example]


SymfonyCon, Berlin 2016, notes and thoughts



I wrote an article for the Italian PHP conference last year, so I decided to repeat that and write up some notes I took at the Symfony conference in Berlin 2016. Talks and notes at this link.

Conference format

I personally prefer conferences around a wide subject, in order to listen to a greater variety of talks spread across the whole technology area. This was a 2-day conference around a single framework, so I was expecting to get a bit bored, and I was right: lots of talks about minor framework functionalities that I can easily read about online, or things I already know. Fortunately some talks were more general, and therefore more interesting to me. That’s because I want a talk to inspire me, giving me clues and tips, not repeating what the online documentation says.


Some talks covered the new Symfony 3.3 features, SensioCloud (a kind of Heroku for Symfony that smells a bit commercial and coupled with Symfony at first glance), PHP 7 improvements I had missed (static variables persisted in memory across requests), and PHP types (things I’ve already heard many times, but good to hear again, with updates from the latest PHP releases).

One of the talks I liked most was about when to abstract, where lots of useful concepts were mentioned. Concepts that I knew already, but that I’ve always found difficult to explain to younger developers or to the business.

  • Predicting patterns should be done very carefully: we can’t really know how the business logic will evolve, and a premature abstraction makes the product hard to change. The risk is facing an over-engineered or over-architected product by the time you have to make changes. The talk suggested developing first with duplication and abstracting later. Completely agreed;
  • Refactoring is not only an improvement, but the best way to let the business patterns and logic emerge from your code, in case you don’t fully know them. I knew that, but I had never thought of explaining it to the business that way. I understand “refactor” sounds scary to the business, like the builders in your house saying “we need a paid (by you) day to refactor the wall we are building” when you clearly saw they were halfway through after 2 days;
  • A code rewrite, instead, means losing some of the domain rules. The talk mentioned 40%, but I think it depends; it was much less in my experience, also considering that some functionalities are no longer needed, so it’s good to lose unknown and useless functionalities and re-implement updated versions if the users and the business require them;
  • APIs should be optimised for stability, projects (what you build with a framework, for example) for change, and products (e.g. Symfony/WordPress) for a stable core. Agreed, again.


Other tools

I spoke to the people at the sponsor stands; interesting to see some tools in action that I’ll definitely try out next time I need to optimise an app. Interesting to see Heroku basically deploy and handle the “devops” part of an application (create server instances, install packages, manage servers, watch logs) entirely from the command line, without a single SSH command inside the box. I wasn’t particularly lured by the whole SensioCloud idea: I never felt the need for something like that, and I’m also not sure I want to use a platform created by the framework creator, which I’m not sure I can (easily) use with other frameworks.

How to make changes on your own repositories imported with Composer

In a previous article, I talked about reusing and exporting code. This article is more practical: it’s about implementing that with PHP.

In the PHP world, composer is the de facto tool to put together, resolve dependencies for and autoload external libraries (the tons you can find on GitHub) or your own repos (meaning everything non-application-specific that you can export and that isn’t already decently implemented by anyone else).

Export your libraries into a public github repo 

Say you locate some code that is non-application-specific and you want to export it into another public repository. To do that, create a new GitHub repo (using this skeleton, for example), add your libraries, tests and a composer.json with the autoload information, then commit and push.
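A minimal composer.json for the exported library might look like this sketch (package name, namespace and dev dependencies are placeholders, not from the skeleton mentioned above):

```json
{
    "name": "your-github-username/your-repo",
    "description": "Non-application-specific helpers",
    "license": "MIT",
    "autoload": {
        "psr-4": { "YourVendor\\YourRepo\\": "src/" }
    },
    "require-dev": {
        "phpunit/phpunit": "^5.0"
    }
}
```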

Use your new library on your main project

On your main project, add the require for your repo, and specify source as a preferred install. See the example below.
[Screenshot: composer.json example]
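The relevant part of the main project’s composer.json looks roughly like this (my sketch; the package name is a placeholder, and `preferred-install` accepts per-package patterns):

```json
{
    "require": {
        "your-github-username/your-repo": "dev-master"
    },
    "config": {
        "preferred-install": {
            "your-github-username/your-repo": "source"
        }
    }
}
```

The `source` preference makes composer clone the repo with its git history into vendor/, which is what allows committing from there later on.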

You can now run

composer update your-github-username/your-repo

And have your repository autoloaded into your main one.

Make changes to your repo

You can make changes to your imported repository with your IDE, directly inside the vendor/your-github-username/your-repo directory.

If you want to commit a change, cd into that vendor directory, then commit and push the change.

Before you commit changes (that use the modified version of your imported repo) on your main project, remember to run

composer update

so that the composer.lock file will now point at the new version.

Point to a specific commit hash

If you want to point your main repo at a specific commit of your imported repo, get the commit hash (git log), then in your composer file use

"your-github-username/your-repo": "dev-master#<commitHash>"

instead of

"your-github-username/your-repo": "dev-master"
If you want to make changes to an existing github repository that is not yours

If you need to use your own fork instead of an official repo, please refer to another article on this blog about using your own forks with composer. Making changes works the same way as described above. If you need to pull changes from the original repository, just add it as a remote (git remote add upstream <originalRepoUrl>) and refer to the usual git remote operations.


PHPDay, the Italian PHP conference 2016, notes

This year I decided to take a couple of days off work and attend the 2016 Italian PHP conference.

I was curious to see what the Italian audience thought of talks from international speakers, given that the business is normally different there. In Italy, according to my freelance experience in 2002-2008 and confirmed by the conference’s attendees, the majority are very small businesses requiring small/medium CMSes, often serving tourism needs (e.g. booking platforms), with small budgets, sometimes requiring old platforms to be maintained and obliging developers to split their time across multiple roles (devops, backend and frontend, sometimes design too, SEO and marketing) and/or to work for lots of clients with different (sometimes legacy) platforms.

What I observed is that the backend world is more or less the same as in previous years. The basics are still the same; I even re-heard some recommendations from the old 1994 OOP bible.

But there were some interesting points and tools. I would group the talks this way:


In summary: much better performance just by upgrading to PHP 7, very few backwards incompatibilities, and a few language improvements. Interesting to see some stats from PHP’s creator himself: WordPress and composer seem to be hugely faster now and to use much less memory (I don’t remember the exact numbers, but at least 2x). Tips about smem (a tool to better measure memory consumption by excluding shared memory), settings tuning (realpath_cache_size, command_buffer_too_small, DocumentRoot in tmpfs), and considerations about multiprocessors and NUMA.

I attended a talk from Badoo – one of the biggest dating/social network sites, so quite a lot of servers – which switched to PHP 7 and implemented the needed upgrades to all the extensions they use. A viable solution for a big company, whereas a small one would probably not afford that and would be obliged to wait for stable repositories and extension upgrades before switching. Pinba (a MySQL storage engine that accumulates PHP profiling info sent over UDP, similar to a local NewRelic setup) was used for some of the measuring. Runkit was used to modify constants, user-defined functions and classes in 60k tests, and – since it is not supported in PHP 7 – they ended up developing their own mock framework and distributing it for free on GitHub (well, thanks!).


A bit too “DevOps” for a PHP conference, but since it replicates the platform architecture locally and simplifies the deployment, I guess it’s becoming a must. At MOJ, fortunately, we already use it, thanks to our dedicated DevOps team. Nothing new to learn for me, apart from the Jenkins pipeline plugin suggested in this talk, which I might play with when I have time, instead of simply using job triggering.

Event sourcing

That basically means storing DB changes and being able to query those changes. I’ve already implemented something similar in the past using doctrine listeners, and IMO it’s a great approach when the data to save is connected to entity operations. I didn’t like how the subject was covered, but it was good to hear and it made me curious to learn more during the talk; I ended up reading Martin Fowler’s article about event sourcing and playing with the Prooph framework for it, along with its doctrine and mongo adapters. The Command Query Responsibility Segregation pattern (in short, different models for updating and for displaying) was also mentioned, but IMO it’s not necessarily connected to event sourcing, contrary to what I heard.

Doing something already existing, but in PHP

An interesting talk about the fann extension for artificial neural networks, with an application example of machine learning in this talk, where the “intelligence” was recognising PHP code amid human language in code comments: initially defining what code normally contains (“$”, “->” and “;” symbols), then launching it on many inputs (= code comments), and using an iterative approach to improve the results.

Another talk was about the Raspberry Pi and the PHP libs (alternative here) to pilot it. Not something developers normally do for their clients, but good to hear something refreshing and different. The Raspberry Pi’s OS is a Debian distribution, so a web server with PHP can be installed on it to pilot a huge variety of sensors. Good to know: I might use it to recognise pigeons on my balcony and pilot a plastic-bullet BB gun to shoot at them!



  • Packing and distribution: lots of useful tips from this talk, thanks to which I found a useful skeleton for new projects, refreshed the semantic versioning concepts, a tool to select a licence, conventions and other stuff;
  • Middleware, ways of glueing software together. I’ll create a specific post for this; ZF3 and other frameworks like Slim support the idea;
  • PPI framework: to load and bootstrap multiple frameworks. I normally include libs with composer, so never had the need to use this framework, but I might play with it in the future;
  • API recommendations: in this talk, some recommendations, most of which I had already heard, but good to brush up on. Among the most useful ones: revoking permissions will break/alter the client’s behaviour, so better to start with restricted permissions and open them up little by little (increasing permissions will not break the consumers’ code). Good suggestion about not being 100% “perfect”: better to have a working “RESTlike” API than a 100%-compliant, over-engineered and difficult-to-understand one. Loved the definition of pragmatism as cutting corners safely and being realistic. Also interesting was the Maslow-style pyramid from usability (base) to creativity (top);
  • PHPSpec for TDD. I skipped those talks. I already heard of it more than once in the past. I already do BDD with Behat, and TDD with PHPUnit that always proved to be a great combination of tools to guarantee application stability and maintainable code thanks to safe refactors. I haven’t found PHPSpec useful so far. IDEs also help a lot with code generation, so I don’t need more TDD/OOP tools. I personally prefer to spend my time on other aspects of software development (both frontend and DevOps) and business.

What I didn’t hear

No talks about unit testing, maybe because there is not much more to say?

No talk about functional testing, one of the most underrated things to discuss IMHO. Software has to respond to business needs, be reliable and bug-free; we should never forget that, nor stop improving on this side. I hear developers talking and focusing too much on speed and performance without even knowing the measure-optimise iterative process. I also hear of tools and frameworks adopted based on personal preferences, or by just trusting whatever is new and sold as “better”, without objectively comparing the alternatives.


  • London’s environment, developers, community (and – as a consequence – clients) are always at the top in terms of framework and tools choices, so apparently no new technologies/approaches to learn from Italy;
  • No big news in terms of new frameworks and ways of developing. One of the reasons I don’t spend too much time learning new frontend frameworks: the JS community seems to jump from one framework to another too often, a sign that things need more time to mature before being worth spending lots of time on. The only stable JS adoption seems to be jQuery, which pragmatically solves most of the problems elegantly (when JS is only an enrichment layer on top of the application and not used solely as a front-end renderer).
  • Code distribution on GitHub and composer.json is definitely an emerging habit among developers, always good to share and stop re-inventing the wheel. Very few people in other professions think so broadly;
  • PHP 7 is a huge improvement over the past, by 2x or more, for free, without coding (apart from fixing very few backwards incompatibilities). That means lower costs to host PHP apps, happier clients, happier users, happier developers. I’ve never heard of such a big improvement in other open source technologies; I’m not sure the Java, .NET, or even Python or Ruby communities will one day hear that the new compiler/interpreter version is 2-5 times faster. Probably because they were already optimised from the start, you might say, to which I would add: if PHP made it a long way without being optimised, it must have been listening to devs and business needs more than all the others;
  • PHP is somehow a language proving the “premature optimisation is the root of all evil” and lean-startup rules. It started as a very simple scripting language, so developers could code solutions quickly, and businesses liked it for its low costs and quick response to market needs. Frameworks and tools were built, and with time both the language and the frameworks grew and improved, more people and businesses moved to it gradually, and further improvements were added. Now the stack of tools available to PHP developers has nothing to envy Java and .NET for. I also noticed businesses preferring open source to closed platforms: the former has proved less risky, for example by avoiding the vendor lock-in problem. If I had the opportunity to work with PHP for a service, it’s also thanks to this winning approach.

Doctrine 2 + ZF2 working with both SQL Server and MySQL

The web application I’m working on (PHP 5.3, ZF2, Doctrine2, on MySQL/Apache) has recently raised interest from clients, and it now needs to be installed on their premises, often a Windows server (arghhh!). That meant we had to make the application compatible with both MySQL and SQL Server.
Here I summarise the changes and the solutions we adopted.

Chrome, Firefox, IE browser plugin to reload CSS without reloading the page

When developing and adjusting CSS, one approach is to use Firebug to edit the styles directly and then re-apply the change to the real file in your editor.
In other cases, it’s more practical to change the CSS file directly, but reloading the full page is a bit time-consuming.

Here is a trick to reload only the CSS. There are browser extensions for this, but at the time of writing none of the existing ones worked as expected, at least on Chrome.
So I simply solved it by creating a browser toolbar link that reloads the CSS without reloading the full page, with one line of JavaScript.

From your browser toolbar: right click and select the option to add a new URL (“Add Page” on Chrome)
name: “Reload CSS”
url: the following script
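The script itself didn’t survive in this copy of the post; a minimal equivalent (my reconstruction, not necessarily the author’s original one-liner) re-requests every stylesheet by adding a cache-busting query parameter:

```javascript
// Return a stylesheet URL with a fresh cache-busting parameter, replacing any
// previous one so the URL doesn't keep growing on repeated reloads.
function bustCss(href, now) {
  var url = href.replace(/[?&]cssReload=\d+/, '');
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'cssReload=' + now;
}

// Bookmarklet form, to paste as the toolbar link's URL:
// javascript:(function(){var n=Date.now();document.querySelectorAll('link[rel=stylesheet]').forEach(function(l){var u=l.href.replace(/[?&]cssReload=\d+/,'');l.href=u+(u.indexOf('?')===-1?'?':'&')+'cssReload='+n;});})();
```

Changing the href makes the browser fetch the stylesheet again without reloading the page.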


Chrome command line options for windowed HTML applications

An example combination of Chrome command line arguments to launch an HTML page…

  • windowed, meaning in “app mode” (no toolbars, --app="<url>" argument),
  • with no web security checks (--disable-web-security, i.e. allowing DOM manipulation of a frame loaded on a different domain), which lets JavaScript modify the content of forms loaded in other frames from external URLs, for full-fledged interactions,
  • with a custom user agent (--user-agent). I personally use it for a website that – with a mobile User-Agent – displays far fewer annoying banners and a layout that better fits its small frame;
  • with independent user session settings (--user-data-dir="C:/path-to-temp-files"). That’s necessary to avoid Chrome sharing settings with your current Chrome user session, and mandatory to enable ad-hoc settings like the web security one above.
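Put together, a full invocation combining the flags above might look like this (URL, user agent string and profile path are placeholders):

```
chrome.exe --app="https://example.com/panel" --disable-web-security --user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 10_0 like Mac OS X)" --user-data-dir="C:/path-to-temp-files"
```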