PHPStorm cache on virtual RAM disk [macOS]

When using an SSD and having enough RAM on the system, I always prefer to move software cache files onto a virtual RAM disk, both to extend the disk’s life and to gain extra speed.

The command on macOS to create a virtual volume (under /Volumes/RAMDisk) is shown below. The size is expressed in 512-byte sectors, so ram://524288 yields a 256MB disk:

diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://524288`;

After that, you can create your symlinks and move the command into a startup script.

PHPStorm script

Here is the script I’m using to create the virtual disk and the index and cache directories for PHPStorm. Of course, you need to symlink them the first time you use them (see the comments in the gist).
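
A minimal sketch of what such a script can look like (the PhpStorm version and cache paths below are assumptions; adjust them to your installation):

#!/bin/bash
# Create and mount the RAM disk, if not already mounted
RAMDISK=/Volumes/RAMDisk
if [ ! -d "$RAMDISK" ]; then
    diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://524288`
fi

# Recreate the cache and index directories on the RAM disk
mkdir -p "$RAMDISK/PHPStorm/caches" "$RAMDISK/PHPStorm/index"

# First time only (run manually): replace the real directories with symlinks
# mv ~/Library/Caches/PhpStorm2016.3 ~/Library/Caches/PhpStorm2016.3.bak
# ln -s "$RAMDISK/PHPStorm/caches" ~/Library/Caches/PhpStorm2016.3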

You’ll lose the cache and index the next time you restart your computer or unmount the RAM disk, but that’s something I actually prefer, as it keeps the cache clean of old projects and libraries.

How to fix docker slowness (volume mounting) with Docker-sync + PHPStorm file watchers

I’ve experienced lots of slowness using Docker for Mac (xhyve), due to volume mounting.

A solution I’ve been using successfully for a few weeks combines docker-sync (rsync strategy) and PHPStorm file watchers.

docker-sync (rsync)

I’ve used the rsync strategy for docker-sync. Simple, but the downside is that files newly created inside the container are not shared back to the host. That’s generally not a problem, except when some code is generated from inside the container (e.g. Doctrine migrations). When needed, a normal volume mount can still be used for those directories, or (more laboriously) the files can be copied to the host with a docker cp command.
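
When that happens, a one-off copy from the container to the host is enough; for example, for generated migrations (container name and paths are placeholders):

docker cp <CONTAINER_NAME>:/app/src/Migrations ./src/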

Another downside is the latency of the rsync watch (a few seconds, which can be annoying in some cases), and that leads me to the next point:

PHPStorm file watcher

PHPStorm supports the execution of custom scripts on save (file watchers). The idea is to have a file watcher perform an immediate docker cp of the modified file into the container.

The screenshot should be clear enough. Use the real path of your docker executable (not the symlink), replace <CONTAINER_NAME> with the name of your docker container, and adjust /app/ so that docker cp copies files where your container expects them.
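
As an illustration, the watcher configuration boils down to something like the following; $FilePath$ and $FilePathRelativeToProjectRoot$ are PHPStorm macros, and the docker path is just an example:

Program:   /usr/local/bin/docker
Arguments: cp $FilePath$ <CONTAINER_NAME>:/app/$FilePathRelativeToProjectRoot$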

Depending on your needs, you might want to adjust your shares and decide whether the vendor directory needs to be watched or not.

Note: you can’t use only the file watcher, or you’ll have to rebuild (or manually copy all the files) every time you switch branches.

PHPStorm File Watcher to copy files inside docker container – example

SymfonyCon, Berlin 2016, notes and thoughts


I wrote an article about the Italian PHP conference last year, so I decided to repeat that and write up some notes I took at the Symfony conference in Berlin in 2016. Talks and notes at this link.

Conference format

I personally prefer conferences around a broad subject, in order to listen to a wider variety of talks spread across the whole technology area. This one was a two-day conference around a single framework, so I was expecting to get a bit bored, and I was right: lots of talks about minor framework functionalities that I can easily read about online, or things I already knew. Fortunately, some talks were more general and therefore more interesting to me. That’s because I want a talk to inspire me, giving me clues and tips, not repeating what the online documentation says.

Talks

Some talks covered the new Symfony 3.3 features, SensioCloud (a kind of Heroku for Symfony that, at first glance, smells a bit commercial and coupled with Symfony), PHP 7 improvements I had missed (static variables persisted in memory across requests), and PHP types (things I’ve already heard many times, but good to hear again, updated for the latest PHP releases).

One of the talks I liked the most was about when to abstract, and it mentioned lots of useful concepts. Concepts I already knew, but had always found difficult to explain to younger developers or to the business.

  • Predicting patterns should be done very carefully: we can’t really know how the business logic will evolve, and premature abstraction makes the product difficult to change. The risk is facing an over-engineered or over-architected product when the time comes to make changes. The talk suggested developing first with duplication and abstracting later. Completely agreed;
  • Refactoring is not only an improvement, but the best way to let the business patterns and logic emerge from your code, in case you don’t fully know them. I knew that, but I had never thought of explaining it to the business that way. I understand “refactoring” sounds scary to the business, like the builders in your house saying “we need a day (paid by you) to refactor the wall we are building” when you clearly saw they were halfway through after two days;
  • A code rewrite, instead, means losing some of the domain rules. The talk mentioned 40%; I think it depends, and it was much less in my experience, also considering that some functionalities are no longer needed, so it’s good to lose unknown and useless functionalities and re-implement the updated versions if the users and business require them;
  • APIs should be optimised for stability, projects (what you build with a framework, for example) for change, and products (e.g. Symfony/WordPress) for a stable core. Agreed, again.


Other tools

I spoke to the guys at the sponsor stands. It was interesting to see blackfire.io in action, which I’ll definitely try out next time I need to optimise an app. Also interesting to see Heroku deploy and handle the “devops” part of an application (creating server instances, installing packages, managing servers, watching logs) entirely from the command line, without a single SSH command inside the box. I wasn’t particularly lured by the whole SensioCloud idea, as I never felt the need for something like that, and I’m also not sure I want to use a platform created by the framework’s creator, which I’m not sure I can (easily) use with other frameworks.

PHPDay, the Italian PHP conference 2016, notes

This year I decided to take a couple of days off work and attend the 2016 Italian PHP conference.

I was curious to see what the Italian audience thought of talks from international speakers, since the business there is normally different. In Italy, according to my freelance experience in 2002-2008 and confirmed by the conference’s attendees, the majority are very small businesses requiring small/medium CMSes, often serving tourism needs (e.g. booking platforms), with small budgets, sometimes needing old platforms to be maintained, and obliging developers to split their time across multiple roles (devops, backend and frontend, sometimes design too, SEO and marketing) and/or to work for lots of clients with different (sometimes legacy) platforms.

What I observed is that the backend world is more or less the same as in previous years. The basics are still the same. I even re-heard some recommendations from the old 1994 OOP “bible”.

But there were some interesting points and tools. I would group the talks this way:

PHP 7

In summary: much better performance by just upgrading PHP to version 7, very few backwards incompatibilities, and a few language improvements. It was interesting to see some stats from PHP’s creator himself: WordPress and Composer seem to be hugely faster now and to use much less memory (I don’t remember the exact numbers, but at least 2x). Tips about smem (a tool to better measure memory consumption by excluding shared memory), settings tuning (realpath_cache_size, command_buffer_too_small, DocumentRoot in tmpfs), and considerations about multiprocessors and NUMA.

I attended a talk from Badoo – a huge dating/social network, so quite a lot of servers – which switched to PHP 7 and implemented the needed upgrades to all the extensions they use. A viable solution for a big company, whereas a small one would probably not afford that and would have to wait for the stable repositories and extension upgrades before switching. Pinba (a MySQL storage engine that accumulates PHP profiling info sent over UDP, similar to a local NewRelic setup) was used for some of the measuring. Runkit was used to modify constants, user-defined functions and classes in 60k tests, and – since it was not supported in PHP 7 – they ended up developing their own mock framework and distributing it for free on GitHub (well, thanks!).

Docker

A bit too “DevOps” for a PHP conference, but since it replicates the platform architecture locally and simplifies deployment, I guess it’s becoming a must. At MOJ, fortunately, we already use it, thanks to our dedicated DevOps team. Nothing new for me to learn, apart from the Jenkins pipeline plugin suggested in this talk, which I might play with when I have time, instead of simply using job triggering.

Event sourcing

That basically means storing the changes to the data (events) and being able to query those changes. I’ve already implemented something similar in the past using Doctrine listeners, and IMO it’s a great approach when the data to save is connected to entity operations. I didn’t like how the argument was covered, but it was good to hear about it and it made me curious to learn more after the talk: I ended up reading Martin Fowler’s article about event sourcing and playing with the Prooph framework, along with its Doctrine and MongoDB adapters. The Command Query Responsibility Segregation pattern (in short, different models for updating and for displaying) was also mentioned, but IMO it is not necessarily connected to event sourcing, contrary to what I heard.
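
To give the idea, here is a minimal event store sketch in PHP; the class, table and column names are made up for illustration, not taken from the talk:

// Append-only storage of domain events: state can be rebuilt by replaying them
class EventStore
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // Persist a change as an event, instead of overwriting the entity row
    public function append($aggregateId, $type, array $payload)
    {
        $stmt = $this->pdo->prepare(
            'INSERT INTO events (aggregate_id, type, payload, recorded_at)
             VALUES (?, ?, ?, NOW())'
        );
        $stmt->execute(array($aggregateId, $type, json_encode($payload)));
    }

    // Fetch the full history of an aggregate, e.g. to rebuild its state or audit it
    public function eventsFor($aggregateId)
    {
        $stmt = $this->pdo->prepare(
            'SELECT type, payload FROM events WHERE aggregate_id = ? ORDER BY id'
        );
        $stmt->execute(array($aggregateId));
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}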

Doing something that already exists, but in PHP

An interesting talk about the fann extension for artificial neural networks, with an application example of machine learning in this talk, where the “intelligence” was recognising PHP code among the human language of code comments: initially defining what code normally contains (“$”, “->” and “;” symbols), then running it on many inputs (code comments), and using an iterative approach to improve the results.

Another talk was about the Raspberry Pi and the PHP libs (alternative here) to pilot it. Not something developers normally do for their clients, but good to hear something refreshing and different. The Raspberry Pi’s OS is a Debian distribution, so a web server with PHP can be installed on it to pilot a huge variety of sensors. Good to know. I might use it to recognise pigeons on my balcony and pilot a plastic-bullet BB gun to shoot at them!


Others

  • Packing and distribution: lots of useful tips from this talk, thanks to which I found a useful skeleton for new projects, refreshed the semantic versioning concepts, and discovered a tool to select a licence, plus conventions and other stuff;
  • Middleware, ways of glueing software together. I’ll create a specific post for this; ZF3 and other frameworks like Slim support the idea;
  • PPI framework: to load and bootstrap multiple frameworks. I normally include libs with Composer, so I’ve never needed this framework, but I might play with it in the future;
  • API recommendations: in this talk, some recommendations, most of which I had already heard, but good to brush up on. Among the most useful ones: revoking permissions will break/alter the client’s behaviour, so it’s better to start with restricted permissions and open them up little by little (increasing permissions will not break the consumers’ code). A good suggestion about not being 100% “perfect”: better a working “RESTlike” API than a 100%-compliant, over-engineered and difficult-to-understand one. I loved the definition of pragmatism as cutting corners safely and being realistic. Also interesting was the Maslow pyramid from usability (base) to creativity (top);
  • PHPSpec for TDD: I skipped those talks. I had already heard of it more than once in the past. I already do BDD with Behat and TDD with PHPUnit, which have always proved to be a great combination of tools to guarantee application stability and maintainable code thanks to safe refactoring. I haven’t found PHPSpec useful so far. IDEs also help a lot with code generation, so I don’t need more TDD/OOP tools. I personally prefer to spend my time on other aspects of software development (both frontend and DevOps) and on the business.

What I didn’t hear

No talks about unit testing, maybe because there is not much more to say?

No talks about functional testing, one of the most underrated things to discuss IMHO. Software has to respond to the business’ needs, be reliable and bug-free. We should never forget that, nor stop improving on this side. I hear developers talking and focusing too much on speed and performance without even knowing the measure-optimise iterative process. I also hear of tools and frameworks being adopted based on personal preferences, or by just trusting whatever is new and sold as “better”, without objectively comparing the alternatives.

Summary

  • London’s environment, developers and community (and – as a consequence – clients) are always at the top in terms of framework and tool choices, so apparently there are no new technologies/approaches to learn from Italy;
  • No big news in terms of new frameworks and ways of developing. That’s one of the reasons I don’t spend too much time learning new frontend frameworks. The JS community seems to jump from one framework to another too often, a sign that things need more time to mature before being worth spending lots of time on. The only stable JS adoption seems to be jQuery, which pragmatically solves most problems elegantly (when JS is only an enrichment layer on top of the application and not used solely as a front-end renderer);
  • Code distribution on GitHub with a composer.json is definitely an emerging habit among developers; it’s always good to share and to stop re-inventing the wheel. Very few people in other professions think so broadly;
  • PHP 7 is a huge improvement over the past, by 2x or more, for free, without coding (apart from fixing a very few backwards incompatibilities). That means lower costs to host PHP apps, happier clients, happier users, happier developers. I’ve never heard of such a big improvement in other open source technologies. I’m not sure the Java, .NET, or even Python or Ruby communities will one day hear that the new compiler/interpreter version is 2-5 times faster. Probably because they were already optimised from the start, you might say, to which I would add: if PHP made it this far without being optimised, it must have been listening to devs’ and businesses’ needs more than all the others;
  • PHP is somehow a language that proves the “premature optimisation is the root of all evil” and lean startup rules. It started as a very simple scripting language, so developers could code solutions quickly, and businesses liked it for its low costs and quick response to market needs. Frameworks and tools were built, and with time both the language and the frameworks grew and improved, more people and businesses gradually moved to it, and further improvements were added. Now the stack of tools available to PHP developers has nothing to envy of Java and .NET. Also, I’ve noticed businesses preferring open source to closed platforms: the former has proved to be less risky, for example by avoiding the vendor lock-in problem. If I had the opportunity to work with PHP on a gov.uk service, it’s also thanks to this winning approach.

File Upload from HTML form: client and server validation for maximum size

File uploading normally requires client-side validation (mainly as a UI improvement) and server-side validation (in case the client-side validation is bypassed/hacked).

The code

<input type="hidden" name="MAX_FILE_SIZE" 
 value="2621440"/>

only imposes a soft limit (in bytes) on the server side, and is therefore not good practice on its own, since it can be altered by the user. The solution I adopted combines a server-side validation AND a client-side validation (in JavaScript, mainly for a better user experience, since it can be easily hacked like all the other JS solutions).
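
For completeness, the surrounding form might look like this (the action and field names are illustrative); note that PHP expects the MAX_FILE_SIZE field to come before the file input:

<form action="upload.php" method="post" enctype="multipart/form-data">
    <input type="hidden" name="MAX_FILE_SIZE" value="2621440"/>
    <input type="file" name="upload"/>
    <input type="submit" value="Upload"/>
</form>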

Client validation (JavaScript)

The idea is to access the files JS property of the input field and read the file size on the ‘change’ event (when a file is selected by the user). If the selected files exceed the max size, I simply display a clear warning to the user and disable the submit button. I’ve created a generic script that automatically listens to the change event of any file upload element and reads the max upload size in bytes from the MAX_FILE_SIZE hidden input in the same form.
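
A minimal sketch of that idea (here the warning is a simple alert; adapt it to your UI):

// Listen to the change event of every file input and compare the selected
// file sizes against the MAX_FILE_SIZE hidden field of the same form
var inputs = document.querySelectorAll('input[type="file"]');
Array.prototype.forEach.call(inputs, function (input) {
    input.addEventListener('change', function () {
        var form = input.form;
        var maxField = form ? form.querySelector('input[name="MAX_FILE_SIZE"]') : null;
        var submit = form ? form.querySelector('[type="submit"]') : null;
        if (!maxField || !submit) {
            return;
        }
        var max = parseInt(maxField.value, 10);
        var tooBig = Array.prototype.some.call(input.files, function (file) {
            return file.size > max;
        });
        // Disable the submit button and warn the user when a file is too large
        submit.disabled = tooBig;
        if (tooBig) {
            alert('The selected file exceeds the maximum allowed size.');
        }
    });
});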

Server side validation (PHP)

Just set the desired values for the ‘upload_max_filesize’ and ‘post_max_size’ PHP settings (php.ini). Alternatively, if you need application-specific settings and you use Apache as the web server, add the following lines to the .htaccess in your web root:

php_value upload_max_filesize 25M
php_value post_max_size 25M
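
On the application side, the upload handler can also verify the outcome explicitly; a minimal sketch, assuming the hypothetical ‘upload’ field from the form above:

// Must be consistent with upload_max_filesize / post_max_size above
$maxSize = 25 * 1024 * 1024;

if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
    // UPLOAD_ERR_INI_SIZE / UPLOAD_ERR_FORM_SIZE mean a size limit was exceeded
    die('Upload failed or file too large');
}
if ($_FILES['upload']['size'] > $maxSize) {
    die('File exceeds the maximum allowed size');
}
// ... move_uploaded_file() etc.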

Doctrine 2 + zf2 working with both SQL server and MySQL

The web application I’m working on (PHP 5.3, zf2, Doctrine2, on MySQL/Apache) has recently attracted interest from clients, and it now needs to be installed on their premises, often a Windows server (argh!). That meant we had to make the application compatible with both MySQL and SQL Server.
Here I summarise the changes and the solutions we adopted.

97 things every programmer should know: personal notes

Last month I saw a book flying around the office, containing useful tips from developers: 97 suggestions from experienced developers. Amazon is still selling it, so I suggest you read it if you work as a software developer or occasionally dip into coding: 97 Things Every Programmer Should Know: Collective Wisdom from the Experts.

I’ve finished reading it, and I took some notes while reading. Some concepts and suggestions were obvious, as I’ve heard them many times from the community (blogs, conferences, colleagues) or have already personally experienced them. I’ll paste my notes here, with the hope that you can find something valuable by looking through them.


Optimising Zend Framework applications (2) – cache pages and PHP accelerator [updated]

[continue from previous post]

4. Use an op-code cache/accelerator (APC, XCache)

PHP is very fast, but it’s not compiled. An op-code cache helps. See this comparison as an example of the possible performance increase. That does not mean we can skip the other optimisations: code bottlenecks should be removed anyway.
Optimising code is also helpful for understanding the common causes of code “slowness” and avoiding writing them again in the future.

5. Cache pages (before the Zend Framework bootstrap)

Even though you have optimised the code, you still have to bootstrap and run the Zend application, and the whole process takes time (dispatching, controller logic, script rendering). A solution I’ve recently used is caching the whole HTML that results from the processing (see another post about caching the HTML pages of a generic website).

There are many ways to cache the output (Apache modules, reverse proxies, Zend Server page cache). The best one depends on your needs. Moving the logic to the application level usually allows more customisation.

Page caching can be done by using Zend_Cache_Frontend_Page. It basically uses ob_start() with a callback that saves the result into the cache. I haven’t found any interesting article about its best use, apart from the suggestion that it should be used in a controller plugin. I’d say it’s better to instantiate a separate cache object and activate it directly in index.php, caching before the Zend application actually starts (bootstrap). In my local environment, when the page cache is valid, the response time is 2 ms, against the 800 ms required to bootstrap and load the application.
See the following code for where to place it. Note: I instantiate Zend_Application with no options, just to have the autoloader available to load the needed classes.

# index.php

// create APPLICATION_* vars and set_include_path [...]

require_once 'Zend/Application.php';
$application = new Zend_Application(APPLICATION_SERVERNAME);
// Zend_Cache_Frontend_Page
$pageCache = Zend_Cache::factory(
    'Page', 'File',
    array(
        'caching' => true,
        'lifetime'=>300, //5 minutes
        'ignore_user_abort' => true,
        'ignore_user_abort' => 100,
        'memorize_headers' => true,
        // enable debug only on localhost
        'debug_header' => APPLICATION_ENV == 'localhost',
        'default_options' => array(
            'cache' => true,
            // test the following, depends on how you use sessions and browser plugins
            'cache_with_cookie_variables' => true,
            'cache_with_post_variables' => false
        ),
        // whitelist approach: disable caching by default, enable it per URL
        'regexps' => array(
            '^.*$' => array('cache' => false),
            /* homepage */
            '^/$' => array('cache' => true, 'specific_lifetime' => 650),
            /* example of other pages */
            '^/(pagex|pagey)/' => array('cache' => true, 'specific_lifetime' => 700),
            // [...]
        )
    ),
    array(
        'cache_dir' => APPLICATION_PATH . '/data/pagecache',
        'hashed_directory_level' => 2,
        'file_name_prefix' => 'zendpagecache'
    )
);
// start page cache, except for cron job and when user is logged
// note: I haven't tested yet if using Zend_Auth here is a good solution
if (PHP_SAPI !=='cli' && !Zend_Auth::getInstance()->hasIdentity()) {
  $pageCache->start();
}
// the following code is not executed when page cache is valid
$appSettings = new Zend_Config_Ini(APPLICATION_PATH . '/configs/sites.ini', APPLICATION_SERVERNAME);
$application->setOptions($appSettings->toArray());

$application->bootstrap();

if (PHP_SAPI !=='cli') {
    $application->run();
}

Of course, page caching must be configured carefully, depending on the traffic of the application. Ideally, a customised cron job should fetch the pages, invalidating the cache for each one and rebuilding them before they normally expire, so that users always hit the fast cached page. On my project, a similar system improved the average loading time by 8 times (so an external spider will consider the site in a different way). The links to keep cached should be at least the ones in the sitemap.
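
As an illustration, a naive warm-up job could simply re-request every sitemap URL shortly before the cache lifetime expires; the domain, paths and schedule below are assumptions:

#!/bin/sh
# warm-page-cache.sh: request every URL listed in the sitemap, so that
# expired entries are regenerated into the page cache
curl -s http://www.example.com/sitemap.xml \
  | grep -o '<loc>[^<]*</loc>' \
  | sed -e 's/<loc>//' -e 's#</loc>##' \
  | while read -r url; do
      curl -s -o /dev/null "$url"
    done

# crontab entry: run it every 4 minutes, just under the 5-minute cache lifetime
# */4 * * * * /usr/local/bin/warm-page-cache.sh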

Set all the settings of the Zend page cache frontend carefully: cookies, GET and POST data, regexps of the URLs to cache. Note that no logic is executed when the cache is valid, so a visitor counter or any other database-writing query will not run.

Free web-based software for project management

After being part of a new team that works with an outsourcing team, with dynamic allocation of resources (developers), without using any software to plan and schedule the project, I’m now interested in experimenting with some free web-based software for project management [project management wiki].

As expected, the awesome Wikipedia contains a page about the software used for project management, as well as a comparison of project management software.

Among the open source, web-based ones, here is the list of those that seem more interesting, with some notes. I’m making this list in order to have them ready to try.

See this article on how to manage developers.