Learning to type using all ten fingers and mastering the
correct finger placement is undoubtedly a great skill. But between us, I've
spent my entire life “pecking” at the keyboard with two fingers, and when
typing, I place far greater importance on something else. And that is the
layout of the keyboard.
Webmasters, programmers, or copywriters encounter the problem that many
frequently used characters are either completely missing or less accessible on
the Czech keyboard. Typographic characters suffer the most, like Czech quotation
marks „ “, ellipsis …, multiplication sign ×, copyright ©, etc.
Typically, this is resolved by switching between two keyboards, Czech and
English, and memorizing a million Alt-number shortcuts to substitute
for the missing characters. Either way, it greatly hinders creativity. Could
there be a better way?
Custom Keyboard Layout
The solution is to create your own keyboard layout. I perfected mine about ten
years ago. It's suitable for programmers, web designers, and copywriters, and it
contains all the essential typographic characters, such as dashes and double and
single quotation marks, in intuitive positions. Of course, you can customize the
layout further, as described below.
All typographic characters are accessible via the right Alt, or AltGr. The
layout is intuitive:
- Czech double quotation marks „ “ AltGr-< and AltGr->
- Czech single quotation marks ‚ ‘ AltGr-Shift-< and AltGr-Shift->
- non-breaking space AltGr-spacebar
- multiplication sign × AltGr-X
- ellipsis … AltGr-D (dot)
- en dash – AltGr-hyphen
- em dash — AltGr-Shift-hyphen
- copyright © AltGr-C
- trademark ™ AltGr-T
- euro € AltGr-E
- ø AltGr-O
And so on; you can view the entire layout in the images.
Download: klávesnice dg v5 (for Windows)
How to Create Your Own Keyboard Layout?
It's easy and fun. Directly from Microsoft, download the magical and
well-hidden program Microsoft Keyboard
Layout Creator (requires .NET Framework to run).
Upon launching, you'll see an “empty” keyboard, meaning no layout is
defined yet. Starting from scratch isn't ideal, so find the
Load existing keyboard
command in the menu and load one of the
standard layouts (like the classic Czech keyboard).
For each key, you can define the character that is typed when the key is
pressed alone and also when combined with modifiers (i.e., Shift,
Ctrl+Alt (right Alt), right Alt+Shift,
Caps Lock, and Shift+Caps Lock). You can also designate a
key as a dead key, meaning the character is typed only after pressing another
key. This is how accents like háček and čárka function on the Czech
keyboard.
The real gem is exporting the finished keyboard layout. The result is a
full-fledged keyboard driver, including an installation program. So, you can
upload your keyboard to the internet and install it on other computers.
Among the top five monstrous quirks of PHP certainly belongs the inability to
determine whether a call to a native function was successful or resulted in an
error. Yes, you read that right: you call a function, and you don't know
whether an error occurred or what kind it was.
Now you might be smacking your forehead, thinking: surely I can tell by the
return value, right? Hmm…
Return Value
Native (or internal) functions usually return false on failure. There are
exceptions, such as json_decode
(http://php.net/manual/en/function.json-decode.php), which returns null if the
input is invalid or exceeds the nesting limit, as mentioned in the
documentation. So far so good.
This function is used for decoding JSON, and null is one of its valid values,
so json_decode('null') also returns null, this time as a correct result. We
must therefore distinguish null as a correct result from null as an error:
$res = json_decode($s);
if ($res === null && $s !== 'null') {
	// an error occurred
}
It's silly, but thank goodness it's even possible. There are functions,
however, where you can't tell from the return value that an error has occurred.
For example, preg_grep or preg_split return a partial result, namely an array,
and you can't tell anything at all (more in Treacherous Regular Expressions).
json_last_error & Co.
These functions report the last error that occurred in a particular PHP
extension. Unfortunately, they are often unreliable, and it is difficult to
determine what that last error actually was.
For example, json_decode('') does not reset the last-error flag, so
json_last_error returns a result not for the last but for some previous call to
json_decode (see How to encode and decode JSON in PHP?). Similarly,
preg_match('invalidexpression', $s) does not reset preg_last_error. Some errors
do not have a code, so they are not reported at all, etc.
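On the affected PHP versions (this was fixed in 5.3.7), the stale flag can be observed with a minimal sketch like this:
json_decode('{malformed'); // sets the error flag to JSON_ERROR_SYNTAX
json_decode('');           // does not reset the flag…
echo json_last_error();    // …so this still reports the error of the first call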
error_get_last
A general function that returns the last error. Unfortunately, it is extremely
complicated to determine whether the error was related to the function you
called; that last error might have been generated by a completely different
function.
One option is to consult error_get_last() only when the return value indicates
an error. Unfortunately, the mail() function, for example, can generate an
error even though it returns true. And preg_replace may not generate an error
at all in case of failure.
The second option is to reset the “last error” before calling our
function:
@trigger_error('', E_USER_NOTICE); // reset
$file = fopen($path, 'r');
if (error_get_last()['message']) {
	// an error occurred
}
The code seems clear: an error can only occur during the call to fopen(). But
that's not the case. If $path is an object, it will be converted to a string by
its __toString method. If it was the last reference to it, the destructor will
also be called. Functions of URL wrappers may be called. And so on.
Thus, even a seemingly innocent line can execute a lot of PHP code, which may
generate other errors, the last of which will then be returned by
error_get_last().
We must therefore make sure that the error actually occurred during the call to
fopen:
@trigger_error('', E_USER_NOTICE); // reset
$file = fopen($path, 'r');

$error = error_get_last();
if ($error['message'] && $error['file'] === __FILE__ && $error['line'] === __LINE__ - 3) {
	// an error occurred
}
The magic constant 3 is the number of lines between __LINE__ and the call to
fopen. Please, no comments.
In this way, we can detect an error (if the function emits one, which the
aforementioned functions for working with regular expressions usually do not),
but we are unable to suppress it, i.e., prevent it from being logged, etc.
Using the shut-up operator @ is problematic because it conceals everything,
including any further PHP code executed in connection with our function (see
the mentioned destructors, wrappers, etc.).
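For completeness, a minimal sketch of why the shut-up operator is too coarse a tool:
$file = @fopen($path, 'r');
// @ suppresses not only fopen()'s own warning but every error raised while
// the whole expression is evaluated: __toString() of $path, destructors,
// stream-wrapper code, etc.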
Custom Error Handler
The crazy but seemingly only possible way to detect whether a certain function
emitted an error, with the possibility of suppressing it, is to install a
custom error handler using set_error_handler. But it's no joke to do it right:
- we must also remove the custom handler
- we must remove it even if an exception is thrown
- we must capture only errors that occurred in the function in question
- and pass all others to the original handler
The result looks like this:
$prev = set_error_handler(function($severity, $message, $file, $line) use (& $prev) {
	if ($file === __FILE__ && $line === __LINE__ + 9) { // magic constant
		throw new Exception($message);
	} elseif ($prev) { // call the previous user handler
		return $prev(...func_get_args());
	}
	return false; // call the system handler
});

try {
	$file = fopen($path, 'r'); // this is the function we care about
} finally {
	restore_error_handler();
}
You already know what the magic constant 9 is.
So this is how we live in PHP.
I've responded to many pull requests with “Can you add
tests?” Not because I'm a testophile, or to annoy the person involved.
When you send a pull request that fixes a bug, naturally, you must test it
before submitting to ensure it actually works. Often, one thinks something can
be easily fixed, but lo and behold, it ends up breaking even more. I don’t
want to repeat myself, but by testing it, you created a test, so
just attach it.
(Unfortunately, some people really don’t test their code. If it were up to
me, I would give out monthly bans for pull requests made directly in the GitHub
web editor.)
But that's still not the main reason: A test is the only guarantee that
your fix will work in the future.
It has happened many times that someone sent a pull request that wasn’t
useful to me, but altered functionality important to them. Especially if it was
someone I know and I know they are a good programmer, I would merge it.
I understood what they wanted, it didn’t interfere with anything else, so
I accepted the PR and then I put it out of my mind.
If their pull request included a test, then their code still works today and
will continue to work.
If they didn’t add a test, it might easily happen that some other
modification will break it. Not intentionally, it just happens. Or it has
already happened. And there's no point in complaining about how stupid I am
because I broke their code for the third time, even though I accepted their
pull request three years ago—am I supposed to remember that? No, so perhaps
I’m doing it on purpose… I’m not. No one remembers what we had for lunch
three years ago.
If you care about a functionality, attach a test to it. If you don’t
care about it, don’t send it at all.
“Convince me why I should use Nette Tester. What makes it better than
PHPUnit?” I always feel a bit uneasy about these questions, because I don't
feel the need to convince anyone to use Tester. However, I couldn't do
without it.
Testing itself is somewhat of a cursed topic. For five years, at every
conference, testing is repeatedly discussed, yet in almost no company is code
“tested.” I put that in quotes because, in reality, all programmers test
every day, but what they don't do is write tests in PHPUnit, for many reasons.
Truth be told, during that brief period when the Nette Framework was tested with
PHPUnit, I also lost the taste for testing. Yet, testing is as crucial to the
development of a framework as, say, a version control system.
But let's take it step by step. Why do all programmers test, even though
they “don't test”?
Imagine you program a function foobar():
function foobar($x, $y) {
	// some calculations here (say it computes the sum of $x and $y)
	$val = $x + $y;
	return $val;
}
The first thing every programmer does is to check if it works:
echo foobar(10, 20);
They run it, it prints 30, which is correct, it seems to work, and maybe they
try a few other inputs.
In other words, they test the function. So, they are testing!
Then, what happens next is that the test script is deleted. And
that's exactly the problem! All the developers' arguments that they don't
have time for testing fall apart at this moment because the reality is that
there is time for testing, and tests are even written, but then the programmers
delete them. The irony.
A test is not just a class in PHPUnit; a test is also this one-line script.
More precisely, the test must also contain information about what the correct
return value should be, so it would look like this:
assert(foobar(10, 20) === 30);
assert(foobar(-1, 5) === 4);
...
Sometime in the future, if I change the implementation of the foobar function,
or if a colleague modifies it, all I need to do is run this script, and if it
throws a warning, I know immediately that the function is broken.
And that's all. This is testing. It's that simple. We all do it,
unfortunately, many of you then delete those tests.
Over time, we accumulate a huge amount of such test scripts and the question
arises on how to run them collectively. It can be solved with some shell script,
but I wrote a PHP script for it. Its significant advantage is that it can run
tests in parallel (I typically run 40 threads), which dramatically speeds up
the testing of the entire set. It also neatly displays where exactly in which
file a failure occurred.
Instead of the PHP function assert, I wrote my own functions (the Assert
class), which differ mainly in that they clearly and legibly output what the
function should have returned and what it actually returned, so I can quickly
identify where the problem is.
That launcher, the Assert class, and a few other things make up the
aforementioned Nette Tester. It has reliably tested the Nette Framework for
four years.
When someone reports a bug in the foobar function, saying it returns an empty
string instead of the number -1 for inputs 0 and -1, I start by verifying it:
Assert::same(-1, foobar(0, -1));
I run it, and indeed, it outputs:
Failed: '' should be -1
So, I wrote a failing test. I didn't do it because the manuals about
testing say that a test must fail first, or because I follow TDD, but because
I can't think of anything else to do but simply write a short piece of code
that checks the bug report, i.e., a failing test.
The bug really exists and needs to be fixed. In the IDE, I start stepping
through the code, looking for the issue. (I'll write an article about
programmers who code in notepads, whether they're named TextMate or Sublime,
instead of a full-fledged IDE, and therefore cannot step through code, some
other time.) Yes, I could have found the bug without stepping through by just
staring at the code and placing var_dump, echo, or console.log, but it would
take much longer. I want to emphasize that stepping and testing are not
alternatives but completely different activities that are great to use
together.
I find and fix the bug, Assert::same is satisfied, and I commit not only the
fix to the foobar function but also the test file to the repository. Thanks to
this, such a mistake will never occur again in the future. And believe me, bugs
tend to repeat themselves; the phenomenon even has a name: regression.
This conversation might have seemed very obvious to you. And that's good
because it is obvious, and I want to break down prejudices and fears about
testing. But I still haven't answered the initial question: why don't I use
PHPUnit? Because I can't work with it this straightforwardly.
To test foobar, I would have to write a whole class that inherits from another
class whose name I can't remember. Well, I would use a template. PHPUnit does
not allow tests to be run in parallel, so testing the whole set takes much
longer; in the case of the Nette Framework, it's about 35 seconds versus
2 minutes, which is a significant difference. Moreover, tests written in
PHPUnit can only be run in PHPUnit; they are not standalone scripts, so there's
no way to write a failing test and then step through it to hunt for the bug in
the way described above.
The simplest solution, therefore, was to write my trivial testing tool. Over
four years, it has slowly evolved into a full-fledged tool, which I no longer
develop alone, and thanks to the guys from Oracle, it now has integrated support
in NetBeans 8.0. Since it generates output in
TAP format, there should be no problem integrating it into other tools
either.
I won't convince you to use Nette Tester, but I would like to convince you
not to delete the tests you write 🙂
Václav Novotný has prepared an infographic comparing
developer activity in Nette and Symfony. I'm eager and curious to look at it,
but without an explanation of the metric, the numbers can be treacherously
misleading. Exaggerating a bit: with a certain workflow and naive measurement,
I could appear in the statistics as the author of 100% of the code without
having written a single line.
Even with straightforward workflows, comparing the amount of commits is
tricky. Not all commits are equal. If you add five important commits and at the
same time ten people correct typos in your comments, you are, in terms of the
number of commits, the author of one-third of the code. However, this isn't
true; you are the author of the entire code, as corrections of typos are not
usually considered authorship (as we typically perceive it).
In Git, “merge-commits” further complicate matters. If someone prepares
an interesting commit and you approve it (thus creating a merge-commit), you are
credited with half of the commits. But what is the actual contribution? Usually
none, as approval is a matter of one click on GitHub, although sometimes you
might spend more time discussing it than you would have spent writing the code
yourself. But you don't write it, because you need to train developers.
Therefore, instead of the number of commits, it is more appropriate to
analyze their content. The simplest measure is to consider the number of changed
lines. But even this can be misleading: if you create a 100-line class and
someone else merely renames the file with it (or splits it into two), they have
“changed” effectively 200 lines, and again you are the author of
one-third.
If you spend a week debugging several commits locally before sending them to
the repository, you are at a disadvantage in the number of changed lines
compared to someone who sends theirs immediately and only then fine-tunes with
subsequent commits. Therefore, it might be wise to analyze, perhaps, summaries
for the entire day. It is also necessary to filter out maintenance commits,
especially those that change the year or version in the header of
all files.
Then there are situations where commits are automatically copied from one
branch to another, or to a different repository. This effectively makes it
impossible to conduct any global statistics.
Analyzing one project is science, let alone comparative analysis. This quite
reminds me of the excellent analytical quiz by
Honza Tichý.
Related: How the ‘Hall of Fame’ on nette.org is calculated
Well-maintained software should have quality API documentation.
Certainly. However, just as the absence of documentation is a mistake, so too is
its redundancy. Writing documentation comments, much like designing an API or
user interface, requires thoughtful consideration.
By thoughtful consideration, I do not mean the process that occurred in the
developer's mind when they complemented the constructor with this comment:
class ChildrenIterator
{
/**
* Constructor.
*
* @param array $data
* @return \Zend\Ldap\Node\ChildrenIterator
*/
public function __construct(array $data)
{
$this->data = $data;
}
Six lines that add not a single piece of new information. Instead, they
contribute to:
- visual noise
- duplication of information
- increased code volume
- potential for errors
The absurdity of the mentioned comment may seem obvious, and I'm glad if it
does. Occasionally, I receive pull requests that try to sneak similar rubbish
into the code. Some programmers even use editors that automatically clutter the
code this way. Ouch.
Or consider another example. Think about whether the comment tells you anything
that isn't already clear:
class Zend_Mail_Transport_Smtp extends Zend_Mail_Transport_Abstract
{
/**
* EOL character string used by transport
* @var string
* @access public
*/
public $EOL = "\n";
Except for the @return annotation, the usefulness of this comment can also be
questioned:
class Form
{
/**
* Adds group to the form.
* @param string $caption optional caption
* @param bool $setAsCurrent set this group as current
* @return ControlGroup
*/
public function addGroup($caption = null, $setAsCurrent = true)
If you use expressive method and parameter names (which you should), and they
also have default values or type hints, this comment gives you almost nothing.
It should either be reduced to remove information duplication or expanded to
include more useful information.
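For instance, a reduced version that keeps only the information not expressed by the signature itself might look like this (a sketch, not the actual Nette source):
/**
 * Adds a group to the form.
 * @return ControlGroup
 */
public function addGroup($caption = null, $setAsCurrent = true)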
But beware of the opposite extreme, such as novels in phpDoc:
/**
* Performs operations on ACL rules
*
* The $operation parameter may be either OP_ADD or OP_REMOVE, depending on whether the
* user wants to add or remove a rule, respectively:
*
* OP_ADD specifics:
*
* A rule is added that would allow one or more Roles access to [certain $privileges
* upon] the specified Resource(s).
*
* OP_REMOVE specifics:
*
* The rule is removed only in the context of the given Roles, Resources, and privileges.
* Existing rules to which the remove operation does not apply would remain in the
* ACL.
*
* The $type parameter may be either TYPE_ALLOW or TYPE_DENY, depending on whether the
* rule is intended to allow or deny permission, respectively.
*
* The $roles and $resources parameters may be references to, or the string identifiers for,
* existing Resources/Roles, or they may be passed as arrays of these - mixing string identifiers
* and objects is ok - to indicate the Resources and Roles to which the rule applies. If either
* $roles or $resources is null, then the rule applies to all Roles or all Resources, respectively.
* Both may be null in order to work with the default rule of the ACL.
*
* The $privileges parameter may be used to further specify that the rule applies only
* to certain privileges upon the Resource(s) in question. This may be specified to be a single
* privilege with a string, and multiple privileges may be specified as an array of strings.
*
* If $assert is provided, then its assert() method must return true in order for
* the rule to apply. If $assert is provided with $roles, $resources, and $privileges all
* equal to null, then a rule having a type of:
*
* TYPE_ALLOW will imply a type of TYPE_DENY, and
*
* TYPE_DENY will imply a type of TYPE_ALLOW
*
* when the rule's assertion fails. This is because the ACL needs to provide expected
* behavior when an assertion upon the default ACL rule fails.
*
* @param string $operation
* @param string $type
* @param Zend_Acl_Role_Interface|string|array $roles
* @param Zend_Acl_Resource_Interface|string|array $resources
* @param string|array $privileges
* @param Zend_Acl_Assert_Interface $assert
* @throws Zend_Acl_Exception
* @uses Zend_Acl_Role_Registry::get()
* @uses Zend_Acl::get()
* @return Zend_Acl Provides a fluent interface
*/
public function setRule($operation, $type, $roles = null, $resources = null, $privileges = null,
Zend_Acl_Assert_Interface $assert = null)
Generated API documentation is merely a reference guide, not a book to read
before sleep. Lengthy descriptions truly do not belong here.
The most popular place for expansive documentation is file headers:
<?php
/**
* Zend Framework
*
* LICENSE
*
* This source file is subject to the new BSD license that is bundled
* with this package in the file LICENSE.txt.
* It is also available through the world-wide-web at this URL:
* http://framework.zend.com/license/new-bsd
* If you did not receive a copy of the license and are unable to
* obtain it through the world-wide-web, please send an email
* to license@zend.com so we can send you a copy immediately.
*
* @category Zend
* @package Zend_Db
* @subpackage Adapter
* @copyright Copyright (c) 2005-2012 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
* @version $Id: Abstract.php 25229 2013-01-18 08:17:21Z frosch $
*/
Sometimes it seems the intention is to stretch the header so long that upon
opening the file, the code itself is not visible. What's the use of ten lines
of information about the New BSD license, containing key announcements such as
its availability in the LICENSE.txt file, accessible via the world-wide-web,
and that if you happen to lack modern innovations like a so-called web browser,
you should send an email to license@zend.com and they will send you a copy
immediately? Furthermore, it's redundantly repeated 4,400 times. I tried
sending a request, but the response never came 🙂
Also, including the year in copyright notices leads to a passion for making
commits like update copyright year to 2014, which touch all files and
complicate version comparison.
Is it really necessary to include copyright in every file? From a legal
perspective, it is not required; but since open source licenses allow users to
use parts of the code provided copyright notices are retained, it is
appropriate to include them.
It's also useful to state in each file which product it originates from,
helping people navigate when they encounter it individually. A good
example is:
/**
* Zend Framework (http://framework.zend.com/)
*
* @link http://github.com/zendframework/zf2 for the canonical source repository
* @copyright Copyright (c) 2005-2014 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
*/
Please think carefully about each line and whether it truly benefits the
user. If not, it's rubbish that doesn't belong in the code.
(Please, commentators, do not perceive this article as a battle of
frameworks; it definitely is not.)
Let's create a simple OOP wrapper for encoding and decoding JSON in PHP:
class Json
{
	public static function encode($value)
	{
		$json = json_encode($value);
		if (json_last_error()) {
			throw new JsonException;
		}
		return $json;
	}

	public static function decode($json)
	{
		$value = json_decode($json);
		if (json_last_error()) {
			throw new JsonException;
		}
		return $value;
	}
}

class JsonException extends Exception
{
}
// usage:
$json = Json::encode($arg);
Simple.
But it is very naive. In PHP, there are a ton of bugs (sometimes called
“not-a-bug”) that need workarounds.
json_encode() is (nearly) the only function in all of PHP whose behavior is
affected by the directive display_errors. Yes, JSON encoding is affected by an
error-displaying directive. If you want to detect the error Invalid UTF-8
sequence, you must disable this directive. (#52397, #54109, #63004, not fixed)
json_last_error() returns the last error (if any) that occurred during the last
JSON encoding/decoding. Sometimes! In the case of the error Recursion detected,
it returns 0. You must install your own error handler to catch that error.
(Fixed after years in PHP 5.5.0)
json_last_error() sometimes doesn't return the last error but the last-but-one
error. I.e., json_decode('') with an empty string doesn't clear the last-error
flag, so you cannot rely on the error code. (Fixed in PHP 5.3.7)
json_decode() returns null if the JSON cannot be decoded or if the encoded data
is deeper than the recursion limit. OK, but json_decode('null') returns null
too, so we have the same return value for success and failure. Great!
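To tell the two apart, the input itself must be examined, as in this minimal check:
$value = json_decode($json);
if ($value === null && $json !== 'null') {
	// an error occurred
}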
json_decode() is unable to detect an Invalid UTF-8 sequence in PHP < 5.3.3
or when the PECL implementation is used. You must check it in your own way.
json_last_error() has existed since PHP 5.3.0, so the minimal required version
for our wrapper is PHP 5.3.
json_last_error() returns only a numeric code. If you'd like to throw an
exception with a message, you must create your own table of messages
(json_last_error_msg() was added only in PHP 5.5.0).
So the simple class wrapper for encoding and decoding JSON now looks like this:
class Json
{
	private static $messages = array(
		JSON_ERROR_DEPTH => 'The maximum stack depth has been exceeded',
		JSON_ERROR_STATE_MISMATCH => 'Syntax error, malformed JSON',
		JSON_ERROR_CTRL_CHAR => 'Unexpected control character found',
		JSON_ERROR_SYNTAX => 'Syntax error, malformed JSON',
		5 /*JSON_ERROR_UTF8*/ => 'Invalid UTF-8 sequence',
		6 /*JSON_ERROR_RECURSION*/ => 'Recursion detected',
		7 /*JSON_ERROR_INF_OR_NAN*/ => 'Inf and NaN cannot be JSON encoded',
		8 /*JSON_ERROR_UNSUPPORTED_TYPE*/ => 'Type is not supported',
	);

	public static function encode($value)
	{
		// needed to receive 'Invalid UTF-8 sequence' error; PHP bugs #52397, #54109, #63004
		if (function_exists('ini_set')) { // ini_set is disabled on some hosts :-(
			$old = ini_set('display_errors', 0);
		}

		// needed to receive 'recursion detected' error
		set_error_handler(function($severity, $message) {
			restore_error_handler();
			throw new JsonException($message);
		});

		$json = json_encode($value);
		restore_error_handler();

		if (isset($old)) {
			ini_set('display_errors', $old);
		}
		if ($error = json_last_error()) {
			$message = isset(static::$messages[$error]) ? static::$messages[$error] : 'Unknown error';
			throw new JsonException($message, $error);
		}
		return $json;
	}

	public static function decode($json)
	{
		if (!preg_match('##u', $json)) { // workaround for PHP < 5.3.3 & PECL JSON-C
			throw new JsonException('Invalid UTF-8 sequence', 5);
		}

		$value = json_decode($json);
		if ($value === null
			&& $json !== '' // it doesn't clean json_last_error flag until 5.3.7
			&& $json !== 'null' // in this case null is not failure
		) {
			$error = json_last_error();
			$message = isset(static::$messages[$error]) ? static::$messages[$error] : 'Unknown error';
			throw new JsonException($message, $error);
		}
		return $value;
	}
}
This implementation is used in the Nette Framework. It also contains a
workaround for another bug, this time a bug in JSON itself: JSON is in fact not
a subset of JavaScript, due to the characters \u2028 and \u2029. They must not
be used in JavaScript, so they have to be escaped as well.
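A sketch of such a workaround (the actual Nette code may differ): after encoding, replace the raw UTF-8 forms of the two characters with their escaped variants:
$json = json_encode($value);
// U+2028 and U+2029 are valid inside JSON strings but act as line terminators
// in JavaScript, so escape them:
$json = str_replace(array("\xe2\x80\xa8", "\xe2\x80\xa9"), array('\u2028', '\u2029'), $json);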
(In PHP, detection of errors in JSON encoding/decoding is hell, but it is
nothing compared to detecting errors in the PCRE functions.)
The journey into the heart of the three best-known CSS preprocessors continues,
though not in the way I originally planned.
A CSS preprocessor is a tool that takes code written in its own syntax and
generates CSS for the browser. The most popular preprocessors are SASS, LESS,
and Stylus. We have talked about installation and about syntax + mixins. All
three preprocessors have a fundamentally different conception of mixins.
Each of them has a gallery of ready-made mixins: for SASS there is the
comprehensive Compass, LESS has the Twitter Bootstrap framework or the small
Elements, and Stylus has NIB.
… these were the opening sentences of an article I started writing a year and
a quarter ago and never finished. I came to the conclusion that all three
preprocessors are useless: they require so many compromises that the potential
benefits seem insignificant. Today I will explain why.
There is nothing worse than manually uploading files via FTP,
for example, using Total Commander. (Although, editing files directly on the
server and then desperately trying to synchronize them is even worse.) Once you
fail to automate the process, it consumes much more of your time and increases
the risk of errors, such as forgetting to upload a file.
Today, sophisticated application deployment techniques are used, such as via
Git, but many people still stick to uploading individual files via FTP. For
them, the FTP Deployment tool is designed to automate and simplify the uploading
of applications over FTP.
FTP Deployment is a PHP script that automates the entire process. You simply
specify which local directory (local) to upload to which server (remote). These
details are written into a deployment.ini file, which you can click to launch
the script directly, making deployment a one-click affair:
php deployment deployment.ini
What does the deployment.ini file look like? The remote item is actually the
only required one; all others are optional:
; remote FTP server
remote = ftp://user:secretpassword@ftp.example.com/directory
; you can use ftps:// or sftp:// protocols (sftp requires SSH2 extension)
; do not like to specify user & password in 'remote'? Use these options:
;user = ...
;password = ...
; FTP passive mode
passiveMode = yes
; local path (optional)
local = .
; run in test-mode? (can be enabled by option -t or --test too)
test = no
; files and directories to ignore
ignore = "
.git*
project.pp[jx]
/deployment.*
/log
temp/*
!temp/.htaccess
"
; is allowed to delete remote files? (defaults to yes)
allowDelete = yes
; jobs to run before uploading
before[] = local: lessc assets/combined.less assets/combined.css
before[] = http://example.com/deployment.php?before
; jobs to run after uploading and before uploaded files are renamed
afterUpload[] = http://example.com/deployment.php?afterUpload
; directories to purge after uploading
purge[] = temp/cache
; jobs to run after everything (upload, rename, delete, purge) is done
after[] = remote: unzip api.zip
after[] = remote: chmod 0777 temp/cache ; change permissions
after[] = http://example.com/deployment.php?after
; files to preprocess (defaults to *.js *.css)
preprocess = no
; file which contains hashes of all uploaded files (defaults to .htdeployment)
deploymentFile = .deployment
; default permissions for new files
;filePermissions = 0644
; default permissions for new directories
;dirPermissions = 0755
In test mode (when started with the -t parameter), no files are uploaded to or
deleted from the FTP server, so you can use it to check whether all values are
set correctly.
The ignore item uses the same format as .gitignore:
- log – ignores all log files or directories, even within all subfolders
- /log – ignores the log file or directory in the root directory
- app/log – ignores the log file or directory in the app subfolder of the root directory
- data/* – ignores everything inside the data folder but still creates the folder on FTP
- !data/session – excludes the session file or folder from the previous rule
- project.pp[jx] – ignores project.ppj and project.ppx files or directories
Before the upload starts and after it finishes, you can have scripts called on
your server (see before and after), which can, for instance, switch the server
into a maintenance mode, sending a 503 header.
To ensure that synchronization of a large number of files happens (as far as
possible) transactionally, all files are first uploaded with the .deploytmp
extension and then quickly renamed. Additionally, a .htdeployment file
containing MD5 hashes of all uploaded files is saved on the server and used for
subsequent synchronizations.
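Conceptually (a sketch only, not the tool's actual code), the hash file lets the next run skip unchanged files:
// $previous: path => MD5 hash, as loaded from the .htdeployment file
foreach ($localFiles as $path) { // $localFiles: hypothetical list of local files
	if (!isset($previous[$path]) || $previous[$path] !== md5_file($path)) {
		// the file is new or changed, upload it
	}
}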
On subsequent runs, it uploads only changed files and deletes removed ones
(unless prevented by the allowDelete directive).
Files can be preprocessed before uploading. By default, all .css files are
compressed using Clean-CSS and .js files using Google Closure Compiler. Before
compression, basic Apache mod_include directives are first expanded. For
instance, you can create a combined.js file:
<!--#include file="jquery.js" -->
<!--#include file="jquery.fancybox.js" -->
<!--#include file="main.js" -->
You can request Apache on your local server to assemble this by combining the
three mentioned files as follows:
<FilesMatch "combined\.(js|css)$">
Options +Includes
SetOutputFilter INCLUDES
</FilesMatch>
The server will then upload the files in their combined and compressed form.
Your HTML page will save resources by loading just one JavaScript file.
In the deployment.ini configuration file, you can create multiple sections, or
even make one configuration file for data and another for the application, so
that synchronization is as fast as possible and the fingerprints of a large
number of files are not always recalculated.
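For instance, a sketch with two sections (the section names are arbitrary, and the paths are hypothetical):
[application]
remote = ftp://user:secretpassword@ftp.example.com/app
local = app

[data]
remote = ftp://user:secretpassword@ftp.example.com/data
local = data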
I created the FTP Deployment tool many years ago and it fully covers my
needs for a deployment tool. However, it's important to emphasize that the FTP
protocol, by transmitting the password in plain text, poses a security risk and
you definitely should not use it, for example, on public Wi-Fi.
Few are as keen to emphasize their perceived superiority as
Rails developers. Don't get me wrong, it's a solid marketing strategy.
What's problematic is when you succumb to it to the extent that you see the
rest of the world as mere copycats without a chance to ever catch up. But the
world isn't like that.
Take Dependency Injection, for example. While people in the PHP and
JavaScript communities discovered DI later, Ruby on Rails remains untouched by
it. I was puzzled why a framework with such a progressive image was lagging
behind, and after some digging, I found an answer from various sources on
Google and karmiq, which
states:
Ruby is such a good language that it doesn't need Dependency Injection.
This fascinating argument, moreover, is self-affirming in an elitist
environment. But is it really true? Or is it just blindness caused by pride, the
same blindness that recently led to much-discussed security vulnerabilities
in Rails?
I wondered if perhaps I knew so little about Ruby that I missed some key
aspect, and that it truly is a language that doesn’t need DI. However, the
primary purpose of Dependency
Injection is to clearly pass dependencies so that the code is
understandable and predictable (and thus better testable). But when I look
at the Rails documentation on the “blog in a few minutes” tutorial, I see
something like:
def index
  @posts = Post.all
end
Here, to obtain the blog posts, they use the static method Post.all, which
retrieves a list of articles from somewhere (!). From a database? From a file?
Conjured up? I don't know, because DI isn't used here; instead, it's some kind
of static hell. Ruby is undoubtedly a clever language, but it doesn't
replace DI.
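For contrast, a dependency-injected controller would receive the source of posts explicitly. A minimal sketch in PHP (this blog's home language), with a hypothetical PostRepository and findAll method:
class PostController
{
	private $posts;

	public function __construct(PostRepository $posts)
	{
		// the dependency is explicit, visible, and replaceable in tests
		$this->posts = $posts;
	}

	public function index()
	{
		return $this->posts->findAll(); // no guessing where the articles come from
	}
}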
In Ruby, you can override methods at runtime (monkey patching, similar to
JavaScript), which is a form of Inversion of Control (IoC) that allows
substituting a different implementation of the static method Post.all for
testing purposes. However, this does not replace DI, and it certainly doesn't
make the code clearer; rather the opposite.
Incidentally, I was also struck by the fact that the Post class both represents
a single blog post and functions as a repository (the all method), which
violates the Single Responsibility Principle to the letter.
The justification often cited for why Ruby doesn't need DI refers to the
article LEGOs, Play-Doh, and Programming. I read it thoroughly, noting how the
author occasionally confuses “DI” with a “DI framework” (akin to confusing
“Ruby” with “Ruby on Rails”), and ultimately found that it doesn't conclude
that Ruby doesn't need Dependency Injection. It says that Ruby doesn't need DI
frameworks like those known from Java.
One misinterpreted conclusion, if flattering, can completely bewilder a huge
group of intelligent people. After all, the myth that spinach contains an
extraordinary amount of iron has been persistent since 1870.
Ruby is a very interesting language, and like in any other, it pays to use
DI. There are even DI frameworks available for it. Rails is an intriguing
framework that has yet to discover DI. When it does, it will be a major topic
for some of its future versions.
(After attempting to discuss DI with Karmiq, whom I consider the most
intelligent Railist, I am keeping the comments closed, apologies.)