In the Best
PHP Framework for 2015 survey conducted by SitePoint magazine, Nette
secured an impressive 3rd place. Thank you very much to everyone who
voted; I truly did not expect such a fantastic result.
What I find gratifying is that users seem to be satisfied with Nette,
otherwise, they probably wouldn't have sent their votes. And, of course, the
fact that Nette has thus made itself known in a world where it is less known due
to language barriers.
Another interesting aspect of the results is realizing how many PHP
frameworks are actually used, that there are other popular “local
frameworks,” and that there are still many who do not use any framework
at all.
One of the most interesting parts of Nette, praised even by users of other
frameworks, is the Dependency Injection
Container (hereinafter referred to as Nette DI). See how easily you can use
it anywhere, even outside of Nette.
Let's consider an application for sending newsletters. I've simplified the
code of the individual classes to the core. Here's an object representing
an email:
class Mail
{
public $subject;
public $message;
}
Someone who knows how to send it:
interface Mailer
{
function send(Mail $mail, $to);
}
We add support for logging:
interface Logger
{
function log($message);
}
And finally, a class that manages the distribution of newsletters:
class NewsletterManager
{
private $mailer;
private $logger;
function __construct(Mailer $mailer, Logger $logger)
{
$this->mailer = $mailer;
$this->logger = $logger;
}
function distribute(array $recipients)
{
$mail = new Mail;
...
foreach ($recipients as $recipient) {
$this->mailer->send($mail, $recipient);
}
$this->logger->log(...);
}
}
The code respects Dependency Injection: each class works only with the
dependencies we have passed to it. We are also free to implement
Mailer and Logger in our own way, for
example like this:
class SendMailMailer implements Mailer
{
function send(Mail $mail, $to)
{
mail($to, $mail->subject, $mail->message);
}
}
class FileLogger implements Logger
{
private $file;
function __construct($file)
{
$this->file = $file;
}
function log($message)
{
file_put_contents($this->file, $message . "\n", FILE_APPEND);
}
}
The DI container is the supreme architect that can create individual
objects (referred to in DI terminology as services) and assemble and configure
them precisely according to our needs.
A container for our application could look something like this:
class Container
{
private $logger;
private $mailer;
function getLogger()
{
if (!$this->logger) {
$this->logger = new FileLogger('log.txt');
}
return $this->logger;
}
function getMailer()
{
if (!$this->mailer) {
$this->mailer = new SendMailMailer;
}
return $this->mailer;
}
function createNewsletterManager()
{
return new NewsletterManager($this->getMailer(), $this->getLogger());
}
}
The implementation is written this way so that:
the individual services are created only when they are needed (lazily)
repeated calls to createNewsletterManager always use the same
logger and mailer objects
Create an instance of Container, let it produce a manager, and
you can start spamming users with newsletters:
$container = new Container;
$manager = $container->createNewsletterManager();
$manager->distribute(...);
The essence of Dependency Injection is that no class depends on the
container. Therefore, we can easily replace it with another, perhaps with a
container generated by Nette DI.
Nette DI
Nette DI is indeed a container generator. We instruct it (usually) with
configuration files, and this configuration generates roughly the same thing
as the Container class above:
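The configuration file itself is not reproduced in this text; a minimal sketch of what such a config.neon could contain follows (the service names and exact notation are my reconstruction based on the classes above and Nette DI conventions, so treat it as illustrative):

```neon
services:
	logger: FileLogger('log.txt')
	mailer: SendMailMailer
	manager: NewsletterManager
```

Thanks to autowiring, the container passes the logger and mailer into NewsletterManager's constructor by itself; we don't have to spell the dependencies out.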
A significant advantage is the brevity of the notation. Additionally, we can
add more and more dependencies to individual classes often without needing to
modify the configuration.
Nette DI generates actual PHP container code. It is therefore extremely fast,
and the programmer knows exactly what it does and can even step through it.
The container might have tens
of thousands of lines in the case of large applications, and maintaining
something like that manually would probably not be possible.
Deploying Nette DI into our application is very easy. First, we install it
using Composer (because downloading ZIPs is so outdated):
composer require nette/di
We save the above configuration in a file config.neon and use
the class Nette\DI\ContainerLoader to create the container:
$loader = new Nette\DI\ContainerLoader(__DIR__ . '/temp');
$class = $loader->load(function($compiler) {
$compiler->loadConfig(__DIR__ . '/config.neon');
});
$container = new $class;
and then again let it create the NewsletterManager object and we
can start sending emails:
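The snippet is missing here; with the generated container it could look roughly like the earlier manual example (getByType is a standard Nette DI container method, but take this as a sketch):

```php
$manager = $container->getByType('NewsletterManager');
$manager->distribute($recipients); // $recipients: list of e-mail addresses
```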
But back to ContainerLoader for a moment. Its syntax serves a
single goal: speed. The container is generated once, its code is
written to cache (directory __DIR__ . '/temp'), and for subsequent
requests, it is just loaded from here. Therefore, the loading of the
configuration is placed into a closure in the $loader->load()
method.
During development, it is useful to activate the auto-refresh mode, where the
container is automatically regenerated if any class or configuration file
changes. Just pass true as the second argument in the
ContainerLoader constructor.
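In code, this amounts to (a sketch of the constructor call described above):

```php
// during development: regenerate the container whenever code or config changes
$loader = new Nette\DI\ContainerLoader(__DIR__ . '/temp', true);
```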
As you can see, using Nette DI is definitely not limited to applications
written in Nette; you can deploy it anywhere with just 3 lines of code. Try
playing with it; the whole example is available on GitHub.
It has been three years since Nette 2.0.0 was released. It was
a groundbreaking version that concluded several years of development and
introduced features that are indispensable in Nette development today.
Coincidentally, at that time, major version twos of significant frameworks
such as Zend and Symfony were also released. It's worth mentioning that unlike
these frameworks, Nette did not abandon users of its previous versions. It did
not draw a thick line between versions but instead tried to preserve
compatibility as much as possible. For example, users received a tool that
replaced old class names with new ones in their source codes, etc.
PHP 5.2
The 2.0 series still supported PHP 5.2, including PHP 5.2.0, which was
indeed painful. This version of PHP was one of the less successful, yet Debian
had it pre-installed, and conservative administrators refused to
upgrade it.
Interestingly, since 2010, Nette was written purely in PHP 5.3 with all its
features like namespaces and anonymous functions. The (two) versions for PHP
5.2 were created using a machine converter. This converter not only replaced
class names with non-namespaced variants but also managed to rewrite anonymous
functions and handle various other differences, such as the inability to use
func_get_args() as a function parameter, etc.
Looking back, the most significant contribution of Nette 2.0 was Dependency
Injection. But as the old saying goes:
Dependency Injection is no simple matter. It really isn't. It's a concept
not everyone is well-versed in.
DI replaced the previously used object Service Locator and its static
version, the Environment class, completely overturning the way applications were
designed. It brought a qualitative leap to a new level. Therefore, rewriting an
application that used Environment to Dependency Injection is extremely
challenging, as it essentially means redesigning it better and from scratch.
End of Life
The first day of the year 2014 saw the release of Nette 2.0.14. Yes, it was
a neat coincidence 🙂 This marked the end of the 2.0 series, and the series
entered a one-year phase of critical issues only, where only severe
bugs were fixed. Today, this phase is ending. A few days ago, Nette 2.0.18, the
definitively last version of this series and also the last version for PHP 5.2,
was released.
So farewell and goodbye!
(The 2.1 series now enters the critical issues only phase.)
Composer, the most important tool for
PHP developers, offers three methods to install packages:
1) local: composer require vendor/name
2) global: composer global require vendor/name
3) as a project: composer create-project vendor/name
Local Installation
Local installation is the most common. If I have a project where I want to
use Tracy, I enter in the project's root
directory:
composer require tracy/tracy
Composer will update (or create) the composer.json file and
download Tracy into the vendor subfolder. It also generates an
autoloader, so in the code, I just need to include it and can use Tracy
right away:
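The snippet is not preserved here; concretely it might read (a sketch; Debugger::enable() is Tracy's standard entry point):

```php
require __DIR__ . '/vendor/autoload.php';

Tracy\Debugger::enable();
```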
A completely different situation arises when, instead of a library whose
classes I use in my project, I install a tool that I only run from the
command line.
An example might be ApiGen for generating
clear API documentation. In such cases, the third method is used:
composer create-project apigen/apigen
Composer will create a new folder (and thus a new project)
apigen and download the entire tool and install its
dependencies.
It will have its own composer.json and its own
vendor subfolder.
This method is also used for installations like Nette Sandbox or CodeChecker. However, testing
tools such as Nette Tester or PHPUnit are not installed this way because we use
their classes in tests, calling Tester\Assert::same() or inheriting
from PHPUnit_Framework_TestCase.
Unfortunately, Composer allows tools like ApiGen to be installed using
composer require without even issuing a warning.
This is equivalent to forcing two developers, who don't even know each other
and who work on completely different projects, to share the same
vendor folder. To this one might say:
For heaven's sake, why would they do that?
It just can't work!
Indeed, there is no reasonable reason to do it, it brings no benefit, and it
will stop working the moment there is a conflict of libraries used. It's just a
matter of time, like building a house of cards that will sooner or later
collapse. One project will require library XY in version 1.0, another in version
2.0, and at that point, it will stop working.
Global Installation
The difference between option 1) and 2), i.e., between
composer require and composer global require, is that
it involves not two, but ten different developers and ten unrelated projects.
Thus, it is nonsensical squared.
Because composer global is always a bad solution, there is
no use case where it would be appropriate. Its only advantage is that if you add
the global vendor/bin directory to your PATH, you can easily run
tools installed this way.
Summary
Use composer require vendor/name if you want to use library
classes.
Never use composer global require vendor/name!
Use composer create-project vendor/name for tools called only
from the command line.
Note: npm uses a different philosophy
due to JavaScript's capabilities, installing each library as a “separate
project” with its own vendor (or node_modules)
directory. This prevents version conflicts. In the case of npm,
global installations of tools, like LESS CSS,
are very useful and convenient.
Learning to type using all ten fingers and mastering the
correct finger placement is undoubtedly a great skill. But between us, I've
spent my entire life “pecking” at the keyboard with two fingers, and when
typing, I place far greater importance on something else. And that is the
layout of the keyboard.
The solution is to create your own keyboard layout. I perfected mine about ten
years ago, and it's suitable for programmers, web designers, and
copywriters, containing all the essential typographic tricks like dash, double and single
quotation marks, etc., intuitively placed. Of course, you can customize the
layout further, as described below.
All typographic characters are accessible via the right Alt, or AltGr. The
layout is intuitive:
Czech double quotation marks „ and “: AltGr-< and AltGr->
Czech single quotation marks ‚ and ‘: AltGr-Shift-< and AltGr-Shift->
It's easy and fun. Directly from Microsoft, download the magical and
well-hidden program Microsoft Keyboard
Layout Creator (requires .NET Framework to run).
Upon launching, you'll see an “empty” keyboard, meaning no layout is
defined yet. Starting from scratch isn't ideal, so find the
Load existing keyboard command in the menu and load one of the
standard layouts (like the classic Czech keyboard).
For each key, you can define the character that is typed when the key is
pressed alone and also when combined with modifiers (i.e., Shift,
Ctrl+Alt (right Alt), right Alt+Shift,
Caps Lock, and Shift+Caps Lock). You can also designate a
key as a dead key, meaning the character is typed only after pressing another
key. This is how accents like háček and čárka function on the Czech
keyboard.
The real gem is exporting the finished keyboard layout. The result is a
full-fledged keyboard driver, including an installation program. So, you can
upload your keyboard to the internet and install it on other computers.
The inability to determine whether a call to a native function succeeded or
failed certainly belongs among the top 5 monstrous quirks of PHP. Yes, you read
that right: you call a function, and you don't know whether an error occurred,
let alone what kind it was.
Now you might be smacking your forehead, thinking: surely I can tell by the
return value, right? Hmm…
Return Value
Native (or internal) functions usually return false in case of
failure. There are exceptions, such as
json_decode (http://php.net/manual/en/function.json-decode.php),
which returns null if the input is invalid or exceeds the nesting
limit, as mentioned in the documentation. So far so good.
However, this function decodes JSON values, and null is one of
them: calling json_decode('null') also returns null,
this time as a correct result. We must therefore distinguish null
as a correct result from null as an error:
$res = json_decode($s);
if ($res === null && $s !== 'null') {
// an error occurred
}
It's silly, but thank goodness it's even possible. There are functions,
however, where you can't tell from the return value that an error has occurred.
For example, preg_grep or preg_split return a partial
result, namely an array, and you can't tell anything at all (more in Treacherous Regular
Expressions).
json_last_error & Co.
Functions that report the last error in a particular PHP extension.
Unfortunately, they are often unreliable and it is difficult to determine what
that last error actually was.
For example, json_decode('') does not reset the last error flag,
so json_last_error returns a result not for the last but for some
previous call to json_decode (see How to encode and decode JSON in
PHP?). Similarly, preg_match('invalidexpression', $s) does not
reset preg_last_error. Some errors do not have a code, so they are
not returned at all, etc.
error_get_last
A general function that returns the last error. Unfortunately, it is
extremely complicated to determine whether the error was related to the function
you called. That last error might have been generated by a completely different
function.
One option is to consider error_get_last() only when the return
value indicates an error. Unfortunately, for example, the mail()
function can generate an error even though it returns true. Or
preg_replace may not generate an error at all in case of
failure.
The second option is to reset the “last error” before calling our
function:
@trigger_error('', E_USER_NOTICE); // reset
$file = fopen($path, 'r');
if (error_get_last()['message']) {
// an error occurred
}
The code looks clear: an error can only occur during the call to
fopen(). But that's not the case. If $path is an
object, it will be converted to a string by the __toString method.
If it's the last occurrence, the destructor will also be called. Functions of
URL
wrappers may be called. Etc.
Thus, even a seemingly innocent line can execute a lot of PHP code, which may
generate other errors, the last of which will then be returned by
error_get_last().
We must therefore make sure that the error actually occurred during the call
to fopen:
@trigger_error('', E_USER_NOTICE); // reset
$file = fopen($path, 'r');
$error = error_get_last();
if ($error['message'] && $error['file'] === __FILE__ && $error['line'] === __LINE__ - 3) {
// an error occurred
}
The magic constant 3 is the number of lines between
__LINE__ and the call to fopen. Please no
comments.
In this way, we can detect an error (if the function emits one, which the
aforementioned functions for working with regular expressions usually do not),
but we are unable to suppress it, i.e., prevent it from being logged, etc. Using
the shut-up operator @ is problematic because it conceals
everything, including any further PHP code that is called in connection with our
function (see the mentioned destructors, wrappers, etc.).
Custom Error Handler
The crazy but seemingly only possible way to detect if a certain function
threw an error with the possibility of suppressing it is by installing a custom
error handler using set_error_handler. But it's no joke to
do it
right:
we must also remove the custom handler
we must remove it even if an exception is thrown
we must capture only errors that occurred in the incriminated function
and pass all others to the original handler
The result looks like this:
$prev = set_error_handler(function($severity, $message, $file, $line) use (& $prev) {
if ($file === __FILE__ && $line === __LINE__ + 9) { // magic constant
throw new Exception($message);
} elseif ($prev) { // call the previous user handler
return $prev(...func_get_args());
}
return false; // call the system handler
});
try {
$file = fopen($path, 'r'); // this is the function we care about
} finally {
restore_error_handler();
}
I've responded to many pull requests with “Can you add
tests?” Not because I'm a testophile, or to annoy the person involved.
When you send a pull request that fixes a bug, naturally, you must test it
before submitting to ensure it actually works. Often, one thinks something can
be easily fixed, but lo and behold, it ends up breaking even more. I don’t
want to repeat myself, but by testing it, you created a test, so just attach it.
(Unfortunately, some people really don’t test their code. If it were up to
me, I would give out monthly bans for pull requests made directly in the GitHub
web editor.)
But that's still not the main reason: A test is the only guarantee that
your fix will work in the future.
It has happened many times that someone sent a pull request that wasn’t
useful to me, but altered functionality important to them. Especially if it was
someone I know and I know they are a good programmer, I would merge it.
I understood what they wanted, it didn’t interfere with anything else, so
I accepted the PR and then I put it out of my mind.
If their pull request included a test, then their code still works today and
will continue to work.
If they didn’t add a test, it might easily happen that some other
modification will break it. Not intentionally, it just happens. Or it has
already happened. And there's no point in complaining about how stupid I am
because I broke their code for the third time, even though I accepted their
pull request three years ago—am I supposed to remember that? No, so perhaps
I’m doing it on purpose… I’m not. No one remembers what we had for lunch
three years ago.
If you care about a functionality, attach a test to it. If you don’t
care about it, don’t send it at all.
“Convince me, why should I use Nette
Tester, what makes it better than PHPUnit?” I always feel a bit uneasy
with these questions because I don't feel the need to convince anyone to use
Tester. However, I couldn't do without it.
Testing itself is somewhat of a cursed topic. For five years, at every
conference, testing is repeatedly discussed, yet in almost no company is code
“tested.” I put that in quotes because, in reality, all programmers test
every day, but what they don't do is write tests in PHPUnit, for many reasons.
Truth be told, during that brief period when the Nette Framework was tested with
PHPUnit, I also lost the taste for testing. Yet, testing is as crucial to the
development of a framework as, say, a version control system.
But let's take it step by step. Why do all programmers test, even though
they “don't test”?
Imagine you program a function foobar()
function foobar($x, $y) {
// some calculations here
return $val;
}
The first thing every programmer does is to check if it works:
echo foobar(10, 20);
They run it, it prints 30, which is correct, it seems to work, and maybe they
try a few other inputs.
In other words, they test the function. So, they are testing!
Then, what happens next is that the test script is deleted. And
that's exactly the problem! All the developers' arguments that they don't
have time for testing fall apart at this moment because the reality is that
there is time for testing, and tests are even written, but then the programmers
delete them. The irony.
A test is not just a class in PHPUnit; a test is also this one-line script.
More precisely, the test must also contain information about what the correct
return value should be, so it would look like this:
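The original one-liner is not preserved in this text; using plain assert() it might look like this (the body of foobar here is an illustrative stand-in, since the article deliberately leaves the implementation out):

```php
// an illustrative stand-in for the real implementation
function foobar($x, $y) {
	return $x + $y;
}

// the whole test: a failed assert() emits a warning
assert(foobar(10, 20) === 30);
```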
Sometime in the future, if I change the implementation of the
foobar function, or if a colleague modifies it, all I need to do
is run this script, and if it throws a warning, I know immediately that the
function is broken.
And that's all. This is testing. It's that simple. We all do it,
unfortunately, many of you then delete those tests.
Over time, we accumulate a huge amount of such test scripts and the question
arises on how to run them collectively. It can be solved with some shell script,
but I wrote a PHP script for it. Its significant advantage is that it can run
tests in parallel (I typically run 40 threads), which dramatically speeds up
the testing of the entire set. It also neatly displays where exactly in which
file a failure occurred.
Instead of the PHP function assert
I wrote my own functions (class Assert),
which differ mainly in that they clearly and legibly show what the function
should have returned and what it actually returned, so I can quickly identify
where the problem is.
That launcher, the Assert class, and a few other things make up
the aforementioned Nette Tester. It has reliably tested the Nette Framework for
four years.
When someone reports a bug in the foobar function, saying it
returns an empty string instead of the number -1 for inputs
0 and -1, I start by verifying it:
Assert::same(-1, foobar(0, -1));
I run it, and indeed, it outputs:
Failed: '' should be -1
So, I wrote a failing test. I didn't do it because the manuals about
testing say that a test must fail first, or because I follow TDD, but because
I can't think of anything else to do but simply write a short piece of code
that checks the bug report, i.e., a failing test.
The bug really exists and needs to be fixed. In the IDE, I start stepping
through the code, looking for the issue. (I'll write an article about
programmers who code in notepads, whether they're named TextMate or Sublime,
instead of a full-fledged IDE, and therefore cannot step through code, some
other time.) Yes, I could have found the bug without stepping through by just
staring at the code and placing var_dump, echo, or
console.log, but it would take much longer. I want to emphasize
that stepping and testing are not alternatives but completely different
activities that are great to use together.
I find and fix the bug, Assert::same is satisfied, and I commit
not only the fix to foobar but also
the test file to the repository. Thanks to this, such a mistake will never occur
again in the future. And believe me, bugs tend to repeat themselves, a
phenomenon that even has a name: regression.
This conversation might have seemed very obvious to you. And that's good
because it is obvious, and I want to break down prejudices and fears about
testing. But I still haven't answered the initial question: why don't I use
PHPUnit? Because I can't work with it this straightforwardly.
To test foobar, I would have to write a whole class that
inherits from another class, whose name I can't remember. Well, I would use a
template. PHPUnit does not allow tests to be run in parallel, so testing the
whole set takes much longer. In the case of the Nette Framework, it's about
35 seconds versus 2 minutes, which is a significant difference. Moreover,
tests written in PHPUnit can only be run in PHPUnit, they are not standalone
scripts. So there's no way to write a failing test and then step through it and
easily search for the bug in the mentioned way.
The simplest solution, therefore, was to write my trivial testing tool. Over
four years, it has slowly evolved into a full-fledged tool, which I no longer
develop alone, and thanks to the guys from Oracle, it now has integrated support
in NetBeans 8.0. Since it generates output in
TAP format, there should be no problem integrating it into other tools
either.
I won't convince you to use Nette Tester, but I would like to convince you
not to delete the tests you write 🙂
Václav Novotný has prepared an infographic comparing
developer activity in Nette and Symfony. I'm eager and curious to look at it,
but without an explanation of the metric, the numbers can be treacherously
misleading. Exaggerating a bit: with a certain workflow and naive measurement,
I could appear in the statistics as the author of 100% of the code without
having written a single line.
Even with straightforward workflows, comparing the number of commits is
tricky. Not all commits are equal. If you add five important commits and at the
same time ten people correct typos in your comments, you are, in terms of the
number of commits, the author of one-third of the code. However, this isn't
true; you are the author of the entire code, as corrections of typos are not
usually considered authorship (as we typically perceive it).
In Git, “merge-commits” further complicate matters. If someone prepares
an interesting commit and you approve it (thus creating a merge-commit), you are
credited with half of the commits. But what is the actual contribution? Usually
none, as approval is a matter of one click on GitHub, although sometimes you
might spend more time discussing it than if you had written the code yourself,
but you don't because you need to train developers.
Therefore, instead of the number of commits, it is more appropriate to
analyze their content. The simplest measure is to consider the number of changed
lines. But even this can be misleading: if you create a 100-line class and
someone else merely renames the file with it (or splits it into two), they have
“changed” effectively 200 lines, and again you are the author of
one-third.
If you spend a week debugging several commits locally before sending them to
the repository, you are at a disadvantage in the number of changed lines
compared to someone who sends theirs immediately and only then fine-tunes with
subsequent commits. Therefore, it might be wise to analyze, perhaps, summaries
for the entire day. It is also necessary to filter out maintenance commits,
especially those that change the year or version in the header of
all files.
Then there are situations where commits are automatically copied from one
branch to another, or to a different repository. This effectively makes it
impossible to conduct any global statistics.
Analyzing one project is science, let alone comparative analysis. This quite
reminds me of the excellent analytical quiz by
Honza Tichý.
Well-maintained software should have quality API documentation.
Certainly. However, just as the absence of documentation is a mistake, so too is
its redundancy. Writing documentation comments, much like designing an API or
user interface, requires thoughtful consideration.
By thoughtful consideration, I do not mean the process that occurred in the
developer's mind when they complemented the constructor with this comment:
class ChildrenIterator
{
/**
* Constructor.
*
* @param array $data
* @return \Zend\Ldap\Node\ChildrenIterator
*/
public function __construct(array $data)
{
$this->data = $data;
}
Six lines that add not a single piece of new information. Instead, they
contribute to:
visual noise
duplication of information
increased code volume
potential for errors
The absurdity of the mentioned comment may seem obvious, and I'm glad if it
does. Occasionally, I receive pull requests that try to sneak similar rubbish
into the code. Some programmers even use editors that automatically clutter the
code this way. Ouch.
Or consider another example. Think about whether the comment told you
anything that wasn't already clear:
class Zend_Mail_Transport_Smtp extends Zend_Mail_Transport_Abstract
{
/**
* EOL character string used by transport
* @var string
* @access public
*/
public $EOL = "\n";
Except for the @return annotation, the usefulness of this
comment can also be questioned:
class Form
{
/**
* Adds group to the form.
* @param string $caption optional caption
* @param bool $setAsCurrent set this group as current
* @return ControlGroup
*/
public function addGroup($caption = null, $setAsCurrent = true)
If you use expressive method and parameter names (which you should), and they
also have default values or type hints, this comment gives you almost nothing.
It should either be reduced to remove information duplication or expanded to
include more useful information.
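For instance, a reduced version keeping only what the signature cannot express might read (my sketch, not the framework's actual code):

```php
/**
 * Adds a group to the form; the group becomes current unless $setAsCurrent is false.
 * @return ControlGroup
 */
public function addGroup($caption = null, $setAsCurrent = true)
```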
But beware of the opposite extreme, such as novels in phpDoc:
/**
* Performs operations on ACL rules
*
* The $operation parameter may be either OP_ADD or OP_REMOVE, depending on whether the
* user wants to add or remove a rule, respectively:
*
* OP_ADD specifics:
*
* A rule is added that would allow one or more Roles access to [certain $privileges
* upon] the specified Resource(s).
*
* OP_REMOVE specifics:
*
* The rule is removed only in the context of the given Roles, Resources, and privileges.
* Existing rules to which the remove operation does not apply would remain in the
* ACL.
*
* The $type parameter may be either TYPE_ALLOW or TYPE_DENY, depending on whether the
* rule is intended to allow or deny permission, respectively.
*
* The $roles and $resources parameters may be references to, or the string identifiers for,
* existing Resources/Roles, or they may be passed as arrays of these - mixing string identifiers
* and objects is ok - to indicate the Resources and Roles to which the rule applies. If either
* $roles or $resources is null, then the rule applies to all Roles or all Resources, respectively.
* Both may be null in order to work with the default rule of the ACL.
*
* The $privileges parameter may be used to further specify that the rule applies only
* to certain privileges upon the Resource(s) in question. This may be specified to be a single
* privilege with a string, and multiple privileges may be specified as an array of strings.
*
* If $assert is provided, then its assert() method must return true in order for
* the rule to apply. If $assert is provided with $roles, $resources, and $privileges all
* equal to null, then a rule having a type of:
*
* TYPE_ALLOW will imply a type of TYPE_DENY, and
*
* TYPE_DENY will imply a type of TYPE_ALLOW
*
* when the rule's assertion fails. This is because the ACL needs to provide expected
* behavior when an assertion upon the default ACL rule fails.
*
* @param string $operation
* @param string $type
* @param Zend_Acl_Role_Interface|string|array $roles
* @param Zend_Acl_Resource_Interface|string|array $resources
* @param string|array $privileges
* @param Zend_Acl_Assert_Interface $assert
* @throws Zend_Acl_Exception
* @uses Zend_Acl_Role_Registry::get()
* @uses Zend_Acl::get()
* @return Zend_Acl Provides a fluent interface
*/
public function setRule($operation, $type, $roles = null, $resources = null, $privileges = null,
Zend_Acl_Assert_Interface $assert = null)
Generated API documentation is merely a reference guide, not a book to read
before sleep. Lengthy descriptions truly do not belong here.
The most popular place for expansive documentation is file headers:
<?php
/**
* Zend Framework
*
* LICENSE
*
* This source file is subject to the new BSD license that is bundled
* with this package in the file LICENSE.txt.
* It is also available through the world-wide-web at this URL:
* http://framework.zend.com/license/new-bsd
* If you did not receive a copy of the license and are unable to
* obtain it through the world-wide-web, please send an email
* to license@zend.com so we can send you a copy immediately.
*
* @category Zend
* @package Zend_Db
* @subpackage Adapter
* @copyright Copyright (c) 2005-2012 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
* @version $Id: Abstract.php 25229 2013-01-18 08:17:21Z frosch $
*/
Sometimes it seems the intention is to stretch the header so long that upon
opening the file, the code itself is not visible. What's the use of a 10-line
information about the New BSD license, which contains key announcements like its
availability in the LICENSE.txt file, accessible via the
world-wide-web, and if you happen to lack modern innovations like a so-called
web browser, you should send an email to license@zend.com, and they
will send it to you immediately? Furthermore, it's redundantly repeated
4,400 times. I tried sending a request, but the response did not
come 🙂
Also, including the year in copyright notices invites commits like
update copyright year to 2014, which change all
files and complicate version comparison.
Is it really necessary to include copyright in every file? From a legal
perspective, it is not required, but if open source licenses allow users to use
parts of the code while retaining copyrights, it is appropriate to include them.
It's also useful to state in each file which product it originates from,
helping people navigate when they encounter it individually. A good
example is:
/**
* Zend Framework (http://framework.zend.com/)
*
* @link http://github.com/zendframework/zf2 for the canonical source repository
* @copyright Copyright (c) 2005-2014 Zend Technologies USA Inc. (http://www.zend.com)
* @license http://framework.zend.com/license/new-bsd New BSD License
*/
Please think carefully about each line and whether it truly benefits the
user. If not, it's rubbish that doesn't belong in the code.
(Please, commentators, do not perceive this article as a battle of
frameworks; it definitely is not.)