Feb 10 2014

Amber Williams | Human Resources Assistant


1. What's your role at Mediacurrent, both internally and client-related?

I am a "Drupal Developer", which means I take care of the nuts and bolts of building and maintaining Drupal sites.

2. Give us an idea of what professional path brought you here.

I went to school for Computer Science and worked in IT for Georgia Tech. I tried leaving the IT world a few times but kept getting drawn back!

3. How did you first get involved with Drupal?

Some colleagues showed me how they were using Drupal to maintain their department's websites, and I jumped on board immediately. I had inherited a site that used Adobe Contribute to provide a WYSIWYG editing experience.

4. Is there a go-to Drupal module that you like to incorporate whenever possible?

Reroute Email: Especially on Commerce sites, I like to combine this with a small custom module that detects the environment (local/dev/test/prod) and forces emails to redirect (or not) appropriately. That way you can test submitting orders and any other tasks that generate emails without fear.
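
For the curious, here's a minimal sketch of what that environment-detection glue can look like in Drupal 7. The module name, detection logic, and email address are hypothetical, and the reroute_email_* variables are the ones the Drupal 7 Reroute Email module reads -- treat it as an illustration, not the actual module described above.

<?php
/**
 * Implements hook_init().
 *
 * Force Reroute Email on in every environment except production.
 */
function mymodule_init() {
  // On Pantheon, $_ENV['PANTHEON_ENVIRONMENT'] holds dev/test/live;
  // default to 'local' when it is absent.
  $env = isset($_ENV['PANTHEON_ENVIRONMENT']) ? $_ENV['PANTHEON_ENVIRONMENT'] : 'local';
  if ($env !== 'live') {
    $GLOBALS['conf']['reroute_email_enable'] = 1;
    $GLOBALS['conf']['reroute_email_address'] = 'dev-team@example.com';
  }
}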


5. Do you have any recommended reading or books that you would suggest to others?

6. Any mobile apps that you use regularly?

Fun fact: I don't have a phone.

7. What do you like to do in your spare time?

Watch movies and play video games. Movie-wise, I have kind of strange taste. I almost never enjoy Hollywood flicks; I walk out feeling cheap and manipulated. I love foreign movies, especially deadpan French films. I recently saw "Her" and it blew my mind. As for video games, I'm a big fan of single-player story-driven games, like the ones from Telltale Games (Fables and The Walking Dead). Also, I just started playing StarCraft II and have enjoyed crushing Zerg AI...I just moved up from "Very Easy" to "Easy" wutwut.

8. What do you like about Mediacurrent?

The opportunity for growth! I get to work on so many different types of sites and with so many different types of people (both colleagues and clients) that I feel continuously challenged.

xjm
Feb 08 2014

What's new with Drupal 8?

It's been a remarkable couple of weeks for Drupal 8, with several landmark changes, a global sprint, and a surge in core issue queue activity.

Global sprint weekend

The second Global Sprint Weekend was held January 25-26. Over 400 sprinters participated at 39 locations on six continents, with others participating remotely in IRC. Some quick core issue queue statistics from the sprint timeframe:

  • 80 new Drupal 8 core issues created
  • 553 Drupal 8 patches submitted
  • 2468 comments posted on 646 Drupal 8 issues
  • 113 Drupal 8 issues RTBCed

A huge thanks to everyone who participated, and especially to the sprint organizers and mentors who helped make it happen.

Removal of the variable subsystem

A tombstone for variable_get(), variable_set(), variable_delete(), and the {variable} table.
Right on the heels of the Global Sprint Weekend, the last patch to convert variables to config or state was committed, and within a day the old variable subsystem was removed. This was the culmination of a year and a half of work by more than 80 contributors, and an incredible milestone for the Configuration Management Initiative.

Removal of the 7.x to 8.x upgrade path

Now that hook_update_N() implementations will no longer be added for data model changes from Drupal 7, core patch contributors should keep an eye out for patches that might require migration updates instead. For details, read: No more 7.x to 8.x hook_update_N() -- file Migrate issues instead.

Change record drafts

It's now possible to create drafts of API change records, and a draft change record will be required before any API change is committed starting February 14. More information on the new feature and change record process: Change records now needed before commit.

On January 31, in preparation for this change, core contributors reduced the missing change record count from 40 to 20 in 24 hours. We actually halved this long-outstanding documentation debt within a single day. Amazing work!

Theme system conversions

Core theme system contributors have been busy the past several weeks, converting numerous theme functions to Twig and removing all calls to theme() outside drupal_render() (and some automated tests). This important theme system cleanup has been ongoing for more than seven months and blocks a beta release.

Where's Drupal 8 at in terms of release?

Last week, we fixed 14 critical issues and 24 major issues, and opened 5 criticals and 16 majors. That puts us overall at 132 release-blocking critical issues and 473 major issues.

11 beta-blocking issues were fixed last week. There are still 51 of 115 beta blockers that must be resolved and 12 change records that must be written before we can release a Drupal 8 beta.

Here's a quick look at our progress on criticals and beta blockers in January:

A graph showing the number of critical issues posted and fixed each month since September.
We tied our previous record of 48 criticals fixed within a single month, but this time while posting fewer new ones than that. ;) Great work!

A graph showing the issue counts for outstanding and fixed beta blockers week to week in January, as well as beta targets and change records.
We fixed a grand total of 37 beta blockers in January, putting us past the halfway point for the beta! We also made great progress on cleaning up the API documentation debt of our outstanding change records -- from over 50 at the start of the month to 19 at the end (and just 12 as of today)! That said, we also identified 20-odd additional beta-blocking issues over the course of the month, so it's important to keep our focus on these top-priority issues.

Where can I help?

Top criticals to hit this week

Each week, we check with core maintainers and contributors for the "extra critical" criticals that are blocking other work. These issues are often tough problems with a long history. If you're familiar with the problem space of one of these issues and have the time to dig in, help drive it forward by reviewing, improving, and testing its patch, and by making sure the issue's summary is up to date and any API changes are documented with a draft change record.

More ways to help

Notable Commits

The best of git log --after=2014-01-24 --pretty=oneline (191 commits in total):

  • Issue #2099741 by Wim Leers, wwalc, mr.baileys, eaton, dstol, nod_, effulgentsia: Protect WYSIWYG Editors from XSS Without Destroying User Data.
  • Issue #2183923 by tim.plunkett: Break the circular dependency in EntityManager.
  • Issue #2157053 by alexpott, twistor, dawehner, sun: Ensure register_shutdown_function() works with php-fpm (blocks testbot php-fpm).
  • Issue #1939064 by joelpittet, pwieck, farrington, mark.labrecque, Cottser, InternetDevels, mdrummond, drupalninja99, BarisW, jenlampton: Convert theme_links() to Twig.
  • Issue #1939062 by steveoliver, mdrummond, jenlampton, hussainweb, Cottser, joelpittet, jerdavis, ekl1773, dale42, drupalninja99, gabesullice, c4rl: Convert theme_item_list() to Twig.
  • Issue #2168011 by xjm, jessebeach, Damien Tournoud, znerol, Xano: Remove all 7.x to 8.x update hooks and disallow updates from the previous major version.
  • Issue #2167641 by tim.plunkett: EntityInterface::uri() should use route name and not path.
  • Issue #2164827 by Berdir, Xano, tim.plunkett: Rename the entityInfo() and entityType() methods on EntityInterface and EntityStorageControllerInterface.
  • Issue #2167623 by danilenko_dn, sidharthap, Nitesh Sethia, krishnan.n, aitiba, alexpott, ashwinikumar, Barrett, damiankloip, deepakaryan1988, foxtrotcharlie, ianthomas_uk, neetu morwani, nonsie, piyuesh23, Sharique, sivaji, sushantpaste, swentel, vijaycs85, YesCT: Add test for all default configuration to ensure schema exists and is correct.
  • Issue #2177739 by Berdir, alexpott, Gábor Hojtsy: Fix inefficient config factory caching.
  • Issue #2047633 by pwolanin, dawehner, kim.pepper, Xano, amateescu, tim.plunkett: Move definition of menu links to hook_menu_link_defaults(), decouple key name from path, and make 'parent' explicit.
  • Issue #2164367 by alexpott, tim.plunkett, dawehner: Rebuild router as few times as possible per request.
  • Issue #2167109 by Berdir, sun, alexpott, ACF, acrollet, adamdicarlo, Albert Volkman, andreiashu, andyceo, andypost, anenkov, aspilicious, barbun, beejeebus, boombatower, cam8001, chriscalip, chx, cosmicdreams, dagmar, damiankloip, dawehner, deviance, disasm, dixon_, dstol, ebrowet, Gábor Hojtsy, heyrocker, Hydra, ianthomas_uk, japicoder, jcisio, jibran, julien, justafish, jvns, KarenS, kbasarab, kim.pepper, larowlan, Lars Toomre, leschekfm, Letharion, LinL, lirantal, Lukas von Blarer, marcingy, Mike Wacker, mrf, mtift, mtunay, n3or, nadavoid, nick_schuch, Niklas Fiekas, ParisLiakos, pcambra, penyaskito, pfrenssen, plopesc, Pol, Rok Žlender, rvilar, swentel, tim.plunkett, tobiasb, tsvenson, typhonius, vasi1186, vijaycs85, wamilton, webchick, webflo, wizonesolutions, xjm, yched, YesCT, znerol: Remove Variable subsystem.

You can also always check the Change records for Drupal core for the full list of Drupal 8 API changes from Drupal 7.

Drupal 8 Around the Interwebs

Blog posts about Drupal 8 and how much it's going to rock your face.

Drupal 8 in "Real Life"

  • Feb. 14 - 17: Drupal South in Wellington, New Zealand features a keynote by Larry Garfield of WSCCI fame, larowlan and kim.pepper answering everything you wanted to know about Drupal 8 but were afraid to ask, as well as sessions on Twig, Tour, and more!
  • Feb. 28 - Mar 2: Two events happening simultaneously, DrupalCamp Phoenix and DrupalCamp London have some nice Drupal 8 session proposals, including CMI, multilingual, and more!
  • Mar. 24 - 30: Drupal Developer Days Szeged is going to be the Drupal 8 event of the next months, with a full week of sprinting awesomeness and lots of D8 content. See Five good reasons to register for Drupal Dev Days Szeged now by Gábor Hojtsy for more details.
  • Mar. 28 - 30: If you'd like to collaborate with DevDays Szeged sprinters, but are looking for something in the western hemisphere, check out MidCamp. MidCamp is March 28-30, and there may also be a pre-sprint March 26-27. Contact ZenDoodles for more information.

Whew! That's a wrap!

Do you follow Drupal Planet with devotion, or keep a close eye on the Drupal event calendar, or git pull origin 8.x every morning without fail before your coffee? We're looking for more contributors to help compile these posts. You could either take a few hours once every six weeks or so to put together a whole post, or help with one section more regularly. Contact xjm if you'd like to help communicate all the interesting happenings in Drupal 8!

Feb 08 2014

In the earlier article Kalatheme in Kalabox on Pantheon for a minute about time, we pulled a Kalatheme-based sub-theme down into a recently installed Kalabox on our laptop, so we could run it locally and work on the project using Eclipse or any other IDE.

In this article we explore a simple but realistic Git-based workflow for Multidev and non-Multidev topic branches of a Pantheon dev project.

Branches and environments

When you are going to make a change, make a branch.

$ git status
# On branch adevbranch
nothing to commit (working directory clean)
$ 
$ git checkout -b mytinycontribution
…
mess around
…
works well!
...
$ git commit -am "My tiny contribution makes a big difference. Oh, and downloaded views"
… 
$ git checkout adevbranch
$ git merge mytinycontribution
$ git push origin adevbranch

Cool!

Now, an environment in Drupal is the actual running site, meaning versioned code + database + files.

If you are working with everything in code, and you should be, the database and the files basically constitute content plus superficial state (cached images, CSS, JavaScript). But you need them to actually see what your commit has done. Hence, "environment".

The beauty of multidev on Pantheon, for example, is that you are given a full environment for each topic branch on your git workflow.

For more on branches, see References 1.2

For more on multidev, see References 2.5 and resources on Pantheon site.

Making changes and pushing back to a Multidev environment

So however you got it, you change directory into your newly cloned repo, do your changes, commit them, and push back.

$ cd multidevbranch
$ git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/multidevbranch02
  remotes/origin/master
  remotes/origin/master_before_restore_to_nnnnnnnn
  remotes/origin/multidevbranch
$ git checkout multidevbranch
Branch multidevbranch set up to track remote branch multidevbranch from origin.
Switched to a new branch 'multidevbranch'

So now, conscious of the fact that you are not on “master” and not going to screw anything up, you make your changes, test them in your local environment, then if happy:

$ git commit -am "done it"
$ git push origin multidevbranch

Now if you're really getting confident, and someone has approved the fruit of your efforts, perhaps you'd like to actually merge into master (the Pantheon 'dev' environment):

$ git checkout master   # switch to the dev branch
$ git pull origin master # did anyone else commit anything while I was working? If so, fetch it and merge it into local master
$ git merge origin/multidevbranch # merge in the work you just pushed to Pantheon
$ git push origin master  # push it to the Pantheon dev environment

Coming into work and keeping your Multidev branch up-to-date

At the risk of redundancy, here is what you do on any morning, actually; also works for after lunch, or getting home and wanting to do something after dinner:

$ cd multidevbranch
$ git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/multidevbranch02
  remotes/origin/master
  remotes/origin/master_before_restore_to_nnnnnnnn
  remotes/origin/multidevbranch
$ git pull origin master
$ git checkout multidevbranch
$ git merge origin/master
$ git push origin multidevbranch ## bring your Multidev environment up-to-date!

Keeping your local Kalabox environment up-to-date

Now that the local branch is up-to-date and useful into the bargain, what happens if others have added files, etc.? You've got to keep your local environment up-to-date too. For VPS or non-Kalabox situations, just download and untar and/or use drush.

For Kalabox, we have terminatur (see References 4.1), a Kalamuna creation, included in the Kalabox setup.

To grab database and files from dev, then download to kalabox:

1. Go to multidevbranch Workflow
2. Clone from dev Environment (database and files and run update.php all checked)
3. Hit button "Clone the Database & Files from Dev..."

4. Use terminatur within Kalabox to refresh local environment with the database and files

$ drush help pulldata
Pulls down the database for a Pantheon site.

Examples:
 drush pulldata sitename.dev               Pulls down the database for a site 
                                           at @terminatur.mysite.dev.         
Arguments:
 sitename                                  The sitename.dev part of  
                                           @terminatur.sitename.dev. 

$ drush help pullfiles
Pulls down the files for a Pantheon site.

Examples:
 drush pullfiles sitename.dev              Pulls down the files for a site at 
                                           @terminatur.mysite.dev.            
Arguments:
 sitename                                  The sitename.dev part of  
                                           @terminatur.sitename.dev. 
Options:
 --destination=</var/www/>                 The destination of your webroot. 

$ drush ta          # refresh aliases usable by terminatur
$ drush sa
...
...
@terminatur.multidevbranch.dev
@terminatur.multidevbranch02.dev

So the sitename parameter will be: multidevbranch.dev
And the commands will be:

$ drush pullfiles multidevbranch.dev
$ drush pulldata multidevbranch.dev
$ drush cc all  ## Did you remember to clear cache?

Results:

vagrant@kala:/var/www/multidevbranch$ drush pullfiles multidevbranch.dev

Downloading files... [warning]

Files downloaded. [success]

vagrant@kala:/var/www/multidevbranch$ drush pulldata multidevbranch.dev

How do you want to download your database?

[0] : Cancel

[1] : Get it from the latest Pantheon backup.

[2] : Create a new Pantheon backup and download it.

[3] : Pick one from a list of Pantheon backups.

2

Creating database backup... [warning]

Database backup complete! [success]

Downloading data... [warning]

Data downloaded. [success]

vagrant@kala:/var/www/multidevbranch$

References

  1. Git

    1. Kalabox

      1. This Kalamuna article, Power to the People, explains that Kalabox is built upon a powerful Vagrant-driven stack, Kalastack

      2. This Kalamuna article, Ride the Hydra: Reduce Complexity, introduces the three goals for reducing Drupal workflow complexity:

        1. Developers must use a standardized local development platform. (Kalabox)

        2. Deployment (moving code between local, staging, and production environments) must be automated. (Pantheon)

        3. Development must be transparent to site owners and team members alike. (Pantheon workflow, including branches coupled with complete environments (code+db+files), i.e. Multidev. My own take: of course you can do "branching on the cheap" and just use Kalabox for that, or your own VPS server; mix with an additional GitHub remote!)

    2. Kalatheme

      1. Bootswatch bootstrap themes

      2. Wrapbootstrap example of paid bootstrap themes

    3. terminatur

Feb 08 2014

        A few days ago, while I was writing a bit of Silex code and grumbling at Doctrine DBAL's lack of support for a SQL Merge operation, I wondered if it wouldn't be possible to use DBTNG without the rest of Drupal.

Obviously, although DBTNG is described as having been designed for standalone use ("DBTNG should be a stand-alone library with no external dependencies other than PHP 5.2 and the PDO database library"), in actual use the GitHub DBTNG repo has seen no commit in the last 3 years, and the D8 version is still not a Drupal 8 "Component" (i.e. decoupled code), but still a plain library with Drupal dependencies. How would it fare on its own? Let's give it a try...

        Bring DBTNG to a non-Drupal Composer project

Since Composer does not support sparse checkouts (yet?), the simplest way to bring in DBTNG for this little test is to just import the code manually and declare it to the autoloader manually. Let's start by getting just DBTNG out of the latest Drupal 8 checkout:

        # Create a fresh repository to hold the example
        mkdir dbtng_example
        cd dbtng_example
        git init
        
        # Declare a remote
        git remote add drupal http://git.drupal.org/project/drupal.git
        
        # Enable sparse checkouts to just checkout DBTNG
        git config core.sparsecheckout true
        
        # Declare just what we want to checkout:
        # 1. The DBTNG classes
        echo core/lib/Drupal/Core/Database >> .git/info/sparse-checkout
        # 2. The procedural wrappers, for simplicity in an example
        echo core/includes/database.inc >> .git/info/sparse-checkout
        
        # And checkout DBTNG
        git pull drupal 8.x
        
        ls -l core/
        total 8
        drwxrwxr-x 2 marand www-data 4096 févr.  8 09:32 includes
        drwxrwxr-x 3 marand www-data 4096 févr.  8 09:32 lib
        

That's it: DBTNG classes are now available in core/lib/Drupal/Core/Database. We can now build a Composer file with PSR-4 autoloading on that code:

{
  "autoload": {
    "psr-4": {
      "Drupal\\Core\\Database\\": "core/lib/Drupal/Core/Database/"
    }
  },
  "description": "Drupal Database API as a standalone component",
  "license": "GPL-2.0+"
}
        

        We can now build the autoloader:

        php composer.phar install
        

        Build the demo app

        For this example, we can use the traditional settings.php configuration for DBTNG, say we store it in app/config/settings.php and point to a typical Drupal 8 MySQL single-server database:

<?php
// app/config/settings.php
$databases['default']['default'] = array(
  'database' => 'somedrupal8db',
  'username' => 'someuser',
  'password' => 'somepass',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);

At this point our dependencies are ready, so let's build a small "Hello, world" in app/hellodbtng.php. Since this is just an example, we will just list a table using the DBTNG Select query builder:

<?php
// app/hellodbtng.php

// Bring in the Composer autoloader.
require_once __DIR__ . '/../vendor/autoload.php';
// Bring in the procedural wrappers.
require_once __DIR__ . '/../core/includes/database.inc';
// Finally load DBTNG configuration.
require_once __DIR__ . '/config/settings.php';

$columns = array(
  'collection',
  'name',
  'value',
);

// DBTNG FTW !
$result = db_select('key_value', 'kv')
  ->fields('kv', $columns)
  ->condition('kv.collection', 'system.schema')
  ->range(0, 10)
  ->execute();

foreach ($result as $v) {
  $v = (array) $v;
  $value = print_r(unserialize($v['value']), TRUE);
  printf("%-32s %-32s %s\n", $v['collection'], $v['name'], $value);
}

        Enjoy the query results

        php app/hellodbtng.php
        
        system.schema                    block                            8000
        system.schema                    breakpoint                       8000
        system.schema                    ckeditor                         8000
        system.schema                    color                            8000
        system.schema                    comment                          8000
        system.schema                    config                           8000
        system.schema                    contact                          8000
        system.schema                    contextual                       8000
        system.schema                    custom_block                     8000
        system.schema                    datetime                         8000
        

        Going further

        In real life:

        • a production project would hopefully not be built like this, by manually extracting files from a repo
• ... and it would probably not use the procedural wrappers, but wrap DBTNG in a service and pass it configuration using a DIC (see the sketch after this list)
        • I seem to remember a discussion in which the full decoupling of DBTNG for D8 was considered but postponed as nice-to-have rather than essential for Drupal 8.0.
        • Which means that a simple integration would probably either
          • use the currently available (but obsolete) pre-7.0 version straight from Github (since that package is not available on Packagist, just declare it directly in composer.json as explained on http://www.craftitonline.com/2012/03/how-to-use-composer-without-packagi... ),
          • or (better) do the required work to decouple DBTNG from D8 core and submit a core patch for that decoupled version, and use it from the newly-independent DBTNG Component.
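
As a rough illustration of the service idea a few bullets up, here is what registering the connection in a Silex/Pimple container might look like. This is a sketch only: the 'dbtng' service name is made up, and $databases is the array from the settings file above.

<?php
use Drupal\Core\Database\Database;

// Register the DBTNG connection as a shared service instead of
// relying on the procedural wrappers.
$app['dbtng'] = $app->share(function () use ($databases) {
  Database::addConnectionInfo('default', 'default', $databases['default']['default']);
  return Database::getConnection('default', 'default');
});

// Anywhere else in the application:
$rows = $app['dbtng']->select('key_value', 'kv')
  ->fields('kv', array('collection', 'name'))
  ->execute()
  ->fetchAll();
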
Feb 06 2014

I recently worked on porting a website over to Drupal that had several dynamic elements throughout the site depending on the IP address of the user. Different content could be shown depending on whether the user was within a local network, a larger local network, or completely outside the network.

When porting the site over, I realized that it wouldn't be possible to enable page caching for any page that had this dynamic content on it. In Drupal, standard page caching is all or nothing. If you have it enabled and a page is "eligible" to be cached, Drupal saves the entire output of the page and uses it for future requests for the same page (I go into much more detail about page caching in a previous blog post). In my case, if I enabled it, users within one of the local intranets could trigger a page cache set, and any users outside the intranet would then view that same content.

I wanted a solution that let me either differentiate cache entries by visitor "type" (but not role), or at least prevent Drupal from serving cached pages to some of the visitors when a cached page already existed. I found a solution for the latter that I'll describe below. But first...

        Why this is a hard problem

        I already knew I could prevent Drupal from generating a page cache entry using drupal_page_is_cacheable(FALSE);. In fact, there's a popular yet very simple module called Cache Exclude that uses this function and provides an admin interface to specify which pages you want to prevent from being cached.

But what if you wanted to cache the pages, but force some visitors to view the un-cached version? This is what I needed, but Drupal has no API functions to do this. Many Drupal developers know that hook_boot is run on every page request, even for cache hits. So why can't you implement the hook and tell Drupal you don't want to serve a cached page? The reason is the way Drupal bootstraps, and when it determines whether it should return a cached page or not.

There's a whole bootstrap "phase" dedicated to serving a cached page, called DRUPAL_BOOTSTRAP_PAGE_CACHE. If you take a close look, you can see that Drupal doesn't invoke the boot hook until after it has already determined it's going to serve a cached page. In other words, there's no going back at this point.
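
To see why, here is the relevant logic of that phase, paraphrased rather than copied verbatim from Drupal 7 core:

<?php
// Paraphrased sketch of _drupal_bootstrap_page_cache() in Drupal 7.
// The decision to serve from cache is made before hook_boot() runs,
// so a hook_boot() implementation cannot veto it.
$cache = drupal_page_get_cache();
if (is_object($cache)) {
  // At this point Drupal has already committed to serving the cache.
  if (variable_get('page_cache_invoke_hooks', TRUE)) {
    bootstrap_invoke_all('boot');  // hook_boot() fires here -- too late.
  }
  drupal_serve_page_from_cache($cache);
  exit;
}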

        Enter the "Dynamic Cache" module

I came across the Dynamic Cache module that seemed to solve this problem. Once enabled, this module lets you disable serving a cached page by setting $GLOBALS['conf']['cache'] = false; within your own module's hook_boot implementation - exactly what I suggested was not possible above!

So how was Dynamic Cache doing this? In summary, Dynamic Cache implements hook_boot, checks if you tried to disable serving the cached page, and if so will "hijack" the bootstrap process to render the whole page and ignore the page cache entry that may exist. It then makes sure to "finish up" the request by completing the bootstrap process itself and calling menu_execute_active_handler(), which is normally done in index.php (but no longer gets executed because of the hijack).

        I want to note that what Dynamic Cache is doing is pretty scary in that it's almost hacking core without actually modifying any core functions. This fear is actually what triggered me to explore how the Drupal bootstrap process works under the hood so I could understand if there'd be any potential issues.

It's not an easy concept to understand initially, especially since for Drupal 7 you have to enable a second module called "Dynamic Cache Bootfix" that hijacks the bootstrap process a second time to properly finish up the request! I don't want to go into much more detail, but the module's code is pretty slim and I encourage developers to take a look. It will help you get a greater understanding of the bootstrap process and the obstacles this module tries to overcome.

There's also a core issue that is trying to address this problem of not being able to easily disable a cached page from being served. I also encourage you to read through that to get a better understanding of what the problems are.

        How I implemented it

        In my case, I found that the majority of traffic to the site was from users outside any of the intranets, so I decided to allow them to both trigger cache entries being generated and to be served those cached page entries. For everyone else (a small % of traffic), Drupal would always ignore whatever was in the cache for that page and would also not generate a cache entry:

        function my_module_boot() {
          $location = _my_module_visitor_network();
          if ($location != 'world') {
            # Prevent Drupal from serving a cached page thanks to help from the Dynamic Cache module
            $GLOBALS['conf']['cache'] = false;
            # Prevent Drupal from generating a cached page (standard Drupal function)
            drupal_page_is_cacheable(FALSE);
          }
        }
        

        Note that Dynamic Cache relies on having a heavy module weight so it runs last - which allows me to disable the cache in my own hook_boot. Make sure you read the README that comes with the module so you set everything up properly.

Also note that I still called drupal_page_is_cacheable(FALSE);. Without this, Drupal may still generate a cached page based on what this user saw. With my code in place, anonymous users outside the networks I was checking would both generate page cache entries and be served page cache entries. Anonymous users within the networks/intranets would never trigger a cache generation and would never be served a cached page.

        Final Thoughts

Ideally, I would be able to generate separate page caches for each "type" of visitor I had. I think this is possible by creating your own cache store (which is not that difficult in Drupal 7) and changing the cache ID for the page to include the visitor type. I think the Boost module may also allow for this sort of thing.
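
A sketch of that idea (untested, and the class name is made up; _my_module_visitor_network() is the IP-based helper from earlier): register a custom page-cache backend in settings.php and prefix the cache ID with the visitor type.

<?php
// Hypothetical page-cache backend that varies entries by visitor "type".
// Enable it in settings.php:
//   $conf['cache_class_cache_page'] = 'VisitorTypePageCache';
class VisitorTypePageCache extends DrupalDatabaseCache {

  // Prefix every cache ID with the visitor's network type, so each
  // type gets its own copy of the page.
  protected function prefix($cid) {
    return _my_module_visitor_network() . ':' . $cid;
  }

  public function get($cid) {
    return parent::get($this->prefix($cid));
  }

  public function set($cid, $data, $expire = CACHE_PERMANENT) {
    parent::set($this->prefix($cid), $data, $expire);
  }

}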

        For really high traffic sites, you're probably going to be using something like Varnish anyway - and completely disable Drupal's page caching mechanism. I don't know much about Varnish but I imagine you could put this similar type of logic in the Varnish layer and selectively let some users through and hit Drupal directly to get the dynamically generated page (especially since my check for visitor network is just based on IP address).

        There you have it. Dynamic Cache is by no means an elegant module, but it gets the job done! If you're better informed than I and I made a mistake somewhere in this writeup, please let me know in the comments. I certainly don't want to spread misinformation!

Feb 06 2014

Having studied IT and worked in sysadmin, content & strategy roles, Anthony provides a holistic approach to problem solving and client communication. He delivers everything from concept to complete solution as SystemSeed's primary point of contact for clients.

Feb 05 2014

I just finished up a small project at work to create a basic resource management calendar to visualize and manage room and other asset reservations. The idea was to have a calendar that displayed reservations for various resources and allowed privileged users to add reservations themselves. The existing system was a pain to work with and very time consuming - and I knew this could be done easily in Drupal 7.

        I wanted to share the general setup I used to get this done. I won't go into fine detail, and this is not meant to be a complete step by step guide. I'm happy to answer any questions in the comments.

        Step 1: The "Reservation" content type

        I quickly created a new content type "Resource Reservation" and added a required date field. Due to a bug in a module I used below, I had to use a normal date field and not ISO or Unix (I usually prefer Unix timestamps). These three different types of date fields are explained here. Aside from that, I also made the date field have a required end date and support repeating dates using the Date Repeat Field module (part of the main Date module). I then needed to decide how I would manage the resources and link them to a reservation.

I created another content type "Resource" and linked it to a reservation using the Entity Reference module. Another option I considered was using a Taxonomy vocabulary with terms for each resource, and adding a term reference field to the reservation content type. I decided to go for a full blown entity reference to allow greater flexibility in the future for the actual resource node.

        In my case, I created the 6 "Resource" nodes (all rooms in a building) that would be used in my department.

        Step 2: The Calendar

        Years ago at the 2011 DrupalCamp NJ, I attended Tim Plunkett's session "Calendaring in Drupal." Tim provided a great introduction to a new Drupal module called Full Calendar that utilized an existing JavaScript plugin with the same name. I was very impressed with the capability of the module and wrote about it after the camp was over.

        I immediately knew I wanted to use the module and was happy to see it has been well maintained since I last checked it out in 2012. The setup was incredibly simple:

        • Create a new "Page" view displaying node content
        • Set the style to "Full Calendar"
• Add a filter to only show published "Resource Reservation" nodes
        • Add the date field that is attached to "Resource Reservation" nodes

The style plugin for Full Calendar has a good set of options that let you customize the look and functionality of the calendar. I was quickly able to shorten the time display quite a bit, showing start and end times as "7:30a - 2:00p".

        One thing to note is that while you can add any fields you want to the view, the style plugin only utilizes two: A date field and a title field. Both are displayed on the calendar cell - and nothing else. If you add a date field, the style plugin automatically uses it as "the" date field to use, but if you have multiple date fields for whatever reason you can manually specify it in the settings. Similarly, for the title field, you can add any field and tell the plugin which one to use as "the" title for the event. In my case the node title was suitable. If you wanted to display more than one field, try adding them and then add a global field that combines them, then assign that as the title field.

        I loaded up some reservation nodes and viewed them in the calendar and everything was looking great so far. Next I wanted to provide some filtering capability based on the resource of the reservation "events".

        Step 3: Filtering the Calendar by Resource

        In my case there was a desire to be able to display the reservations for select resources at a time instead of all of them at once. This would be a heavily used calendar with lots of events each day, and it would become a mess without some filtering capability. This was easy enough by creating an exposed filter for the calendar view.

        Ideally I would have a filter that exposed all of the possible resources as checkboxes - allowing the user to control what reservations for what resource they are viewing. I'm sure I could have done that by writing my own views filter plugin or doing some form altering, but I settled for this approach:

        • Added a new input filter for my "Resource" entity reference field.
        • Exposed it
        • Made it optional
        • Changed it to "grouped filter" instead of "single filter". This let me specify each Resource individually since there's no out-of-the-box way of listing all available.
        • Used the "radio" widget
        • Allowed multiple selections - this actually changed the radio buttons to checkboxes instead - exactly what I want.
        • Added 6 options for the filter - one for each resource. I looked up the node ID's for each resource and put them in with their appropriate label. Downside is each time a new resource is added I have to manually update the filter.
        • Changed the "filter identifier" to the letter "r", so that the query string params when filters are used aren't so awful looking

There are two major gotchas here. The first is that if you have more than 4 options to choose from, Views changes the checkboxes to a multi select field (bleh). This is an easy fix:

        function YOUR_MODULE_form_views_exposed_form_alter(&$form, &$form_state) {
          if ($form['#id'] == 'views-exposed-form-calendar-page') { # find your own views ID
            $options =& $form['r']; # my exposed field is called "r" (see last step above)
            if ($options['#type'] == 'select') {
              $options['#type'] = 'checkboxes';
              unset($options['#size']);
              unset($options['#multiple']);
            }
          }
        }
        
This ensures that the exposed filter is ALWAYS going to be checkboxes. The second gotcha is how Views handles the multiple selections. By default, Views will "AND" all of the selections together. So if you select "Room 5" and "Room 6", you get reservations that have both selected - which is not possible in my case since I purposely limited the entity reference field on the reservation to reference only one resource. Instead I want Views to "OR" them, so it shows any reservations for either "Room 5" or "Room 6". The fix for this is simple, but not obvious:
        • In the filter criteria section in the View UI, I went to "Add/Or, Rearrange" which is a link in the drop down next to the "Add" button.
        • I created a new filter group and dragged my exposed filter into it.
        • The top group has the published filter and the content type filter, and the operator is set to AND.
        • The bottom group has my single exposed filter for the resource, and the operator is set to OR.
        • The two groups are joined together with an AND operator.

        Setting the second group to use OR is the key here. Even though there is just one item in the filter group, it's a special filter because it allows multiple selections. Views recognizes this and will apply the OR operator to each selection that was made within that filter. By default I had everything checked (which is actually the same as having nothing checked, at least in terms of the end result). This makes it obvious to calendar viewers that they can uncheck resources.

        Step 4: Adding Colors for each Resource

Since the default calendar view includes 6 resources, I wanted each reservation to be displayed with a color that corresponded to the resource it was reserving. The Full Calendar module can sort of do this for you with the help of the Colors module. This module lets you arbitrarily assign colors to taxonomy terms, content types, and users. Colors then exposes an API for other modules to utilize those color assignments however they want. Full Calendar ships with a sub module called "Full Calendar Colors" that does just this by letting you color the background of the event cells in the calendar based on any of those three types of assignments that may apply.

        In my case, since I wasn't using Taxonomy terms, I couldn't use the Colors module to color my reservations. Someone opened an issue to get Colors working with entity references like in my case, but it's not an easy addition and I couldn't come up with a practical way of adding it to the Colors module myself.

        Instead, I examined the API for Full Calendar and found I could add my own basic implementation in a custom module. Here's the basics of what I did:

        • Add my own color assignment form element to each "Resource" node using form alters and variable set/get.
• Implement hook_fullcalendar_classes to add a custom class unique to each "Resource" to the calendar cell, like ".resource-reservation-[nid]".
        • Implement hook_preprocess_fullcalendar to attach my own custom CSS file (created using ctools API functions) to the calendar that has the CSS selectors for each resource reservation with the proper color.

        Finally I added a "legend" block that lists each Resource (with a link to that Resource node) displaying the color as the background, so users can quickly see what the colors in the calendar meant. You could also avoid some of this complexity by removing the ability to assign colors via the node form and just hardcode the color assignments in your theme CSS file. You'd still need to implement hook_fullcalendar_classes.
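
For reference, the class-adding piece might look roughly like this. This is a sketch under assumptions: the entity reference field name field_resource is hypothetical, and the hook receives the event entity as documented in FullCalendar's API file.

<?php
/**
 * Implements hook_fullcalendar_classes().
 *
 * Add a class per referenced Resource so each resource can be colored
 * via CSS.
 */
function my_module_fullcalendar_classes($entity) {
  $classes = array();
  $items = field_get_items('node', $entity, 'field_resource');
  if (!empty($items[0]['target_id'])) {
    $classes[] = 'resource-reservation-' . $items[0]['target_id'];
  }
  return $classes;
}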

        Step 5: Reservation Conflicts

With the basic calendar view completed and displaying the reservations, I shifted focus to the management aspect of the feature. Specifically, I needed to prevent reservations for the same resource from overlapping with one another.

A little bit of digging led me to a great module called Resource Conflict. This module "simply detects conflicts/overlaps between two date-enabled nodes, and lets you respond with Rules". It requires two other modules to work, Rules and Rules Forms. The latter allows Resource Conflict to set validation errors on the node/add form through a Rules action. Resource Conflict is a very slim but capable module - I was very impressed and happy with its capabilities. Most of the work is done through Rules.

The module provides a Rules event "A resource conflict node form is validated". To get this event to trigger, I had to enable "conflict detection" for the Resource Reservation content type (part of the Resource Conflict module). To do this, I edited the Resource Reservation type, went to the new "Resource Conflict" vertical tab, and enabled it by selecting the date field to perform conflict checking on. Additionally, I had to expose this form to Rules via the Rules Forms module. To do this, I enabled the activation messages the module provides, then browsed to the node/add form and enabled it. Done.

The Resource Conflict module provides a default rule that prevents form submissions if there are any other nodes of the same type with an overlapping date. This is too general, because I wanted the rule to only throw a validation error if the conflicting reservation is for the same resource. I disabled that default rule and worked to create a rule that also takes the resource into consideration. This part was somewhat complicated and I was happy to find some guidance in the issue queue.

First, I needed to create a Rule Component that encapsulates the logic to compare two Reservation nodes, check if they have the same Resource entity reference, and if so set a form error. Here's how I did that:

        3 Variables:

        • "Any entity" data type, "reservation-conflict" label, "reservation_conflict" machine name, usage as a "parameter"
        • "Any entity" data type, "reservation-unsaved" label, "reservation_unsaved" machine name, usage as a "parameter"
        • "Form" data type, "form" label, "form" machine name, usage as a "parameter"

        5 conditions:

        • "Entity is of type" on the "reservation_conflict" data selector, checking it is a "node" type
        • "Entity is of type" on the "reservation_unsaved" data selector, checking it is a "node" type
        • "Entity has field" on the "reservation_conflict" data selector, checking it has the resource entity reference field
        • "Entity has field" on the "reservation_unsaved" data selector, checking it has the resource entity reference field
        • "Data comparison" to make sure that the values of the two entity reference fields is the same

        2 actions:

        • Fetch entity by ID* - used to re-load the reservation_conflict node so we can use it in the next action (provide the nid of the conflicting reservation)
• Set an error on the form (provided by Rules Forms module) - The form element is the "reservation-unsaved" form. Then I wrote in a message including a link to the conflicting resource, using tokens.

        *I'm not sure why I had to do this. Without reloading the entity here and using the reloaded entity for the tokens in the following action, none of the tokens in the next action worked. What confused me here is why I couldn't just use the "reservation-conflict" entity tokens.

        Rule Component

        Now, with this rule component in place, I could incorporate it into a normal Rule that reacted on the node submission, loading all the conflicting reservations (based on date alone) and looping through each one to execute the component actions for the more complicated comparison. Here's how I did that:

        • React on event "A resource conflict node form is validated"
        • Added condition for "Contains a resource conflict" - this relies on the "node" param that is made available from the event
• Added action for "Load a list of conflicting nodes". This is provided by the Resource Conflict module and this is where all the conflict detection is done, comparing other nodes of the same type for conflicting dates. This action is added as a loop.
        • Add a Rule Component within the action loop, selecting the one we just created.

        Since I setup the component with three variables, I needed to pass them in as arguments to the component after adding it into the loop. For the "reservation-conflict" variable, I fill in "list-item", which is the conflicting reservation from the loop. For the "reservation-unsaved" variable, I supply the original "node" variable from the main event. And for the "form" variable, I just pass in the form variable that was available.

        Main Rule

        Testing the rule proved that I was not able to overlap any dates for the same resource when creating a reservation. Perfect!

        Final Thoughts

        The basic functionality of the resource management was there. Users could add new reservations for existing resources and were alerted if the reservation conflicted with others. Reservations were displayed in a calendar for the department to see, and users could filter out specific resources to provide a cleaner view. Here are some additional notes and considerations:

        • To allow the calendar to scale, you'll want to enable AJAX on the calendar view which will only display events for a given month (+/- two weeks). There's a bug in the stable release of the module related to AJAX but I provided a patch.
        • If you're using repeating date fields, make sure you uncheck "Display all values in the same row" on the date field settings in the view. If you don't, any exposed filters for the date range (which is how the AJAX feature works for Full Calendar) will not apply to dates with multiple values. If you do this properly, only the repeating dates for the given date range will be loaded.
        • There's a bug in the Resource Conflict module that only allows you to use the standard "Date" database storage type for a date field. I'm working on a patch.
        • Remember that you could also implement a "resource" using taxonomy terms instead of entity references. If you do, you'll have a much better time getting the Colors stuff working.

        And that's pretty much it! Let me know if you have any questions in the comments below.

        Feb 05 2014
        Feb 05
By Erik (5 February 2014)

When I'm totally buried in Drupal 8 code and terminology, I like to take a step back to get an overview. While studying the routing system and controllers, I wanted an overview of the total page call process. I did some investigation and made this diagram:

        Overview of the Drupal 8 page call process

Request
The HTTP request to the website, for example a request for the homepage of the site. The Symfony kernel uses a request object which represents this HTTP request. It is a very important object, as all kinds of information used to handle the request will be stored in it. Drupal adds its own data to this request object. In Drupal the request object may contain, for example: HTTP GET/POST data, the HTTP request header and cookie, the current user and session, and the current language.

Router
The router determines what should be done with the request and prepares it to be processed. Here the user data and the current language are added to the request object. But the core task of the router is routing: matching the requested URL to a class and method which will be used to process the request. The router itself does not respond to the request; it only determines what should be done with it. The router also checks whether access is permitted.

Controller
The controller builds the response for the given request. This could be an HTML page, a JSON string or anything else. For normal website pages the controller will return a render array, but the controller can also return, for example, JSON-type data.

View
The view creates the response. Drupal renders the render array using the Twig templating system. The HTML output is stored in a response object, which is what the Symfony kernel uses to hold the response while preparing it. When completed, the HTTP response is returned.

Response
The response is what the website returns at the end of the call. For a page call this response will be an HTTP message with status code 200 (OK). But other responses like 'Not Found' or 'Forbidden' may also occur if the path is unknown or access is denied. Besides HTML, the response body can be of a different type, such as JSON.
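
As a small illustration of the controller step (a sketch only; class and method names are made up), a Drupal 8 controller can return either a render array for the view layer or a Response object directly:

<?php
use Symfony\Component\HttpFoundation\JsonResponse;

class HelloController {

  // Returns a render array; the view layer renders it to HTML via Twig.
  public function page() {
    return array(
      '#markup' => t('Hello world'),
    );
  }

  // Returns a Response object directly, bypassing the theme layer.
  public function json() {
    return new JsonResponse(array('message' => 'Hello world'));
  }

}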

        More details

        Details of the Drupal 8 page call process

        This diagram contains some more details (available as download), or you can read about the HttpKernel Component in the Symfony documentation.

Feb 05 2014

        APIs

        • What are some of the APIs that have changed from D7 to D8?
• JavaScript frontend changes
  • Reduce use of jQuery
• PHP API changes
  • Declare script dependencies in a YAML file
  • Removing drupal_add_js and drupal_add_css in favor of attachments (see the sketch after this list)
  • Can't add scripts/css/js in .info files
• How to test JavaScript and the frontend
• Documentation
  • api.drupal.org only covers PHP documentation
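
As a concrete sketch of the attachments change mentioned above (module name, library name and file paths are hypothetical, and this reflects the pattern as it settled in Drupal 8, so details may differ from the state of HEAD at recording time):

<?php
// mymodule.libraries.yml declares the assets and their dependencies:
//
//   fancy-widget:
//     js:
//       js/fancy-widget.js: {}
//     css:
//       theme:
//         css/fancy-widget.css: {}
//     dependencies:
//       - core/jquery
//
// Instead of drupal_add_js()/drupal_add_css(), attach the library to a
// render array:
$build['#attached']['library'][] = 'mymodule/fancy-widget';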

        NodeSquirrel Ad

        Have you heard of/used NodeSquirrel?
        Use "StartToGrow" it's a 12-month free upgrade from the Start plan to the Grow plan. So, using it means that the Grow plan will cost $5/month for the first year instead of $10. (10 GB storage on up to 5 sites)

Feb 04 2014

        What flavor is Kalatheme?

Kalatheme is a very convenient theme to use, and should be the default theme for Panopoly, with all due respect. Peruse its Drupal project page. Kalatheme gives you: Panopoly + Bootstrap 3 + Bootstrap themes + a browser-based sub-theme generator (<- Bootswatch, etc., etc.!) + views grids + reusable custom CSS classes that can be registered as optional in any panels pane + advanced stuff for the folks that, in line with the Kalatheme philosophy, don't like to admit they use it: Sass and Compass tools.

        I watched an interesting video given by Mike Pirog of Kalamuna, which gives you a really good feel for Kalatheme's philosophy, objectives and look and feel, despite being a few months old. Then take a gander at the Kalatheme docs on d.o. 

        Some cool concepts are:

        • Twitter bootstrap
          • Drupal Libraries API for themes!
          • Straightforward upgrade path for any library
          • Responsive classes
        • One region: content (that's it). Then, panels layouts and panes. Page manager, Panelizer, Panopoly goodness.
          • No more blocks! No more regions!
          • Way, way fewer files!
        • Panopoly layouts + Kalatheme layouts + custom layouts

        Create a Kalatheme sub-theme project right on Pantheon

        • Sign up and/or login to your pantheon dashboard.
        • Add a new site
        • Select the Panopoly distribution
        • SFTP mode is required, and it will be (should be) by default
        • Visit the site to complete the installation of Panopoly. Initially, just use any old theme. I installed the Panopoly News demo too, just to see some stuff.
        • Once the install process is complete, visit your new site as admin.
• From the Appearances page click on "Install a new theme" and paste in a link to the latest stable archive of Kalatheme. I clicked Install (it works since we are in SFTP mode and the necessary permissions are automatically set up).
• Initially enable Kalatheme and set it as the default and admin theme. You can safely disregard the error message "You do not have a Bootstrap library installed but that is ok! To get equipped either check out our Start Up Guide or run our Setup Wizard."
        • Now to create your sub-theme based on your favorite Bootswatch theme.
          • Did you remember to clear cache after setting a new theme :)  ?
          • Go back to your Admin > Appearances page.
          • At the top is the Setup Kalatheme link, click on it.
• Complete the setup webform with name and Bootswatch theme (with preview! I chose Simplex; you can also choose third-party Bootstrap themes; for example, there are paid themes at https://wrapbootstrap.com/ ), choose whether or not you want Font Awesome included (you do!), then click on Dress me up.
          • Lo and behold it becometh the default theme everywhere! REJOICE, as the instructions say.
        • Important Pantheonic note: Commit your changes on your site dashboard! Then you can switch to Git mode and do a backup or clone the project with Git. This will be important if you want to download a backup to your local laptop or workstation, say using Kalabox.

        Pull it down to your laptop on Kalabox

        "Kalabox is more than just a local development environment. It's an easy to use, site building and deployment toolkit for the masses. Advanced webtools now belong to the people." Built on kalastack using Vagrant and VirtualBox, integrated with Pantheon, I'm interested!

        I shot them an email at kalabox@kalamuna.com to apply for a keycode since kalabox is in private beta. Mike Pirog shot me a nifty code, and I entered it together with my name and address in order to get "boxed". I downloaded the kalabox-1.0-beta4.dmg file for my Mac.

        From the Readme.md (please read in its entirety) included in the install package:

        Requirements

        • Kalabox has been tested mostly on Mac OS X 10.8 and partially tested on 10.7 and 10.6. It may work on 10.6 or lower. If you try it out on an older OS X version, please share your experience.
        • For now, Kalabox supports 64-bit operating systems only. So if you're on a 32-bit machine, just hang tight, as support is coming soon!
        • Vagrant 1.3.5 and VirtualBox 4.2.18
• 1GB+ of RAM. Kalabox dynamically allocates memory to itself based on your available RAM; it needs at least 1GB available to run.

        Installation

        1. Double click on the installer.
        2. Agree to the terms of service
        3. Rejoice. NOTE: Apple may warn you...

        Connecting with Pantheon

        All you need is an account and a site on Pantheon. Go to the configure tab, enter your username and password, click back to My Sites and start developing! If you're interested in interacting with your Pantheon sites directly from the command line, you can use some of the handy Drush commands that come packaged with the Terminatur. https://github.com/kalamuna/terminatur

        More? https://kalamuna.atlassian.net/wiki/display/kalabox/Kalabox+Home

After installing in the usual Mac way, I executed it. It asked me for permissions and downloaded some extra stuff... After a while (quite a while, actually, something like 15 minutes with a pretty decent internet connection), I had my Kalabox up and running. Edit: Actually, this is a very short time if you take into consideration that a full Linux server is being downloaded and set up!

        I clicked on the Configure tab and entered my Pantheon credentials and logged in.

        Then I clicked on My Sites and all my sites were to be found. I clicked on one, I thought I checked Download my files also, and chose the nifty option Create a new Pantheon backup and download it, and hit Submit.

The site was downloaded and I was greeted with the Boomshakalaka! message that my site was good to go right here on my laptop. I clicked Great in answer to the offer Give it a try. There was my site, right in my local browser!

I had forgotten to click on Download my files also, so the images weren't present. So from the My Sites tab, I clicked on the gear just below the front page thumbnail of my site, selected Refresh, checked the Files checkbox only, and clicked Refresh. My images appeared on my site :)

I then clicked on the Home tab, and then selected SSH. A local Terminal opened at /home/vagrant. I cd'd to /var/www and then to my site and ran drush status.

        Cool.

        Work on it in Eclipse IDE, for example

        "Kalastack uses NFS file sharing. You can access your server webroot at ~/kalabox/www on your host machine. This way you can use your local IDE to edit files on your server."

        Well, that was easy!

        In a later article, we'll deal with Pantheon integrated workflow using Kalabox. Can't wait!


        Feb 04 2014
        Feb 04

        Drupal Global Training Days are quickly approaching and we do not want you to miss out on the February date.  Global Training Days is an initiative by the Drupal Association to introduce new and beginning users to Drupal.  The Drupal Association is partnering with training companies to make this happen.  We’ll be hosting this initiative once a quarter focusing on one of two curriculums:

• “Introduction to Drupal” is a full-day training on the basics of Drupal. Attendees will leave having successfully built a Drupal site. It is ideal for those interested in exploring Drupal as a career path.

• “What is Drupal?” is a half-day workshop addressing the basics of Drupal, and will give an overview for evaluating or implementing Drupal.

There are already 12 global communities hosting their own trainings. Each training company can make it their own event and provide more detail on their web pages, which the Drupal Association will post for you on our site. To accommodate schedules, you can choose either February 28th or March 1st (full-days or half-days). Additionally, we will provide educational resources for you to use for your training.

If you can’t make it to our February date, no worries. Mark your calendar for our upcoming 2014 dates: Friday May 30th (or Saturday May 31st), Friday August 29th (or Saturday August 30th), and Friday November 14th (or Saturday November 15th).

        We asked one of our currently registered training companies why they participate in Global Training Days - “For us at Blink Reaction, Global Training Days are a no-brainer. Getting started is one of the biggest obstacles to evaluating or using Drupal. The training days are a great opportunity to introduce new people to Drupal, its community, and the opportunities they present. The more people who are using Drupal and participating in the community, the stronger the project gets, and the more we can all achieve with it!” - Amy Cham, Blink Reaction

        Let’s spread the Drupal love! Click here to sign up for the February 28th/March 1st Global Training Day.  As always, please feel free to reach out if you have any questions.

        Looking forward to seeing and hearing all about your trainings!

        Drupal on,

        LShey

        joe
        Feb 04 2014
        Feb 04

In an earlier post, Kyle wrote a great introduction to the new configuration management system in Drupal 8. He demonstrated how end users can leverage this new system to easily migrate site configuration between environments, which helps eliminate the "did you remember to check the boxes in the right order?" problem for site builders everywhere. In this post, I take a look at configuration management from the perspective of a module developer. What do we need to write in our custom code to ensure that our configuration settings are easy to deploy?

        So What Exactly is Configuration?

Configuration is any setting that changes the way an instance of Drupal behaves; for example, the toggle that turns the JavaScript aggregator on and off. Some sites need JavaScript aggregation, some do not. Configuration allows us to use the same code base to serve multiple sites without modifying any code. When you build a view or create an image style, that's configuration.

The converse of configuration is user-generated content: nodes, comments, uploaded files, etc. The image someone uploads to a blog post is user-generated content, while the image style we're utilizing to scale that image is configuration.

Drupal 8 has two distinct types of configuration: simple configuration, like on/off toggles and the settings required by modules, and configuration entities, which are used to store complex instance configuration (e.g., views). Configuration entities are an extension of the core entity system. They provide a full suite of CRUD (Create/Read/Update/Delete) hooks that fire when configuration entities are modified, just as when someone edits a node.

As an example, let's pretend we're building a module that interfaces with a remote video encoding service. In order for our module to do anything, it's going to need to know the API id and key to communicate with the external API. The module might also need to know the URL it should access and, perhaps, the maximum number of encoding jobs it should run at any given time. These are examples of simple configuration: a few string and integer values that must be present for our module to work.

But our module might also provide a user interface that enables administrators to create new encoding profiles and specify different parameters for encoding videos. For example, what bit-rate? What audio format? Do we even want audio? This type of configuration isn't required for our module to operate. We may have zero or tens of instances of this configuration entity, depending on our specific use case. Some sites only need one video format; some need dozens.

In addition to providing a user interface (UI) for creating new instances of this configuration, we might also want to provide a set of hooks. Other developers could then implement these hooks so their modules could provide configuration for our module or even alter any user-created configuration. This sounds a bit like handling user-generated content, right? Well, by leveraging the core entity system to create configuration entities, Drupal 8 provides this functionality with minimal code duplication.

        The rest of this post and video focus on working with simple configuration. The more complex configuration entities are a topic for another day.

        Reading and Writing Simple Configuration Data

Goodbye {variable} table and variable_get/set/delete(). Hello YAML files and $config objects. If you read Kyle's post and watched his video, you already know that configuration in Drupal 8 is stored in YAML files instead of the database. This makes it super easy to commit the value of a settings form to version control and deploy it across multiple environments.

        As a module developer, I need to be able to read data from and write data to these YAML files. Instead of just reading and writing from the files directly, however, in Drupal 8 we use a Config object that handles basic CRUD for these YAML files on our behalf. This ensures a simple and consistent API for accessing data. As a module developer, I no longer have to worry about Drupal Core storing data in XML or changing the location of files. I can just rely on the Config object to handle everything while I happily make use of simple ::get(), ::set(), and ::save() methods.

        Configuration is stored in a YAML file named after the corresponding module. In the video, we're working with a module named chad, so our configuration is stored in a file named chad.settings.yml. The settings portion of this filename is somewhat arbitrary; it can be whatever we want. In fact, a single module can have multiple configuration files just by changing this portion of the filename. The system module in Drupal Core, which has a huge number of configuration settings, is a great example. It uses a few dozen files to group related configuration settings together. If you're only using a single configuration file, however, convention is to name it {MY_MODULE}.settings.yml.
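As a quick preview of the API covered below, each of those files simply maps to its own Config object. The module and file names here are hypothetical:

// Hypothetical module "mymodule" with two configuration files,
// mymodule.appearance.yml and mymodule.api.yml.
$appearance = \Drupal::config('mymodule.appearance');
$api = \Drupal::config('mymodule.api');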

        The Drupal Core ConfigFactory class is a module developer's gateway to reading and writing configuration data, and it is used to instantiate a Config object based on the contents of a specified configuration file. The new Config object can then be used to perform CRUD operations on that data. But we don't call ConfigFactory directly. Instead we rely on the Drupal services container to do so on our behalf. This extra layer of abstraction makes our code more robust, and it allows us to change how Config objects are made without breaking our code.

        Show Me the Code

        Here's a quick example of creating a new Config object based on chad.settings.yml using the services container and reading a value from that config object. (If you need a refresher or want to refer back, here's a post I wrote on Getting Started with Forms in Drupal 8.)

        // Load the content of chad.settings.yml into a Config object.
        $config = \Drupal::config('chad.settings');
        // Read a value from the YAML file.
        $last_name = $config->get('name.last');

        Assuming a file with the following data in it, we can expect that the variable $last_name is now equal to IsAwesome.

        chad.settings.yml
        name:
            first: Chad
            last: IsAwesome

We can also update the configuration data by setting a new value on the Config object and then calling its save() method. This effectively tells it to write its contents to the associated YAML file.

        $config->set('name.last', 'Smith');
        $config->save();
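
Config objects also provide a clear() method for removing a single key. A minimal sketch, reusing the $config object from above:

// Remove the 'name.first' key from the configuration.
$config->clear('name.first');
// Persist the removal back to chad.settings.yml.
$config->save();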

        If you want to know more about how this system works, the documentation on drupal.org is a great place to start. Once you're done reading that, I suggest digging into the system module and seeing how it deals with all its various settings.

        Feb 04 2014
        Feb 04
        Dave Terry's picture

        Dave Terry  | Partner, Client Services

        Feb

        04

        2014

There was an article by David Baker that caught my attention. David is a long-time consultant for the web agency marketplace who offers blunt but sage advice that comes from maintaining a strong pulse on what is happening in our industry. Essentially, he challenged principals to think about what success really looks like at their agencies.

Along these lines, at the beginning of each year, Mediacurrent does a "State of the Union" with a twofold purpose: (1) we provide a retrospective on the prior year, where we attempt to be as open and transparent as possible with our team, and (2) we talk about strategy and where we are headed as a company. We reinforce our mission and culture, and inject a theme that we deem critical to achieving our goals. This year we talked about the importance of alignment at Mediacurrent.

        David's underlying message was that it is crucial that you know what various success factors look like. Here is what we shared with the Mediacurrent team at our State of the Union last week:

        1. Being able to give back to our community feels good and our roots should never be forgotten.
        2. We will almost always have more opportunity than capacity, allowing us to be choosy about which clients receive the benefit of Mediacurrent’s expertise.
3. Our positioning in the marketplace is laser-tight and will allow the right prospect with the right project at the right time to find the right solution with Mediacurrent.
4. Our culture creates below-industry-average turnover and will continue to allow us to attract top talent.
        5. Our customers and prospects appreciate what we do; we will be judged more on value versus how many hours we invoice.
        6. Our content is so intriguing and compelling that our prospects and existing network look forward to receiving it.
        7. Our knowledge, training, processes, and strategic input will continue to be sought after by new team members and customers.
        8. We have a diverse portfolio and an appropriate blend of work. We will never be too heavily dependent on one customer.
        9. We can raise pricing at any time without the fear of negative repercussions.
10. Unsolicited referrals will become the norm and second nature.
        11. All of our key processes are, and will continue to be, automated and well-documented.
        12. We will continue to foster a positive and supportive environment. We feel fulfilled, happy, and generally look forward to coming to work because we enjoy the people (teammates and customers) that we work with on a daily basis.

        Finally, I would assert that these items are applicable to not just web agencies, but any professional service organization. What are your thoughts?  What are some factors that will help your company define success?


        Feb 04 2014
        Feb 04

        Download Podcast 122
        DrupalEasy_ep122_20140203.mp3

Tim Plunkett (tim.plunkett), Drupal 8 core contributor, joins Andrew Riley, Ted Bowman, and Mike Anello to discuss Views in core and the state of the Drupal 8 configuration management initiative, and to make his prediction as to when we’ll see the first Drupal 8 beta. All that plus a new face at the DA, Panopoly 1.1, and four non-module picks-of-the-week.

        Interview

        Five Questions

        Each podcast we ask our guests the same five questions. Here are the answers:

        1. MacBook Air, phpStorm with vim plugin
        2. Matthew Tift (mtift)
        3. Philadelphia
        4. Advanced caching and performance techniques
        5. Panels

        Three Stories

        1. Hello Drupal Community blog post by Lauren Shey, the new Drupal Association Community Outreach Coordinator. Lauren will be responsible for arranging Global Training Days, Community Webinars, keeping up with Drupal Camps and finding ways to support the community. Follow Lauren on Twitter at @lsheydrupal.

        Sponsors

        Picks of the Week

        • Andrew - Compass - a CSS framework
• Ted - Drupal Camp NJ Sunday mentoring - don’t leave camps early; some camps have valuable events/sprints after the camp.
        • Mike - HootSuite Pro - a nice tool that can take an RSS feed and automatically push out its items to various social media outlets.
        • Tim - Uncommitted - a python script you can run via cron to remind you of VCS repos with uncommitted code on your filesystem.

        Upcoming Events

        • Florida DrupalCamp 2014 - Saturday and Sunday, March 8-9, 2014, Florida Technical College, Orlando, Florida.
        • GLADCamp - Greater Los Angeles Drupal Camp - Free - March 8 and 9, 2014.

        Follow us on Twitter

        Intro Music

        Drupal Way by Marcia Buckingham (acmaintainer) (vocals, bass and mandolin) and Charlie Poplees (guitar). The lyrics by Marcia Buckingham, music by Kate Wolfe.

        Subscribe

        If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or hair product suggestions for Ted. If you'd rather just send us an email, please use our contact page.


        Feb 04 2014
        Feb 04

        This is part 2 in my series of articles about creating a custom field. I recommend reading Part 1: Field type first, if you have not done so already.

        After creating the field type it is now time to create the field widget.

        a) Create the file

        The field widget must be located as follows:
        <module_name>/lib/Drupal/<module_name>/Plugin/field/widget/<field_widget_name>.php
        N.B. The field widget name should be in CamelCase.

        b) Add Contains, namespace and use

In the newly created field widget file, add a brief comment to explain what it consists of:

/**
 * @file
 * Contains \Drupal\<module_name>\Plugin\field\widget\<field_widget_name>.
 */

        N.B. The "Contains..." line should match the location and name of this file.

        Then add the namespace as follows:

        namespace Drupal\<module_name>\Plugin\field\widget;

N.B. I cannot emphasise this enough: it is vital that the namespace matches the location of the file, otherwise it will not work.

        Then add the following uses:

        use Drupal\Core\Entity\Field\FieldItemListInterface;

        This provides a variable type required within the field widget class.

        use Drupal\field\Plugin\Type\Widget\WidgetBase;

        This provides the class that the field widget will extend.

        c) Add widget details annotation

        The annotation should appear as follows:

/**
 * Plugin implementation of the '<field_widget_id>' widget.
 *
 * @FieldWidget(
 *   id = "<field_widget_id>",
 *   label = @Translation("<field_widget_label>"),
 *   field_types = {
 *     "<field_type_id>"
 *   }
 * )
 */

        N.B. All text represented by a <placeholder> should be appropriately replaced according to requirements. The field_type_id must match the id of a field type and the field_widget_id should match the default widget specified in the field type (see Part 1 of this article).
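
For instance, a filled-in version might look like the following; the id, label, and field type here are hypothetical and should match your own field type from Part 1:

/**
 * Plugin implementation of the 'person_default' widget.
 *
 * @FieldWidget(
 *   id = "person_default",
 *   label = @Translation("Person default"),
 *   field_types = {
 *     "person"
 *   }
 * )
 */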

        d) Add field widget class

        Create the field widget class as follows:

        class <field_widget_name> extends WidgetBase {}

        N.B. The <field_widget_name> must match the name of this file (case-sensitive).

        The field widget class needs to contain the formElement() function that defines how the field will appear on data input forms:

/**
 * {@inheritdoc}
 */
public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, array &$form_state) {
  $element['forename'] = array(
    '#title' => t('Forename'),
    '#type' => 'textfield',
    '#default_value' => isset($items[$delta]->forename) ? $items[$delta]->forename : NULL,
  );
  $element['surname'] = array(
    '#title' => t('Surname'),
    '#type' => 'textfield',
    '#default_value' => isset($items[$delta]->surname) ? $items[$delta]->surname : NULL,
  );
  $element['age'] = array(
    '#title' => t('Age'),
    '#type' => 'number',
    '#default_value' => isset($items[$delta]->age) ? $items[$delta]->age : NULL,
  );
  return $element;
}

The above example uses the textfield and number element types; other element types include:

        • radios
        • checkboxes
        • email
        • url

        I intend to delve into other element types in a future article.

        And there we have it: a complete (basic) field widget. Here is a simple example, similar to that described above.

        Feb 03 2014
        Feb 03

When working on a Drupal site, you often want to understand what happens on that specific site. This blog post describes a tool that helps you understand the site faster and more easily.

One example of what you want to see is the executed database queries. In Drupal 7 we had the wonderful Devel module, which showed a list of executed database queries below the page, but there is much more information you might want to know:

        • PHP Configuration
• The time and memory needed
        • List of enabled themes/modules
        • Routing information (aka. hook_menu in D7)
        • The requested cache/Key-Value data
• Information about the raw request data

Symfony has a nice toolbar at the bottom of the page which stores this information, shows it, and makes it available as a separate page for additional research.

The founder of Symfony (fabpot) gave me an initial version of a Drupal integration. Sadly, Luca Lusso had started independently on a version of his own, so we merged the code together and continue the work at https://drupal.org/node/2165731.

        So here is out it looks like (click images for larger version):

As you can see, there are quite a lot of integrations already. Let's list what we have at the moment:

        • PHP Config
        • Request
        • Forms
        • Database
        • Modules/Themes
        • Routing
        • Cache
        • State
        • (Config: There is a working branch relying on a core patch: https://drupal.org/node/2184231)
        • Your ideas!

You could certainly ask yourself whether this is a total replacement for the Devel module. There is an ongoing discussion at https://drupal.org/node/257770 about whether to use the Symfony toolbar or an alternative PHP one.

Please try out the module on Drupal 8, come up with more ideas, and help us.

        Feb 03 2014
        Feb 03

        Access professional Drupal training at Drupalize.Me

        A product of Lullabot, Drupalize.Me is a membership-based instructional video library. It can be supplemented with on-site training workshops, custom built for any organization.

        Feb 03 2014
        Feb 03

        Brightcove is a video hosting platform that integrates with your Drupal site through the Brightcove module. They have a really solid service, and if you need an enterprise-grade solution you should definitely check them out (full disclosure: Brightcove is a customer of Pronovix).

But what really sets Brightcove apart, in my opinion, is their dedication to and support for the Drupal community. I'm a regular attendee of a whole string of Drupal events throughout Europe and some of the largest events in the U.S., and Brightcove is often a major presence. Whether it's through the live streaming of the DrupalCon keynotes, support for the Drupal Association, or sponsorship of some of our events, Brightcove invests in the Drupal community.

        While many software-as-a-service providers leave the maintenance of their integration modules for Drupal to their customers or to unpaid volunteers from the community, Brightcove has been working with us to make sure there is someone who can take care of critical issues. I think this allocation of resources demonstrates the company's commitment to its clients and the communities that support them.

Brightcove Video Cloud is a popular tool and, as a result, has a lot of feature requests in the issue queue. We haven't been able to address all of them...yet. But, at the end of 2013, Brightcove asked us to implement two frequently requested features on top of our maintenance engagement. These features are particularly important if you are running a larger website. They are: caching and exportables.

        • Caching: We are adding support for database, file and memcache caching. This will help high-traffic sites, as they will now be able to configure a local cache for the video results that a site derives from Brightcove Video Cloud. This is critical to preventing timeouts when popular sites are pulling down video lists from Video Cloud.
        • Exportables: When you have an enterprise development workflow, you need to move configurations between different environments. In Drupal 7, the established way to do this is through features and the exportables you can create with it. This is now possible for Brightcove Video Cloud Drupal module configurations.

        You can try out both features in the 7.x-5.0 branch.

Please note that this version of the Brightcove integration module has been separated from the Media integration module because of an API update between Media module versions 1 and 2 that makes it more convenient to handle them separately. It also makes the Brightcove integration module less confusing and easier to maintain, because there will be one supported release for Drupal 7. For more information, visit the project’s Drupal.org page.

        If you want to learn more about the work we did with the Brightcove module so far, read our developer’s blog post about Upgrading the Drupal Brightcove module.

        Feb 02 2014
        Feb 02
        Tags: Drupal, pantheon

More and more of my clients are using Pantheon to host their Drupal-based web applications. This is not an ad, it's just a fact. I'm finding more and more of my development work involves cloning Pantheon-based workflow instances and coding and site building within that workflow, and I've seen how it has improved greatly over the years. Recently I had to import a quite large Drupal 6 site for a client hoping for a trouble-free Drupal-oriented hosting experience while we got on with the site renovation project. While the process was straightforward, and the necessary documentation is there (see References), I thought I'd share my experience as warm and fuzzies for others having to do the same:

        Regular import

        From your regular Pantheon dashboard (initial login page after registering for an account) you simply click on the Add a site link and provide a name and click on the Create Site link. In a little while you are offered the choice of Start from scratch and Import manually radio buttons. Starting from scratch offers Drupal 6, Drupal 7 or a host of Distribution choices that allow you to start up an off-the-shelf solution via installation profile.

Selecting the latter offers a variety of alternatives for manual import. In the old days of Pantheon, one would just upload a tarball with database.sql in the Drupal document root. But things are much more organized now. The manual upload is divided into Code, Database and Files archives, each of which should be tarred/gzipped or zipped into its own separate file. Also, for each there are URL (default) and File upload options. It says “Archives must be in tar/gz or zip format. Uploads are limited to 100MB in size. Imports via url are limited to 500MB.”

Now, the URL method, rather than uploading from your laptop, is much better because it's a server-to-server file transfer, with no dependency on the browser window connection, which may time out, etc. So how do I provide that? Very simple: just create your three code, database and files tarred or zipped files and stick them into the default document root of your VPS or even shared hosting (a secure HTTP over SSL (“https”) URL would provide the best security). Once your site is created on Pantheon, you can quickly delete or move these files from your VPS or shared hosting.

I created my three archives to import one site that did not exceed these limits in the following manner (following Reference 1):

        Creating the code archive (after changing directory on the command line into the Drupal document root, and taking care to exclude .git and the files directory – note the ending dot signifying the current directory):

mysite@myserver:~/mysite7-legacy$ tar czvf /var/www/4pantheon/mysite_code.tgz --exclude=sites/default/files* --exclude=.git* .

        Creating the database archive (from the Drupal document root and using drush although you can use mysqldump of course):

        mysite@myserver:~/mysite7-legacy$ drush sql-dump | gzip > /var/www/4pantheon/mysite.sql.gz

        Creating the files archive (from the files directory itself – note the ending dot):

        mysite@myserver:~/mysite7-legacy$ tar czvf /var/www/4pantheon/mysite_files.tgz .
        

So I ended up with the three files exposed in a web document root as URLs:

        • http://example.com/4pantheon/mysite_code.tgz
        • http://example.com/4pantheon/mysite.sql.gz
• http://example.com/4pantheon/mysite_files.tgz

I then entered these URLs into the Import manually form fields, with the URL option selected (default), and hit the red Import Site button.

        If the database is close to the 500 MB limit, that means it is actually several GB in size untarred or unzipped. So it could be quite a few minutes of one server talking to the other and then Pantheon unzipping and stuffing the sql into the database.

Now, you can pack quite a few GB of database into a zip or gzip file, and clearing cache (or even truncating cache tables) prior to creating the file will also significantly reduce its size. Not so much for GB of files folder assets, however. Anyway, the good news is that you can create the site with just the codebase and then, once you obtain its ssh credentials, you can use alternative methods for database and files tarball uploads of unlimited size.

        I'm going to repeat that:

        The good news is that you can create the site with just the codebase and then once you obtain its ssh credentials, you can use alternative methods for database and files tarballs of unlimited size.

        Here's how it's done.

        Highly irregular import

        Now for the fun part. What if my database file is bigger than 500 MB, even zipped or g'zipped? What if my files folder is GB's in size and of course zipping doesn't really help anyway? Let's see about a fun way to take care of the files folder first.

        Files

Turns out you can just omit the files folder by leaving the Import manually Files archive field blank altogether. Then, once the site has been created, we can SFTP or rsync the files in directly.

        rsync is really cool. It's one of those really flexible command-line Linux utilities that just works, and saves an enormous amount of time and bandwidth too.

Based on Reference 2, the well-documented support doc rsync and SFTP, here's what I did to upload almost 3GB of user files to my new Pantheon site with rsync:

        • Added my public key from my Pantheon dashboard

          • Click Add key button

          • Paste in public key

          • Click Add Key button

        • I then went to my site dashboard by clicking on my site home page image and clicked on Connection info and obtained the following info:

        Git

        SSH clone URL:

        ssh://codeserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111@codeserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in:2222/~/repository.git xfrmlegacy
        

        Database

        Command Line

        mysql -u pantheon -pverylongpantheonpassword -h dbserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in -P 12801 pantheon
        Host: 
        dbserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
        Username:
        pantheon
        Password:
        verylongpantheonpassword
        Port:
        12801
        DB Name:
        pantheon
        

        SFTP

        Command Line
        sftp -o Port=2222 dev.n1nn1111-1n1n-n11n-1n11n1n11111@appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
        Host:
        appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in
        Username:
        dev.n1nn1111-1n1n-n11n-1n11n1n11111
        Password:
        Use your dashboard password
        Port:
        2222

        So, I grabbed this info and “kept it in my records”.

Then I simply changed directories into the parent directory containing my files directory on the copy of the Drupal site running on my server, and shunted my files over to my new Pantheon site via rsync with the following commands:

        mysite@myserver:~/mysite7-legacy$ export ENV=dev
        mysite@myserver:~/mysite7-legacy$ export SITE=n1nn1111-1n1n-n11n-1n11n1n11111
        mysite@myserver:~/mysite7-legacy$ rsync -rlvz --size-only --ipv4 --progress -e 'ssh -p 2222' files/*  $ENV.$SITE@appserver.$ENV.$SITE.drush.in:files/
        The authenticity of host '[appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in]:2222 ([166.78.242.215]:2222)' can't be established.
        RSA key fingerprint is b5:ea:23:eb:7b:7b:0d:17:c7:13:47:92:ea:70:c1:b5.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '[appserver.dev.n1nn1111-1n1n-n11n-1n11n1n11111.drush.in]:2222,[166.78.242.215]:2222' (RSA) to the list of known hosts.
        sending incremental file list
        ... 

A while later, all my files (several GB!) were placed in ./sites/default/files on Pantheon! Cool. Yes.

        Database

Turns out you can also just leave the Import manually Database archive field blank. Then, once the site is created, you can use best-practice remote database tools to deploy a database of any size. In my case I just shaved the database down by truncating cache tables, etc., so it fit within the 500 MB size limit as a gzipped file.

        See Reference 3.

        A little help from my friends

        Whenever you hit Support and raise a ticket on Pantheon, you get a response really quickly, like in a few minutes. Just sayin'. So I did all this with more than a little help from my friends.

One example was that the legacy Drupal 6 site had its files directory not in ./sites/default/files, but in a ./files directory just off the Drupal document root. Support clued me in, in just a few minutes (see Reference 4):

        “If you are importing a site which has files in another location (e.g. "/files") you will need to move the files into the standard location, and add, commit and push a symlink from that location to the new location via git:

        $ ln -s ./sites/default/files ./files
        $ git add files
        $ git commit files -m "adding legacy files location symlink"
        $ git push origin master

Your legacy file paths should now work, and your files will be stored in our cloud files location!”

I was told to be sure to make it a relative symlink, like the example, and not an absolute system path.

        References

1. Importing an existing Drupal site to Pantheon

2. rsync and SFTP

3. Accessing MySQL databases

4. Non-standard files locations

5. Hire us to do this and other stuff for you

6. Even better, hire us to mentor you on how to do it and other stuff yourself


                Feb 02 2014
                Feb 02

I've been involved quite a lot lately in helping out with Panopoly, and it has just reached a 1.1 release thanks to great work by @dsnopek, @mrfelton and @populist. Panopoly builds heavily on Panels, and I saw that they are getting close to releasing a 3.4 version.

@japerry has started to maintain Panels, and he is working hard moving Panels forward so everybody else can benefit from this great module.

However, it is a daunting task. As of writing, there are 632 open issues in the Panels issue queue. You really do not have to be a coder to help sort out the issues. There are a lot of tasks you can easily do, so @japerry can focus on the important stuff.

                Jan 31 2014
                Jan 31

The upcoming Drupal 8 includes many changes large and small that will improve the lives of site builders, site owners, and developers. In a series we're calling "D8FTW," we look at some of these improvements in more detail, including and especially the non-obvious ones.

                Breadcrumbs have long been the bane of every Drupal developer's existence. In simple cases, they work fine out of the box. Once you get even a little complex, though, they get quite unwieldy.

                That's primarily because Drupal 7 and earlier don't have a breadcrumb system. They just have an effectively-global value that modules can set from "anywhere," and some default logic that tries to make a best-guess based on the menu system if not otherwise specified. That best guess, however, is frequently not enough and letting multiple modules or themes specify a breadcrumb "anywhere" is a recipe for strange race conditions. Contrib birthed a number of assorted tools to try to make breadcrumbs better but none of them really took over, because the core system just wasn't up to the task.

                Enter Drupal 8. In Drupal 8, breadcrumbs have been rewritten from the ground up to use the new system's architecture and style. In fact, breadcrumbs are now an exemplar of a number of "new ways" in Drupal 8. The result is the first version of Drupal where we can proudly say "Hooray, breadcrumbs rock!"

                More power to the admin

                There are two key changes to how breadcrumbs work in Drupal 8. The first is how they're placed. In Drupal 7 and earlier, there was a magic $breadcrumb variable in the page template. As a stray variable, it didn't really obey any rules about placement, visibility, caching, or anything else. That made sense when there were 100 modules and a slightly fancy blog was the typical Drupal use case. In a modern enterprise-ready CMS, though, having lots of special-case exceptions like that hurts the overall system.

                In Drupal 8, breadcrumbs are an ordinary block. That’s it. Site administrators can place that block in any region they'd like, control visibility of it, even put it on the page multiple times right from the UI. (The new Blocks API makes that task easy; more on that another time.) And any new functionality added to blocks, either by core or contrib, will apply equally well to the breadcrumb block as to anything else. Breadcrumbs are no longer a unique and special snowflake.

                More predictability to the developer

The second change is more directly focused on developers. Gone are the twin drupal_set_breadcrumb() and drupal_get_breadcrumb() functions that acted as a wrapper around a global variable. Instead, breadcrumbs are powered by a chained negotiated service.

                A chained negotiated whosawhatsis? Let's define a few new terms, each of which introduces a crucial change in Drupal 8. A service is simply an object that does something useful for client code and does so in an entirely stateless fashion. That is, calling it once or calling it a dozen times with the same input will always yield the same result. Services are hugely important in Drupal 8. Whenever possible, logic in a modern system like Drupal 8 should be encapsulated into services rather than simply inlined into application code somewhere else. If a service requires another service, then that dependency should be passed to it in its constructor and saved rather than manually created on the fly. Generally, only a single instance of a service will exist throughout the request but it's not hard-coded to that.

                A negotiated service is a service where the code that is responsible for doing whatever needs to be done could vary. You call one service and ask it to do something, and that service will, in turn, figure out some other service to pass the request along to rather than handling it itself. That's an extremely powerful technique because the whole "figuring out" process is completely hidden from you, the developer. To someone writing a module, whether there's one object or 50 responsible for determining breadcrumbs is entirely irrelevant. They all look the same from the caller’s point of view.

                The simplest and most common "figuring out" mechanism is a pattern called Chain of Responsibility. In short, the system has a series of objects that could handle something, and some master service just asks each one, in turn, "Hey, you got this?" until one says yes, then stops. It's up to each object to decide in what circumstances it cares.

Breadcrumbs in Drupal 8 implement exactly this pattern. The breadcrumb block depends on the breadcrumb_manager service, which by default is an object of the BreadcrumbManager class. That object is simply a wrapper around many objects that implement BreadcrumbBuilderInterface, which it also implements itself. When the breadcrumb block calls $breadcrumb_manager->build(), that object simply forwards the request on to one of the other breadcrumb builders it knows about, including those you, as a module developer, provide.
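
To make that concrete, here is a simplified sketch of the dispatch logic. This is illustrative only, not core's actual BreadcrumbManager code; the class name and the addBuilder() method are invented for the example:

<?php

use Drupal\Core\Breadcrumb\BreadcrumbBuilderInterface;

class SketchBreadcrumbManager implements BreadcrumbBuilderInterface {

  // Builders grouped by priority: $builders[$priority][] = $builder.
  protected $builders = array();

  public function addBuilder(BreadcrumbBuilderInterface $builder, $priority) {
    $this->builders[$priority][] = $builder;
  }

  public function applies(array $attributes) {
    // The manager itself always applies; its builders decide individually.
    return TRUE;
  }

  public function build(array $attributes) {
    // Highest priority first.
    krsort($this->builders);
    foreach ($this->builders as $group) {
      foreach ($group as $builder) {
        // The first builder to say "I've got this" wins; the rest are skipped.
        if ($builder->applies($attributes)) {
          return $builder->build($attributes);
        }
      }
    }
    // No builder claimed the request: no breadcrumb.
    return array();
  }

}
?>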

                Core ships with five such builders out of the box. One is a default that will build a breadcrumb off of the path and always runs last. Then there are four specialty builders for forum nodes, taxonomy term entity pages, stand-alone comment pages, and book pages. Core does not currently ship with one that uses the menu tree — as was the case in Drupal 7 — because the menu system is still in flux and calculating that was quite difficult. That could certainly be re-added in contrib or later in core, however.

                Let's try it!

                Let's add our own new builder that will make all "News" nodes appear as breadcrumb children of a View we've created at /news. Although all we need to do is implement the BreadcrumbBuilderInterface, it's often easier to start from the BreadcrumbBuilderBase utility class. (Side note: This may turn into one or more traits before 8.0 is released.) We'll add a class to our module like so:

<?php
// mymodule/lib/Drupal/mymodule/NewsBreadcrumbBuilder.php

namespace Drupal\mymodule;

use Drupal\Core\Breadcrumb\BreadcrumbBuilderBase;

class NewsBreadcrumbBuilder extends BreadcrumbBuilderBase {

  /**
   * {@inheritdoc}
   */
  public function applies(array $attributes) {
    // Only claim the breadcrumb for node pages showing a "news" node.
    if ($attributes['_route'] == 'node_page') {
      return $attributes['node']->bundle() == 'news';
    }
  }

  /**
   * {@inheritdoc}
   */
  public function build(array $attributes) {
    $breadcrumb[] = $this->l($this->t('Home'), NULL);
    $breadcrumb[] = $this->l($this->t('News'), 'news');
    return $breadcrumb;
  }

}
?>

                Two methods, that's it! In the applies() method, we are passed an array of values about the current request. In our case, we know that this builder only cares about showing the node page, and only when the node being shown is of type "news". So we return TRUE if that's the case, indicating that our build() method should be called, or FALSE to say "ignore me!"

                The second method, then, just builds the breadcrumb array however we feel like. In this case we're just going to hard code a few links but we could use whatever logic we want, safe in the knowledge that our code, and only our code, will be in control of the breadcrumb on this request. A few important things to note:

                • The $this->l() and $this->t() methods are provided by the base class, and function essentially the same as their old procedural counterparts but are injectable; we'll discuss what that means in more detail in a later installment.
                • The breadcrumb does not include the name of the page we're currently viewing. The theme system is responsible for adding that (or not).

                Now we need to tell the system about our class. To do that, we define a new service (remember those?) referencing our new class. We'll do that in our *.services.yml file, which exists for exactly this purpose:

                # mymodule.services.yml
                services:
                  mymodule.breadcrumb:
                    class: Drupal\mymodule\NewsBreadcrumbBuilder
                    tags:
                      - { name: breadcrumb_builder, priority: 100 }

Similar to an "info hook" in previous Drupal versions, we're defining a service named mymodule.breadcrumb. It will be an instance of our breadcrumb class. If necessary, we could pass arguments to our class's constructor as well. Importantly, though, we also tag the service. Tagged services are a feature of the Symfony DependencyInjection component specifically and tell the system to automatically connect our builder to the breadcrumb manager. The priority specifies in what order various builders should be called, highest first. In case two applies() methods might both return TRUE, whichever builder has the higher priority will be used and the other ignored.

                That's a wrap

                And that's it. One simple class, a few lines of YAML, and we've slotted our new breadcrumb rule into the system. What have we gained over the old system?

                • There's no more mystery about who called drupal_set_breadcrumb() when, and who overrode who. It's all clear and predictable (and documented, if you look at the priorities). If you want to know where a given breadcrumb is coming from, just look for any services tagged breadcrumb_builder or classes that implement BreadcrumbBuilderInterface. A BreadcrumbBuilder object is the only place it could come from.
                • All of the breadcrumb logic is in classes that are easily unit testable with PHPUnit.
                • Breadcrumbs are no longer a special snowflake for front-end developers who can now treat them the exact same as any other block. That includes disabling them entirely, or selectively.
                • The builders are very extensible. Modules can very easily register builders that rely on user customization or other services to build very robust breadcrumb management tools without bumping into any existing code or assumptions.

                The chain-of-responsibility pattern is used in a number of places in Drupal 8, including the process for determining the active theme as well as user authentication, among others. All work in essentially the same way. It's a good approach any time different systems may want to be responsible for a task in different situations, but only one will be responsible at a time. We'll likely see more examples in both core and contrib.

                Finally, because everything is a service, it is possible for a site-specific module to completely disable other modules' breadcrumb logic without hacking them. In fact, you could take over the entire breadcrumb system completely on your site and become the One True Breadcrumb(tm) without so much as touching a core file. But we'll discuss how to do that in our next installment.

                Any other non-obvious wins in Drupal 8 that deserve a mention? Let us know and we'll add it to the list.

                xjm
                Jan 31 2014
                Jan 31

                At DrupalCon Prague, we decided not to provide a Drupal 7 to Drupal 8 upgrade path using the database update system, and to instead provide data migration using a Drupal 8 data migration API based on the Migrate module. As of today, Drupal 7 sites can no longer be upgraded to Drupal 8 with update.php, and all implementations of hook_update_N() have been removed from Drupal 8 core.

                Going forward, hook_update_N() should only be included to provide 8.x to 8.x updates, once the 8.x to 8.x upgrade path is supported.

                If your patch introduces a data model change that would previously have required a hook_update_N() implementation, consider instead whether a new or changed data migration is needed. Migration works by calling APIs, so most changes (for example entities) are covered already. But if your data change is not covered by an API (like changing a raw configuration object name or key) then you need to:

                1. Document the data model change in the issue and reference the core issue that introduces it under the "related issues" section.
                2. Update the summary of your main issue to indicate that the data model change has a corresponding Migrate issue.

                If you are unsure whether new migration code is needed, file the issue anyway, and Migrate maintainers will review it and close it if it is not needed.

                Jan 31 2014
                Jan 31

Replication is a wonderful thing for your clients. Having a 'hot spare' of their database(s) for redundancy, or being able to off-load read operations from the main database to increase performance, gives your client peace of mind about their data and application. I won't go into setting up MySQL Replication; there are more than a few guides on that already out there (here's the official documentation). Once you do have Replication running, you need to make sure that it remains running, reliably, all the time. How best to accomplish this?

                The Way Monitoring Had Been

The typical method is to use SHOW SLAVE STATUS to look at information about the setup.

                mysql> SHOW SLAVE STATUS\G
                *************************** 1. row ***************************
                               Slave_IO_State: Waiting for master to send event
                                  Master_Host: stg04-cf.copperfroghosting.net
                                  Master_User: root
                                  Master_Port: 3306
                                Connect_Retry: 60
                              Master_Log_File: mysql-bin.000006
                          Read_Master_Log_Pos: 106
                               Relay_Log_File: stg06-cf-relay-bin.000002
                                Relay_Log_Pos: 251
                        Relay_Master_Log_File: mysql-bin.000006
                             Slave_IO_Running: Yes
                            Slave_SQL_Running: Yes
                              Replicate_Do_DB:
                          Replicate_Ignore_DB:
                           Replicate_Do_Table:
                       Replicate_Ignore_Table:
                      Replicate_Wild_Do_Table:
                  Replicate_Wild_Ignore_Table:
                                   Last_Errno: 0
                                   Last_Error:
                                 Skip_Counter: 0
                          Exec_Master_Log_Pos: 106
                              Relay_Log_Space: 409
                              Until_Condition: None
                               Until_Log_File:
                                Until_Log_Pos: 0
                           Master_SSL_Allowed: No
                           Master_SSL_CA_File:
                           Master_SSL_CA_Path:
                              Master_SSL_Cert:
                            Master_SSL_Cipher:
                               Master_SSL_Key:
                        Seconds_Behind_Master: 0
                Master_SSL_Verify_Server_Cert: No
                                Last_IO_Errno: 0
                                Last_IO_Error:
                               Last_SQL_Errno: 0
                               Last_SQL_Error:
                1 row in set (0.00 sec)

                There are a few key pieces of information provided here.

                • Slave_IO_Running tells us if the Slave is able to connect to the Master.
                • Slave_SQL_Running indicates if data received from the Master is being processed.
• Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, and Last_SQL_Error are all pretty much what they say: the last error number and error from the IO or SQL threads.
• Seconds_Behind_Master shows the difference between the last timestamp read in the binlogs and the current time. This is important to understand: it does not directly report the delay between when information is updated/inserted on the master and when it is recorded on the slave. A slow network can artificially inflate this number, as can long-running queries or blocking/locking operations.

Until recently, we had been relying on Seconds_Behind_Master to tell us if Replication was working, and if it was behind the Master by any appreciable amount. And, of course, we found ourselves in a perfect-storm situation where Replication had silently failed. Data was being sent over to the Slave and was being read by the IO thread, but even though the SQL thread was reporting no errors, the data was not inserted into the Slave. Because the binlogs were still being read, Seconds_Behind_Master was reporting 0.

                Solving The Problem

                So how do we solve the problem presented by Seconds_Behind_Master not being 100% reliable? By relying on the replication itself to give us the data. The Percona Toolkit has a couple of very handy scripts for this very purpose. On the Master, we now use pt-heartbeat to insert data.

                mysql> select * from heartbeat\G
                *************************** 1. row ***************************
                                   ts: 2013-12-13T14:50:06.001550
                            server_id: 1
                                 file: mysql-bin.000009
                             position: 1639186
                relay_master_log_file: NULL
                  exec_master_log_pos: NULL
                1 row in set (0.00 sec)

The relay_master_log_file and exec_master_log_pos columns will be NULL if you are using row-based replication.

pt-heartbeat will update this row at an interval you specify, down to 0.1 seconds. Then, we read this same data from the Slave and can calculate the actual time delay from when the data was inserted to when it became available to read. This can be accomplished using pt-heartbeat with its --monitor or --check switches; Percona also provides a set of Nagios plugins for monitoring MySQL. Metal Toad employs the pmp-check-mysql-replication-delay plugin. We also considered using pmp-check-pt-table-checksum to verify the integrity of a few selected tables, but that requires statement-based replication, which is not possible for the applications we host.

                So, don't rely on MySQL's own data! Use Replication to verify Replication is still working, and have a system in place to notify you the moment there is an issue.

                If you want to test all the pieces, pt-slave-delay can be used to artificially force the Slave to lag behind the master.

                Jan 31 2014
                Jan 31

                Overview

The single biggest reason that the Drupal Association is such a great place to work is the Drupal community. You all dedicated your blood, sweat, and tears to make Drupal amazing, and that includes work on Drupal.org, our community’s home. As the organization charged with maintaining Drupal.org, we’ve relied on a pastiche of volunteer support, contractors (at various rate scales), and staff. One of the side effects is that there is a lot of confusion: what role should staff play? When do we hire contractors? When is it OK to ask someone to volunteer their time vs. pay them for it?

At the Association, we are going to spend a fair amount of time digging into these questions over the next few months. I want to kick that conversation off today with a request for feedback as we start to develop a Procurement Policy that the Board will vote to adopt. The Procurement Policy specifically addresses paid vs. volunteer work, and will not take into consideration the third part of this conversation: staff. We’ll tackle that at another time.

                Please note that this is simply a draft, to get feedback, so none of this is considered final.

                Is this paid work, or volunteer work?

In thinking about how we might create a Procurement Policy, we decided that there are probably no hard and fast rules for this conversation. Rather, we drafted some indicators: if, while considering whether work should be paid or volunteer, we can tick off several or most of these indicators, the answer leans towards hiring a contractor rather than looking for a volunteer.

                For paid vs. volunteer work, the indicators we identified are:

                • Responsiveness/Urgency: If an issue comes up that is urgent in nature or requires extreme responsiveness, we may consider a contractor for the role.
                • Is it mission critical?: If the project is holding up Drupal core development or impacts Drupal.org severely, we may consider hiring a contractor rather than trying to find a volunteer.
                • Is it time bound?: If the project has a beginning, a middle, and an end, it can be well suited for a volunteer. Ongoing projects with no end date might be better handled by a contractor (if staff are not available). Think ongoing security updates or server maintenance. In many cases, we might take this kind of work to a contractor, but still use volunteers in an advisory capacity (think Jeremy and Testbots or Narayan and servers). It's just not fair to ask these guys to labor on endlessly and burn them out.
• Unique skill set: If a project requires a truly unique skill set - one not found broadly in the Drupal community - it may make more sense to hire a contractor or work on the project in-house. We DO want to broaden the knowledge of how D.O works in the community, so we will always seek new people to work with as volunteers, but especially if the project is urgent and the skill set is unique, we would seek a contractor to complete the work. It’s worth noting that we will include a requirement for documentation in all contracted work.
                • Does it increase the velocity of contributors? If the project makes it easier for volunteers to contribute work to the project (core, D.O, or otherwise), then it may make sense just to pay for the work to ensure it gets done and makes everyone's life easier. To be clear - we are not suggesting we would hire people to write core code - but we may hire people to write code for D.O that increases the velocity of people who are writing core code, i.e. making their lives easier.
                • Is there a volunteer waiting in the wings? Drupal and the Drupal Association value learning, and we want our community to grow and learn. Volunteer opportunities are a great way to do that. So, if we have a volunteer waiting in the wings, we need to strongly consider that, or find ways to involve the volunteer in a contractor’s work so that everybody wins.

                If a project comes up that can check several of these boxes, we'd likely hire a contractor. This is the framework we would use, but we won't apply it rigidly.

                In Kind Trades

                For in-kind trades, we are looking to ensure that we are doing things with transparency and that we are filling actual needs here at the Association. Here, things are a little more rule-bound:

                • We have a need: Lots of folks offer up products and services for the Association to use free of charge. However, we don't need a lot of them at the moment they are offered (though we may need them in the future. Come back and talk to us in the future!). We won't conduct an in-kind trade for a product or service that does not help us meet our mission.
                • We can use it: There are lots of things we "need" at the Association that we simply don't have the capacity to use well. If we don't have the staff or the plan to use the tool or service, we will not conduct the trade.
• Public bidding process: As with our procurement policy, for any tool or service with a value of $25,000 or greater, we will conduct a public bidding process, making it clear that we are requesting an in-kind trade. The public bidding process gives other companies a chance to participate. The language would align with our current purchasing policy and be worded like this: Trades or gifts of services and/or products valued at $25,000 or more must demonstrate that a competitive bidding process has been undertaken. The Association must demonstrate that it has requested quotes from and evaluated at least three different vendors, to demonstrate that value is being received for the in-kind trade made and that the best possible price point has been achieved. Bids cannot be broken up to avoid the $25,000 mark. For needs less than $25,000, we will not need to conduct a public bidding process, but may choose to do so anyhow.
                • Recording of In-Kind Income: All goods and services received and given as part of the in-kind trade will be recorded on our books as such, and will be made visible on our public financial statements. All trades will be documented with a Letter of Agreement that establishes the value of the product and the traded item for IRS records.

                What Next?

                Now we need to know what you think. Please share your comments with us so we can explore this issue more and come up with a policy that makes sense. Thanks in advance for your questions and ideas!

                Flickr photo of DrupalCon Munich volunteers: pdjohnson

                Jan 31 2014
                Jan 31

                The monthly Drupal core bug fix release window is scheduled for this Wednesday. However, the last Drupal 7 bug fix release was only a month ago, and I also won't be available next week to coordinate a release. As a result, there won't be a Drupal 7.x bug fix release in February.

                Upcoming release windows include:

                • Wednesday, February 19 (security release window)
                • Wednesday, March 5 (bug fix release window)

                For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

                xjm
                Jan 31 2014
                Jan 31

                Starting February 14, issues that require API change records must have these change records written before patches are committed. This is Drupal 8 core's valentine to contributed modules. :)

                What issues are affected?

                Any Drupal core issues that introduce backwards-compatibility-breaking API changes are required to have change records documenting the change. Up until now, these change records were created after the issues were committed. Going forward, the change records need to be written and reviewed before the issue is marked RTBC.*

                * Note that in rare cases, core maintainers may allow certain critical patches to go in before the change record is written, for example, in the case of a critical bug, or a high-impact issue that is blocking other work, but please don't count on that. ;)

                How does the new process work?

                1. Follow the normal development process while the patch is being worked on.
                2. Make sure the API change tag is added to issues that break backwards compatibility. (In general, API changes should be documented in the issue summary.)
                3. Once you get the API change approved by a core maintainer, the Needs change record tag can be added to the issue. (Note that the previous tag "Needs change notification" is no longer used.)
4. Create a change record with the Published checkbox unchecked (the default option), and then remove the "Needs change record" tag from the issue. (All draft change records can be found on the draft change record list.)
                5. In order for the issue to be marked RTBC and committed by a core maintainer, a draft change record documenting the approved API changes is required.
                6. Once the patch for the issue is committed, the core maintainer will simply mark the issue fixed (like any regular issue). The "Published" checkbox can then be checked to make the change record appear in the main Change record list.

                Why are we making this change?

                As we complete Drupal 8 APIs and move toward the first Drupal 8 beta, it's increasingly important that our API documentation is accurate, including our API change records. With the previous process, change records have gone unwritten for months -- 24 change records are still outstanding. Furthermore, the previous process (wherein the issue title, status, priority, category, and tags were all changed) was also convoluted and error-prone, and interfered with accurate core metrics.

                Sounds great! How can I help?

We need your help to get both outstanding and upcoming change records written so that core and contrib developers can use this critical documentation. Help us stabilize our APIs by writing the outstanding change records and by creating draft change records for issues tagged "Needs change record".

                Jan 31 2014
                Jan 31

There are a couple of scenarios we see on pretty much any Drupal-powered website we work on. The first and foremost among those is often that our client wants to, you know, actually be able to easily manage their content. At the same time, we need to be able to fit their content into the information architecture and design of the site. When we're talking about entities, nodes, and taxonomy terms, it is pretty easy for content managers to go in and edit content. But what about little blocks of text on the home page, in the footer, or featured call-outs on various pages? With these kinds of features, it has been a constant struggle to ensure that they exist throughout the development process while still allowing the content itself to change as needed.

                Front-end example of the Pane Module at work.

Over the years we've tried many different Drupal modules and solutions, including Core blocks, Boxes, Beans, and Fieldable Panels Panes, but none of them quite worked right for both us and our clients. So we wrote our own module called Pane. The key features that we needed from our new module were:

• Exportability – allow it to be exported through CTools and Features; neither Core blocks, Beans, nor Fieldable Panels Panes are easily exported
                • Separation of configuration from content – allow the guarantee of existence and format but allow the content to vary; when Boxes are exported they export the content along with configuration
                • Integration with CTools/Panels – we're heavily invested into the CTools universe and needed our tool to work nicely with those; most of the modules above stem from the block system and don't play nicely with CTools
                • Internationalization – allow the content to vary based on the current language; Beans and Fieldable Panels Panes are entities and could presumably be translated, but there's a layer of complexity involved in that which makes it more challenging

What the Pane module allows a developer to do is create a Pane and embed it either on a CTools Page Manager page as a CTools Content Type or through the normal Block interface. From there, the Pane can be edited through the normal Panels or Block interface, either to edit HTML through a WYSIWYG or, if using the Pane Entity Reference plugin, to add and order references to various entity types, much like the Entity Reference module. Those referenced entities can be output using either a display mode or a View.

                Once the module is installed and the permissions configured, an admin can go to Administer -> Structure -> Panes and see a list of current Panes and add new ones.

                They can also edit Panes through the normal Panels interface or through the In Place Editor.

And then exporting can be done through the normal Features interface. The configuration and the data can be exported separately, and generally only the container is exported. But if you know that the content isn't going to change and needs to be locked down in Features, it can be included as well.

                This module makes a lot of sense both for developers and content administrators. Let us know what you think in the comments.

                jam
                Jan 31 2014
                Jan 31

A quick blast from the past this week from the first Drupal community event at which I recorded material for a podcast, which gives this particular event an extra sparkle in my memory. True to form for the Drupal community around the world, many of the people I met at this camp have become friends with whom I stay in touch, or whom I even get to see now and then at a DrupalCon or Drupal Camp somewhere. Community ftw!

                ---Original post from April, 2012---

I just got back from a fantastic weekend with a couple hundred members of the Bulgarian Drupal community at Drupal Camp Sofia 2012. There was a lot of Drupal going on all weekend; some great sessions, lots of hallway track and an evening with the best-dancing Drupal community I have met so far at a little place called Veseloto Selo. A seemingly calm taverna-style restaurant basically exploded from family-style eating to people dancing on the tables ... well, standing on the chairs at least.

                I wanted to introduce you to a few of the people that were there and some of the sounds of the weekend. This podcast features a bagpiper I encountered in a park in central Sofia, the birds outside my window and parts of conversations with Hristo Atanasov, Kaloyan Petrov, Martin Martinov, Svilen Sabev, and Mario Peshev.

                Stand out quotes:

                • "Because of Drupal, I was able to sustain a business model as a freelancer."
                • "Open source teaches you to share without losing anything. You share; you win and the other person also wins. It's contributing, it's sharing, it's like an open world after all! It's kind of a way of living."
                • "I have a module that runs on 300 sites. It makes me really thankful for all the other modules that I use."

                Thanks for listening! And thank you Drupal Bulgarians for the hospitality!

                Jan 30 2014
                Jan 30

                Master

                • What is the Master module?
                • Does this integrate with Features?
                • How does this work in a Dev-Stage-Live configuration?
                • How do you go about defining your Master modules?
• I was looking at your “Introducing Master” blog post, and noticed that any module that is not declared as either a master module or a dependency of a master module is considered redundant and won’t be active on the site. Is that right? And what’s the reason for such tight control?
• I saw that there was a D6 version, and that it had configuration in the UI. But it looks like the only way to use Master in D7 is with Drush. Is that right?
                  • Are there any plans to introduce a UI in D7?

                Use Cases

                • There are a few commands on the project page. “drush master-status”, “drush master-execute”, “drush master-removables” and “drush master-absent”. Can you explain what those are and what they do?
                  • drush master-status
                  • drush master-execute
                  • drush master-removables
                  • drush master-absent
                • What are scopes?
                • How can this be used to find modules that are enabled, but not in use?

                Comments on your blog post

                • I'm curious, a lot of this functionality is similar to that of install profiles and features. Also it is very development oriented. Once the project ages and people forget to add dependencies to code, wouldn't the ensure command be a recipe for disaster? Maybe I need to try it to understand the exact purpose :)
                  • If you've got a different workflow you might be interested in using the --no-disable option.

                NodeSquirrel Ad

                Have you heard of/used NodeSquirrel?
                Use "StartToGrow" it's a 12-month free upgrade from the Start plan to the Grow plan. So, using it means that the Grow plan will cost $5/month for the first year instead of $10. (10 GB storage on up to 5 sites)

                nk
                Jan 29 2014
                Jan 29

                As you might've heard, migrate is my last core work -- once migrate is done, I am finished with core development. Meanwhile, I will not participate in any other core issue except those blocking migrate (and the entity query conversion meta). As I have been doing core work for a long time, I am sure there are a few questions. Let me quickly go over them:

                Is this a sign of Drupal in crisis?

Absolutely not. Drupal is stronger than ever -- my disagreements, and the unacceptable manner in which I have expressed them, have drained contributors and driven some away. My contributions are no longer useful enough to counter this, by far. I have not been working on beta blockers lately, so the release likely won't slip because of this.

                Are you gone completely?

                I have taken a break from core work before but by now my vision of Drupal differs from others' not just in technical matters but also in decision-making processes and the future of the community. So yes, I will not participate in core development for a long while, likely forever. However, I will still attend DrupalSouth, Dev Days at Szeged and the camp in New York -- these are necessary to finish Migrate. I will come to DrupalCon Amsterdam -- however, DrupalCon Austin doesn't seem likely at this point. I will still use Drupal. Likely maintain a contrib module or two. I'll be on IRC, on smaller channels, like #drupal-migrate in case you need to ask something. And my email, the contact tab, etc. won't stop working.

                What about stepping down considerately?

                I have dropped all elevated roles I had on drupal.org, groups.drupal.org and qa.drupal.org. The Drupal Code of Conduct asks that those stepping down "take the proper steps to ensure that others can pick up where they left off" -- Others have been working on everything I've participated in in D8. There's no issue that is bottlenecked by me, and I don't foresee any disruption to the project. And again, I am not dropping off the face of the Earth -- please, ask away as necessary.

                tom
                Jan 29 2014
                Jan 29

After years of developing all types of web solutions, Tom made the strategic decision to focus his efforts on making Drupal a better platform. In 2010, he led the successful exit of his company KirkDesigns through a joint venture with Web at Ease. That event formed SystemSeed, which he co-founded in 2010 with Parrish McIntyre.

                Jan 29 2014
                Jan 29
                Alex McCabe's picture

                Alex McCabe  | Drupal Developer

                Jan

                29

                2014

                Configuring Views and Date is a simple enough task for any reasonably experienced site builder, but I’ve found that configuring Views to correctly handle and display repeating events is another matter. To get us started, I’ve installed the following modules: 

• Views
  • Views UI
• Date
  • Date Repeat Field
  • Date Views

                I created a content type called “Event”. The next step is to add and configure a Date field. While configuring, make sure you set “Repeating Date” to “Yes”. 

The rest of the settings are more or less up to you depending on your needs, but if you’re not sure, the defaults should be fine. You’ll notice that the “Number of Values” setting is grayed out. This is because repeating Date fields require Unlimited values.

                Now that your Date field has been configured, create a piece of test content. This will come in handy while you’re building your View. Any sort of repeating event will do. For this example, I’ll be using an event that repeats on the second Thursday of each month for the next three years.

                Now that we have some content to work with, we can create our view. This view will have one Page type display, showing content of the Event type as an unformatted list of fields. Generally, the first thing you will want to do is add a filter to the view to prevent it from showing past events. If you do want past events to be shown, skip this section, otherwise, add a new filter for the Date field on your Event content type. If you chose to use a Date field with both a Start and End date, you’ll want to select the Start date for the filter.

                Generally, I prefer Minute granularity, and you must select “Yes” for “Add multiple value identifier”. Next, set the filter itself to “greater than or equal to” and a relative time of “now”.

                Now your view will only display current and future events. With the addition of this filter, the single event in your preview should have changed to multiple events. The next step: adding and configuring the Date field itself.

                The settings for the Date field are fairly simple. In most cases, you’ll want to hide the repeat rule. However, you must uncheck “Display all values in same row” under “Multiple field settings”, or the view will not correctly display the dates.

The resulting view should be a perfectly serviceable list of your upcoming events. However, there is one optional extra step: for a given event, displaying only the next occurrence, not any past occurrences or more than one future occurrence. To do this, you’ll need to enable two settings under the Advanced tab on the right. The first is Aggregate, which is under the “Use aggregation” option.

                The second option we need to enable is Distinct, which is under the “Query settings” option:

                This, combined with the filter we made earlier, will cause your view to display only the next upcoming event for a recurring event series. And with that, you’re done: your view now displays recurring date fields correctly.

                Additional Resources

                Maps and Drupal 7 (Part 1) | Mediacurrent Blog Post

                Jan 29 2014
                Jan 29

                When a client doesn't have time for a glamorous, blue-sky project, what's a consultant to do?

                Soon after I started working at Lullabot, I got my first client, and like all clients this one had a problem. They were a university whose site was running on Drupal 6: 2500 page nodes filled with HTML. The layout was largely managed through the WYSIWYG, and the quality of the HTML was all over the place. They wanted to take this site and migrate it to Drupal 7, with a properly designed content model and responsive layout, possibly using Panels.

                In six months.

                With one developer on staff.

                Cautious optimism

                Despite the difficult demands and daunting schedule, I was excited! It is relatively rare to be approached by someone with a solid technical background who wants to do what is architecturally right for their organization. We started off trying to get a handle on what content was out there, and what the content types and fields might look like. As we progressed, it became apparent that each department was really doing their own thing, and any attempt to build a content model was going to require some discussion to get them all on the same page.

                At the same time, we were discussing issues around migration of the content, and it became apparent that getting these HTML blobs into fields was going to be a big problem. In some cases, if the HTML is tightly structured, you can automate this process by scraping the HTML and extracting the data. However in this case, with no consistency at all, turning this HTML into fielded data was going to be an almost completely manual task.

                Reality

                Only a couple weeks into the project I realized that the schedule was completely unrealistic for what they were attempting to do. So I sat down with the client and we started talking about their priorities. I knew something had to give, but you can't figure out what until you know what is most important. As we talked, it became apparent that in this case, the schedule was a 100% hard dependency - the new site needed to be launched in time for the start of the next school year. Not only that, people were reasonably happy using the site, with the exception of pain around media handling.

                Given this, I recommended that they simply migrate their existing architecture to Drupal 7. This would reduce the number of unknowns to a very small number (mostly related to individual module upgrades) and would give them a very basic migration path. In order to start getting some more structure around their layouts, they would start using Panelizer in some cases (like landing pages) which would give their editors more freedom to place blocks of content without having to hand code HTML. On top of that, we now also had time to address some of the problems around media handling with the addition of some modules that were new for Drupal 7, and a bit of custom code.

                Many devs would look at this solution and shake their heads. You've taken a site that was not much more than hand-coded HTML shoved into a CMS, and turned it into more of the same. What a waste, what a failure!

                I would respectfully disagree. As consultants, our job is not to make a site with the best possible architecture, but to make a site with the best possible architecture within the framework of the client's priorities. Knowing the kind of site this client wanted to build, I was a little reluctant to propose the solution I did, even though I knew it was the best of all the available solutions. While this client was disappointed that they couldn't build the site the way they wanted, they were also hugely relieved to have a plan that looked manageable and achievable. It allowed them to build the site in a way that enabled future upgrades as time permitted, but didn't force the investment immediately.

                Success

                What does a successful consulting project look like? It is a juggling act, and to some extent the rules are different for every one. One of the most important things that we as consultants can do, especially when we are devs or architects at heart, is to leave our own priorities at the door and focus on the client. What are their priorities? What are their pain points? What are their criteria for success? Taking the time to pull all of this data out of the client, and using it to craft a solution, is really the heart of our job, and for me personally, it is what gives me the most joy and satisfaction.

This is where the real puzzles are solved, where you can make the most of your experience, where you can take all the data you have and craft something the client didn't even know they wanted in the first place. Now you have a plan that makes sense and meets the client's goals, both spoken and unspoken. That, my friends, is what success looks like.

                Greg Dunlap

                Senior Drupal Architect

                Want Greg Dunlap to speak at your event? Contact us with the details and we’ll be in touch soon.

                joe
                Jan 28 2014
                Jan 28

                Forms are an essential part of any web application. They are the primary mechanism for collecting input from our users, and without them Drupal wouldn't be very useful. As such, they're also one of the first things people want to learn when they start learning Drupal. Forms are fundamental to creating Drupal modules, whether you're asking someone to leave a review of your video or giving an administrator the option to turn JavaScript aggregation off.

                Form basics

There are two key elements to crafting forms: the workflow a form goes through, including how Drupal locates the form to display on a page, handling validation when someone submits a form, and ultimately doing something with the collected data; and the definition of the form itself, in which you determine whether your form will have checkboxes, textfields, upload widgets, and/or any user-facing text.

                Form definition

The way that forms are defined in Drupal hasn't changed that much between Drupal 7 and Drupal 8, and I'm not going to go into too much detail here. Form definitions are still a Drupal render array made up of Form API elements that are ultimately parsed down to the HTML that is presented to the browser. The biggest change to crafting forms is the addition of some new HTML5 elements that can be defined in the Form API array, like tel, number, and date.

                Form workflow

Truth be told, form workflow hasn't changed that much at a high level either. We still have the concepts of building, validation, and submission. And they're still all available for us to hook into by simply conforming to a specific pattern. It's really just the pattern that has changed. So let's take a look at that.

With the move to more modern PHP usage and Object Oriented Programming patterns in Drupal 8, we now have the concept of form objects defined by a form class. All form classes implement the FormInterface, which states that any form object should have getFormId, buildForm, validateForm, and submitForm methods. It turns out that this matches up nicely with the build, validate, and submit workflow. By conforming to this interface, we ensure that Drupal knows how to process each step of the workflow for the form in question, given any form object. Before we look at some sample code, however, let's talk just a little bit more about a typical form workflow.

When a user visits a URL on a site, /contact for example, Drupal needs to return the HTML representation of the required form so that it can be displayed in the user's browser. In order to get that form definition, Drupal loads the required form object. Then Drupal calls the buildForm() method on that object. This returns a Form API array that Drupal can turn into HTML. This HTML likely also includes a button that a user can click. Clicking the button generates an HTTP POST request to the URL defined as the action of the form. In Drupal, this is the same URL at which the form is displayed (i.e., /contact).

This time, however, when Drupal gets the request for /contact, it also notices that the request contains $_POST data. This means that the form being requested has actually just been submitted, and it should proceed to the next step in the workflow, which is validation. So Drupal instantiates our form object and calls the validateForm() method, which it knows is present because we're implementing the FormInterface. If the validation handler determines there are any errors in the data, it flags them, and Drupal halts processing. It displays the form to the user to get the errors fixed, and then it waits for the user to submit the form again before proceeding. If no errors are found, Drupal moves on to the submission step of the workflow by calling our form object's submitForm() method. Here we perform whatever logic is necessary with the data we collected in the form, like saving it to the database or a config file.

                Once you know how it works, the entire process is actually quite simple and beautiful. And it hasn't changed all that much, even since the Drupal 4.7 era. Many people love to hate it, but it's easy to argue that Form API is one of the strongest features in Drupal.

                Show me some code already!

                Ready to wire it all up? The first thing you'll need to do is create a route for your form. In our example, it looks like this:

                chad.settings:
                  path: '/admin/config/system/chad'
                  defaults:
                    _form: 'Drupal\chad\Form\SettingsForm'
                    _title: 'Chad Settings'
                  requirements:
                    _permission: 'administer site configuration'
                

                The only difference between this route and one that displays non-form content on a page is the _form key instead of the usual _content key. Here _form tells Drupal the location of the class that it should use when constructing our form object. Note that we simply specify the class name here and not the method, like SettingsForm::buildForm. Because we've defined this route as a form, Drupal will call buildForm whenever someone requests /admin/config/system/chad.

                Our form class then looks like the following and lives in lib/Drupal/chad/Form/SettingsForm.php

/**
 * @file
 * Contains \Drupal\chad\Form\SettingsForm.
 */
                
                namespace Drupal\chad\Form;
                
                use Drupal\Core\Form\ConfigFormBase;
                
                class SettingsForm extends ConfigFormBase {
                
                  /**
                   * {@inheritdoc}
                   */
                  public function getFormId() {
                    return 'chad_settings';
                  }
                
                  /**
                   * {@inheritdoc}
                   */
                  public function buildForm(array $form, array &$form_state) {
                
                    // Build our Form API array here.
                
                    return parent::buildForm($form, $form_state);
                  }
                
                  /**
                   * {@inheritdoc}
                   */
                  public function submitForm(array &$form, array &$form_state) {
                
                    // Handle submitted values in $form_state here.
                
                    return parent::submitForm($form, $form_state);
                  }
                
                }
                

Also note that we've opted to extend the Drupal\Core\Form\ConfigFormBase class, which provides some additional boilerplate code for system settings forms. There is also a Drupal\Core\Form\FormBase class, which is a great starting point for most forms because it handles injection of common dependencies. Nevertheless, anything that implements the FormInterface will work.
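
To make those placeholder comments concrete, here is a minimal sketch of what the two method bodies might contain for a settings form that stores a single text value. The chad.settings config name and welcome_message key are hypothetical, and the sketch assumes the alpha-era array-based $form_state and the mutable config objects that \Drupal::config() returned at the time:

  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, array &$form_state) {
    // Hypothetical config object and key, for illustration only.
    $config = \Drupal::config('chad.settings');
    $form['welcome_message'] = array(
      '#type' => 'textfield',
      '#title' => t('Welcome message'),
      '#default_value' => $config->get('welcome_message'),
    );
    return parent::buildForm($form, $form_state);
  }

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, array &$form_state) {
    // Persist the submitted value back to the (hypothetical) config object.
    \Drupal::config('chad.settings')
      ->set('welcome_message', $form_state['values']['welcome_message'])
      ->save();
    return parent::submitForm($form, $form_state);
  }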

                See the previous post in this series, Drupal 8: Writing a Hello World Module, for background on the code this video utilizes.

                Finally, watch the video to see it all wired together and working:

                Jan 28 2014
                Jan 28

I have been experimenting with the alpha release of Drupal 8, so I'm sharing some of my experiences to help you avoid the pitfalls I have encountered.

                First I would like to give credit to the two articles I used during the exercise:

                Hopefully this article will provide a third point-of-view to make your task easier.

                a) Create the file

                In D8 the location of files is very important. The field type must be located as follows:
                <module_name>/lib/Drupal/<module_name>/Plugin/field/field_type/<field_type_name>.php
                N.B. The field type name should be in CamelCase.

                b) Add Contains, namespace and use

In the newly created field type file, add a brief comment to explain what it contains:

                /**
                * @file
                * Contains \Drupal\<module_name>\Plugin\field\field_type\<field_type_name>.
                */

                N.B. The "Contains..." line should match the location and name of this file.

                Then add the namespace as follows:

                namespace Drupal\<module_name>\Plugin\field\field_type;

                N.B. It is vital that the namespace matches the location of the file otherwise it will not work.

                Then add the following uses:

                use Drupal\field\Plugin\Type\FieldType\ConfigFieldItemBase;

                This provides the class that the field item will extend.

                use Drupal\field\FieldInterface;

                This provides a variable type required within the field item class.

                c) Add field details annotation

                Annotations are an important part of Drupal 8 and must not be treated as simple comments! :o) The annotation should appear as follows:

                /**
                * Plugin implementation of the '<field_type_name>' field type.
                *
                * @FieldType(
                *   id = "<field_type_id>",
                *   label = @Translation("<field_type_label>"),
                *   description = @Translation("<field_type_description>"),
                *   default_widget = "<field_type_default_widget>",
                *   default_formatter = "<field_type_default_formatter>"
                * )
                */

                N.B. All text represented by a <placeholder> should be appropriately replaced according to requirements. The default_widget and default_formatter must match the ids of a widget and a formatter (see Part 2 of this article).

                d) Add field item class

                Create the field item class as follows:

                class <field_type_name> extends ConfigFieldItemBase {}

                N.B. The <field_type_name> must match the name of this file (case-sensitive).
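
As a concrete illustration (the fullname id and the fullname_default widget/formatter ids are hypothetical and would need matching plugin implementations), a field type storing the forename, surname, and age sub-fields used in the examples below might begin like this:

/**
 * Plugin implementation of the 'fullname' field type.
 *
 * @FieldType(
 *   id = "fullname",
 *   label = @Translation("Full name"),
 *   description = @Translation("Stores a forename, a surname, and an age."),
 *   default_widget = "fullname_default",
 *   default_formatter = "fullname_default"
 * )
 */
class FullName extends ConfigFieldItemBase {
  // schema(), isEmpty(), and getPropertyDefinitions() are
  // filled in as described in the sections that follow.
}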

                The class should contain the following:

                i. schema()

                The schema() function defines the sub-field(s) that make up the field item. Here is an example:

                  /**
                   * {@inheritdoc}
                   */
                  public static function schema(FieldInterface $field) {
                    return array(
                      'columns' => array(
                        'forename' => array(
                          'type' => 'varchar',
                          'length' => 256,
                          'not null' => TRUE,
                        ),
                        'surname' => array(
                          'type' => 'varchar',
                          'length' => 256,
                          'not null' => TRUE,
                        ),
                        'age' => array(
                          'type' => 'int',
                          'not null' => TRUE,
                        ),
                      ),
                    );
                  }

                ii. isEmpty()

                The isEmpty() function defines what constitutes an empty field item, e.g.

                  /**
                   * {@inheritdoc}
                   */
                  public function isEmpty() {
                    $value = $this->get('forename')->getValue();
                    return $value === NULL || $value === '';
                  }

                iii. getPropertyDefinitions()

                The getPropertyDefinitions() function defines the data types of the fields, e.g.

                  /**
                   * {@inheritdoc}
                   */
                  static $propertyDefinitions;
                  /**
                   * {@inheritdoc}
                   */
                  public function getPropertyDefinitions() {
                    if (!isset(static::$propertyDefinitions)) {
                      static::$propertyDefinitions['forename'] = array(
                        'type' => 'string',
                        'label' => t('Forename'),
                      );
                      static::$propertyDefinitions['surname'] = array(
                        'type' => 'string',
                        'label' => t('Surname'),
                      );
                      static::$propertyDefinitions['age'] = array(
                        'type' => 'integer',
                        'label' => t('Age'),
                      );
                    }
                    return static::$propertyDefinitions;
                  }

                Here is a simple example, similar to that described above.

                Continue to Part 2: Field widget to continue creating a custom field.

                Jan 28 2014
                Jan 28

On January 25th and 26th, for the first time in Sevilla, the local community took part in the Drupal Global Sprint Weekend, a worldwide event where people join together and contribute back to Drupal. We are very proud to have hosted this event, which filled our office with a group of 20 individuals willing to contribute and to mentor contributors.

Drupal Global Sprint Weekend Sevilla group photo

You can see the tasks we worked on by searching the issue queues for the tag D8SVQ, and all the issues of the global sprint using the tag SprintWeekend.

Not only did local Drupaleros join; some of the most brilliant Drupal devs in Spain traveled to Sevilla to join the fun and help with the mentoring. A big hug to all of you and all the participants; we hope you had a great time here and that you visit us again!

Sharing their passion for Drupal and the Drupal community there convinced some people to register for Drupal Developer Days in Szeged - they booked on site! You should do that too if you haven’t already!

We had Druplicon cookies. Thanks @jlbellido for baking them!

We want to thank the Spanish Drupal Association, which sponsored the event by providing a budget for snacks, food, and drinks to keep people focused and caffeinated while they worked. Thanks also to Forcontu, the leading company in Spanish-language Drupal training, which sponsored a Drupal 7 Expert book to be raffled among those who finished a patch on Saturday in their first contributing experience.

We really enjoyed the experience and are looking forward to hosting and participating in other sprints quite soon… Keep in touch!

                Jan 28 2014
                Jan 28

When preparing for a big event, it is our job to make sure the general public sees exactly what is expected, and with the help of Amazon Web Services (AWS) we did! All planning comes with a few standard-issue assessments/steps: identify need, identify options, and begin to build!

                Identifying Need:
Metal Toad Media has hosted the Emmys website for four years, and every year we have prepared for the Emmys' big events, the nominations (July 18, 2013) and the live event (September 22, 2013), by building out more servers to handle the large traffic increase. This past year, between nominations and the live event, Dsire was given the monumental task of rebuilding the Emmys website - read about their adventure here.

In September, we were notified that Dsire had finished the new site and new code base. The site would experience the same traffic spikes during the live event as seen during nominations, so we ran our load test to ensure the new site would perform as expected. Clearance to do the test came a week before the event, and the new requirements were much more CPU intensive than originally anticipated. During the test we found the CPU load ratio was 8:1 (server process requirements were 8x higher than what we had provisioned).

To give an idea of the traffic spike, before nominations the standard was a few thousand hits a day. During the live event, there were over 3.5 million page views and 220 million HTTP hits in a 3-hour period - an average of roughly 20,000 requests per second.

                Identifying Options
Metal Toad ensures 100% uptime of the website for the Emmys live event, so with test results expressing urgent need, and four days' time, we had three options:

                • Create a landing page that would not crash (a non-professional option that would contractually work)
• Get more servers (which would allow us to be prepared, but would cost tens of thousands of dollars in equipment that would not be utilized after the event, and the set-up would take far more than the four days we had until the event)
                • Or, host using AWS and build a custom cloud for the client’s needs. In this instance, AWS cloud offered several advantages:
                  • More flexibility to scale up and down during events
                  • Redundancy in multiple physical locations (availability zones)
                  • Avoids costs of a second move back to co-location

                Building a Custom Cloud



When we determined that constructing an AWS custom cloud for Emmys.com would be the best solution, we started where we lacked the most: processing power for the web app. Using the performance data from our before and after tests to calculate the number of CPUs we would need, we found that 11 web servers, each with 8 CPUs and 30GB of RAM, were necessary; these would be the backbone of the Drupal site, running PHP, Apache, GlusterFS, and Memcache.
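
As a back-of-the-envelope sketch of that calculation (only the 8:1 load ratio and the 8-CPU instance size come from our tests; the baseline CPU count here is a made-up illustration):

// Illustrative capacity math only; $old_cpus is a hypothetical baseline.
$load_ratio = 8;         // New code base needs ~8x the CPU (from load tests).
$old_cpus = 11;          // Hypothetical: CPUs provisioned for the old site.
$cpus_per_instance = 8;  // Each AWS web instance provides 8 CPUs.
$instances = (int) ceil($load_ratio * $old_cpus / $cpus_per_instance);
echo $instances;         // 11 instances with these example numbers.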

Since Drupal is a database-heavy application, we also needed to move the database server to AWS. Instead of spinning up more EC2 instances and managing this manually, we leveraged Amazon RDS. This provisioned the database service and a slave server in a separate availability zone; each was configured with 8 CPUs, 65GB of RAM, and 20,000 reserved IOPS.

Next, we needed a way to balance traffic among the webheads. To do this we used the AWS Elastic Load Balancer, which allows us to balance traffic between web servers; in the event that one fails, it is removed from circulation.

Lastly, we weren’t going to be able to build, configure, and maintain these servers by hand. So we leveraged our existing Puppet master and modified the classes to work with AWS, which allowed us to spin up the AWS instances in minutes instead of hours or days.

                The End Result
                As Vice President Tony put it: “After studying the stats from the Creative Arts show, we realized substantially more hardware was needed. In less than four days, our devops team built a new cloud stack, and migrated all three sites plus a new mobile domain, and the iPhone data feeds.”

                It was a great feeling to see the Emmy’s website survive the rush of the 65th annual live show. Metal Toad found a strong utilization for custom cloud building, and the general public got a reminder that if Neil Patrick Harris ever does Dancing with the Stars, no one stands a chance.

                Ben
                Jan 27 2014
                Jan 27

Certified to Rock, an automated grassroots answer to Drupal developer certification, has been out of date for 2 years. This is so bad that, for example, the system scores alexpott (Drupal 8 co-maintainer) as a 5 and vijaycs85 (one of the top Drupal 8 contributors) as a 1!

                Now, a 5 is not bad but compared to other Drupal co-maintainers you might think that Alex deserves a higher score. The problem is that the data that powers Certified to Rock (CTR) has not been "refreshed" recently so it doesn't know about all the awesome work Alex has been doing for Drupal. And while much of the system is automated, a refresh is not a single button push away.

                The team behind CTR cares very much about how out of date the data is but has done very little to change that. That's what this post is about and why you (if you have an opinion at all about CTR) should keep reading.

                CTR, for sale to a good owner

It was briefly announced a month ago, but we're serious: CTR is for sale. The CTR team (hereon referred to as "we") has been unable to give it the time, attention, and updates it needs to adequately serve its purpose.

                We very much still believe in the mission behind CTR and we want to find an owner (or owners) who at least somewhat align with that mission. Our theory is that a monetary exchange is a strong indicator that someone values CTR and will make it succeed. I'd like to explain a bit about what Certified to Rock is, why it exists, and hopefully make a strong case for why I hope its mission can continue with you.

                What is CTR and how does it work

                CTR gathers and scrapes public data about contributors and their contributions to the Drupal project and, using a private algorithm, distills that data down into a number between 1 and 11. CTR currently has a score for 82,000 people who have contributed to the Drupal project. CTR is an answer to the problem of how to certify talent for the Drupal project. Our answer is to make it somewhat easier to understand the public contributions to the project. We think this is a better method than a test (or tests) administered by a company. We've written more about this idea at http://certifiedtorock.com/criticisms-of-certification-programs.

                The metrics and scoring algorithm are private to help protect against gaming of the system. By keeping them private we hope to encourage people to contribute to the Drupal project in their own ways, not in whatever specific ways would increase their CTR score. Google keeps its PageRank algorithm private for similar reasons, and CTR's stance follows the same logic. We've written more about this at http://certifiedtorock.com/about-certified-to-rock-for-drupal and on our blog.

                Why you should believe in CTR

                Traditional certification for Drupal already exists; it's just not prevalent or well accepted. This "opportunity" will lead others to return to the problem space and try again. We believe that one of the best ways to measure and understand an individual's skill with Drupal is by encouraging them to participate and contribute, in the open, to the betterment of themselves and the Drupal project and community. The CTR of today is the beginning of that process, its first incarnation. It is not perfect, it doesn't take enough measurements (or take them often enough), and it could do a lot more to measure the contributions of non-developers: site builders, designers, and business owners, among others. And that's where you might come in.

                What's for sale

                • Drupal 7 site with contributed and custom modules and custom theme
                • The code to gather and score people
                • Current database dump from the site
                • The domain certifiedtorock.com and the @certifiedtorock twitter account
                • Development and maintenance documents
                • Source artwork (a mix of SVG, XCF, and PSD files)

                Why you should buy it

                • Fame, glory, satisfaction of intellectual curiosity
                • Identifying talent - good for hiring or referrals
                • Ads or premium listings

                Fame/glory/curiosity
                When we made CTR, we built it as an experiment to prove our ideas about finding an alternative to traditional OSS certification. At GVS (the original company behind CTR), a fair number of the people who hired us or referred work to us had a positive impression of CTR. It was also really intellectually stimulating to research how to build a ranking system like this and apply best practices from other communities and rankings to the Drupal world.

                Identifying talent
                One of the things we did prior to releasing a new set of results was look for the people whose scores had moved up or down the most. This was a form of QA, but it was also a great way to find that "new person" who is doing great work in the Drupal world but hasn't landed a dream job yet. Imagine CTR as a crystal ball, showing great future Drupal talent.
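
                That "biggest movers" pass is easy to picture: diff two score snapshots and surface the largest swings. A tiny illustrative sketch follows; the usernames are real, but the score values and field layout are made up, not real CTR output.

                # Illustrative sketch of the "biggest movers" QA pass: compare
                # two score snapshots and surface the people whose score
                # changed most. Score values here are invented.

                def biggest_movers(old_scores, new_scores, top_n=10):
                    deltas = {
                        uid: new_scores[uid] - old_scores.get(uid, 0)
                        for uid in new_scores
                    }
                    return sorted(deltas.items(), key=lambda kv: abs(kv[1]),
                                  reverse=True)[:top_n]

                old = {"alexpott": 5, "vijaycs85": 1, "someone": 7}
                new = {"alexpott": 9, "vijaycs85": 8, "someone": 7}
                print(biggest_movers(old, new, top_n=2))
                # -> [('vijaycs85', 7), ('alexpott', 4)]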

                Ads or premium listings
                The site still gets a fair amount of traffic even though it has essentially been retired for two years. When it was active it got thousands of visitors per month despite having very thin content; with more content (e.g. company-specific pages) it would get a lot more traffic. Based on our conversations and analysis of site traffic, we believe that many of the people who visit are in the market to hire a Drupal developer or themer. Those people are about to make a purchase decision and are therefore extremely valuable to advertisers.

                If you are a Drupal shop, think about your advertising budget for a minute. Think about your community reputation. Think about the pain in your hiring process. Think about your repeatable income from side projects. Consider that CTR can improve all four of those things.

                What to offer

                It should be clear that our primary goal in selling CTR is to prolong and improve its mission: to better the Drupal project by encouraging open and honest contributions, and to make it easier to understand the different contributions to Drupal. Offers will be considered almost entirely on how you plan to address those goals. Finally, we would like to recoup our hosting and GitHub service fees, but otherwise we are not expecting the sale to raise wild amounts of money.

                Replies to this post

                Note: before you criticize CTR (and you are very much welcome to do so), I ask, pretty please, that you familiarize yourself with its context and the existing criticisms first. And finally, I've used "we" a lot in this post because CTR is a group effort. The original "groupies" are listed here, and that list has grown by about five people since 2011. You can leave feedback via the CTR contact form, by tweeting @certifiedtorock, or via my contact form.
