Re-add old blog posts from Astro

Oliver Davies 2024-03-09 23:35:00 +00:00
parent 076239fa25
commit 051e154c65
178 changed files with 13479 additions and 7 deletions


@@ -6,6 +6,6 @@ sculpin_content_types:
  podcast_episodes:
    permalink: /podcast/:basename/
  posts:
    enabled: false
    permalink: /blog/:basename/
  talks:
    permalink: /talks/:basename/


@@ -0,0 +1,8 @@
<figure class="block">
  <img src="{{ image.src }}" alt="{{ image.alt }}" class="p-1 border">
  {% if caption %}
    <figcaption class="mt-1 mb-0 italic text-sm text-center text-gray-800">
      {{ caption }}
    </figcaption>
  {% endif %}
</figure>


@@ -0,0 +1,10 @@
<div class="my-4 flex justify-center {{ class }}">
  <blockquote
    class="twitter-tweet"
    lang="en"
    {% if not data_cards %}data-cards="hidden"{% endif %}
    {% if no_parent %}data-conversation="none"{% endif %}
  >
    {{ content|raw }}
  </blockquote>
</div>


@@ -1,6 +1,8 @@
<div>
  <iframe
    allowfullscreen
    class="w-full border border-gray-500 aspect-[16/9]"
    frameborder="0"
    src="https://www.youtube.com/embed/{{ id }}?rel=0&iv_load_policy=3"
  ></iframe>
</div>


@@ -0,0 +1,16 @@
{% extends 'page' %}
{% block content_wrapper %}
  <time datetime="{{ page.date|date('Y-m-d') }}">{{ page.date|date('F jS, Y') }}</time>
  {{ parent() }}
{% endblock %}
{% block content_bottom %}
  {% include 'daily-email-form.html.twig' with {
    intro: 'Sign up here and get more like this delivered straight to your inbox every day.',
    title: 'Was this useful?',
  } %}
  {% include 'about-me.html.twig' %}
{% endblock %}


@@ -0,0 +1,29 @@
---
title: 10 years working full time with Drupal and PHP
excerpt: 10 years ago today, I started working for Horse & Country TV in what was my first full-time Drupal development role.
tags:
- drupal
- personal
- php
date: 2020-07-19
---
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">10 years ago today, I started my first full-time Web Developer job, working for <a href="https://twitter.com/HorseAndCountry?ref_src=twsrc%5Etfw">@HorseAndCountry</a> on their (at the time) <a href="https://twitter.com/hashtag/Drupal?src=hash&amp;ref_src=twsrc%5Etfw">#Drupal</a> 6 website.</p>&mdash; Oliver Davies (@opdavies) <a href="https://twitter.com/opdavies/status/1284744784037335040?ref_src=twsrc%5Etfw">July 19, 2020</a></blockquote>
10 years ago today, I started working for [Horse & Country TV](https://horseandcountry.tv) in what was my first full-time Drupal development role.
I'd been learning and working with Drupal for a couple of years prior to this, working on some personal and freelance projects, but when I was looking to move back to this area of Wales, this job on my doorstep was ideal.
Initially starting as the sole Developer before another joined a few months later, I remember being very excited to see and learn how the site had been built. Some of the main things that I remember working on were re-developing the Events section and adding paid events with [Ubercart](https://www.drupal.org/project/ubercart), and expanding my module development knowledge by adding a custom block that programmatically showed the current and next programme on the channel.
As well as working with Drupal itself, it was a great opportunity to get more hands-on experience with Linux servers and to learn new tools such as [Git](https://git-scm.com) for version control.
I also remember being asked to contribute to a public issue on Drupal.org as part of the interview process to demonstrate my debugging abilities. I decided to look at [this Drupal 6 issue](https://www.drupal.org/node/753898), posted a comment with some updated code, and then uploaded a patch to the issue queue. This is still one of my favourite approaches for interviews, and one that I've used myself since when interviewing people for roles that use open source technologies. I much prefer this to internal, company-specific coding tests, as it gives the interviewee some real-world experience and exposure to the project itself and its community, rather than just how to _use_ it.
Posting on a Drupal core issue and submitting patches was a bit scary at the time, but I think it paved the way for me later contributing to core and other Drupal and open source projects. In fact, I was a Contribution Day mentor at DrupalCon Los Angeles in 2015 and helped someone get _their_ first commit to core when [a fix was committed to Drupal 8](https://git.drupalcode.org/project/drupal/commit/9cdd22c).
After this role, I've worked for various agencies, working primarily with Drupal and PHP, as well as for the [Drupal Association](https://www.drupal.org/association) itself. Whilst in recent years I've also started working with other frameworks like Symfony and Vue.js, Drupal and PHP have always been my core specialism.
I've been very excited by the developments in both PHP and Drupal in recent versions, and I'm looking forward to the next 10 years working with them.
Thank you Horse & Country for giving me the chance to start on my full-time Drupal journey!

source/_posts/2014.md Normal file

@@ -0,0 +1,84 @@
---
title: "2014"
date: 2015-03-20
excerpt: A look back at 2014.
tags:
- drupal-association
- drupalcamp-london
- personal
tweets: true
---
A lot happened in 2014. Here are some of the main things that I'd like to
highlight.
## Joined the Drupal Association
This was the main thing for me this year, in May I left
[Precedent](http://precedent.com) and joined the
[Drupal Association](https://assoc.drupal.org). I work on the Engineering team,
focused mainly on [Drupal.org](https://www.drupal.org) but I've also done some
theming work on the DrupalCon [Amsterdam](http://amsterdam2014.drupal.org) and
[Latin America](http://latinamerica2015.drupal.org) websites, and some
pre-launch work on [Drupal Jobs](https://jobs.drupal.org).
Some of the tasks that I've worked on so far are:
- Fixing remaining issues from the Drupal.org Drupal 7 upgrade.
- Improving pages for
[Supporting Partners](https://www.drupal.org/supporters/partners),
[Technology Supporters](https://www.drupal.org/supporters/technology) and
[Hosting Partners](https://www.drupal.org/supporters/hosting). These
previously were manually updated pages using HTML tables, which are now
dynamic pages built with [Views](https://www.drupal.org/project/views) using
organisation nodes.
- Configuring human-readable paths for user profiles using
[Pathauto](https://www.drupal.org/project/pathauto). Only a small change, but
made a big difference to end-users.
- Migration of user data from profile values to fields, and various user profile
improvements. This was great because now we can do things like reference
mentors by their username and display their picture on your profile, as well
  as show lists of people listing a user as their mentor. This, I think, adds a
more personal element to Drupal.org because we can see the actual people and
not just a list of names on a page.
I've started keeping a list of tasks that I've been involved with on my
[Work](/work/) page, and will be adding more things as I work on them.
### Portland
I was able to travel to Portland, Oregon twice last year to meet with the rest
of the Association staff. Both times I met new people, and it was great to
spend some work and social time with everyone and to have the whole team
together.
## My First DrupalCamp
In February, I attended [DrupalCamp London](http://2014.drupalcamplondon.co.uk).
This was my first time attending a Camp, and I managed to attend some great
sessions as well as meet people who I'd never previously met in person. I was
also a volunteer and speaker, where I talked about
[Git Flow](/blog/what-git-flow/) - a workflow for managing your Git projects.
{% include 'tweet' with {
content: '<p>Great presentation by <a href="https://twitter.com/opdavies">@opdavies</a> on git flow at <a href="https://twitter.com/search?q=%23dclondon&amp;src=hash">#dclondon</a> very well prepared and presented. <a href="http://t.co/tDINp2Nsbn">pic.twitter.com/tDINp2Nsbn</a></p>&mdash; Greg Franklin (@gfranklin) <a href="https://twitter.com/gfranklin/statuses/440104311276969984">March 2, 2014</a>'
} %}
I was also able to do a little bit of sprinting whilst I was there, reviewing
other people's modules and patches.
Attending this and [DrupalCon Prague](https://prague2013.drupal.org) in 2013
has really opened my eyes to the face-to-face side of the Drupal community, and
I plan on attending a lot more Camps and Cons in the future.
## DrupalCon Amsterdam
I was also able to travel to Holland and attend
[DrupalCon Amsterdam](https://amsterdam2014.drupal.org) along with other members
of Association staff.
## DrupalCamp Bristol
In October, we started planning for
[DrupalCamp Bristol](http://www.drupalcampbristol.co.uk). I'm one of the
founding Committee members,


@@ -0,0 +1,30 @@
---
title: Accessible Bristol site launched
date: 2012-11-15
excerpt:
  I'm happy to report that the Accessible Bristol website was launched this
  week, on Drupal 7.
tags:
- accessibility
- accessible-bristol
- nomensa
---
I'm happy to announce that the
[Accessible Bristol](http://www.accessiblebristol.org.uk) website was launched
this week, on Drupal 7. The site has been developed over the past few months,
and uses the [User Relationships](http://drupal.org/project/user_relationships)
and [Privatemsg](http://drupal.org/project/privatemsg) modules to provide a
community-based platform where people with an interest in accessibility can
register and network with each other.
The group is hosting a launch event on the 28th November at the Council House,
College Green, Bristol. Interested? More information is available at
<http://www.accessiblebristol.org.uk/events/accessible-bristol-launch> or go to
<http://buytickets.at/accessiblebristol/6434> to register.


@@ -0,0 +1,79 @@
---
title: Add a Taxonomy Term to Multiple Nodes Using SQL
date: 2010-07-07
excerpt: How to add a new taxonomy term to multiple nodes in Drupal using SQL.
tags:
- database
- drupal-6
- drupal-planet
- sequal-pro
- sql
- taxonomy
---
In preparation for my Blog posts being added to
[Drupal Planet](http://drupal.org/planet), I needed to create a new Taxonomy
term (or, in this case, tag) called 'Drupal Planet', and assign it to new
content to be imported into their aggregator. After taking a quick look through
my previous posts, I decided that 14 of them were relevant, and thought that it
would be useful to also assign these the 'Drupal Planet' tag.
I didn't want to manually open each post and add the new tag, so I decided to
make the changes myself directly into the database using SQL, and as a follow-up
to a previous post -
[Quickly Change the Content Type of Multiple Nodes using SQL](/blog/change-content-type-multiple-nodes-using-sql/).
**Again, before changing any values within the database, ensure that you have an
up-to-date backup which you can restore if you encounter a problem!**
The first thing I did was create the 'Drupal Planet' term in my Tags vocabulary.
I decided to do this via the administration area of my site, and not via the
database. Then, using [Sequel Pro](http://www.sequelpro.com), I ran the
following SQL query to give me a list of Blog posts on my site - showing just
their titles and nid values.
```sql
SELECT title, nid FROM node WHERE TYPE = 'blog' ORDER BY title ASC;
```
I made a note of the nids of the returned nodes, and kept them for later. I
then ran a similar query against the term_data table. This returned a list of
Taxonomy terms - showing each term's name and its unique tid value.
```sql
SELECT NAME, tid FROM term_data ORDER BY NAME ASC;
```
The term that I was interested in, Drupal Planet, had the tid of 84. To confirm
that no nodes were already assigned a taxonomy term with this tid, I ran another
query against the database. I'm using aliases within this query to link the
node, term_node and term_data tables. For more information on SQL aliases, take
a look at <http://w3schools.com/sql/sql_alias.asp>.
```sql
SELECT * FROM node n, term_data td, term_node tn WHERE td.tid = 84 AND n.nid = tn.nid AND tn.tid = td.tid;
```
As expected, it returned no rows.
The table that links node and term_data is called term_node, and is made up of
the nid and vid columns from the node table, as well as the tid column from the
term_data table. It is here that the additional rows would need to be entered.
To confirm everything, I ran a simple query against an old post. I know that the
only taxonomy term associated with this post is 'Personal', which has a tid
value of 44.
```sql
SELECT nid, tid FROM term_node WHERE nid = 216;
```
Once the query had confirmed the correct tid value, I began to write the SQL
Insert statement that would be needed to add the new term to the required nodes.
The nid and vid values were the same on each node, and the value of my taxonomy
term would need to be 84.
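The INSERT statement itself isn't shown above, but based on the table structure
described, it could have looked something like this. The nid and vid values
here are hypothetical placeholders, not the actual node IDs from the earlier
query:

```sql
-- Hypothetical nid/vid values for illustration; 84 is the 'Drupal Planet' tid.
INSERT INTO term_node (nid, vid, tid)
VALUES
  (101, 101, 84),
  (102, 102, 84),
  (103, 103, 84);
```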
Once this had completed with no errors, I returned to the administration area of
my Drupal site to confirm whether or not the nodes had been assigned the new
term.


@@ -0,0 +1,84 @@
---
title: Adding Custom Theme Templates in Drupal 7
date: 2012-04-19
excerpt: >
  Today, I had a situation where I was displaying a list of teasers for news
  article nodes. The article content type had several different fields assigned
  to it, including main and thumbnail images. In this case, I wanted to have
  different output and fields displayed when a teaser was displayed compared to
  when a complete node was displayed.
tags:
- drupal
- drupal-planet
---
Today, I had a situation where I was displaying a list of teasers for news
article nodes. The article content type had several different fields assigned to
it, including main and thumbnail images. In this case, I wanted to have
different output and fields displayed when a teaser was displayed compared to
when a complete node was displayed.
I have previously seen it done by adding this into a node.tpl.php file:
```php
if ($teaser) {
  // The teaser output.
}
else {
  // The whole node output.
}
```
However, I decided to do something different and create a separate template file
just for teasers. This is done using the hook_preprocess_HOOK function that I
can add into my theme's template.php file.
The function requires the node variables as an argument - one of which is
theme_hook_suggestions. This is an array of suggested template files that Drupal
looks for and attempts to use when displaying a node, and this is where I'll be
adding a new suggestion for my teaser-specific template. Using the `debug()`
function, I can easily see what's already there.
```php
array (
  0 => 'node__article',
  1 => 'node__343',
  2 => 'node__view__latest_news',
  3 => 'node__view__latest_news__page',
)
```
So, within my theme's template.php file:
```php
/**
 * Implements hook_preprocess_HOOK().
 */
function mytheme_preprocess_node(&$variables) {
  $node = $variables['node'];

  if ($variables['teaser']) {
    // Add a new item into the theme_hook_suggestions array.
    $variables['theme_hook_suggestions'][] = 'node__' . $node->type . '_teaser';
  }
}
```
After adding the new suggestion:
```php
array (
  0 => 'node__article',
  1 => 'node__343',
  2 => 'node__view__latest_news',
  3 => 'node__view__latest_news__page',
  4 => 'node__article_teaser',
)
```
Now, within my theme I can create a new node--article-teaser.tpl.php template
file and this will be used instead of node--article.tpl.php when a teaser is
loaded. As I'm not hard-coding the node type and am using the dynamic
`$node->type` value within my suggestion, this will also apply to all other
content types on my site and not just news articles.


@@ -0,0 +1,104 @@
---
title: Announcing the Drupal VM Generator
date: 2016-02-15
excerpt: For the past few weeks, I've been working on a personal side project based on Drupal VM - the Drupal VM Generator.
tags:
- drupal
- drupal-planet
- drupal-vm
- drupal-vm-generator
- symfony
---
For the past few weeks, I've been working on a personal side project based on
Drupal VM. It's called the [Drupal VM Generator][1], and over the weekend I've
added the final features, fixed the remaining issues, and tagged the 1.0.0
release.
![](/images/blog/drupalvm-generate-repo.png)
## What is Drupal VM?
[Drupal VM][2] is a project created and maintained by [Jeff Geerling][3]. It's
a [Vagrant][4] virtual machine for Drupal development that is provisioned using
[Ansible][5].
What is different to a regular Vagrant VM is that it uses a file called
`config.yml` to configure the machine. Settings such as `vagrant_hostname`,
`drupalvm_webserver` and `drupal_core_path` are stored as YAML and passed into
the `Vagrantfile` and the `playbook.yml` file which is used when the Ansible
provisioner runs.
In addition to some essential Ansible roles for installing and configuring
packages such as Git, MySQL, PHP and Drush, there are also some roles that are
conditional and only installed based on the value of other settings. These
include Apache, Nginx, Solr, Varnish and Drupal Console.
## What does the Drupal VM Generator do?
> The Drupal VM Generator is a Symfony application that allows you to quickly
> create configuration files that are minimal and use-case specific.
Drupal VM comes with an [example.config.yml file][6] that shows all of the
default variables and their values. When I first started using it, I'd make a
copy of `example.config.yml`, rename it to `config.yml` and edit it as needed,
but a lot of the examples aren't needed for every use case. If you're using
Nginx as your webserver, then you don't need the Apache virtual hosts. If you
are not using Solr on a project, then you don't need the Solr variables.

For a few months, I've kept and used boilerplate versions of `config.yml` - one
for Apache and one for Nginx. These are minimal, so have most of the comments
removed and only the variables that I regularly need, but they can still be
quite time consuming to edit each time, and if there are additions or changes
upstream, then I have two versions to maintain.
The Drupal VM Generator is a Symfony application that allows you to quickly
create configuration files that are minimal and use-case specific. It uses the
[Console component][7] to collect input from the user, [Twig][8] to generate
the file, and the [Filesystem component][9] to write it.
Based on the options passed to it and/or answers that you provide, it generates
a custom, minimal `config.yml` file for your project.
Here's an example of it in action:
!['An animated gif showing the interaction process and the resulting config.yml file'](/images/blog/drupalvm-generate-example-2.gif)
You can also define options when calling the command and skip any or all
questions. Passing all of the options would bypass the questions entirely and
create a new file with no interaction or additional steps.
## Where do I get it?
The project is hosted on [GitHub][1], and there are installation instructions
within the [README][10].
<div class="github-card" data-github="opdavies/drupal-vm-generator" data-width="400" data-height="" data-theme="default"></div>
The recommended method is to download the phar file (the same as Composer and
Drupal Console). You can also clone the GitHub repository and run the command
from there. I'm also planning to upload it to Packagist so that it can be
included if you manage your projects with Composer.
Please log any bugs or feature requests in the [GitHub issue tracker][11], and
I'm more than happy to receive pull requests.
If you're interested in contributing, please feel free to fork the repository
and start doing so, or contact me with any questions.
**Update 17/02/16:** The autoloading issue is now fixed if you require the
package via Composer, and this has been tagged as the [1.0.1 release][12].
[1]: https://github.com/opdavies/drupal-vm-generator
[2]: http://www.drupalvm.com
[3]: http://www.jeffgeerling.com
[4]: http://www.vagrantup.com
[5]: https://www.ansible.com
[6]: https://github.com/geerlingguy/drupal-vm/blob/master/example.config.yml
[7]: http://symfony.com/doc/current/components/console/introduction.html
[8]: http://twig.sensiolabs.org
[9]: http://symfony.com/doc/current/components/filesystem/introduction.html
[10]: https://github.com/opdavies/drupal-vm-generator/blob/master/README.md#installation
[11]: https://github.com/opdavies/drupal-vm-generator/issues
[12]: https://github.com/opdavies/drupal-vm-generator/releases/tag/1.0.1


@@ -0,0 +1,192 @@
---
title: Automating Sculpin Builds with Jenkins CI
date: 2015-07-21
excerpt: How to use Jenkins to automate building Sculpin websites.
tags:
- jenkins
- sculpin
---
As part of re-building this site with [Sculpin](http://sculpin.io), I wanted to
automate the deployments, as in I wouldn't need to run a script like
[publish.sh](https://raw.githubusercontent.com/sculpin/sculpin-blog-skeleton/master/publish.sh)
locally and have that deploy my code onto my server. Not only did that mean that
my local workflow was simpler (update, commit and push, rather than update,
commit, push and deploy), but if I wanted to make a quick edit or hotfix, I
could log into GitHub or Bitbucket (wherever I decided to host the source code)
from any computer or my phone, make the change and have it deployed for me.
I'd started using [Jenkins CI](http://jenkins-ci.org) during my time at the
Drupal Association, and had since built my own Jenkins server to handle
deployments of Drupal websites, so that was the logical choice to use.
## Installing Jenkins and Sculpin
If you don't already have Jenkins installed and configured, I'd suggest using
[Jeff Geerling](http://jeffgeerling.com/) (aka geerlingguy)'s
[Ansible role for Jenkins CI](https://galaxy.ansible.com/list#/roles/440).
I've also released an
[Ansible role for Sculpin](https://galaxy.ansible.com/list#/roles/4063) that
installs the executable so that the Jenkins server can run Sculpin commands.
## Triggering a Build from a Git Commit
I created a new Jenkins item for this task, and restricted where it could be run
to `master` (i.e. the Jenkins server rather than any of the nodes).
### Polling from Git
I entered the url to the
[GitHub repo](https://github.com/opdavies/oliverdavies.uk) into the **Source
Code Management** section (the Git option _may_ have been added by the
[Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin) that I have
installed).
As we don't need any write access back to the repo, using the HTTP URL rather
than the SSH one was fine, and I didn't need to provide any additional
credentials.
Also, as I knew that I'd be working a lot with feature branches, I entered
`*/master` as the only branch to build. This meant that pushing changes or
making edits on any other branches would not trigger a build.
![Defining the Git repository in Jenkins](/images/blog/oliverdavies-uk-jenkins-git-repo.png)
I also checked the **Poll SCM** option so that Jenkins would be routinely
checking for updated code. This essentially uses the same syntax as cron,
specifying minutes, hours etc. I entered `* * * * *` so that Jenkins would poll
each minute, knowing that I could make this less frequent if needed.
This meant that Jenkins would be checking for any updates to the repo each
minute, and could execute tasks if needed.
### Building and Deploying
Within the **Builds** section of the item, I added an _Execute Shell_ step,
where I could enter a command to execute. Here, I pasted a modified version of
the original publish.sh script.
```bash
#!/bin/bash
set -uex
sculpin generate --env=prod --quiet
if [ $? -ne 0 ]; then echo "Could not generate the site"; exit 1; fi
rsync -avze 'ssh' --delete output_prod/ prodwww2:/var/www/html/oliverdavies.uk/htdocs
if [ $? -ne 0 ]; then echo "Could not publish the site"; exit 1; fi
```
This essentially is the same as the original file, in that Sculpin generates the
site, and uses rsync to deploy it somewhere else. In my case, `prodwww2` is a
Jenkins node (this alias is configured in `/var/lib/jenkins/.ssh/config`), and
`/var/www/html/oliverdavies.uk/htdocs` is the directory from where my site is
served.
## Building Periodically
There is some dynamic content on my site, specifically on the Talks page. Each
talk has a date assigned to it, and within the Twig template, the talk is
positioned within upcoming or previous talks based on whether this date is less
than or greater than the time of the build.
The YAML front matter:
```yaml
---
...
talks:
  - title: Test Drive Twig with Sculpin
    location: DrupalCamp North
---
```
The Twig layout:
```twig
{% verbatim %}
{% for talk in talks|reverse if talk.date >= now %}
{# Upcoming talks #}
{% endfor %}
{% for talk in talks if talk.date < now %}
{# Previous talks #}
{% endfor %}
{% endverbatim %}
```
I also didn't want to have to push an empty commit or manually trigger a job in
Jenkins after doing a talk in order for it to be positioned in the correct
place on the page, so I wanted Jenkins to schedule a regular build regardless
of whether or not code had been pushed, to ensure that my talks page would be
up to date.
After originally thinking that I'd have to split the build steps into a
separate item, trigger that from a scheduled item, and amend my git commit item
accordingly, I found a **Build periodically** option that I could use within
the same item, leaving it intact and unchanged.
I set this to `@daily` (the same as `H H * * *` - `H` is a Jenkins thing), so
that the build would be triggered automatically each day without a commit, and
deploy any updates to the site.
![Setting Jenkins to periodically build a new version of the site.](/images/blog/oliverdavies-uk-jenkins-git-timer.png)
## Next Steps
This workflow works great for one site, but as I roll out more Sculpin sites,
I'd like to reduce duplication. I expect I'll end up creating a separate
`sculpin_build` item that's decoupled from the site that it's building, and
instead passing variables such as environment, server name and docroot path as
parameters in a parameterized build.
I'll probably also take the raw shell script out of Jenkins and save it in a
text file that's stored locally on the server, and execute that via Jenkins.
This means that I'd be able to store this file in a separate Git repository with
my other Jenkins scripts and get the standard advantages of using version
control.
## Update
Since publishing this post, I've added some more items to the original build
script.
### Updating Composer
```bash
if [ -f composer.json ]; then
  /usr/local/bin/composer install
fi
```
Updates project dependencies via
[Composer](https://getcomposer.org/doc/00-intro.md#introduction) if
composer.json exists.
### Updating Sculpin Dependencies
```bash
if [ -f sculpin.json ]; then
  sculpin install
fi
```
Runs `sculpin install` on each build if the sculpin.json file exists, to ensure
that the required custom bundles and dependencies are installed.
### Managing Redirects
```bash
if [ -f scripts/redirects.php ]; then
  /usr/bin/php scripts/redirects.php
fi
```
I've been working on a `redirects.php` script that generates redirects from a
.csv file, after seeing similar things in the
[Pantheon Documentation](https://github.com/pantheon-systems/documentation) and
[That Podcast](https://github.com/thatpodcast/thatpodcast.io) repositories. This
checks if that file exists, and if so, runs it and generates the source file
containing each redirect.
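The script itself isn't included in the post, but the idea can be sketched in a
few lines of PHP. Everything here is an assumption for illustration - the CSV
filename, the output path, and the `redirect_to` front matter keys are
hypothetical, not the actual script:

```php
<?php

// Hypothetical sketch of a redirects.php script: read "from,to" pairs from a
// CSV file and write a Sculpin source file per redirect. The paths and front
// matter keys below are illustrative assumptions, not the real implementation.

$lines = file(
    __DIR__ . '/redirects.csv',
    FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES
);

foreach ($lines as $line) {
    list($from, $to) = str_getcsv($line);

    // Each generated page would use a layout that outputs the redirect.
    $contents = sprintf(
        "---\nlayout: redirect\nredirect_to: %s\npermalink: %s\n---\n",
        $to,
        $from
    );

    $filename = __DIR__ . '/../source/redirects/' . md5($from) . '.md';
    file_put_contents($filename, $contents);
}
```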


@@ -0,0 +1,72 @@
---
title: Back to the future with Git's diff and apply commands
date: 2018-04-23
excerpt: How to revert files using Git, but as a new commit to prevent force pushing.
tags:
- git
---
This is one of those "there's probably already a better way to do this"
situations, but it worked.
I was having some issues this past weekend where, despite everything working
fine locally, a server was showing a "500 Internal Server Error" after I pushed
some changes to a site. In order to bring the site back online, I needed to
revert the site files back to the previous version, but as part of a new
commit.
The `git reset` commands removed the interim commits, which meant that I
couldn't push to the remote (force pushing, quite rightly, isn't allowed for
the production branch), and using `git revert` was resulting in merge conflicts
in `composer.lock` that I'd rather have avoided if possible.
This is what `git log --oneline -n 4` was outputting:
```
14e40bc Change webflo/drupal-core-require-dev version
fc058bb Add services.yml
60bcf33 Update composer.json and re-generate lock file
722210c More styling
```
`722210c` is the commit SHA that I needed to go back to.
## First Solution
My first solution was to use `git diff` to create a single patch file of all of
the changes from the current point back to the original commit. In this case,
I'm using `head~3` (three commits before `head`) as the original reference; I
could have alternatively used a commit ID, tag or branch name.
```
git diff head head~3 > temp.patch
git apply -v temp.patch
```
With the files back in their former state, I can remove the patch, add the
files as a new commit and push them to the remote.
```
rm temp.patch
git add .
git commit -m 'Back to the future'
git push
```
Although the files are back in their previous, working state, as this is a new
commit with a new commit SHA reference, there is no issue with the remote
rejecting the commit or needing to attempt to force push.
## Second Solution
The second solution is just a shorter, cleaner version of the first!
Rather than creating a patch file and applying it, the output from `git diff`
can be piped straight into `git apply`.
```
git diff head head~3 | git apply -v
```
This means that there's only one command to run and no leftover patch file, and
I can go ahead and add and commit the changes straight away.
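As a sanity check, the whole approach can be demonstrated end-to-end in a
throwaway repository. The paths, commit messages and user details below are
illustrative, and this reverts one commit rather than three:

```shell
# Create a throwaway repo with three states: init, "good", then "bad".
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'init'

echo good > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'good state'

echo bad > file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -am 'bad change'

# Pipe the diff from HEAD back to HEAD~1 straight into git apply, then record
# the reverted state as a brand new commit - no force push required.
git diff HEAD HEAD~1 | git apply -v
git -c user.name=demo -c user.email=demo@example.com commit -q -am 'Back to the future'

cat file.txt
```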


@@ -0,0 +1,102 @@
---
title: Building Gmail Filters with PHP
date: 2016-07-15
excerpt: How to use PHP to generate and export filters for Gmail.
tags:
- gmail
- php
promoted: true
---
Earlier this week I wrote a small PHP library called [GmailFilterBuilder][0]
that allows you to write Gmail filters in PHP and export them to XML.
I was already aware of a Ruby library called [gmail-britta][1] that does the
same thing, but a) I'm not that familiar with Ruby so the syntax wasn't that
natural to me - it's been a while since I wrote any Puppet manifests, and b) it
seemed like an interesting little project to work on one evening.
The library contains two classes - `GmailFilter` which is used to create each
filter, and `GmailFilterBuilder` that parses the filters and generates the XML
using a [Twig][2] template.
## Usage
For example:
```php
# test.php

require __DIR__ . '/vendor/autoload.php';

use Opdavies\GmailFilterBuilder\Builder;
use Opdavies\GmailFilterBuilder\Filter;

$filters = [];

$filters[] = Filter::create()
  ->has('from:example@test.com')
  ->labelAndArchive('Test')
  ->neverSpam();

new Builder($filters);
```
In this case, an email from `example@test.com` would be archived, never marked
as spam, and have a label of "Test" added to it.
With this code written, and the GmailFilterBuilder library installed via
Composer, I can run `php test.php` and have the XML written to the screen.
This can also be written to a file - `php test.php > filters.xml` - which can
then be imported into Gmail.
## Twig Extensions
I also added a custom Twig extension that I moved into a separate
[twig-extensions][5] library so that I and other people can re-use it in other
projects.
It's a simple filter that accepts a boolean and returns `true` or `false` as a
string, but it meant that I could remove three ternary operators from the
template and replace them with the `boolean_string` filter.
Before:
```twig
{% verbatim %}
{{ filter.isArchive ? 'true' : 'false' }}
{% endverbatim %}
```
After:
```twig
{% verbatim %}
{{ filter.isArchive|boolean_string }}
{% endverbatim %}
```
This can then be used to generate output like this, whereas having blank values
would have resulted in errors when importing to Gmail.
```xml
<apps:property name='shouldArchive' value='true'/>
```
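The logic behind the filter itself is tiny. As a rough, standalone sketch (the plain function here is illustrative - the real twig-extensions library registers this as a Twig filter rather than a bare function):

```php
<?php

// Illustrative sketch of the boolean_string filter's logic; the real
// twig-extensions library wraps this in a Twig extension class.
function boolean_string(bool $value): string
{
    return $value ? 'true' : 'false';
}

echo boolean_string(true) . PHP_EOL;  // true
echo boolean_string(false) . PHP_EOL; // false
```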
## Example
For a working example, see my personal [gmail-filters][3] repository on GitHub.
## Resources
- [The GmailFilterBuilder library on Packagist][4]
- [My Gmail filters on GitHub][3]
- [My Twig Extensions on Packagist][5]
[0]: https://github.com/opdavies/gmail-filter-builder
[1]: https://github.com/antifuchs/gmail-britta
[2]: http://twig.sensiolabs.org
[3]: https://github.com/opdavies/gmail-filters
[4]: https://packagist.org/packages/opdavies/gmail-filter-builder
[5]: https://packagist.org/packages/opdavies/twig-extensions

View file

@ -0,0 +1,19 @@
---
title: 'Building oliverdavies.uk with Sculpin: Part 1 - initial setup and configuration'
excerpt: |
First part of the "Building oliverdavies.uk" series, covering the initial
Sculpin setup and configuration.
tags: [sculpin]
draft: true
date: ~
---
Based on <https://github.com/opdavies/sculpin-skeleton>.
Uses <https://github.com/opdavies/docker-image-sculpin-serve>.
`app/config/sculpin_kernel.yml`:
`app/config/sculpin_site.yml`:
`app/config/sculpin_site_prod.yml`:

View file

@ -0,0 +1,37 @@
---
title: Building the new PHPSW Website
date: 2018-02-28
excerpt:
Earlier this week we had another hack night, working on the new PHPSW user
group website.
tags:
- phpsw
- symfony
- tailwind-css
has_tweets: true
---
Earlier this week we had another hack night, working on the new [PHPSW user
group][0] website.
<div class="mb-4">
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Hacking away on the new <a href="https://twitter.com/phpsw?ref_src=twsrc%5Etfw">@phpsw</a> website with <a href="https://twitter.com/DaveLiddament?ref_src=twsrc%5Etfw">@DaveLiddament</a> and <a href="https://twitter.com/kasiazien?ref_src=twsrc%5Etfw">@kasiazien</a>. <a href="https://t.co/kmfjdQSOUq">pic.twitter.com/kmfjdQSOUq</a></p>&mdash; Oliver Davies (@opdavies) <a href="https://twitter.com/opdavies/status/968224364129906688?ref_src=twsrc%5Etfw">February 26, 2018</a></blockquote>
</div>
It's built with Symfony so it's naturally using Twig for templating. I've become
a big fan of the utility-based approach to CSS and [Tailwind CSS][1] in
particular, so I'm using that for all of the styling, and using [Webpack
Encore][2] to compile all of the assets.
We have an integration with Meetup.com which we're using to pull all of our
previous event data and store them as JSON files for Symfony to parse and
render, which it then uses to generate static HTML to upload onto the server.
We're in the process of populating all of the past data, but look out for a v1
launch soon. In the meantime, feel free to take a peek at our [GitHub
repository][3].
[0]: https://phpsw.uk
[1]: https://tailwindcss.com
[2]: https://github.com/symfony/webpack-encore
[3]: https://github.com/phpsw/phpsw-ng

View file

@ -0,0 +1,42 @@
---
title: Change the Content Type of Multiple Nodes Using SQL
date: 2010-07-01
excerpt:
In this post, I will be changing values within my Drupal 6 site's database to quickly change the content type of multiple nodes.
tags:
- content-types
- database
- drupal
- drupal-6
- drupal-planet
- sequel-pro
- sql
---
In this post, I will be changing values within my Drupal 6 site's database to
quickly change the content type of multiple nodes. I will be using a test
development site with the core Blog module installed, and converting Blog posts
to a custom content type called 'News article'.
**Before changing any values within the database, ensure that you have an
up-to-date backup which you can restore if you encounter a problem!**
To begin with, I created the 'News article' content type, and then used the
Devel Generate module to generate some Blog nodes.
Using [Sequel Pro](http://www.sequelpro.com), I can query the database to view
the Blog posts (you can also do this via the
[Terminal](http://guides.macrumors.com/Terminal) in a Mac OS X/Linux,
[Oracle SQL Developer](http://www.oracle.com/technology/software/products/sql/index.html)
on Windows, or directly within
[phpMyAdmin](http://www.phpmyadmin.net/home_page/index.php)):
Using an SQL 'Update' command, I can change the type value from 'blog' to
'article'. This will change every occurrence of the value 'blog'. If I wanted to
only change certain nodes, I could add a 'Where' clause to only affect nodes
with a certain nid or title.
Now, when I query the database, the type is shown as 'article'.
When I go back into the administration section of my site and view the
content, the content type now shows as 'News article'.

View file

@ -0,0 +1,68 @@
---
title: Checking if a user is logged into Drupal (the right way)
date: 2013-01-09
excerpt: How to check if a user is logged in by using Drupal core API functions.
tags:
- drupal
- drupal-6
- drupal-7
- drupal-planet
- php
---
I see this regularly when working on Drupal sites when someone wants to check
whether the current user is logged in to Drupal (authenticated) or not
(anonymous).
```php
global $user;
if ($user->uid) {
// The user is logged in.
}
```
or
```php
global $user;
if (!$user->uid) {
// The user is not logged in.
}
```
The better way to do this is to use the
[user_is_logged_in()](http://api.drupal.org/api/drupal/modules!user!user.module/function/user_is_logged_in/7)
function.
```php
if (user_is_logged_in()) {
// Do something.
}
```
This returns a boolean (TRUE or FALSE) depending on whether or not the user is
logged in. Essentially, it does the same thing as the first example, but there's
no need to load the global variable.
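Conceptually, `user_is_logged_in()` is just a wrapper around that same uid check. A simplified, standalone sketch of the idea (Drupal's real implementation reads the global `$user` object rather than taking a parameter):

```php
<?php

// A stand-in for Drupal's global $user object.
$account = new stdClass();
$account->uid = 0; // uid 0 is the anonymous user in Drupal.

// Simplified version of user_is_logged_in(): any non-zero uid means
// the user is authenticated.
function user_is_logged_in_sketch(stdClass $account): bool
{
    return (bool) $account->uid;
}

var_dump(user_is_logged_in_sketch($account)); // bool(false)

$account->uid = 42;
var_dump(user_is_logged_in_sketch($account)); // bool(true)
```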
A great use case for this is within a `hook_menu()` implementation within a
custom module.
```php
/**
* Implements hook_menu().
*/
function mymodule_menu() {
$items['foo'] = array(
'title' => 'Foo',
'page callback' => 'mymodule_foo',
'access callback' => 'user_is_logged_in',
);
return $items;
}
```
There is also a
[user_is_anonymous()](http://api.drupal.org/api/drupal/modules!user!user.module/function/user_is_anonymous/7)
function if you want the opposite result. Both of these functions are available
in Drupal 6 and higher.

View file

@ -0,0 +1,22 @@
---
title: Checkout a specific revision from SVN from the command line
date: 2012-05-23
excerpt: How to checkout a specific revision from a SVN (Subversion) repository.
tags:
- svn
- version-control
---
How to checkout a specific revision from a SVN (Subversion) repository.
If you're checking out the repository for the first time:
```bash
$ svn checkout -r 1234 url://repository/path
```
If you already have the repository checked out:
```bash
$ svn up -r 1234
```

View file

@ -0,0 +1,38 @@
---
title: Cleanly retrieving user profile data using an Entity Metadata Wrapper
excerpt: How to use Drupal 7's EntityMetadataWrapper to cleanly retrieve user profile field data.
tags:
- drupal
- drupal-7
- drupal-planet
- php
date: 2021-02-23
---
Today I needed to load some Drupal user data via a [profile2](https://www.drupal.org/project/profile2) profile. When looking into this, most resources that I found suggest using this approach and calling the `profile2_load_by_user()` function directly and passing in the user object:
```php
$account = user_load(...);
$accountWrapper = new EntityDrupalWrapper('user', $account);
// or `$accountWrapper = entity_metadata_wrapper('user', $account);`
$profile = profile2_load_by_user($account->value());
// or `$profile = profile2_load_by_user($account);`
$profileWrapper = new EntityDrupalWrapper('profile2', $profile);
$firstName = $profileWrapper->get('field_first_name')->value();
```
This though requires a few steps, and as I'm a fan of object-orientated code and Entity Metadata Wrappers, I wanted to find a cleaner solution.
This is my preferred method that uses method chaining. It returns the same value, is less code, and in my opinion, it's cleaner and easier to read.
```php
$firstName = $accountWrapper
->get('profile_user_basic')
->get('field_first_name')
->value();
```

View file

@ -0,0 +1,29 @@
---
title: Conditional Email Addresses in a Webform
date: 2010-05-06
excerpt:
How to send webform emails to a different email address based on another
field.
tags:
- conditional-email
- drupal-6
- drupal-planet
- webform
---
I created a new Webform to serve as a simple Contact form, but left the main
configuration until after I created the form components. I added 'Name',
'Email', 'Subject' and 'Message' fields, as well as a 'Category' select list.
Below 'Options', I entered each of my desired options in the following format:
```ini
Email address|Visible name
```
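For example, a 'Category' list for routing enquiries might look something like this (the addresses are illustrative):

```ini
sales@example.com|Sales enquiry
support@example.com|Support request
webmaster@example.com|Website problem
```

Only the visible name on the right is shown to the visitor; the address on the left is where that category's submissions are sent.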
I went back to the form configuration page and expanded 'Conditional Email
Recipients', and selected my Category. Note that the standard 'Email To' field
above it needs to be empty. Originally, I made the mistake of leaving addresses
in that field, which resulted in people being sent emails regardless of which
category was selected. I then configured the rest of the form.
Then, when I went to the finished form, the category selection was available.

View file

@ -0,0 +1,70 @@
---
title: Configuring the Reroute Email Module
date: 2014-12-22
excerpt:
How to configure the Reroute Email module, to prevent sending emails to real
users from your pre-production sites!
tags:
- drupal
- drupal-6
- drupal-7
- drupal-planet
- email
draft: true
---
[Reroute Email](https://www.drupal.org/project/reroute_email) module uses
`hook_mail_alter()` to prevent emails from being sent to users from
non-production sites. It allows you to enter one or more email addresses that
will receive the emails instead of delivering them to the original user.
> This is useful in case where you do not want email sent from a Drupal site to
> reach the users. For example, if you copy a live site to a test site for the
> purpose of development, and you do not want any email sent to real users of
> the original site. Or you want to check the emails sent for uniform
> formatting, footers, ...etc.
As we don't need the module configured on production (we don't need to reroute
any emails there), it's best to do this in code using settings.local.php (if you
have one) or the standard settings.php file.
The first thing that we need to do is to enable rerouting. Without doing this,
nothing will happen.
```php
$conf['reroute_email_enable'] = TRUE;
```
The next option is whether to show the rerouting description in the mail body. I
usually have this enabled. Set this to TRUE or FALSE depending on your
preference.
```php
$conf['reroute_email_enable_message'] = TRUE;
```
The last setting is the email address to use. If you're entering a single
address, you can add it as a simple string.
```php
$conf['reroute_email_address'] = 'person1@example.com';
```
In this example, all emails from the site will be rerouted to
person1@example.com.
If you want to add multiple addresses, these should be added in a
semicolon-delimited list. Whilst you could also add these as a string, I prefer
to use an array of addresses and the `implode()` function.
```php
$conf['reroute_email_address'] = implode(';', array(
'person1@example.com',
'person2@example.com',
'person3@example.com',
));
```
In this example, person2@example.com and person3@example.com would receive their
emails from the site as normal. Any emails to addresses not in the array would
continue to be redirected to person1@example.com.
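The module's behaviour can be sketched roughly like this (a simplified stand-in, not the module's actual code - the real logic lives in its `hook_mail_alter()` implementation, and the header name here is illustrative):

```php
<?php

// Simplified sketch of how rerouting works: addresses in the list are
// delivered as normal, everything else goes to the first address. This
// is illustrative only - the real Reroute Email module does this inside
// hook_mail_alter() with more bookkeeping.
function example_reroute(array $message, array $addresses): array
{
    $destination = $addresses[0];

    if (!in_array($message['to'], $addresses, TRUE)) {
        $message['headers']['X-Rerouted-Original-To'] = $message['to'];
        $message['to'] = $destination;
    }

    return $message;
}

$addresses = ['person1@example.com', 'person2@example.com', 'person3@example.com'];

$rerouted = example_reroute(['to' => 'someone@example.org', 'headers' => []], $addresses);
echo $rerouted['to'] . PHP_EOL; // person1@example.com

$allowed = example_reroute(['to' => 'person2@example.com', 'headers' => []], $addresses);
echo $allowed['to'] . PHP_EOL; // person2@example.com
```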

View file

@ -0,0 +1,46 @@
---
title: Continuous Integration vs Continuous Integration
excerpt: My views on the definitions of "continuous integration".
tags:
- git
date: 2021-10-07
---
![A meme with Spider-Man pointing at Spider-Man, both labelled with 'Continuous Integration'](/images/blog/continuous-integration-spiderman.jpg)
There seem to be two different definitions for the term "continuous integration" (or "CI") that I've come across whilst reading blogs, listening to podcasts, and watching video tutorials.
## Tooling
The first is around remote tools such as GitHub Actions, GitLab CI, Bitbucket Pipelines, Circle CI, and Jenkins, which automatically run tasks whenever you push or merge (or "integrate") code - such as code linting, performing static analysis checks, running automated tests, or building a deployment artifact.
These focus on code quality and replicate steps that you can run locally, ensuring that the build is successful and that if the CI checks pass then the code can be deployed.
My issue with this definition is that it may not be continuous. You could push code once a day or once a year, and it would perform the same checks and have the same outcomes and benefits.
## Workflow
The second definition isn't about tools - it's about how often you update, merge and push code (which commonly leads to feature branch vs trunk-based development, and Git Flow vs GitHub Flow discussions). How often are you pulling in the latest code, testing it with your local changes, and pushing your code for everyone else to see?
If you're using feature branches, how long do they last, and how quickly are they merged into the main branch?
Weekly? Daily? Hourly?
The workflow definition doesn't need GitHub, GitLab, or Bitbucket to run checks - it's about keeping your local code continuously (or as often as possible) updated and integrated with the remote code.
This ensures that you're developing from the latest stable version and not one that is days or weeks out of date.
This means that merge conflicts are much less common, as you're always pulling in the latest code and ensuring that it can be integrated.
## Conclusion
One definition isn't dependent on the other.
You don't need the tooling and automation to use a continuous integration workflow, but I'd recommend it. It's useful to know and have confidence that the build passes, especially if you're pulling and pushing code several times a day, but it isn't a prerequisite.
If you're working on a new feature or fixing a bug, pull down the latest code,
test your changes, and push it back as often as possible.
If you watch a video, read a blog post, or listen to a podcast about continuous integration or "How to set up CI", remember that it's not just about the tooling.
There's a different workflow and mindset to consider that introduces other complementary concepts such as automated testing and test-driven development, pair and mob programming, feature flags, and continuous delivery.

View file

@ -0,0 +1,161 @@
---
title: Create a Better Photo Gallery in Drupal - Part 1
date: 2010-08-11
excerpt:
How I started converting and migrating a Coppermine photo gallery into Drupal.
tags:
- cck
- drupal
- drupal-6
- drupal-planet
- photo-gallery
- sequel-pro
- sql
- views
- views-attach
---
Recently, I converted a client's static HTML website, along with their
Coppermine Photo Gallery, into a Drupal-powered website.
Over the next few posts, I'll be replicating the process that I used during the
conversion, and how I added some additional features to my Drupal gallery.
To begin with, I created my photo gallery as described by
[Jeff Eaton](http://www.lullabot.com/about/team/jeff-eaton) in
[this screencast](http://www.lullabot.com/articles/photo-galleries-views-attach),
downloaded all my client's previous photos via FTP, and quickly added them into
the new gallery using the
[Imagefield Import](http://drupal.org/project/imagefield_import) module (which I
mentioned
[previously](/blog/quickly-import-multiples-images-using-imagefieldimport-module/)).
When I compare this to the previous gallery, I can see several differences which
I'd like to include. The first of which is the number of photos in each gallery,
and the date that the most recent photo was added.
To do this, I'd need to query my website's database. To begin with, I wanted to
have a list of all the galleries on my site which are published, and what their
unique node ID values are. To do this, I opened Sequel Pro and entered
the following code:
```sql
SELECT title AS title, nid AS gallery_id
FROM node
WHERE type = 'gallery'
AND status = 1;
```
As the nid value of each gallery corresponds with the 'field_gallery_nid' field
within the content_type_photo table, I can now query the database and retrieve
information about each specific gallery.
For example, using [aliasing](http://www.w3schools.com/sql/sql_alias.asp) within
my SQL statement, I can retrieve a list of all the published photos within the
'British Squad 2008' gallery by using the following code:
```sql
SELECT n.title, n.nid, p.field_gallery_nid
FROM node n, content_type_photo p
WHERE p.field_gallery_nid = 105
AND n.status = 1
AND n.nid = p.nid;
```
I can easily change this to count the number of published nodes by changing the
first line of the query to read `SELECT COUNT(*)`.
```sql
SELECT COUNT(*)
FROM node n, content_type_photo p
WHERE p.field_gallery_nid = 105
AND n.status = 1
AND n.nid = p.nid;
```
As I've used the [Views Attach](http://drupal.org/project/views_attach) module,
and I'm embedding the photos directly into the Gallery nodes, I can easily add
this to each gallery by creating a custom node-gallery.tpl.php file within my
theme. I can then use the following PHP code to retrieve the node ID for that
specific gallery:
```php
<?php
$selected_gallery = db_result(db_query("
SELECT nid
FROM {node}
WHERE type = 'gallery'
AND title = '$title'
"));
?>
```
I can then use this variable as part of my next query to count the number of
photos within that gallery, similar to what I did earlier.
```php
<?php
$selected_gallery_total = db_result(db_query("
SELECT COUNT(*)
FROM {content_type_photo}
WHERE field_gallery_nid = $selected_gallery
"));
?>
```
Next, I wanted to display the date that the last photo was displayed within each
album. This was done by using a similar query that also sorted the results in a
descending order, and limited it to one result - effectively only returning the
created date for the newest photo.
```php
<?php
$latest_photo = db_result(db_query("
SELECT n.created
FROM {node} n, {content_type_photo} p
WHERE p.field_gallery_nid = $selected_gallery
AND n.nid = p.nid
ORDER BY n.created DESC LIMIT 1
"));
?>
```
This was all then added into a 'print' statement which displayed it into the
page.
```php
<?php
if ($selected_gallery_total != 0) {
$output = '<i>There are currently ' . $selected_gallery_total . ' photos in this gallery.';
$output .= 'Last one added on ' . $latest_photo . '</i>';
print $output;
}
?>
```
OK, so let's take a look at the Gallery so far:
You will notice that the returned date value for the latest photo added is
displaying the UNIX timestamp instead of in a more readable format. This can be
changed by altering the 'print' statement to include a PHP 'date' function:
```php
<?php
if ($selected_gallery_total != 0) {
$output = '<i>There are currently ' . $selected_gallery_total . ' photos in this gallery.';
$output .= 'Last one added on ' . date("l, jS F, Y", $latest_photo) . '.</i>';
print $output;
}
?>
```
The values that I've entered are from
[this page](http://php.net/manual/en/function.date.php) on PHP.net, and can be
changed according on how you want the date to be displayed.
As I've added all of these photos today, the correct dates are being
displayed. However, on the client's original website, the majority of these
photos were published several months or years ago, and I'd like the new website
to still reflect the original created dates. As opposed to modifying each
individual photograph, I'll be doing this in bulk in my next post.

View file

@ -0,0 +1,58 @@
---
title: Create a Better Photo Gallery in Drupal - Part 2
date: 2010-08-17
excerpt: Updating the galleries created and modified dates.
tags:
- drupal-6
- drupal-planet
- photo-gallery
- sequel-pro
- sql
---
At the end of my last post, I'd finished creating the first part of the new
photo gallery, but I wanted to change the dates of the published photos to
reflect the ones on the client's original website.
Firstly, I'll refer to the previous list of published galleries that I created
before, and create something different that also displays the created and
modified dates. Picking the node ID of the required gallery, I used the
following SQL query to display a list of photos.
```sql
SELECT n.title, n.nid, n.created, n.changed, p.field_gallery_nid
FROM node n, content_type_photo p
WHERE n.type = 'photo'
AND p.field_gallery_nid = 103
AND n.nid = p.nid
ORDER BY n.nid ASC;
```
When I look back at the old photo gallery, I can see that the previous 'last
added' date was June 27, 2008. So, how do I update my new photos to reflect that
date? Using <http://www.onlineconversion.com/unix_time.htm>, I can enter the
required date in its readable format, and it will give me the equivalent UNIX
timestamp. To keep things relatively simple, I'll set all photos within this
gallery to the same time.
The result that I'm given is '1217149200'. I can now use an UPDATE statement
within another SQL query to update the created and modified dates.
```sql
UPDATE node
INNER JOIN content_type_photo
ON node.nid = content_type_photo.nid
SET
node.created = 1217149200,
node.changed = 1217149200
WHERE content_type_photo.field_gallery_nid = 103
```
Now when I query the database, both the created and modified dates have been
updated, and when I return to the new photo gallery, the updated value is being
displayed.
Once the changes have been applied, it's a case of repeating the above process
for each of the required galleries.
In the next post, I'll explain how to add a count of published galleries and
photos on the main photo gallery page, as well as how to install and configure
the [Shadowbox](http://drupal.org/project/shadowbox) module.

View file

@ -0,0 +1,64 @@
---
title: Create a Better Photo Gallery in Drupal - Part 2.1
date: 2010-10-22
excerpt: The missing code to get totals of galleries and photos.
tags:
- drupal
---
Today, I realised that I hadn't published the code that I used to create the
total figures of galleries and photos at the top of the gallery (I said at the
end of
[Part 2](/blog/create-better-photo-gallery-drupal-part-2/ 'Create a Better Photo Gallery in Drupal - Part 2')
that I'd include it in
[Part 3](/blog/create-better-photo-gallery-drupal-part-3/ 'Create a Better Photo Gallery in Drupal - Part 3'),
but I forgot). So, here it is:
```php
<?php
// Queries the database and returns a list of nids of published galleries.
$galleries = db_query("SELECT nid FROM {node} WHERE type = 'gallery' AND status = 1");
// Resets the number of photos.
$photos = 0;
// Counts the number of published photos within each gallery.
while ($gallery = db_fetch_array($galleries)) {
  $gallery_id = $gallery['nid'];
  $photos = $photos + db_result(db_query("SELECT COUNT(*) FROM node n, content_type_photo ctp WHERE n.status = 1 AND n.type = 'photo' AND ctp.field_gallery_nid = $gallery_id AND n.nid = ctp.nid"));
}
// Prints the output.
print 'There ';
if($photos == 1) {
print 'is';
}
else {
print 'are';
}
print ' currently ';
print $photos . ' ';
if($photos == 1) {
print 'photo';
}
else {
print 'photos';
}
print ' in ';
// Counts the number of published galleries on the site.
$galleries = db_result(db_query("SELECT COUNT(*) FROM {node} WHERE TYPE = 'gallery' AND STATUS = 1"));
// Prints the number of published galleries.
print $galleries;
if ($galleries == 1) {
print ' gallery';
}
else {
print ' galleries';
}
print '.';
?>
```
It was applied to the view as a header which had the input format set to PHP
code.

View file

@ -0,0 +1,49 @@
---
title: Create a Better Photo Gallery in Drupal - Part 3
date: 2010-10-13
excerpt: Grouping galleries by category.
tags:
- drupal
---
The next part of the new gallery that I want to implement is to group the
galleries by their respective categories. The first step is to edit my original
photo_gallery view and add an additional display.
I've called it 'Taxonomy', and it's similar to the original 'All Galleries'
view. The differences are that I've added the taxonomy term as an argument,
removed the header, and updated the path to be `gallery/%`. The other thing that
I need to do is overwrite the output of the original 'All Galleries' View by
creating a file called `views-view--photo-gallery--page-1.tpl.php` and placing
it within my theme directory.
Within that file, I can remove the standard content output. This still outputs
the heading information from the original View. I can now use the function
called 'views_embed_view' to embed my taxonomy display onto the page. The
views_embed_view function is as follows:
```php
<?php views_embed_view('my_view', 'block_1', $arg1, $arg2); ?>
```
So, to display the galleries that are assigned the taxonomy of 'tournaments', I
can use the following:
```php
<?php print views_embed_view('photo_gallery', 'page_2', 'tournaments'); ?>
```
To reduce the amount of code needed, I can use the following 'while' loop to
generate the same code for each taxonomy term. It dynamically retrieves the
relevant taxonomy terms from the database, and uses each name as the argument
for the view.
```php
<?php
$terms = db_query("SELECT * FROM {term_data} WHERE vid = 1");
while ($term = db_fetch_array($terms)) {
print '<h3>' . $term['name'] . '</h3>';
print views_embed_view('photo_gallery', 'page_2', $term['name']);
}
?>
```

View file

@ -0,0 +1,44 @@
---
title: Create a Block of Social Media Icons using CCK, Views and Nodequeue
date: 2010-06-23
excerpt: How to create a block of social media icons in Drupal.
tags:
- drupal
- drupal-6
- drupal-planet
- nodequeue
- oliverdavies.co.uk
- views
---
I recently decided that I wanted to have a block displayed in a sidebar on my
site containing icons and links to my social media profiles -
[Twitter](http://twitter.com/opdavies), [Facebook](http://facebook.com/opdavies)
etc. I tried the [Follow](http://drupal.org/project/follow) module, but it
lacked the option to add extra networks such as my
[Drupal.org](http://drupal.org/user/381388) account, and my
[RSS feed](http://oliverdavies.co.uk/rss.xml). I started to create my own
version, and then found
[this Blog post](http://www.hankpalan.com/blog/drupal-themes/add-your-social-connections-drupal-icons)
by Hank Palan.
I created a 'Social icon' content type with the body field removed, and with
fields for a link and image - then downloaded the favicons from the appropriate
websites to use.
However, instead of using a custom template (node-custom.tpl.php) file, I used
the Views module.
I added fields for the node titles, and the link from the node's content. Both
of these are excluded from being displayed on the site. I then re-wrote the
output of the Icon field to create the link using the URL, and using the node's
title as the image's alternative text and the link's title.
I also used the [Nodequeue](http://drupal.org/project/nodequeue) module to
create a nodequeue and arrange the icons in the order that I wanted them to be
displayed. Once this was added as a relationship within my View, I was able to
use the node's position in the nodequeue as the sort criteria.
To complete the process, I used the
[CSS Injector](http://drupal.org/project/css_injector) module to add some
additional CSS styling to position and space out the icons.

View file

@ -0,0 +1,71 @@
---
title: Create a Flickr Photo Gallery Using Feeds, CCK and Views
date: 2010-06-28
excerpt:
In this tutorial, I'll show you how to create a photo gallery which uses
photos imported from Flickr.
tags:
- drupal-planet
- drupal-6
- photo-gallery
- views
- cck
- imagecache
- feeds
- filefield
- flickr
- imagefield
---
In this tutorial, I'll show you how to create a photo gallery which uses photos
imported from [Flickr](http://www.flickr.com).
The modules that I'll use to create the Gallery are:
- [CCK](http://drupal.org/project/cck)
- [Feeds](http://drupal.org/project/feeds)
- [Feeds Image Grabber](http://drupal.org/project/feeds_imagegrabber)
- [FileField](http://drupal.org/project/filefield)
- [ImageAPI](http://drupal.org/project/imageapi)
- [ImageCache](http://drupal.org/project/imagecache)
- [ImageField](http://drupal.org/project/imagefield)
- [Views](http://drupal.org/project/views)
The first thing that I did was to create a content type to store my imported
images. I named it 'Photo', removed the Body field, and added an Image field.
Next, I installed and configured the Feeds and Image Grabber modules. I used an
overridden default Feed to import my photos from Flickr using the following
settings:
- **Basic settings:** I changed the Refresh time to 15 minutes.
- **Processor settings:** I changed the content type to 'Photo', and the
author's name from 'anonymous'.
- **Processor mapping:** I added a new mapping from 'Item URL (link)' to 'Photo
(FIG)'. The Photo FIG target is added by the Image Grabber module.
Next, I needed to create the actual Feed, which I did by clicking 'Import'
within the Navigation menu, and clicking 'Feed'. I gave it a title, entered the
URL to my RSS feed from Flickr, and enabled the Image Grabber for this feed.
Once the Feed is created, the latest 20 images from the RSS feed are imported
and 20 new Photo nodes are created. In the example below, the image with the
'Photo' label is the Image field mapped by the Image Grabber module. It is this
image that I'll be displaying within my Gallery.
With the new Photo nodes created, I then created the View to display them.
The View selects the image within the Photo content type, and displays in it a
grid using an ImageCache preset. The View is limited to 20 nodes per page, and
uses a full pager if this is exceeded. The nodes are sorted by the descending
post date, and filtered by whether or not they are published, and only to
include Photo nodes.
As an additional effect, I also included the 'Feeds Item - Item Link' field,
which is basically the original link from the RSS feed. By checking the box to
exclude the item from the display, it is not shown, but the link is made available
to be used elsewhere. By checking the box 'Re-write the output for this field'
on the 'Content: Photo' field, I was able to add the replacement token (in this
case, [url]) as the path for a link around each image. This meant that when
someone clicked a thumbnail of a photo, they were directed to the Flickr website
instead of the node within my Drupal site.

View file

@ -0,0 +1,59 @@
---
title: Create Multigroups in Drupal 7 using Field Collections
date: 2011-08-28
excerpt:
How to replicate CCKs multigroups in Drupal 7 using the Field Collections
module.
tags:
- cck
- drupal-7
- drupal-planet
- entity-api
- field-collection
- fields
- multigroup
---
One of my favourite things lately in Drupal 6 has been CCK 3, and more
specifically, the Content Multigroups sub-module. Basically this allows you to
create a fieldset of various CCK fields, and then repeat that multiple times.
For example, I use it on this site whilst creating invoices for clients. I have a
fieldset called 'Line Item', containing 'Description', 'Quantity' and 'Price'
fields. With a standard fieldset, I could only have one instance of each field -
however, using a multigroup, I can create multiple groups of line items which I
then use within the invoice.
But at the time of writing this, there is no CCK 3 version for Drupal 7. So, I
created the same thing using
[Field Collection](http://drupal.org/project/field_collection) and
[Entity](http://drupal.org/project/entity) modules.
With the modules uploaded and enabled, go to admin/structure/field-collections
and create a field collection.
You can then go to your content type and add a Field Collection field. By
default, the only available Widget type is 'Hidden'.
Next, go to admin/structure/field-collections and add some fields to the field
collection - the same way that you would for a content type. This collection
is going to contain two node reference fields - Image and Link.
With the Field Collection created, I can now add it as a field within my content
type.
Whilst this works perfectly, the field collection is not editable from the node
edit form. You need to load the node, and the collection is displayed here with
add, edit, and delete buttons. This wasn't an ideal solution, and I wanted to be
able to edit the fields within the collection from the node edit form - the same
way as I can using multigroups in Drupal 6.
After some searching I found
[a link to a patch](http://drupal.org/node/977890#comment-4184524) which when
applied adds a 'subform' widget type to the field collection field and allows
for it to be embedded into, and editable from within the node form. Going back
to the content type fields page, and clicking on 'Hidden' (the name of the
current widget), I can change it to subform and save my changes.
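If you've not applied a patch before, the mechanics look along these lines - demonstrated here on a throwaway file rather than the real module, and with a made-up patch body for illustration (use the real patch from the issue above):

```bash
# Demonstrating how `patch` works on a throwaway file. The -p1 flag
# strips the leading a/ and b/ path components from the patch.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
echo 'widget = hidden' > field_collection.info

# A tiny, invented patch in unified diff format.
cat > subform.patch <<'EOF'
--- a/field_collection.info
+++ b/field_collection.info
@@ -1 +1 @@
-widget = hidden
+widget = subform
EOF

patch -p1 < subform.patch
cat field_collection.info
```

For the real module, you would download the patch file from the issue queue into the module's directory and run the same `patch -p1` command there.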
With this change applied, when I go back to add or edit a node within this
content type, my field collection will be easily editable directly within the
form.

View file

@ -0,0 +1,62 @@
---
title: Create an Omega Subtheme with LESS CSS Preprocessor using Omega Tools and Drush
date: 2012-04-16
excerpt: How to create an Omega subtheme on the command line using Drush.
tags:
- drupal
- drupal-7
- drupal-planet
- less
- omega
- theming
---
In this tutorial I'll be showing how to create an
[Omega](http://drupal.org/project/omega) subtheme using the
[Omega Tools](http://drupal.org/project/omega_tools) module, and have it working
with the [LESS CSS preprocessor](http://lesscss.org).
The first thing that I need to do is download the Omega theme and the Omega
Tools and [LESS](http://drupal.org/project/less 'LESS module on drupal.org')
modules, and then to enable both modules. I'm doing this using Drush, but you
can of course do this via the admin interface at admin/modules.
```bash
$ drush dl less omega omega_tools;
$ drush en -y less omega_tools
```
With the Omega Tools module enabled I get the drush omega-subtheme command that
creates my Omega subtheme programmatically. Using this command, I'm creating a
new subtheme, enabling it and setting it as the default theme on my site.
```bash
$ drush omega-subtheme "Oliver Davies" --machine_name="oliverdavies" --enable --set-default
```
By default, four stylesheets are created within the subtheme's css directory.
The first thing that I'm going to do is rename `global.css` to `global.less`.
```bash
$ mv css/global.css css/global.less
```
Now I need to find all references to global.css within my oliverdavies.info
file. I did this using `$ nano oliverdavies.info`, pressing `Ctrl+W` to search,
then `Ctrl+R` to replace, entering `global.css` as the search phrase, and then
`global.less` as the replacement text. After making any changes to
oliverdavies.info, I need to clear Drupal's caches for the changes to be
applied.
```bash
$ drush cc all
```
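The nano search-and-replace above can also be done in a single `sed` command - sketched here against a throwaway copy of the file (the stylesheet line is illustrative):

```bash
# A sed alternative to the nano search-and-replace, demonstrated on a
# throwaway copy of the .info file.
cd /tmp
printf 'stylesheets[all][] = css/global.css\n' > oliverdavies.info

# Replace every occurrence in place, keeping a .bak backup copy.
sed -i.bak 's/global\.css/global.less/g' oliverdavies.info
cat oliverdavies.info
```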
I tested my changes by making some quick additions to my global.less file and
reloading the page.
If your changes aren't applied, then confirm that your global.less file is
enabled within your theme's configuration. I did this by going to
admin/appearance/settings/oliverdavies, clicking on the Toggle styles tab within
_Layout configuration_ and finding global.less at the bottom of _Enable optional
stylesheets_.

View file

@ -0,0 +1,50 @@
---
title: Create a Slideshow of Multiple Images Using Fancy Slide
date: 2010-05-25
excerpt: How to create a slideshow of images using Drupal's Fancy Slide module.
tags:
- drupal
- drupal-6
- drupal-planet
- fancy-slide
- slideshow
---
Whilst updating my About page, I thought about creating a slideshow of several
images instead of just the one static image. When I looked on Drupal.org, the
only slideshow modules were to create slideshows of images that were attached to
different nodes - not multiple images attached to one node. Then, I found the
[Fancy Slide](http://drupal.org/project/fancy_slide) module. It's a jQuery
Slideshow module with features that include integration with the
[CCK](http://drupal.org/project/cck),
[ImageCache](http://drupal.org/project/imagecache) and
[Nodequeue](http://drupal.org/project/nodequeue) modules.
I added a CCK Image field to my Page content type, and set the number of values
to 3, then uploaded my images to the Page.
Once the images were added, I went to the Fancy Slide settings page and created
the slideshow.
I added the dimensions of my images, the type of animation, specified the node
that contained the images, the slideshow field, delay between slides and
transition speed. With the slideshow created, it now needed embedding into the
page.
I added the following code into my About page, as described in the Fancy Slide
readme.txt file - the number representing the ID of the slideshow.
```php
<?php print theme('fancy_slide', 1); ?>
```
In my opinion, this adds a nice effect to the About page. I like it because it's
easy to set up, and easy to add additional images later on if required.

View file

@ -0,0 +1,50 @@
---
title: Create Virtual Hosts on Mac OS X Using VirtualHostX
date: 2010-07-02
excerpt:
How to use the VirtualHostX application to manage virtual hosts on Mac OS X.
tags:
- drupal-6
- drupal-planet
- mamp
- virtual-hosts
- virtualhostx
---
This isn't a Drupal related topic per se, but it is a walk-through of one of the
applications that I use whilst doing Drupal development work. Like most Mac OS X
users, I assume, I use [MAMP](http://www.mamp.info/en/index.html) to run
Apache, MySQL and PHP locally whilst developing. I also use virtual hosts in
Apache to create local .dev domains which are as close as possible to the actual
live domains. For example, if I was developing a site called mysite.com, my
local development version would be mysite.dev.
Normally, I would have to edit the hosts file and Apache's httpd.conf file to
create a virtual host. The first to set the domain and its associated IP
address, and the other to configure the domain's directory, default index file
etc. However, using [VirtualHostX](http://clickontyler.com/virtualhostx), I can
quickly create a virtual host without having to edit any files. Enter the virtual
domain name, the local path and the port, and apply the settings. VirtualHostX
automatically restarts Apache, so the domain is ready to work straight away. You
can also enter custom directives from within the GUI.
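For comparison, the manual steps that VirtualHostX automates look roughly like this. This is a sketch: `HOSTS` and `HTTPD_CONF` are stand-in paths so it can be tried safely, where in reality you would edit `/etc/hosts` and MAMP's `httpd.conf`, and the domain and document root are examples:

```bash
# Stand-in paths for /etc/hosts and MAMP's httpd.conf.
HOSTS=/tmp/demo-hosts
HTTPD_CONF=/tmp/demo-httpd.conf

# 1. Point the .dev domain at the local machine.
echo '127.0.0.1 mysite.dev' >> "$HOSTS"

# 2. Add a virtual host entry for the domain's directory.
cat >> "$HTTPD_CONF" <<'EOF'
<VirtualHost *:80>
  ServerName mysite.dev
  DocumentRoot "/Users/oliver/Sites/mysite"
</VirtualHost>
EOF
```

After making the real changes, Apache would need restarting - which is exactly the step VirtualHostX handles for you.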
There's also an option to share the host over the local network. Next, I intend
on configuring a virtual Windows PC within VMware Fusion to view these domains
so that I can do cross-browser testing before putting a site live.
I ensured that my Apache configuration within MAMP was set to port 80, and that
VirtualHostX was using Apache from MAMP instead of Apple's built-in Apache.
**Note:** One problem that I had after setting this up, was that I was receiving
an error when attempting to open a Drupal website which said _'No such file or
directory'._
After some troubleshooting, I found out that Web Sharing on my Mac had become
enabled (I don't know why, I've never enabled it), and that this was causing a
conflict with Apache. Once I opened my System Preferences and disabled it,
everything worked fine!
This, along with [MAMP](http://www.mamp.info/en/index.html),
[Coda](http://www.panic.com/coda), [Sequel Pro](http://www.sequelpro.com), and
[Transmit](http://www.panic.com/transmit), has become an essential tool within
my development environment.

View file

@ -0,0 +1,40 @@
---
title: Create a Zen Sub-theme Using Drush
date: 2013-09-06
excerpt: How to quickly create a Zen sub-theme using Drush.
tags:
- drupal
- drupal-planet
- drush
- zen
- theming
---
How to use [Drush](https://drupal.org/project/drush) to quickly build a new
sub-theme of [Zen](https://drupal.org/project/zen).
First, download the [Zen](https://drupal.org/project/zen 'The Zen theme') theme
if you haven't already done so.
```bash
$ drush dl zen
```
This will now enable you to use the "drush zen" command.
```bash
$ drush zen "Oliver Davies" oliverdavies --description="A Zen sub-theme for oliverdavies.co.uk" --without-rtl
```
The parameters that I'm passing it are:
1. The human-readable name of the theme.
2. The machine-readable name of the theme.
3. The description of the theme (optional).
4. A flag telling Drush not to include any right-to-left elements within my
sub-theme as these aren't needed (optional).
This will create a new theme in sites/all/themes/oliverdavies.
For further help, type `$ drush help zen` to see the Drush help page for the zen
command.

View file

@ -0,0 +1,82 @@
---
title: Creating a custom PHPUnit command for DDEV
excerpt: How to create a custom command to run PHPUnit commands in DDEV.
tags:
- ddev
- drupal
- drupal-planet
- php
date: 2020-08-28
---
To begin with, let's create an empty file for our command:
```bash
touch .ddev/commands/web/phpunit
```
Commands are located within the `.ddev/commands` directory, with a sub-directory for the container name in which the command should be executed - or `host` if it's a command that is to be run on the host machine.
As [the example repo](https://github.com/opdavies/ddev-phpunit-command-example) has a `web` sub-directory to mimic my Drupal application structure, the command should be run inside the web container so the file should be placed within the `.ddev/commands/web` directory.
As we want the command to be 'phpunit', the filename should also be `phpunit`.
This is an example of a basic command, which is a simple bash script:
```bash
#!/usr/bin/env bash
echo 'running phpunit...'
```
To begin with, let's echo some simple text to check that the command is working. It should also be listed if you run the `ddev` command.
To check the working directory that it used when the command is run, add the following line in the command file:
```bash
echo $(pwd)
```
In the example, it is `/var/www/html/web`. Note that we are already inside the `web` sub-directory.
## Running PHPUnit
To run PHPUnit, I can add the following to the command file:
```
../vendor/bin/phpunit --config .. $*
```
As we're already in the `web` directory, the command needs to go up one level before running the PHPUnit command, and uses `--config` to define the path to the `phpunit.xml.dist` file, which is also in the parent directory.
Using `$*` adds any additional arguments from the CLI to the command inside the container.
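The effect of `$*` can be seen with a tiny stand-in script - this isn't part of DDEV, just a sketch, and the PHPUnit arguments shown are arbitrary examples:

```bash
# A stand-in for the command file, showing how $* forwards any extra
# CLI arguments on to the wrapped phpunit command.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/usr/bin/env bash
echo "vendor/bin/phpunit $*"
EOF
chmod +x "$script"

"$script" --testdox --filter=SomeTest
```

(`"$@"` behaves the same here and handles quoted arguments containing spaces more safely, so it's often preferred in shell scripts.)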
The command could be made simpler by overriding the `working_dir` value in `.ddev/config`:
```yaml
working_dir:
  web: /var/www/html
```
This means that we start in `/var/www/html` rather than inside the `web` directory, and that we can simplify the command to be:
```
vendor/bin/phpunit $*
```
Because the `phpunit.xml.dist` file is inside the working directory, I no longer need to specify its path.
## Adding documentation
To add documentation and help text to the command, add these lines to the command file:
```bash
## Description: Run PHPUnit tests inside the web container.
## Usage: phpunit
## Example: "ddev phpunit" or with additional arguments such as "ddev phpunit --testdox"
```
These will be parsed and shown when someone runs `ddev phpunit -h`, and can be used to show various examples such as adding additional arguments for the PHPUnit command.
With this all in place, we can run commands like `ddev phpunit` or `ddev phpunit --testdox`, or even `ddev phpunit modules/custom/opdavies_talks --filter=TalkEventDateTest` for a Drupal project, and have that command and tests running inside DDEV!
For more information on DDEV and creating custom commands, see the [DDEV documentation](https://ddev.readthedocs.io/en/stable/users/extend/custom-commands).

View file

@ -0,0 +1,366 @@
---
title: Creating a Custom PHPUnit Command for Docksal
date: 2018-05-06
excerpt:
How to write custom commands for Docksal, including one to easily run PHPUnit
tests in Drupal 8.
tags:
- docksal
- drupal
- drupal-8
- drupal-planet
- phpunit
- testing
---
This week I've started writing some custom commands for my Drupal projects that
use Docksal, including one to easily run PHPUnit tests in Drupal 8. This is the
process of how I created this command.
## What is Docksal?
Docksal is a local Docker-based development environment for Drupal projects and
other frameworks and CMSes. It is our standard tool for local environments for
projects at [Microserve][0].
There was a [great talk][1] recently at Drupaldelphia about Docksal.
## Why write a custom command?
One of the things that Docksal offers (and is covered in the talk) is the
ability to add custom commands to Docksal's `fin` CLI, either globally or as
part of your project.
As an advocate of automated testing and TDD practitioner, I write a lot of tests
and run PHPUnit numerous times a day. I've also given [talks][6] and have
[written other posts][7] on this site relating to testing in Drupal.
There are a couple of ways to run PHPUnit with Docksal. The first is to use
`fin bash` to open a shell into the container, move into the docroot directory
if needed, and run the `phpunit` command.
```bash
fin bash
cd /var/www/docroot
../vendor/bin/phpunit -c core modules/custom
```
Alternatively, it can be run from the host machine using `fin exec`.
```
cd docroot
fin exec '../vendor/bin/phpunit -c core modules/custom'
```
Both of these options require multiple steps as we need to be in the `docroot`
directory where the Drupal code is located before the command can be run, and
both have quite long commands to run PHPUnit itself - some of which is repeated
every time.
By adding a custom command, I intend to:
1. Make it easier to get set up to run PHPUnit tests - i.e. setting up a
`phpunit.xml` file.
1. Make it easier to run the tests that we'd written by shortening the command
and making it so it can be run anywhere within our project.
I also hoped to make it project agnostic so that I could add it onto any project
and immediately run it.
## Creating the command
Each command is a file located within the `.docksal/commands` directory. The
filename is the name of the command (e.g. `phpunit`) with no file extension.
To create the file, run this from the same directory where your `.docksal`
directory is:
```bash
mkdir -p .docksal/commands
touch .docksal/commands/phpunit
```
This will create a new, empty `.docksal/commands/phpunit` file, and the
`phpunit` command is now listed under "Custom commands" when we run `fin`.
![](/images/blog/docksal-phpunit-command/1.gif)
You can write commands with any interpreter. I'm going to use bash, so I'll add
the shebang to the top of the file.
```bash
#!/usr/bin/env bash
```
With this in place, I can now run `fin phpunit`, though there is no output
displayed or actions performed as the rest of the file is empty.
## Adding a description and help text
Currently the description for our command when we run `fin` is the default "No
description" text. I'd like to add something more relevant, so I'll start by
adding a new description.
fin interprets lines starting with `##` as documentation - the first of which it
uses as the description.
```bash
#!/usr/bin/env bash
## Run automated PHPUnit tests.
```
Now when I run it, I see the new description.
![](/images/blog/docksal-phpunit-command/2.gif)
Any additional lines are used as help text when running `fin help phpunit`. Here
I'll add an example command to demonstrate how to run it, as well as some more
in-depth text about what the command will do.
```bash
#!/usr/bin/env bash
## Run automated PHPUnit tests.
##
## Usage: fin phpunit <args>
##
## If a core/phpunit.xml file does not exist, copy one from elsewhere.
## Then run the tests.
```
Now when I run `fin help phpunit`, I see the new help text.
![](/images/blog/docksal-phpunit-command/3.gif)
## Adding some content
### Setting the target
As I want the commands to be run within Docksal's "cli" container, I can specify
that with `exec_target`. If one isn't specified, the commands are run locally on
the host machine.
```
#: exec_target = cli
```
### Available variables
These variables are provided by fin and are available to use within any custom
commands:
- `PROJECT_ROOT` - The absolute path to the nearest `.docksal` directory.
- `DOCROOT` - name of the docroot folder.
- `VIRTUAL_HOST` - the virtual host name for the project, such as
  `myproject.docksal`.
- `DOCKER_RUNNING` - (string) "true" or "false".
<div class="note" markdown="1">
**Note:** If the `DOCROOT` variable is not defined within the cli container, ensure that it's added to the environment variables in `.docksal/docksal.yml`. For example:
```yaml
version: "2.1"

services:
  cli:
    environment:
      - DOCROOT
```
</div>
### Running phpunit
When you run the `phpunit` command, there are a number of options you can pass to
it such as `--filter`, `--testsuite` and `--group`, as well as the path to the
tests to execute, such as `modules/custom`.
I wanted to still be able to do this by running `fin phpunit <args>` so the
commands can be customised when executed. However, as the first half of the
command (`../vendor/bin/phpunit -c core`) is consistent, I can wrap that within
my custom command and not need to type it every time.
By using `"$@"` I can capture any additional arguments, such as the test
directory path, and append them to the command to execute.
I'm using `$PROJECT_ROOT` to prefix the command with the absolute path to
`phpunit` so that I don't need to be in that directory when I run the custom
command, and `$DOCROOT` to always enter the sub-directory where Drupal is
located. In this case, it's "docroot", though I also use "web" and I've seen
various others used.
```bash
DOCROOT_PATH="${PROJECT_ROOT}/${DOCROOT}"
DRUPAL_CORE_PATH="${DOCROOT_PATH}/core"
# If there is no phpunit.xml file, copy one from elsewhere.
# Otherwise run the tests.
${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
```
For example, `fin phpunit modules/custom` would execute
`/var/www/vendor/bin/phpunit -c /var/www/docroot/core modules/custom` within the
container.
I can then wrap this within a condition so that the tests are only run when a
`phpunit.xml` file exists, as it is required for them to run successfully.
```bash
if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  : # If there is no phpunit.xml file, copy one from elsewhere.
else
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
fi
```
### Creating phpunit.xml - step 1
My first thought, if a `phpunit.xml` file didn't exist, was to duplicate
core's `phpunit.xml.dist` file. However, this isn't enough to run the tests, as
values such as `SIMPLETEST_BASE_URL`, `SIMPLETEST_DB` and
`BROWSERTEST_OUTPUT_DIRECTORY` need to be populated.
As the tests wouldn't run at this point, I've exited early and displayed a
message to the user to edit the new `phpunit.xml` file and run `fin phpunit`
again.
```bash
if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  echo "Copying ${DRUPAL_CORE_PATH}/phpunit.xml.dist to ${DRUPAL_CORE_PATH}/phpunit.xml."
  echo "Please edit its values as needed and re-run 'fin phpunit'."
  cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
  exit 1;
else
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
fi
```
However this isn't as streamlined as I originally wanted as it still requires
the user to perform an additional step before the tests can run.
### Creating phpunit.xml - step 2
My second idea was to keep a pre-configured file within the project repository,
and to copy that into the expected location. That approach would mean that the
project specific values would already be populated, as well as any
customisations made to the default settings. I decided on
`.docksal/drupal/core/phpunit.xml` to be the potential location.
Also, if this file is copied then we can go ahead and run the tests straight
away rather than needing to exit early.
If a pre-configured file doesn't exist, then we can default back to copying
`phpunit.xml.dist`.
To avoid duplication, I created a reusable `run_tests()` function so it could be
executed in either scenario.
```bash
run_tests() {
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
}

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  if [ -e "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ]; then
    echo "Copying ${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml to ${DRUPAL_CORE_PATH}/phpunit.xml"
    cp "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ${DRUPAL_CORE_PATH}/phpunit.xml
    run_tests "$@"
  else
    echo "Copying ${DRUPAL_CORE_PATH}/phpunit.xml.dist to ${DRUPAL_CORE_PATH}/phpunit.xml."
    echo "Please edit its values as needed and re-run 'fin phpunit'."
    cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
    exit 1;
  fi
else
  run_tests "$@"
fi
```
This means that I can execute fewer steps and run a much shorter command compared
to the original, and even if someone didn't have a `phpunit.xml` file created,
one could be copied into place and the tests run with only one command.
## The finished file
```bash
#!/usr/bin/env bash
#: exec_target = cli
## Run automated PHPUnit tests.
##
## Usage: fin phpunit <args>
##
## If a core/phpunit.xml file does not exist, one is copied from
## .docksal/core/phpunit.xml if that file exists, or copied from the default
## core/phpunit.xml.dist file.
DOCROOT_PATH="${PROJECT_ROOT}/${DOCROOT}"
DRUPAL_CORE_PATH="${DOCROOT_PATH}/core"
run_tests() {
  ${PROJECT_ROOT}/vendor/bin/phpunit -c ${DRUPAL_CORE_PATH} "$@"
}

if [ ! -e ${DRUPAL_CORE_PATH}/phpunit.xml ]; then
  if [ -e "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ]; then
    echo "Copying ${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml to ${DRUPAL_CORE_PATH}/phpunit.xml"
    cp "${PROJECT_ROOT}/.docksal/drupal/core/phpunit.xml" ${DRUPAL_CORE_PATH}/phpunit.xml
    run_tests "$@"
  else
    echo "Copying phpunit.xml.dist to phpunit.xml"
    echo "Please edit its values as needed and re-run 'fin phpunit'."
    cp ${DRUPAL_CORE_PATH}/phpunit.xml.dist ${DRUPAL_CORE_PATH}/phpunit.xml
    exit 1;
  fi
else
  run_tests "$@"
fi
```
It's currently available as a [GitHub Gist][2], though I'm planning on moving it
into a public GitHub repository either on my personal account or the [Microserve
organisation][3], for people to either use as examples or to download and use
directly.
I've also started to add other commands to projects such as `config-export` to
standardise the way to export configuration from Drupal 8, run Drupal 7 tests
with SimpleTest, and compile front-end assets like CSS within custom themes.
I think it's a great way to shorten existing commands, or to group multiple
commands into one like in this case, and I can see a lot of other potential uses
for it during local development and continuous integration. Also being able to
run one command like `fin init` and have it set up everything for your project
is very convenient and a big time saver!
<div class="note" markdown="1">
Since writing this post, I've had a [pull request][8] accepted for this command to be added as a [Docksal add-on][9]. This means that the command can be added to any Docksal project by running `fin addon install phpunit`. It will be installed into the `.docksal/addons/phpunit` directory, and displayed under "Addons" rather than "Custom commands" when you run `fin`.
</div>
## Resources
- [PHPUnit](https://phpunit.de)
- [PHPUnit in Drupal 8][4]
- [Main Docksal website](https://docksal.io)
- [Docksal documentation](https://docksal.readthedocs.io)
- [Docksal: one tool to rule local and CI/CD environments][1] - Docksal talk
from Drupaldelphia
- [phpcs example custom command][5]
- [phpunit command Gist][2]
- [Docksal addons blog post][9]
- [Docksal addons repository][10]
[0]: {{site.companies.microserve.url}}
[1]: https://youtu.be/1sjsvnx1P7g
[2]: https://gist.github.com/opdavies/72611f198ffd2da13f363ea65264b2a5
[3]: {{site.companies.microserve.github}}
[4]: https://www.drupal.org/docs/8/phpunit
[5]:
https://github.com/docksal/docksal/blob/develop/examples/.docksal/commands/phpcs
[6]: /talks/tdd-test-driven-drupal
[7]: /articles/tags/testing
[8]: https://github.com/docksal/addons/pull/15
[9]: https://blog.docksal.io/installing-addons-in-a-docksal-project-172a6c2d8a5b
[10]: https://github.com/docksal/addons

View file

@ -0,0 +1,52 @@
---
title: Creating Local and Staging sites with Drupal's Domain Module Enabled
date: 2013-07-17
excerpt: How to use aliases within Domain module for pre-production sites.
tags:
- databases
- domain
- drupal
- drupal-planet
- table-prefixing
---
The
[Domain Access project](https://drupal.org/project/domain 'The Domain Access project on Drupal.org')
is a suite of modules that provide tools for running a group of affiliated sites
from one Drupal installation and a single shared database. The issue is that the
domains are stored within the database so these are copied across when the data
is migrated between environments, whereas the domains are obviously going to
change.
Rather than changing the domain settings within the Domain module itself, I
think the best solution is to use table prefixes and create a different domain
table per environment. With live, staging and local domains, the tables would
be named as follows:
```bash
live_domain
local_domain
staging_domain
```
Within each site's settings.php file, define the prefix for the domain table
within the databases array so that each site is looking at the correct table for
its environment.
```php
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'foobar',
  'username' => 'foo',
  'password' => 'bar',
  'host' => 'localhost',
  'prefix' => array(
    'default' => '',
    'domain' => 'local_', // This will use the local_domain table.
    // Add any other prefixed tables here.
  ),
);
```
Within each environment-specific domain table, update the subdomain column to
contain the appropriate domain names.
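For example, assuming the domain names live in the `subdomain` column as described above, the local table could be updated with something like this (the domain names here are purely illustrative):

```sql
-- Illustrative sketch: point the local environment's record at a .dev domain.
UPDATE local_domain
SET subdomain = 'mysite.dev'
WHERE subdomain = 'mysite.com';
```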

View file

@ -0,0 +1,162 @@
---
title: Creating and using custom tokens in Drupal 7
date: 2013-02-16
excerpt:
This post outlines the steps required to create your own custom tokens in
Drupal.
tags:
- drupal
- drupal-7
- drupal-planet
- tokens
---
This post outlines the steps required to create your own custom tokens in
Drupal.
When writing the recent releases of the
[Copyright Block](http://drupal.org/project/copyright_block) module, I used
tokens to allow the user to edit and customise their copyright message and place
the copyright_message:dates token in the desired position. When the block is
rendered, the token is replaced by the necessary dates.
We will be using the fictional _foo_ module to demonstrate this.
## Requirements
- [Token module](http://drupal.org/project/token)
## Recommended
- [Devel module](http://drupal.org/project/devel) - useful to run `dpm()` and
`kpr()` functions
- [Copyright Block module](http://drupal.org/project/copyright_block) - 7.x-2.x
and 6.x-1.x use tokens, handy as a reference
## Implementing hook_token_info()
The first thing that we need to do is define the new token type and/or the token
itself, along with its descriptive text. To view the existing tokens and types,
use `dpm(token_get_info());`, assuming that you have the
[Devel module](http://drupal.org/project/devel) installed.
```php
/**
* Implements hook_token_info().
*/
function foo_token_info() {
  $info = array();

  // Add any new tokens.
  $info['tokens']['foo']['bar'] = t('This is my new bar token within the foo type.');

  // Return them.
  return $info;
}
```
In this case, the token called _bar_ resides within the _foo_ group.
If I needed to add a new token within an existing token type, such as 'node',
the syntax would be `$info['tokens']['node']['bar']`.
## Implementing hook_tokens()
Now that the Token module is aware of our new token, we now need to determine
what the token is replaced with. This is done using `hook_tokens()`. Here is the
basic code needed for an implementation:
```php
/**
* Implements hook_tokens().
*/
function foo_tokens($type, $tokens, array $data = array(), array $options = array()) {
  $replacements = array();

  // Code goes here...

  // Return the replacements.
  return $replacements;
}
```
The first thing to check for is the type of token using an `if()` function, as
this could be an existing type like 'node', 'user' or 'site', or a custom token
type like 'foo'. Once we're sure that we're looking at the right type(s), we can
use `foreach ($tokens as $name => $original)` to loop through each of the
available tokens using a `switch()`. For each token, you can perform some logic
to work out the replacement text and then add it into the replacements array
using `$replacements[$original] = $new;`.
```php
/**
* Implements hook_tokens().
*/
function foo_tokens($type, $tokens, array $data = array(), array $options = array()) {
  $replacements = array();

  // The first thing that we're going to check for is the type of token - node,
  // user etc...
  if ($type == 'foo') {
    // Loop through each of the available tokens.
    foreach ($tokens as $name => $original) {
      // Find the desired token by name.
      switch ($name) {
        case 'bar':
          $new = '';

          // Work out the value of $new...

          // Add the new value into the replacements array.
          $replacements[$original] = $new;
          break;
      }
    }
  }

  // Return the replacements.
  return $replacements;
}
```
## Example
An example from Copyright Block module:
```php
/**
* Implements hook_tokens().
*/
function copyright_block_tokens($type, $tokens, array $data = array(), array $options = array()) {
  $replacements = array();

  if ($type == 'copyright_statement') {
    foreach ($tokens as $name => $original) {
      switch ($name) {
        case 'dates':
          $start_year = variable_get('copyright_block_start_year', date('Y'));
          $current_year = date('Y');
          $replacements[$original] = $start_year < $current_year ? $start_year . '-' . $current_year : $start_year;
          break;
      }
    }
  }

  return $replacements;
}
```
## Using token_replace()
With everything defined, all that we now need to do is pass some text through
the `token_replace()` function to replace it with the values defined within
`hook_token()`.
```php
$a = t('Something');
// This would use any token type - node, user etc.
$b = token_replace($a);
// This would only use foo tokens.
$c = token_replace($a, array('foo'));
```

View file

@ -0,0 +1,82 @@
---
title: Croeso PHP South Wales!
date: 2018-08-01
excerpt: Last night was the first meetup of Cardiff's PHP South Wales user group.
tags:
- php
- php-south-wales
- meetups
has_tweets: true
---
Last night was the first meetup of Cardiff's [PHP South Wales user group][0]! It
was a great first event, and it was good to meet a lot of new people as well as
catch up with some familiar faces among the 36 (according to meetup.com)
attendees - including some [PHP South West][9] regulars.
Organised by Steve and Amy McDougall, it was held in Barclays [Eagle Lab][1]
which was a great space, and it was cool to be back in Brunel House having
worked in that building previously whilst at Appnovation.
{% include 'tweet' with {
class: 'my-6',
data_cards: true,
content: '<p lang="en" dir="ltr">Pretty cool being back in the centre of Cardiff. <a href="https://t.co/kh7Oi2tPDD">pic.twitter.com/kh7Oi2tPDD</a></p>&mdash; Oliver Davies (@opdavies) <a href="https://twitter.com/opdavies/status/1024377438611156992?ref_src=twsrc%5Etfw">July 31, 2018</a>',
} %}
## Speakers
[Rob Allen][2] was the main speaker, who gave an interesting talk and a brave
live demo on serverless PHP and OpenWhisk. I always enjoy watching Rob speak,
which I've done a number of times at different events, and it was great to be
able to chat for a while after the meetup too.
{% include 'tweet' with {
class: 'my-6',
data_cards: true,
content: '<p lang="en" dir="ltr">Great to see <a href="https://twitter.com/akrabat?ref_src=twsrc%5Etfw">@akrabat</a> speaking about serverless PHP at the first <a href="https://twitter.com/phpSouthWales?ref_src=twsrc%5Etfw">@phpSouthWales</a> meetup. <a href="https://twitter.com/hashtag/php?src=hash&amp;ref_src=twsrc%5Etfw">#php</a> <a href="https://twitter.com/hashtag/phpc?src=hash&amp;ref_src=twsrc%5Etfw">#phpc</a> <a href="https://twitter.com/hashtag/cardiff?src=hash&amp;ref_src=twsrc%5Etfw">#cardiff</a> <a href="https://t.co/Q9YaQ6O1fB">pic.twitter.com/Q9YaQ6O1fB</a></p>&mdash; Oliver Davies (@opdavies) <a href="https://twitter.com/opdavies/status/1024359937063956484?ref_src=twsrc%5Etfw">July 31, 2018</a>',
} %}
We also had a couple of lightning talks, starting with [Ismael Velasco][3]
giving an introduction to progressive web applications (PWAs). I can see some
potential uses for this on my current work project, and I look forward to seeing
the full talk soon.
I gave an updated version of my [Tailwind CSS lightning talk][4], and enjoyed
being able to show some examples of new sites using Tailwind such as [Laravel
Nova][5], [Spatie][6]'s new website and PHP South Wales itself!
{% include 'tweet' with {
class: 'my-6',
data_cards: true,
content: '<p lang="en" dir="ltr">Lightning talk time, first <a href="https://twitter.com/IsmaelVelasco?ref_src=twsrc%5Etfw">@IsmaelVelasco</a> talking about <a href="https://twitter.com/hashtag/PWA?src=hash&amp;ref_src=twsrc%5Etfw">#PWA</a> 😎🎉 <a href="https://t.co/KrJGZlIp7V">pic.twitter.com/KrJGZlIp7V</a></p>&mdash; PHP South Wales (@phpSouthWales) <a href="https://twitter.com/phpSouthWales/status/1024377906456420352?ref_src=twsrc%5Etfw">July 31, 2018</a>',
} %}
## Conclusion
It's great to have a meetup in Cardiff again, and having thought about organising
something myself previously, I'm glad to see someone step forward to do so. This
shows that there's still a strong PHP community in Cardiff and South Wales, and
hopefully this will be the first meetup of many. I'll look forward to seeing the
local community grow!
Thanks again to Steve and Amy for organising, Eagle Labs for hosting, the
sponsors, and Rob and Ismael for speaking.
It would be great to see even more people at the next one. If you're interested,
take a look at the [group's website][0], [meetup.com group][7] and [Twitter
profile][8]. Alternatively, get in touch with me or one of the organisers
for more information.
**Croeso ac iechyd da PHP South Wales!**
[0]: https://www.phpsouthwales.uk
[1]: https://labs.uk.barclays/locations/cardiff-en
[2]: https://twitter.com/akrabat
[3]: https://twitter.com/IsmaelVelasco
[4]: /talks/taking-flight-with-tailwind-css
[5]: https://nova.laravel.com
[6]: https://spatie.be
[7]: https://www.meetup.com/PHP-South-Wales
[8]: https://twitter.com/phpsouthwales
[9]: https://phpsw.uk


@ -0,0 +1,140 @@
---
title: Debugging Drupal Commerce Promotions and Adjustments using Illuminate Collections (Drupal 8)
date: 2018-10-24
excerpt: Using Laravel's Illuminate Collections to debug an issue with a Drupal Commerce promotion.
tags:
- drupal
- drupal-8
- drupal-commerce
- drupal-planet
- illuminate-collections
- laravel-collections
- php
promoted: true
---
Today I found another instance where I decided to use [Illuminate
Collections][0] within my Drupal 8 code, whilst debugging an issue where a
[Drupal Commerce][1] promotion was incorrectly being applied to an order.
No adjustments were showing in the Drupal UI for that order, so after some
initial investigation and finding that `$order->getAdjustments()` was empty, I
determined that I would need to get the adjustments from each order item within
the order.
If the order were an array, this is how it would be structured in this
situation:
```php
$order = [
  'id' => 1,
  'items' => [
    [
      'id' => 1,
      'adjustments' => [
        ['name' => 'Adjustment 1'],
        ['name' => 'Adjustment 2'],
        ['name' => 'Adjustment 3'],
      ]
    ],
    [
      'id' => 2,
      'adjustments' => [
        ['name' => 'Adjustment 4'],
      ]
    ],
    [
      'id' => 3,
      'adjustments' => [
        ['name' => 'Adjustment 5'],
        ['name' => 'Adjustment 6'],
      ]
    ],
  ],
];
```
## Getting the order items
I started by using `$order->getItems()` to load the order's items, converted
them into a Collection, and used the Collection's `pipe()` method and the
`dump()` function provided by the [Devel module][2] to output the order items.
```php
collect($order->getItems())
  ->pipe(function (Collection $collection) {
    dump($collection);
  });
```
## Get the order item adjustments
Now that we have a Collection of order items, we need to get the adjustments for
each item. We can do this with `map()`, then call `getAdjustments()` on each
order item.
This would return a Collection of arrays, with each array containing its own
adjustments, so we can use `flatten()` to collapse all the adjustments into one
single-dimensional array.
```php
collect($order->getItems())
  ->map(function (OrderItem $order_item) {
    return $order_item->getAdjustments();
  })
  ->flatten(1);
```
There are a couple of refactors that we can do here though:
- Use `flatMap()` to combine the `flatten()` and `map()` methods.
- Use [higher order messages][3] to delegate straight to the `getAdjustments()`
  method on each order item, rather than having to create a closure and call the
  method within it.
```php
collect($order->getItems())
  ->flatMap->getAdjustments();
```
## Filtering
In this scenario, each order item had three adjustments - the correct promotion,
the incorrect one and the standard VAT addition. I wasn't concerned about the
VAT adjustment for debugging, so I used `filter()` to remove it based on the
result of the adjustment's `getSourceId()` method.
```php
collect($order->getItems())
  ->flatMap->getAdjustments()
  ->filter(function (Adjustment $adjustment) {
    return $adjustment->getSourceId() != 'vat';
  });
```
## Conclusion
Now that I have just the relevant adjustments, I want to load each one and check
its conditions. To do this, I need just the source IDs.
Again, I can use a higher order message to directly call `getSourceId()` on the
adjustment and return its value to `map()`.
```php
collect($order->getItems())
  ->flatMap->getAdjustments()
  ->filter(function (Adjustment $adjustment) {
    return $adjustment->getSourceId() != 'vat';
  })
  ->map->getSourceId();
```
This returns a Collection containing just the relevant promotion IDs being
applied to the order that I can use for debugging.
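Taking this a step further - a hedged sketch, assuming the `Promotion` entity class from the Commerce Promotion module, with variable names of my own choosing - the source IDs could then be used to load the promotion entities whose conditions need checking:

```php
use Drupal\commerce_promotion\Entity\Promotion;

$promotions = collect($order->getItems())
  ->flatMap->getAdjustments()
  ->filter(function (Adjustment $adjustment) {
    return $adjustment->getSourceId() != 'vat';
  })
  ->map->getSourceId()
  ->unique()
  ->map(function ($promotion_id) {
    // Load each promotion so that its conditions can be inspected.
    return Promotion::load($promotion_id);
  });
```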
Now just to find out why the incorrect promotion was applying!
[0]: https://laravel.com/docs/collections
[1]: https://drupalcommerce.org
[2]: https://www.drupal.org/project/devel
[3]: https://laravel-news.com/higher-order-messaging


@ -0,0 +1,102 @@
---
title: Debugging PHP in Docker with Xdebug, Neovim and DAP
date: ~
tags:
- docker
- neovim
- dap
- xdebug
- php
- drupal
draft: true
---
I've been a full-time Neovim user for a year at the time of writing this post, and whilst I was previously a semi-regular Xdebug user, it's something that I've managed to work around, mostly resorting to `var_dump()`, `dump()`, or `dd()` for debugging instead.
This week though, whilst working on some particularly tricky PHP code, I decided to spend some time and get Xdebug working and be able to use a step debugger within Neovim.
https://gist.githubusercontent.com/opdavies/688a3c8917893bf34a3da32ff69c1837/raw/112e16634930d312cd04c525de42a198c8a32bb9/dap.lua
## Installing Xdebug
Installing Xdebug itself within Docker was straightforward. I was able to add two lines to my existing `RUN` command - `pecl install xdebug` to install the extension and `docker-php-ext-enable xdebug` to enable it.
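As a sketch of where those two lines sit (the base image here is an assumption, not necessarily the one from my project):

```dockerfile
FROM php:8.1-fpm

# Install and enable the Xdebug extension.
RUN pecl install xdebug \
    && docker-php-ext-enable xdebug
```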
Now when I run `php -v` inside my container, I can see that it mentions Xdebug.
## Configuring Xdebug
https://www.youtube.com/watch?v=ZIGdBSD6zvU
```ini
xdebug.mode=develop,debug
xdebug.client_host=host.docker.internal
xdebug.discover_client_host=0
xdebug.output_dir=/tmp/xdebug
xdebug.log=/tmp/xdebug/xdebug-example.log
xdebug.start_with_request=yes
```
## Installing DAP plugins
I use [Packer](https://github.com/wbthomason/packer.nvim) for managing my Neovim plugins, so I needed to install some additional ones to add the DAP (debug adapter protocol) functionality.
```lua
use "mfussenegger/nvim-dap"
use "rcarriga/nvim-dap-ui"
use "theHamsta/nvim-dap-virtual-text"
use "nvim-telescope/telescope-dap.nvim"
```
## Installing DAP dependencies
[https://github.com/mfussenegger/nvim-dap/wiki/Debug-Adapter-installation#PHP](https://github.com/mfussenegger/nvim-dap/wiki/Debug-Adapter-installation#PHP)
There's also a prerequisite of installing the `vscode-php-debug` adapter.
I configure my laptop with Ansible, so I added a new `debugger` role that is responsible for cloning this repository and installing its contents:
[https://github.com/opdavies/dotfiles/blob/7681c535269049556736f1f857c8c9fd800857a3/roles/debugger/tasks/php.yaml](https://github.com/opdavies/dotfiles/blob/7681c535269049556736f1f857c8c9fd800857a3/roles/debugger/tasks/php.yaml)
## Configuring DAP for Xdebug
```lua
dap.adapters.php = {
  type = "executable",
  command = "node",
  args = { os.getenv("HOME") .. "/build/vscode-php-debug/out/phpDebug.js" }
}

dap.configurations.php = {
  {
    type = "php",
    request = "launch",
    name = "Listen for Xdebug",
    port = 9003,
    pathMappings = {
      ["/var/www/html"] = "${workspaceFolder}"
    }
  }
}
```
I first needed to configure the adapter to use `vscode-php-debug` and then add a DAP configuration.
The default port for the step debugger is now 9003 rather than 9000, so I changed this from the default, and as I'm working with PHP inside a container, I also added a path mapping so that my code could be found.
## Testing the connection
> [Step Debug] Creating socket for 'host.docker.internal:9003', getaddrinfo: Invalid argument.

To fix this, I added the `host.docker.internal` hostname to the `php` service in my Docker Compose file and mounted the Xdebug output directory:
```yaml
services:
  php:
    volumes:
      - "/tmp/xdebug:/tmp/xdebug"
    extra_hosts:
      - "host.docker.internal:host-gateway"
```
---
keymaps:
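As a sketch of the keymaps I'd pair with this setup - the bindings themselves are my own choice, but the `dap` functions are part of nvim-dap's Lua API:

```lua
local dap = require("dap")

-- Session control.
vim.keymap.set("n", "<F5>", dap.continue)
vim.keymap.set("n", "<F10>", dap.step_over)
vim.keymap.set("n", "<F11>", dap.step_into)
vim.keymap.set("n", "<F12>", dap.step_out)

-- Breakpoints.
vim.keymap.set("n", "<Leader>b", dap.toggle_breakpoint)
```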
https://github.com/opdavies/docker-drupal-example


@ -0,0 +1,141 @@
---
title: Decorating an Entity Metadata Wrapper to add and refactor methods
excerpt: How to use the Decorator design pattern with Drupal 7's EntityMetadataWrapper to extend it, and add and refactor custom methods.
tags:
- drupal
- drupal-7
- drupal-planet
- php
date: 2021-02-24
---
Following [yesterday's Entity Metadata Wrapper blog post](/blog/cleanly-retrieving-user-profile-data-using-entity-metadata-wrapper), as I continued to work on this task, I noticed that I was repeating several of the same chaining steps in different methods in the same file. For example:
```php
public function getFirstName(): string {
  return $this
    ->get('profile_user_basic') // Get the pupil's profile.
    ->get('field_first_name')
    ->value();
}

private function getTeacherFirstName(): string {
  return $this
    ->get('profile_student') // Get the pupil's profile.
    ->get('field_class') // Get the pupil's class.
    ->get('field_teacher') // Get the class' teacher.
    ->get('profile_user_basic') // Get the teacher's profile.
    ->get('field_first_name')
    ->value();
}
```
In both cases, the last three lines are the same, where the same profile type is loaded, and the value is loaded from a field.
I wanted to find a way to remove this duplication whilst also making the code more readable. Ideally, this would mean adding a method like `getFirstNameFromBasicProfile()` that would group the last three steps.
## Extending the EntityDrupalWrapper
I've done this before, creating a custom wrapper class with its own methods that extends `EntityDrupalWrapper`. This is how that might look:
```php
final class PupilWrapper extends \EntityDrupalWrapper {

  public function __construct(\stdClass $data, $info = []) {
    parent::__construct('user', $data, $info);
  }

  public function getFirstName(): string {
    return $this->getFirstNameFromBasicProfile();
  }

  public function getTeacherFirstName(): string {
    return $this
      ->get('profile_student')
      ->get('field_class')
      ->get('field_teacher')
      ->getFirstNameFromBasicProfile();
  }

  private function getFirstNameFromBasicProfile(): string {
    return $this
      ->get('profile_user_basic')
      ->get('field_first_name')
      ->value();
  }

}
```
Whilst this has worked in previous situations, this time I had this error:
> Error: Call to undefined method EntityDrupalWrapper::getFirstNameFromBasicProfile() in Drupal\my_module\EntityWrapper\PupilWrapper->getTeacherFirstName
This is because the `get()` method returns an instance of `EntityStructureWrapper` (another class that extends `EntityDrupalWrapper`), which means that `getFirstNameFromBasicProfile()` is not accessible even though it's in the same file.
I tried overriding the `get()` method, but wasn't able to get this to work.
## Decorating the EntityDrupalWrapper
Another option that I tried was to follow the Decorator design pattern, and add a new class that takes an `EntityDrupalWrapper` as an argument and uses it internally, but doesn't extend it. Here's an example:
```php
final class PupilWrapper {

  private $accountWrapper;

  public function __construct(\EntityMetadataWrapper $accountWrapper) {
    $this->accountWrapper = $accountWrapper;
  }

  public function getFirstName(): string {
    return $this->getFirstNameFromBasicProfile();
  }

  public function getTeacherFirstName(): string {
    return $this
      ->get('profile_student')
      ->get('field_class')
      ->get('field_teacher')
      ->getFirstNameFromBasicProfile();
  }

  private function getFirstNameFromBasicProfile(): string {
    return $this
      ->get('profile_user_basic')
      ->get('field_first_name')
      ->value();
  }

}
```
In this case, the constructor argument is an instance of `EntityMetadataWrapper` so that it could be either an `EntityDrupalWrapper` or an `EntityStructureWrapper`.
### Re-adding required wrapper methods
As the `get()` method is missing, this would cause an error:
> Error: Call to undefined method Drupal\my_module\EntityWrapper\PupilWrapper::get() in Drupal\my_module\EntityWrapper\PupilWrapper->getFirstName()
However, we can re-add it, have it get the value from `accountWrapper` and return another instance of `PupilWrapper` so that `getFirstNameFromBasicProfile()` will be available.
```php
public function get(string $property): self {
return new self($this->accountWrapper->get($property));
}
```
The `value()` method is also required, but this can delegate to the decorated wrapper:
> Error: Call to undefined method Drupal\my_module\EntityWrapper\PupilWrapper::value() in Drupal\my_module\EntityWrapper\PupilWrapper->getFirstName()
```php
public function value(): string {
return $this->accountWrapper->value();
}
```
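As a hedged usage sketch - loading user 123 is just an example, but `user_load()` and `entity_metadata_wrapper()` are standard Drupal 7 Entity API functions - the decorator wraps an existing wrapper like this:

```php
$account = user_load(123);

// Decorate the standard Entity API wrapper with the custom methods.
$pupil = new PupilWrapper(entity_metadata_wrapper('user', $account));

// Both calls now share getFirstNameFromBasicProfile() internally.
$first_name = $pupil->getFirstName();
$teacher_first_name = $pupil->getTeacherFirstName();
```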
## Conclusion
This was the first time that I tried extending Drupal 7's entity metadata wrappers in this way, but it worked well, removes the duplication and cleans up the code further.


@ -0,0 +1,27 @@
---
title: Display a Custom Menu in a Drupal 7 Theme Template File
date: 2012-08-18
excerpt: The code needed to display a menu in a Drupal 7 template file.
tags:
- aria
- drupal
- drupal-7
- drupal-planet
- php
---
For reference, this is the code needed to display a menu in a Drupal 7 template
file, including the navigation ARIA role.
```php
$menu_name = 'menu-footer-menu';
$menu_id = 'footer-menu';

print theme('links', array(
  'links' => menu_navigation_links($menu_name),
  'attributes' => array(
    'id' => $menu_id,
    'role' => 'navigation',
    'class' => array('links', 'inline'),
  ),
));
```


@ -0,0 +1,67 @@
---
title: Display Git Branch or Tag Names in your Bash Prompt
date: 2013-04-27
excerpt: Whilst watching Drupalize.me's recent Introduction to Git series, I thought it was useful the way that the current Git branch or tag name was displayed in the bash prompt. Here's how to do it.
tags:
- drupal
- drupal-planet
- git
- terminal
---
Whilst watching [Drupalize.me](http://drupalize.me 'Drupalize.me')'s recent
[Introduction to Git series](http://drupalize.me/series/introduction-git-series 'Introduction to Git on Drupalize.me'),
I thought it was useful the way that the current Git branch or tag name was
displayed in the bash prompt.
For example (with some slight modifications):
```bash
oliver@oliver-mbp:~/Development/drupal(master) $
oliver@oliver-mbp:~/Development/a11y_checklist(7.x-1.0) $
```
Here's how to do it.
To begin with, create a new file to contain the functions:
```bash
vim ~/.bash/git-prompt
```
Paste the following code into the file, and save it.
```bash
parse_git_branch () {
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

parse_git_tag () {
  git describe --tags 2> /dev/null
}

parse_git_branch_or_tag() {
  local OUT="$(parse_git_branch)"
  if [ "$OUT" == " ((no branch))" ]; then
    OUT="($(parse_git_tag))";
  fi
  echo $OUT
}
```
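To see what the `sed` expression in `parse_git_branch` does in isolation, you can pipe some sample `git branch` output through it:

```shell
# Lines not starting with "*" are deleted; the current branch line
# "* master" becomes " (master)" (note the leading space).
printf '* master\n  dev\n' | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
# Prints: " (master)"
```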
Edit your `.bashrc` or `.bash_profile` file to override the PS1 value.
```bash
vim ~/.bashrc
```
Add the following code at the bottom of the file, and save it.
```bash
source ~/.bash/git-prompt
PS1="\u@\h:\w\$(parse_git_branch_or_tag) $ "
```
Restart your Terminal or type `source ~/.bashrc` to see your changes.


@ -0,0 +1,30 @@
---
title: Display the Number of Facebook fans in PHP
date: 2011-03-15
excerpt: How to use PHP to display the number of fans of a Facebook page.
tags:
- php
---
Replace the `$page_id` value with your Page ID number (unless you want to show
the number of fans for this site). You can find your Page ID by logging into
your Facebook account, going to 'Adverts and Pages', clicking 'Edit page', and
looking at the URL.
For example, mine is
<https://www.facebook.com/pages/edit/?id=143394365692197&sk=basic>.
I've also wrapped the output in a `number_format()` function so that it's
properly formatted with commas etc. - like where I've used it within the
[Gold Event listing](http://www.horseandcountry.tv/events/paid) on the Horse &
Country TV website.
```php
$page_id = "143394365692197";
$xml = @simplexml_load_file("http://api.facebook.com/restserver.php?method=facebook.fql.query&query=SELECT%20fan_count%20FROM%20page%20WHERE%20page_id=" . $page_id) or die ("a lot");
$fans = $xml->page->fan_count;
print number_format($fans);
```
This code was originally found at
<http://wp-snippets.com/display-number-facebook-fans>.


@ -0,0 +1,93 @@
---
title: Dividing Drupal's process and preprocess functions into separate files
date: 2012-05-24
excerpt: If you use a lot of process and preprocess functions within your Drupal theme, then your template.php can get very long and it can become difficult to find a certain piece of code. Following the example of the Omega theme, I've started separating my process and preprocess functions into their own files.
tags:
- code
- drupal
- preprocessing
- theming
---
If you use a lot of process and preprocess functions within your Drupal theme,
then your template.php can get very long and it can become difficult to find a
certain piece of code. Following the example of the
[Omega theme](http://drupal.org/project/omega 'The Omega theme on Drupal.org'),
I've started separating my process and preprocess functions into their own
files. For example, `mytheme_preprocess_node` can be placed within a
`preprocess/node.inc` file, and `mytheme_process_page` can be placed within
`process/page.inc`.
The first step is to use the default `mytheme_process()` and `mytheme_preprocess()`
functions to utilise my custom function. So within my `template.php` file:
```php
<?php

/**
 * Implements hook_preprocess().
 *
 * Initialises the mytheme_invoke() function for the preprocess hook.
 */
function mytheme_preprocess(&$variables, $hook) {
  mytheme_invoke('preprocess', $hook, $variables);
}

/**
 * Implements hook_process().
 *
 * Initialises the mytheme_invoke() function for the process hook.
 */
function mytheme_process(&$variables, $hook) {
  mytheme_invoke('process', $hook, $variables);
}
```
Now, to write the `mytheme_invoke()` function:
```php
<?php
/**
 * Invokes custom process and preprocess functions.
 *
 * @param string $type
 *   The type of function we are trying to include (i.e. process or preprocess).
 *
 * @param string $hook
 *   The name of the hook.
 *
 * @param array $variables
 *   The variables array.
 *
 * @see mytheme_preprocess()
 * @see mytheme_process()
 */
function mytheme_invoke($type, $hook, &$variables) {
  global $theme_key;

  // The name of the function to look for (e.g. mytheme_process_node).
  $function = $theme_key . '_' . $type . '_' . $hook;

  // If the function doesn't exist within template.php, look for the
  // appropriate include file.
  if (!function_exists($function)) {
    // The file to search for (e.g. process/node.inc).
    $file = drupal_get_path('theme', $theme_key) . '/' . $type . '/' . str_replace('_', '-', $hook) . '.inc';

    // If the file exists, include it.
    if (is_file($file)) {
      include($file);
    }
  }

  // Try to call the function again.
  if (function_exists($function)) {
    $function($variables);
  }
}
```
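As a hypothetical example of one of the include files, `preprocess/node.inc` (per the naming described above) would then only need to contain the function itself:

```php
<?php

/**
 * Implements hook_preprocess_node().
 */
function mytheme_preprocess_node(&$variables) {
  // Example only: add an extra class to every node.
  $variables['classes_array'][] = 'node--example';
}
```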
As `mytheme_invoke()` checks to see if the function already exists before
checking the include files, I could still add the functions into
`template.php` as normal and this would override any corresponding include file.


@ -0,0 +1,22 @@
---
title: Docker resources
excerpt: A list of Docker resources that I've compiled.
tags:
- docker
date: 2021-04-13
---
I've been speaking with a few people recently about Docker. Here are some resources that I found useful when I was learning Docker:
- [The Docker documentation](https://docs.docker.com)
- [Shipping Docker video course by Chris Fidao](https://serversforhackers.com/shipping-docker)
- [Docker for PHP Developers eBook and video course by Paul Redmond](https://leanpub.com/docker-for-php-developers)
- [Docker for Developers eBook by Chris Tankersley](https://leanpub.com/dockerfordevs)
- [Docker YouTube video playlist that I've curated](https://www.youtube.com/playlist?list=PLHn41Ay7w7kdt1thq6N6hpVABb2YNI50b)
- [AltF4Stream on Twitch](https://www.twitch.tv/thealtf4stream)
- [The "Full Stack Live" stream on Twitch](https://www.twitch.tv/fullstacklive)
- ["Docker Mastery" course on Udemy](https://www.udemy.com/course/docker-mastery)
I'm sure that I'll remember some others after I publish this post, but I'll come back and add them here afterwards.
Know of any more? Let me know on [Twitter](https://twitter.com/opdavies).


@ -0,0 +1,73 @@
---
title: Don't Bootstrap Drupal, Use Drush
date: 2013-11-19
excerpt: Avoid bootstrapping Drupal manually in your scratch files - Drush has you covered!
tags:
- drupal-planet
- drush
- php
---
There are times during Drupal development when you need to run a custom PHP
script - maybe moving data from one field to another - that doesn't warrant the
time and effort of creating a custom module. In this scenario, it's quicker
to write a .php script and bootstrap Drupal to gain access to functions like
`node_load()` and `db_query()`.
To bootstrap Drupal, you would need to add some additional lines of code to the
top of your script. Here is an alternative way.
```php
<?php
// Bootstrap Drupal.
$drupal_path = $_SERVER['DOCUMENT_ROOT'];
define('DRUPAL_ROOT', $drupal_path);
require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
// Do stuff.
$node = node_load(1);
```
The script would need to be placed in the root of your Drupal directory, and
you would then have to open a browser window and visit
http://example.com/foo.php to execute it. This is where the `drush php-script`
command (or `drush scr` for short) is useful, as it can be used to execute the
script from the command line.
```bash
$ drush scr foo.php
```
It also means that I no longer need to manually bootstrap Drupal, so my script
is much cleaner.
```php
// Just do stuff.
$node = node_load(1);
```
I prefer to keep these scripts outside of my Drupal directory in a separate
"scripts" directory (with Drupal in a "drupal" directory on the same level).
This makes it easier to update Drupal as I don't need to worry about
accidentally deleting the additional files. From within the drupal directory, I
can now run the following command to go up one level, into the scripts directory
and then execute the script. Note that you do not need to include the file
extension.
```bash
$ drush scr ../scripts/foo
```
Or, if you're using
[Drush aliases](http://deeson-online.co.uk/labs/drupal-drush-aliases-and-how-use-them 'Drupal, Drush aliases, and how to use them'):
```bash
$ drush @mysite.local scr foo
```
If you commonly use the same scripts for different projects, you could also
store these within a separate Git repository and check out the scripts directory
using a
[Git submodule](http://git-scm.com/book/en/Git-Tools-Submodules 'Git Submodules').


@ -0,0 +1,59 @@
---
title: Download Different Versions of Drupal with Drush
date: 2013-12-31
excerpt: How to download different versions of Drupal core using Drush.
tags:
- drupal
- drupal-planet
- drush
---
If you use
[Drush](https://raw.github.com/drush-ops/drush/master/README.md 'About Drush'),
it's likely that you've used the `drush pm-download` (or `drush dl` for short)
command to start a new project. This command downloads projects from Drupal.org,
but if you don't specify a project or just type `drush dl drupal`, the command
will download the current stable version of Drupal core - Drupal 7 at the time
of writing this post.
But what if you don't want Drupal 7?
I still maintain a number of Drupal 6 sites and occasionally need to download
Drupal 6 core as opposed to Drupal 7. I'm also experimenting with Drupal 8, so I
need to download that as well.
By declaring the core version of Drupal, such as `drupal-6`, Drush will
download that instead.
```bash
$ drush dl drupal-6
```
This downloads the most recent stable version of Drupal 6. If you don't want
that, you can add the `--select` option, and additionally `--all`, to be
presented with an entire list to choose from.
```bash
$ drush dl drupal-6 --select
$ drush dl drupal-6 --select --all
```
If you want the most recent development version, just type:
```bash
$ drush dl drupal-6.x
```
The same can be done for other core versions of Drupal, from Drupal 5 upwards.
```bash
# This will download Drupal 5
$ drush dl drupal-5
# This will download Drupal 8
$ drush dl drupal-8
```
For a full list of the available options, type `drush help pm-download` into a
Terminal window or take a look at the entry on
[drush.ws](http://drush.ws/#pm-download 'The entry for pm-download on drush.ws').


@ -0,0 +1,38 @@
---
title: 'Drupal 8.5.0 Released'
date: 2018-03-09
excerpt: This week, the latest version of Drupal 8 was released.
tags:
- drupal
- drupal-core
---
This week, the latest minor version of Drupal 8, 8.5.0, was released.
> This new version makes Media module available for all, improves migrations
> significantly, stabilizes the Content Moderation and Settings Tray modules,
> serves dynamic pages faster with BigPipe enabled by default, and introduces a
> new experimental entity layout user interface. The release includes several
> very important fixes for workflows of content translations and supports
> running on PHP 7.2.
I've been very impressed by the new release cycle for Drupal 8 and the usage of
semantic versioning. Though it adds a greater maintenance overhead for module,
theme, installation profile and distribution developers to ensure that our
projects still work properly, having the ability to add new modules into
Drupal core, as well as new installation profiles like the [Umami demonstration
profile][2], is pretty cool!
For example, in addition to Umami, 8.5 alone adds Media to core, marks two
experimental modules as stable, adds an experimental new layout builder, and
includes lots of PHP 7.2 improvements to make 8.5 fully PHP 7.2 compatible.
I'm already looking forward to seeing what's coming in 8.6 later this year!
For more information on the 8.5 release, see the [blog post on Drupal.org][1].
[0]: https://dri.es/drupal-8-5-0-released
[1]: https://www.drupal.org/blog/drupal-8-5-0
[2]: https://www.drupal.org/docs/8/umami-drupal-8-demonstration-installation-profile


@ -0,0 +1,95 @@
---
title: "Drupal 8 Commerce: Fixing 'No Such Customer' error on checkout"
date: 2018-08-15
excerpt: Fixing a Drupal Commerce error when a user tries to complete a checkout.
tags:
- drupal
- drupal-8
- drupal-commerce
- stripe
---
Recently I was experiencing an issue on the Drupal 8 website I'm working on,
where a small number of users were not able to complete the checkout process and
instead saw a generic `The site has encountered an unexpected error` message.
Looking at the log, I was able to see the error being thrown (the customer ID
has been redacted):
> Stripe\Error\InvalidRequest: No such customer: cus_xxxxxxxxxxxxxx in
> Stripe\ApiRequestor::\_specificAPIError() (line 124 of
> /var/www/vendor/stripe/stripe-php/lib/ApiRequestor.php).
Logging in to the Stripe account, I was able to confirm that the specified
customer ID did not exist. So where was it coming from, and why was Drupal
trying to retrieve a non-existent customer?
## Investigation
After some investigation, I found a table in the database named
`user__commerce_remote_id` which stores the remote customer ID for each payment
method (again, the customer ID has been redacted).
![A screenshot of a row in the user__commerce_remote_id table](/images/blog/commerce-stripe-error/remote-id-table.png){.border.p-1}
The `entity_id` and `revision_id` values in this case refer to the user that the
Stripe customer has been associated with.
As there was no customer in Stripe with this ID, I think that this must have
been a customer ID from the test environment (the data from which was deleted
before the site went live).
### Drupal code
This I believe is the Drupal code where the error was being triggered:
```php
// modules/contrib/commerce_stripe/src/Plugin/Commerce/PaymentGateway/Stripe.php
public function createPayment(PaymentInterface $payment, $capture = TRUE) {
  ...
  $owner = $payment_method->getOwner();
  if ($owner && $owner->isAuthenticated()) {
    $transaction_data['customer'] = $this->getRemoteCustomerId($owner);
  }

  try {
    $result = \Stripe\Charge::create($transaction_data);
    ErrorHelper::handleErrors($result);
  }
  catch (\Stripe\Error\Base $e) {
    ErrorHelper::handleException($e);
  }
  ...
}
```
### Stripe code
I can also see in the Stripe library where the original error is generated.
```php
private static function _specificAPIError($rbody, $rcode, $rheaders, $resp, $errorData)
{
    $msg = isset($errorData['message']) ? $errorData['message'] : null;
    $param = isset($errorData['param']) ? $errorData['param'] : null;
    $code = isset($errorData['code']) ? $errorData['code'] : null;

    switch ($rcode) {
        ...
        case 404:
            return new Error\InvalidRequest($msg, $param, $rcode, $rbody, $resp, $rheaders);
        ...
    }
}
```
## Solution
After confirming that it was the correct user ID, simply removing that row from
the database allowed a new Stripe customer to be created and the user to check
out successfully.
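For reference, a hedged sketch of the clean-up query - the entity ID is a placeholder, and the column names are assumptions based on the row shown in the screenshot above:

```sql
-- Remove the stale remote customer mapping for this user.
DELETE FROM user__commerce_remote_id
WHERE entity_id = 123
  AND remote_id = 'cus_xxxxxxxxxxxxxx';
```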


@ -0,0 +1,37 @@
---
title: Drupal Association
date: 2014-05-03
excerpt: Next week, I'll be working for the Drupal Association.
tags:
- drupal
- personal
---
Today was my last day working at [Precedent](http://www.precedent.com). Next
week, I'll be starting my
[new job](https://assoc.drupal.org/node/18923 'Drupal.org Developer') at the
[Drupal Association](http://assoc.drupal.org) working on Drupal's home -
[Drupal.org](http://www.drupal.org).
I was at Precedent for just over a year and had the opportunity to work on
several Drupal projects from project leading to ad-hoc module and theme
development, including my largest Drupal build to date.
I was also lucky enough to go to
[DrupalCon Prague](http://prague2013.drupal.org) as well as
[DrupalCamp London](http://2014.drupalcamplondon.co.uk).
I was able to [contribute some code](https://drupal.org/project/eventsforce)
back into the community and encourage other team members to do the same.
It was good to be able to introduce some new tools like
[Vagrant](http://www.vagrantup.com), [Puppet](http://www.puppetlabs.com),
[SASS](http://www.sass-lang.com) and [Compass](http://www.compass-style.org)
into the team. I was pleased to introduce and champion the
[Git Flow](http://danielkummer.github.io/git-flow-cheatsheet 'Git Flow Cheat Sheet')
branching model, which then became the standard approach for all Drupal
projects, and hopefully soon all development projects.
Working for the Drupal Association and on Drupal.org was an opportunity that I
couldn't refuse, and is certainly going to be a fun and interesting challenge. I
can't wait to get started!


@ -0,0 +1,14 @@
---
title: Drupal automated testing workshop notes
excerpt: If you attended my automated testing with Drupal workshop this weekend, here are the links.
tags:
- drupal
date: 2020-11-15
---
If you attended my automated testing and test driven development workshop this weekend at DrupalCamp NYC, or at DrupalCamp London in March, [here are the notes][notes] that we went through during the session.
There is also [a separate code repository][code] that contains the example code, broken down commit by commit, and uses GitHub Actions to run the tests automatically on each push.
[code]: https://github.com/opdavies/workshop-drupal-automated-testing-code
[notes]: https://github.com/opdavies/workshop-drupal-automated-testing

View file

@ -0,0 +1,153 @@
---
title: Exporting Drupal body classes to use with Tailwind CSS
excerpt: How I've exported content from Drupal's body fields so they aren't missed by Tailwind's JIT mode or PurgeCSS.
date: 2022-07-02
tags:
- drupal
- tailwind-css
---
I was recently [asked a question](https://www.drupal.org/project/tailwindcss/issues/3271487) in the issue queue for my [Tailwind starter kit Drupal theme](https://www.drupal.org/project/tailwindcss), about how to use classes within content with Tailwind CSS.
The 5.x version of the theme has the JIT (just in time) compiler enabled by default and whilst it can work using Twig files in your theme, it doesn't know about classes used within content that is stored within the database.
This is something that I've needed to solve in some of my own projects too.
There are a few options, but I wouldn't recommend turning off the JIT compiler
or PurgeCSS.
## Adding classes to a safelist
The first option is to use the `safelist` option within the `tailwind.config.js` file:
```js
module.exports = {
  content: [
    './templates/**/*.html.twig'
  ],
  safelist: [
    'bg-red-500',
    'text-3xl',
    'lg:text-4xl',
  ]
}
```
Adding any classes to the safelist will force them to be generated, or prevent them from being purged, even if they are not found within the theme's template files.
This is referred to within the Tailwind CSS documentation for [safelisting classes](https://tailwindcss.com/docs/content-configuration#safelisting-classes):
> One example of where this can be useful is if your site displays user-generated content and you want users to be able to use a constrained set of Tailwind classes in their content that might not exist in your own site's source files.
## Extracting the safelist to a file
In some projects, I found that I was adding a lot of classes to the safelist, so
I extracted the classes into a file instead.
Whilst it could be a JavaScript object that could be imported, as long as
Tailwind sees the classes being used, they just need to exist in a file that can
be scanned - even just a plain text file called `safelist.txt`:
```
bg-red-500
text-3xl
lg:text-4xl
```
Rather than using the `safelist`, I can add the safelist file to `content` instead:
```js
module.exports = {
  content: [
    './safelist.txt',
    './templates/**/*.html.twig'
  ]
}
```
## Creating a safelist file automatically with Drush
What we could also do is create the safelist file automatically based on the contents of the database using a custom Drush command.
### Creating the command
This can be done by creating a new PHP class within a custom module and using the `@command` annotation to specify the command to run:
```php
<?php

declare(strict_types=1);

namespace Drupal\opdavies_blog\Command;

final class ExportBodyValuesForThemePurgingCommand {

  /**
   * Drush command to export body field values into a file.
   *
   * @command opdavies:export-body-values-for-theme-purging
   */
  public function handle(): void {
    // ...
  }

}
```
In this example, the file is `modules/custom/opdavies_blog/src/Command/ExportBodyValuesForThemePurgingCommand.php`.
### Injecting the database service
It can now add it as a service within the `opdavies_blog.services.yml` file:
```yaml
services:
  Drupal\opdavies_blog\Command\ExportBodyValuesForThemePurgingCommand:
    arguments: ['@database']
    tags:
      - { name: drush.command }
```
As we're going to need to query the database, I've added the database service as a dependency of my command and also created a constructor method and a property within the command class:
```php
private Connection $database;

public function __construct(Connection $database) {
  $this->database = $database;
}
```
### Completing the handle method
As well as the database, I've added some properties to contain the table names to query as well as the name of the file to output:
```php
private static array $tableNames = [
  'block_content__body',
  'node__body',
];

private string $filename = 'safelist.txt';
```
Within the `handle()` method, I'm using an [Illuminate Collection](/talks/using-illuminate-collections-outside-laravel) to loop over the array of tables, query the database, export the values, and write them into the file:
```php
public function handle(): void {
  $values = collect(self::$tableNames)
    ->flatMap(fn (string $tableName) => $this->getValuesFromTable($tableName))
    ->implode(PHP_EOL);

  file_put_contents($this->getFilePath(), $values);
}

private function getFilePath(): string {
  return drupal_get_path('theme', 'opdavies') . DIRECTORY_SEPARATOR . $this->filename;
}

private function getValuesFromTable(string $tableName): array {
  return $this->database->select($tableName)
    ->fields($tableName, ['body_value'])
    ->execute()
    ->fetchCol();
}
```
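To illustrate what the Collection pipeline in `handle()` produces, here is the same flatten-and-implode step sketched in plain JavaScript, with made-up sample rows standing in for the database results:

```javascript
// Hypothetical rows, standing in for the body_value columns
// fetched from each table.
const tables = {
  block_content__body: ['<p class="text-3xl">Hello</p>'],
  node__body: ['<div class="bg-red-500">One</div>'],
};

// flatMap over the table names, then join the values with newlines -
// the equivalent of ->flatMap() and ->implode(PHP_EOL) above.
const safelist = Object.keys(tables)
  .flatMap((tableName) => tables[tableName])
  .join('\n');

console.log(safelist);
```

Each body value ends up on its own line in the safelist file, ready to be scanned by Tailwind.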
Now, when Tailwind CSS is run, it will find the exported body contents within the safelist file, and ensure that the appropriate classes are generated.

View file

@ -0,0 +1,112 @@
---
title: Drupal Bristol Testing Workshop
date: 2018-06-28
excerpt: Yesterday evening, I did my first workshop, held at the Drupal Bristol user group.
tags:
- composer
- docksal
- drupal
- drupal-8
- drupal-bristol
- php
- phpunit
- testing
---
Yesterday evening, I did [my first workshop][16] (and, I believe, the first
workshop) held at the [{{ site.events['drupal_bristol'].name }}][14] user group.
The subject was automated testing with PHPUnit in Drupal 8, in preparation for
my talk at [{{ site.events['drupal_dev_days_18'].name }}][12] next week and to
help process some ideas for my [testing book][15].
Here are some details about what we covered, and some of my thoughts in review.
## Local Environment
Before the meetup, I set up a [repository on GitHub][0] that contains a
Composer-based Drupal 8 installation, based on the [Drupal 8 Composer
template][4] along with the [Examples module][5] (which includes some PHPUnit
tests) with a pre-configured [Docksal][2] environment to use locally - Docksal
being our standard local development environment that we use at
{{ site.companies['microserve'].name }} for all of our projects, so something
that I'm familiar with using.
In addition to the default stack, I added [the PHPUnit add-on that I wrote][6]
so that it was easier to run tests, [configured settings.php using environment
variables][7] and added a custom `fin init` command to install the Composer
dependencies and install Drupal. This meant that after installing Docksal,
everyone had a running Drupal 8 website after only running `git clone` and
`fin init`, and could then run tests straight away using
`fin phpunit web/modules/contrib/examples/phpunit_example`.
## Exercises
Once everyone was set up, we moved on to talk about why testing is important and
the different options available to run them, before looking at the different
types of tests available in Drupal 8. For each test type, I explained what it
was used for and everyone completed an exercise on each - creating a test of
that type, initially seeing it fail, and then doing the work to get it to pass.
The completed code that I wrote beforehand for these is available in their own
[GitHub repository][8], including all of the tests as well as the implementation
code.
Once these exercises were completed, we looked at creating a blog page using
test driven development - the example that I use in the [TDD - Test-Driven
Drupal talk][9], to give a more real-world scenario. It would have been good to
have gone through this as an exercise too, if we'd have had more time.
## Wrap Up
To finish, I demonstrated the PHPUnit integration within PHPStorm (which is
working with Docksal) and showed some of the tests that I wrote for the [Private
Message Queue][10] and [System User][11] modules, to see how things like adding
items to queues and processing them, and ensuring that emails are sent to the
right users and contain the right data, can be tested, as well as some of the
tests that we've written on my work project over the last few months.
## Slides
I didn't record this workshop, but I have exported the slides and embedded them
below:
{% include 'talk/slides' with {
  speakerdeck: {
    id: '2679401cb2ad421789d372cb8d38e368',
    ratio: '1.77777777777778',
  }
} %}
## Thoughts
I was very happy with how my first workshop went. It was a great experience for
me, and it seemed that the attendees all learnt something and found it
interesting.
A couple of people mentioned providing handouts to refer to the code examples
whilst working on the exercises, rather than relying on the slides and avoiding
the need to sometimes switch back and forth between slides. I've found that I
can export the slide deck as PNGs or JPGs from Deckset, so I'll definitely do
that next time.
I'm giving the [Test-Driven Drupal][9] talk at the [Drupal Dev Days
conference][12] next week, and I'm hoping to give it again at other meetups and
events in the UK. If you'd like me to do either at your meetup or event, [get in
touch][13].
[0]: https://github.com/opdavies/drupal-testing-workshop
[1]: https://github.com/drupal-composer/drupal-project
[2]: https://docksal.io
[3]: {{site.companies['microserve'].url}}
[4]: https://github.com/drupal-composer/drupal-project
[5]: https://www.drupal.org/project/examples
[6]: /articles/creating-a-custom-phpunit-command-for-docksal
[7]: /articles/using-environment-variables-settings-docksal
[8]: https://github.com/opdavies/drupal-testing-workshop-exercises
[9]: /talks/tdd-test-driven-drupal
[10]: https://www.drupal.org/project/private_message_queue
[11]: https://www.drupal.org/project/system_user
[12]: {{site.events.drupal_dev_days_18.url}}
[13]: /contact
[14]: {{site.events.drupal_bristol.url}}
[15]: /test-driven-drupal
[16]: https://groups.drupal.org/node/520891

View file

@ -0,0 +1,35 @@
---
title: Drupal VM Generator 2.9.1 Released
date: 2016-12-30
excerpt: I've released some new versions of the Drupal VM Generator.
tags:
- drupal-vm-generator
- releases
---
The main updates are:
- Fixed an `InvalidResponseException` that was thrown from within the
`boolean_as_string` Twig filter from the opdavies/twig-extensions library when
the `config:generate` command was run in non-interactive mode.
- Added a working test suite for the existing commands, using PHPUnit and
  Symfony's Process component. This is now linked to [Travis CI][2], and the
tests are run on each commit and pull request.
- The version requirements have been changed to allow the 2.7 versions of the
  Symfony Components used, as well as the 3.x versions. This was done to resolve a
conflict when also installing Drush globally with Composer.
## Next Steps
Currently, the project is based on Drupal VM 3.0.0, which is an outdated version
([4.1.0][3] was released today). Adding updates and supporting the newer
versions is a high priority, as well as keeping in sync with new releases. This
will be easier with the test suite in place.
My initial thoughts are that version 2.10.0 will support Drupal VM 4.0.0, and if
needed, 2.11.0 will ship shortly afterwards and support Drupal VM 4.1.0.
[0]: http://www.drupalvmgenerator.com
[1]: https://github.com/opdavies/drupal-vm-generator/tree/master/tests/Command
[2]: https://travis-ci.org/opdavies/drupal-vm-generator
[3]: https://github.com/geerlingguy/drupal-vm/releases/tag/4.1.0

View file

@ -0,0 +1,46 @@
---
title: DrupalCamp Bristol 2018 Statement
date: 2018-01-30
excerpt: Unfortunately, we wont be running DrupalCamp Bristol this year.
tags:
- drupal-planet
- drupalcamp-bristol
meta:
  image:
    url: /images/blog/drupalcamp-bristol-17-logo.jpg
    width: 448
    height: 228
    type: image/jpg
---
It's with heavy hearts that we are announcing there won't be a DrupalCamp
Bristol 2018. The committee have looked at the amount of work required to put
the camp on and the capacity we all have and the two numbers are irreconcilable.
Seeing Drupalists from all over the country and from overseas come to Bristol to
share knowledge and ideas is something we take pride in. The past three camps
have been fantastic, but as a trend we have left it later and later to organise.
This year is the latest we have left to organise and we believe this is because
we are all a bit fatigued right now, so it seems like a good place to stop and
take stock.
In our wash-up of last year's camp, we spoke a lot about what DrupalCamp is and
who it is for. Traditionally we have tried to get a good mix of speakers from
within the Drupal community and from the wider tech community. This does mean we
dilute the Drupal aspect of the camp, but the benefits it brings in terms of
bringing together different views gives the camp greater value in our eyes.
It's because of this mix of talks and wider shifts in the community in getting
us off the island that we have been thinking about rebranding to reflect the
mix of talks that the camp hosts. The fact is DrupalCamps don't just cover
Drupal anymore. There is Symfony, Composer, OOP principles, React, etc.
We'll take the gap this year to reevaluate who DrupalCamp Bristol is for and
where it fits into the schedule of excellent tech events that take place in
Bristol through the year, and we look forward to seeing you in 2019, refreshed
and more enthusiastic than ever!
The DrupalCamp Bristol organising committee
Tom, Ollie, Emily, Sophie, Rob, Mark

View file

@ -0,0 +1,27 @@
---
title: Speakers and sessions announced for DrupalCamp Bristol 2019
date: 2019-05-31
excerpt: DrupalCamp Bristol is returning next month, and the accepted speakers and sessions have just been announced.
tags:
- dcbristol
- drupalcamp
- drupalcamp-bristol
---
<p class="lead" markdown="1">DrupalCamp Bristol is returning next month for a one-day, single-track conference, and we have just finished announcing the accepted sessions and speakers. It includes a mixture of new and returning speakers, presenting sessions including **Drupal in a microservice architecture**, **Automate to manage repetitive tasks with Ansible** and **Doing good with Drupal**.</p>
Find out more about all of our sessions and speakers on [the DrupalCamp Bristol
website][website], as well as view the schedule for the day.
Also, at the time of writing, [early bird tickets are still available][tickets]
for a few more days!
In the meantime, the videos from the 2017 Camp are on [our YouTube
channel][youtube], including the opening keynote from [Emma Karayiannis][emma]:
{% include 'youtube-video' with { id: 'honnav4YlAA' } %}
[emma]: https://twitter.com/embobmaria
[tickets]: https://2019.drupalcampbristol.co.uk/tickets
[website]: https://2019.drupalcampbristol.co.uk
[youtube]: https://opdavi.es/dcbristol17-videos

View file

@ -0,0 +1,68 @@
---
title: DrupalCamp Bristol 2017 - Early Bird Tickets, Call for Sessions, Sponsors
date: 2017-05-15
excerpt: In less than two months' time, DrupalCamp Bristol will be back for our third year.
tags:
- drupal
- drupal-planet
- drupalcamp
- drupalcamp-bristol
meta:
  image:
    url: /assets/image/blog/drupalcamp-bristol-17-logo.jpg
    height: 228
    width: 448
    type: image/jpg
---
<p class="text-center" markdown="1">![DrupalCamp Bristol 2017 logo](/images/blog/drupalcamp-bristol-17-logo.jpg)</p>
In less than two months' time, [DrupalCamp Bristol][0] will be back for our third
year! (July seems to come around quicker each year). This is this year's
schedule and venues:
- 30th June - CXO (Business) day - [Watershed][1]
- 1st July - Developer conference - [University of Bristol, School of
Chemistry][2]
- 2nd July - Contribution sprints - Venue TBC
Today we announced [Emma Karayiannis][3] as our Saturday keynote speaker, and
we'll be announcing some of the other speakers later this week.
Not submitted your session yet? The [session submissions][12] are open until May
31st. We're looking for talks not only on Drupal, but other related topics such
as PHP, Symfony, server administration/DevOps, project management, case studies,
being human etc. If you want to submit but want to ask something beforehand,
please [send us an email][4] or ping us on [Twitter][5].
Not spoken at a DrupalCamp before? No problem. We're looking for both new and
experienced speakers, and have both long (45 minutes) and short (20 minutes)
talk slots available.
Not bought your tickets yet? [Early bird tickets][10] for the CXO and conference
days are still available! The sprint day tickets are free but limited, so do
register for a ticket to claim your place.
We still have [sponsorship opportunities][6] available (big thanks to
[Microserve][7], [Deeson][8] and [Proctors][9], who have already signed up), but
be quick if you want to be included in our brochure so that we can get you added
before our print deadline! Without our sponsors, putting on this event each year
would not be possible.
Any other questions? Take a look at [our website][0] or get in touch via
[Twitter][5] or [email][11].
[0]: https://2017.drupalcampbristol.co.uk
[1]: http://www.watershed.co.uk
[2]: http://www.bris.ac.uk/chemistry
[3]: http://emmakarayiannis.com
[4]: mailto:speakers@drupalcampbristol.co.uk
[5]: https://twitter.com/DrupalCampBris
[6]: https://2017.drupalcampbristol.co.uk/sponsorship
[7]: https://microserve.io
[8]: https://www.deeson.co.uk
[9]: http://www.proctors.co.uk
[10]:
https://www.eventbrite.co.uk/e/drupalcamp-bristol-2017-tickets-33574193316#ticket
[11]: mailto:info@drupalcampbristol.co.uk
[12]: https://2017.drupalcampbristol.co.uk/#block-dcb2017-page-title

View file

@ -0,0 +1,24 @@
---
title: DrupalCamp London 2014
date: 2014-02-09
excerpt: It's all booked, I'm going to be attending DrupalCamp London.
tags:
- drupal
- drupalcamp-london
- git
- git-flow
---
It's all booked, I'm going to be attending
[DrupalCamp London](http://2014.drupalcamplondon.co.uk) this year, my first
DrupalCamp!
I'm going as a volunteer, so I'm going to be helping with the registrations on
the Saturday morning and for another couple of hours elsewhere over the weekend.
I've also offered to help organise and oversee some code sprints, although I'm
definitely wanting to do some sprinting of my own and attend a few sessions.
I'm looking forward to meeting some new people as well as catching up with some
people that I met at [DrupalCon Prague](http://prague2013.drupal.org).
If you're also coming, see you there!

View file

@ -0,0 +1,39 @@
---
title: DrupalCamp London 2019 - Tickets Available and Call for Sessions
date: 2018-11-20
excerpt: DrupalCamp London early-bird tickets are now available, and their call for sessions is open.
tags:
- conferences
- drupal
- drupalcamp
- drupalcamp-london
has_tweets: true
---
It was announced this week that [early-bird tickets are now available][0] for
[DrupalCamp London 2019][1], as well as their [call for sessions being open][2].
{% include 'tweet' with {
content: '<p lang="en" dir="ltr">The time is finally here. You can now purchase your tickets. Early Bird finishes on 2nd January 2019 - <a href="https://t.co/aG6jstmWzv">https://t.co/aG6jstmWzv</a> <a href="https://twitter.com/hashtag/Drupal?src=hash&amp;ref_src=twsrc%5Etfw">#Drupal</a></p>&mdash; DrupalCamp London (@DrupalCampLDN) <a href="https://twitter.com/DrupalCampLDN/status/1064584179113971712?ref_src=twsrc%5Etfw">November 19, 2018</a>
',
} %}
I've attended, given talks and volunteered previously, and would definitely
recommend others doing so, and I plan on attending and submitting again myself
for 2019. If there's something in particular that you'd like to see me give a
talk on, let me know - I'd be happy to hear any suggestions. Alternatively, if
you'd like to submit and would like some help writing an abstract or want some
feedback on a talk idea, please get in touch.
_Note: I am not an organiser of DrupalCamp London, nor am I involved with the
session selection process._
Hopefully there will be no [#uksnow][3] this year!
DrupalCamp London is on 1-3 March 2019. Early bird tickets are available until
2 January 2019, and the call for sessions is open until 21 January.
[0]: https://twitter.com/DrupalCampLDN/status/1064584179113971712
[1]: https://drupalcamp.london
[2]: https://drupalcamp.london/get-involved/submit-a-session
[3]: /articles/tweets-drupalcamp-london

View file

@ -0,0 +1,29 @@
---
title: Testing Workshop at DrupalCamp London 2020
excerpt: This year, I'm teaching a workshop at DrupalCamp London.
tags:
- drupal
- drupalcamp
- testing
date: 2020-02-05
lead_image:
  url: /images/blog/testing-workshop-drupalcamp-london/lead.jpg
---
<img src="{{ page.lead_image.url }}" />
This year, I'm teaching a workshop at DrupalCamp London.
The subject will be automated testing and test driven development in Drupal 8,
and it will be on Friday 13th March 2020, between 1pm and 5pm.
In the workshop, I'll cover the methodology, approaches and terminology involved
with automated testing, look at some examples and work through some exercises,
and then take a test driven development approach to creating a new Drupal
module.
There are also other workshops on topics including Composer, Drupal Commerce,
profiling, and chatbots.
For more information and to register, go to the
[DrupalCamp London website](https://opdavi.es/dclondon20 'Find out more and register on the DrupalCamp London website').

View file

@ -0,0 +1,82 @@
---
title: Easier Git Repository Cloning with insteadOf
date: 2019-03-07
excerpt: How to simplify 'git clone' commands by using the insteadOf configuration option within your .gitconfig file.
tags:
- git
---
When working on client or open source projects, I clone a lot of
[Git](https://git-scm.com) repositories - either from GitHub, GitLab, Bitbucket
or Drupal.org. The standard `git clone` commands provided by these sites,
though, can be quite verbose, with long repository URLs and a mixture of
different protocols, and I'd regularly need to go back to each website and look
up the necessary command every time.
For example, here is the command provided to clone Drupal's
[Override Node Options module](https://www.drupal.org/project/override_node_options):
```
git clone --branch 8.x-2.x https://git.drupal.org/project/override_node_options.git
```
We can, though, simplify the command to make it easier and quicker to type, using
a Git configuration option called `insteadOf`.
## What is insteadOf?
From the
[Git documentation](https://git-scm.com/docs/git-config#git-config-urlltbasegtinsteadOf):
> **url.[base].insteadOf:**
>
> Any URL that starts with this value will be rewritten to start, instead, with
> [base]. In cases where some site serves a large number of repositories, and
> serves them with multiple access methods, and some users need to use different
> access methods, this feature allows people to specify any of the equivalent
> URLs and have Git automatically rewrite the URL to the best alternative for
> the particular user, even for a never-before-seen repository on the site. When
> more than one insteadOf strings match a given URL, the longest match is used.
Whilst examples are sparse,
[it seems like](https://stackoverflow.com/questions/1722807/how-to-convert-git-urls-to-http-urls)
insteadOf is used for resolving protocol issues with repository URLs. However,
we can use it to simplify our clone commands, as mentioned above.
## Example: cloning Drupal contrib projects
When working on Drupal core, or on a module, theme or distribution, you need to
have a cloned version of that repository to generate patch files from, and apply
patches to.
Again, here is the provided command to clone the Override Node Options module:
```
git clone --branch 8.x-2.x https://git.drupal.org/project/override_node_options.git
```
At the time of writing, the Git repository URLs follow this format -
`https://git.drupal.org/project/{name}.git` (the `.git` file extension is
optional).
To shorten and simplify this, I can add this snippet to my `~/.gitconfig` file:
```
[url "https://git.drupal.org/project/"]
    insteadOf = do:
    insteadOf = drupal:
```
With that added, I can now instead run `git clone drupal:{name}` or
`git clone do:{name}` to clone the repository, specifying the project's machine
name.
For example, to clone the Override Node Options module, I can now do this using
just `git clone drupal:override_node_options`.
This, I think, is definitely quicker and easier!
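If you'd rather not edit `~/.gitconfig` by hand, the same rule can be added with the `git config` command - shown here writing to a standalone file for illustration (use `--global` instead of `--file` to write to your `~/.gitconfig`):

```shell
# Add the URL rewrite rule to a config file.
git config --file example.gitconfig 'url.https://git.drupal.org/project/.insteadOf' drupal:

# Confirm that the rule was recorded.
git config --file example.gitconfig --get 'url.https://git.drupal.org/project/.insteadOf'
# prints: drupal:
```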
## Resources
You can view my entire `.gitconfig` file, as well as my other dotfiles, in
[my dotfiles repository on GitHub](https://github.com/opdavies/dotfiles/blob/master/.gitconfig).

View file

@ -0,0 +1,78 @@
---
title: Easier Sculpin Commands with Composer and NPM Scripts
date: 2017-01-07
excerpt: In this video, I show you how I've simplified my Sculpin and Gulp workflow using custom Composer and NPM scripts.
tags:
- composer
- gulp
- sculpin
---
In this video, I show you how I've simplified my Sculpin and Gulp workflow using
custom Composer and NPM scripts.
My website includes several command line tools - e.g. [Sculpin][4],
[Gulp][5] and [Behat][6] - each needing different arguments and options,
depending on the command being run. For example, for Sculpin, I normally include
several additional options when viewing the site locally - the full command that
I use is
`./vendor/bin/sculpin generate --watch --server --clean --no-interaction`.
Typing this repeatedly is time consuming and could be easily mis-typed,
forgotten or confused with other commands.
<div class="embed-container">
<iframe width="560" height="315" src="https://www.youtube.com/embed/eiWDV_63yCQ" frameborder="0" allowfullscreen></iframe>
</div>
## Scripts
Here are the scripts that I'm using - they are slightly different from those in
the video. I use the `--generate` and `--watch` options for Sculpin and the
`gulp watch` command for NPM. I had to change these before the recording as I
was using the [demo magic][0] script to run the commands, and exiting from a
watch session was also ending the script process.
### composer.json
```json
"scripts": {
  "clean": "rm -rf output_*/",
  "dev": "sculpin generate --clean --no-interaction --server --watch",
  "production": "sculpin generate --clean --no-interaction --env='prod' --quiet"
}
```
Run with `composer run <name>`, e.g. `composer run dev`.
### package.json
```json
"scripts": {
  "init": "yarn && bower install",
  "dev": "gulp watch",
  "production": "gulp --production"
}
```
Run with `npm run <name>`, e.g. `npm run production`.
You can also take a look at the full [composer.json][1] and [package.json][2]
files within my site repository on [GitHub][3].
## Resources
- [Composer scripts][7]
- [oliverdavies.uk composer.json][1]
- [oliverdavies.uk package.json][2]
[0]: https://github.com/paxtonhare/demo-magic
[1]: https://github.com/opdavies/oliverdavies.uk/blob/master/composer.json
[2]: https://github.com/opdavies/oliverdavies.uk/blob/master/package.json
[3]: https://github.com/opdavies/oliverdavies.uk
[4]: https://sculpin.io
[5]: http://gulpjs.com
[6]: http://behat.org
[7]: https://getcomposer.org/doc/04-schema.md#scripts

View file

@ -0,0 +1,32 @@
---
title: Easily Embed TypeKit Fonts into your Drupal Website
date: 2011-02-14
excerpt: How to use the @font-your-face module to embed TypeKit fonts into your Drupal website.
tags:
- drupal-6
- drupal-planet
- typekit
---
To begin with, you will need to
[register for a TypeKit account](https://typekit.com/plans) - there is a free
version if you just want to try it out.
Next, you'll need to create a kit that contains the fonts that you want to use
on your website. I've used
[FF Tisa Web Pro](http://typekit.com/fonts/ff-tisa-web-pro).
Under 'Kit Settings', ensure that your website domain (e.g. mysite.com) is
listed under 'Domains'.
Download and install the
[@font-your-face](http://drupal.org/project/fontyourface) module onto your
Drupal site, and go to admin/settings/fontyourface to configure it. Enter
[your TypeKit API key](https://typekit.com/account/tokens), and click 'Import
Typekit' to import your kits and fonts.
Go to admin/build/themes/fontyourface, and click 'Browse fonts to enable'. Click
on the name of the font that you want to enable, check 'Enabled', and click
'Edit font' to save your changes.
With the font enabled, you can now use it in your CSS.
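For example, something like this (the `font-family` name here is illustrative - use the name shown for your kit in TypeKit):

```css
body {
  font-family: "ff-tisa-web-pro", Georgia, serif;
}
```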

View file

@ -0,0 +1,62 @@
---
title: Programmatically Load an Entityform in Drupal 7
date: 2015-12-22
excerpt: How to programmatically load, render and embed an entityform in Drupal 7.
tags:
- drupal
- drupal-7
- drupal-planet
- entityform
---
I recently had my first experience using the
[Entityform module](https://www.drupal.org/project/entityform) in a project. It
was quite easy to configure with different form types, but then I needed to
embed the form into an overlay. I was expecting to use the `drupal_get_form()`
function and render it, but this didn't work.
Here are the steps that I took to be able to load, render and embed the form.
## Loading the Form
The first thing that I needed to do to render the form was to load an empty
instance of the entityform using `entityform_empty_load()`. In this example,
`newsletter` is the name of my form type.
```php
$form = entityform_empty_load('newsletter');
```
This returns an instance of a relevant `Entityform` object.
## Rendering the Form
The next step was to be able to render the form. I did this using the
`entityform_form_wrapper()` function.
As this function is within the `entityform.admin.inc` file and not autoloaded by
Drupal, I needed to include it using `module_load_include()` so that the
function was available.
```php
module_load_include('inc', 'entityform', 'entityform.admin');
$output = entityform_form_wrapper($form, 'submit', 'embedded');
```
The first argument is the `Entityform` object that was created in the previous
step (I've [submitted a patch](https://www.drupal.org/node/2639584) to type hint
this within entityform so that it's clearer what is expected), which is
required.
The other two arguments are optional. The second argument is the mode (`submit`
is the default value), and the last is the form context. `page` is the default
value, for use on the submit page, however I changed this to `embedded`.
I could then pass this result into my theme function to render it successfully
within the relevant template file.
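Putting the steps from this post together:

```php
// Make entityform_form_wrapper() available.
module_load_include('inc', 'entityform', 'entityform.admin');

// Load an empty instance of the 'newsletter' entityform type.
$form = entityform_empty_load('newsletter');

// Render the form in the 'embedded' context.
$output = entityform_form_wrapper($form, 'submit', 'embedded');
```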
## Resources
- [The entityform module](https://www.drupal.org/project/entityform)
- [My issue and patch to add the type hint to the entityform_form_wrapper function](https://www.drupal.org/node/2639584)

View file

@ -0,0 +1,81 @@
---
title: Experimenting with events in Drupal 8
date: 2018-08-21
excerpt: Trying a different way of structuring Drupal modules, using event subscribers and autowiring.
tags:
- drupal
- drupal-8
- drupal-planet
- php
- symfony
promoted: true
---
I've been experimenting with moving some code to Drupal 8, and I'm quite
intrigued by a different way that I've tried to structure it - using event
subscribers, building on some of the takeaways from Drupal Dev Days.
Here is how this module is currently structured:
![](/images/blog/events-drupal-8/1.png){.border .p-1}
Note that there is no `opdavies_blog.module` file, and rather than calling
actions from within a hook like `opdavies_blog_entity_update()`, each action
becomes its own event subscriber class.
This means that there are no long `hook_entity_update` functions, and instead
there are descriptive, readable event subscriber class names, simpler action
code that is responsible only for performing one task, and you're able to
inject and autowire dependencies into the event subscriber classes as services -
making it easier and cleaner to use dependency injection, and simpler to write
tests to mock dependencies when needed.
The additional events are provided by the
[Hook Event Dispatcher module](https://www.drupal.org/project/hook_event_dispatcher).
## Code
`opdavies_blog.services.yml`:
```yaml
services:
  Drupal\opdavies_blog\EventSubscriber\PostToMedium:
    autowire: true
    tags:
      - { name: event_subscriber }

  Drupal\opdavies_blog\EventSubscriber\SendTweet:
    autowire: true
    tags:
      - { name: event_subscriber }
```
<div class="note" markdown="1">
Adding `autowire: true` is not required for the event subscriber to work. I'm using it to automatically inject any dependencies into the class rather than specifying them separately as arguments.
</div>
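For comparison, registering the same subscriber without autowiring would mean listing each dependency explicitly as an argument (a sketch only; the injected service here is illustrative):

```yaml
services:
  Drupal\opdavies_blog\EventSubscriber\SendTweet:
    arguments:
      # Each dependency is named explicitly (this one is for illustration).
      - '@entity_type.manager'
    tags:
      - { name: event_subscriber }
```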
`src/EventSubscriber/SendTweet.php`:
```php
namespace Drupal\opdavies_blog\EventSubscriber;
use Drupal\hook_event_dispatcher\Event\Entity\EntityUpdateEvent;
use Drupal\hook_event_dispatcher\HookEventDispatcherInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
class SendTweet implements EventSubscriberInterface {
...
  public static function getSubscribedEvents() {
    return [
      HookEventDispatcherInterface::ENTITY_UPDATE => 'sendTweet',
    ];
  }

  public function sendTweet(EntityUpdateEvent $event) {
    // Perform checks and send the tweet.
  }
```

View file

@ -0,0 +1,103 @@
---
title: "Using Feature Flags with Sculpin"
excerpt: |
How I've started using feature flags within a client's Sculpin website.
tags: [sculpin]
date: 2022-01-09
---
<div class="flex justify-center">
<img class="mb-4 h-auto w-[150px]" src="/images/sculpin-jackson.png" />
</div>
## Background
I was asked last week to add a new feature, a Facebook pixel for measuring and
building advertising campaigns, to a client's website which I built using the
[Sculpin](https://sculpin.io) static site generator.
The site uses settings within the `app/config/sculpin_site.yml` file for
storing site IDs and usernames. For this, I would add something like:
```yaml
facebook:
  pixel:
    id: "abc123"
```
It can then be retrieved with `{{ site.facebook.pixel.id }}`.
If I needed to disable the pixel, I'd typically remove the pixel ID:
```yaml
facebook:
  pixel:
    id: ~
```
## Introducing feature flags
A technique that I like to use on other projects is using
[feature flags](https://www.atlassian.com/continuous-delivery/principles/feature-flags)
(aka feature toggles).
Whilst, in this instance, a feature flag wouldn't separate deploying code from
toggling a feature - a static site will need to be re-generated and deployed -
I thought that there was value in being able to easily toggle a feature without
changing its configuration or removing code within the site's templates.
## Feature flags in Sculpin
My Sculpin feature flag implementation was to add a `feature_flags` key within
`sculpin_site.yml`, with each feature's name as the key and a boolean value to
set whether it's enabled - similar to how the Drupal 7 version of the
[Feature Toggle module](https://www.drupal.org/project/feature_toggle) works.
This is how I added the Facebook pixel feature flag:
```yaml
feature_flags:
  add_facebook_pixel: true
```
## Using the Facebook pixel feature flag
The Facebook pixel code is stored within its own partial that I include
from my `source/_layouts/app.html.twig` layout, passing in the pixel ID and
whether or not the feature flag is enabled.
```twig
{% verbatim %}
{% include "facebook-pixel" with {
  is_enabled: site.feature_flags.add_facebook_pixel,
  pixel_id: site.facebook.pixel.id,
} only %}
{% endverbatim %}
```
Within the partial, I can check that both the feature flag is enabled and that
there is a Facebook pixel ID, and only add the pixel code if both conditions
return a truthy value.
```twig
{% if is_enabled and pixel_id %}
<script>
!function(f,b,e,v,n,t,s)
{if(f.fbq)return;n=f.fbq=function(){n.callMethod?
n.callMethod.apply(n,arguments):n.queue.push(arguments)};
if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
n.queue=[];t=b.createElement(e);t.async=!0;
t.src=v;s=b.getElementsByTagName(e)[0];
s.parentNode.insertBefore(t,s)}(window, document,'script',
'https://connect.facebook.net/en_US/fbevents.js');
fbq('init', '{{ pixel_id }}');
fbq('track', 'PageView');
</script>
{% endif %}
```
Now the pixel can be removed just by setting `add_facebook_pixel: false` in
`sculpin_site.yml`, and without changing any other configuration or templates.

View file

@ -0,0 +1,49 @@
---
title: Finding the last commit that a patch applies to
excerpt: How to find the last commit in a Git repository that a patch applies to.
date: 2020-03-26
tags:
- bash
- git
draft: true
---
```bash
#!/usr/bin/env bash

# https://www.drupal.org/files/issues/2018-08-28/group-configurable-entities-as-group-content-2797793-58.patch
patch_filename=group-configurable-entities-as-group-content-2797793-58.patch

# The range of commits to walk through, oldest first.
first_commit=6e8c22a
last_commit=8.x-1.x

find_commits_between() {
  local first_commit=$1
  local last_commit=$2

  git rev-list --reverse --ancestry-path "${first_commit}^...${last_commit}"
}

reset_repo() {
  git reset --hard "$1" >& /dev/null
}

apply_patch() {
  # --check only tests whether the patch would apply; it does not apply it.
  git apply --check "$patch_filename" >& /dev/null
}

for sha1 in $(find_commits_between "$first_commit" "$last_commit"); do
  echo "Trying ${sha1}..."

  reset_repo "$sha1"

  if apply_patch; then
    echo "Patch applies"
    continue
  fi

  echo "Patch does not apply"
  exit 1
done
```

View file

@ -0,0 +1,130 @@
---
title: Fixing Drupal SimpleTest issues inside Docker Containers
date: 2017-05-05
excerpt: How I managed to get my Drupal SimpleTest tests to run and pass within Docker containers.
tags:
- docker
- drupal
- drupal-planet
- simpletest
- testing
---
**TL;DR** You need to include the name of your web server container as the
`--url` option to `run-tests.sh`.
I've been a [Drupal VM][1] user for a long time, but lately I've been using a
combination of Drupal VM and [Docker][0] for my local development environment.
There were a couple of issues preventing me from completely switching to
Docker - one of which being that when I tried running my SimpleTest tests, a
lot of them would fail where they would pass when run within Drupal VM.
Here's an excerpt from my `docker-compose.yml` file:
```yaml
services:
  php:
    image: wodby/drupal-php:5.6
    volumes:
      - ./repo:/var/www/html

  nginx:
    image: wodby/drupal-nginx:7-1.10
    environment:
      NGINX_BACKEND_HOST: php
      NGINX_SERVER_ROOT: /var/www/html/web
    ports:
      - "80:80"
    volumes_from:
      - php

  ...
```
Nginx and PHP-FPM are running in separate containers; the volumes are shared
across both, and the Nginx backend is set to use the `php` container.
This is the command that I was using to run the tests:
```bash
$ docker-compose run --rm \
  -w /var/www/html/web \
  php \
  php scripts/run-tests.sh \
  --php /usr/local/bin/php \
  --class OverrideNodeOptionsTestCase
```
This creates a new instance of the `php` container, sets the working directory
to my Drupal root and runs Drupal's `run-tests.sh` script with some arguments.
In this case, I'm running the `OverrideNodeOptionsTestCase` class for the
override_node_options tests. Once complete, the container is deleted because of
the `--rm` option.
This resulted in 60 of the 112 tests failing, whereas they all passed when run
within a Drupal VM instance.
```
Test summary
------------
Override node options 62 passes, 60 fails, 29 exceptions, and 17 debug messages
Test run duration: 2 min 25 sec
```
Running the tests again with the `--verbose` option, I saw this message appear
in the output below some of the failing tests:
> simplexml_import_dom(): Invalid Nodetype to import
After checking that I had all of the required PHP extensions installed, I
ran `docker-compose exec php bash` to connect to the `php` container and ran
`curl http://localhost` to check the output. Rather than seeing the HTML for the
site, I got this error message:
> curl: (7) Failed to connect to localhost port 80: Connection refused
`curl http://nginx`, however, returned the HTML for the page, so I included it
with the `--url` option to `run-tests.sh`, and this resulted in my tests all passing.
```bash
$ docker-compose run --rm \
  -w /var/www/html/web \
  php \
  php scripts/run-tests.sh \
  --php /usr/local/bin/php \
  --url http://nginx \
  --class OverrideNodeOptionsTestCase
```
```
Test summary
------------
Override node options 121 passes, 0 fails, 0 exceptions, and 34 debug messages
Test run duration: 2 min 31 sec
```
**Note:** In this example I have separate `nginx` and `php` containers, but I've
had the same issue when running Nginx and PHP-FPM in the same
container - e.g. one called `app` - and still needed to add `--url http://app` in
order for the tests to run successfully.
I don't know if this issue is macOS specific (I know that [Drupal CI][2] is based
on Docker, but I don't know if it's an issue there). I'm going to test this on my
Ubuntu Desktop environment, investigate further, and compare the test run
times for Docker on macOS, Docker on Ubuntu and within Drupal VM. I'm also going
to try this with PHPUnit tests in Drupal 8.
[0]: https://www.docker.com
[1]: https://www.drupalvm.com
[2]: https://www.drupal.org/drupalorg/docs/drupal-ci

View file

@ -0,0 +1,29 @@
---
title: Forward one domain to another using mod_rewrite and .htaccess
date: 2012-05-23
excerpt: How to use the .htaccess file to forward to a different domain.
tags:
- .htaccess
- apache
- code
- drupal
- mod_rewrite
---
How to use the .htaccess file to forward to a different domain.
Within the mod_rewrite section of your .htaccess file, add the following lines:
```
RewriteCond %{HTTP_HOST} ^yoursite\.co\.uk$
RewriteRule (.*) http://yoursite.com/$1 [R=301,L]
```
This automatically forwards any users from http://yoursite.co.uk to
http://yoursite.com. This can also be used to forward multiple domains:
```
RewriteCond %{HTTP_HOST} ^yoursite\.co\.uk$ [OR]
RewriteCond %{HTTP_HOST} ^yoursite\.info$ [OR]
RewriteCond %{HTTP_HOST} ^yoursite\.biz$ [OR]
RewriteCond %{HTTP_HOST} ^yoursite\.eu$
RewriteRule (.*) http://yoursite.com/$1 [R=301,L]
```
If any of the RewriteCond conditions apply, then the RewriteRule is executed.
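The same idea can also be inverted to catch every host except the canonical one, rather than listing each domain (a sketch using the same placeholder domain):
```
# Redirect any host that is not the canonical domain.
RewriteCond %{HTTP_HOST} !^yoursite\.com$ [NC]
RewriteRule (.*) http://yoursite.com/$1 [R=301,L]
```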

View file

@ -0,0 +1,136 @@
---
title: git format-patch is your Friend
date: 2014-05-21
excerpt: An explanation of the "git format-patch" command, and how it could be used in Drupal's Git workflow.
tags:
- drupal
- drupal-planet
- git
- patches
---
An explanation of the "git format-patch" command, and how it could be used in
Drupal's Git workflow.
## The Problem
As an active contributor to the [Drupal](http://drupal.org) project, I spend a
lot of time working with other people's modules and themes, and occasionally
have to fix a bug or add some new functionality.
In the Drupal community, we use a patch based workflow where any changes that I
make get exported to a file detailing the differences. The patch file (\*.patch)
is attached to an item in an issue queue on Drupal.org, applied by the
maintainer to their local copy of the code and reviewed, and hopefully
committed.
There is an option that the maintainer can add to the end of their commit
message.
For example:
```bash
--author="opdavies <opdavies@381388.no-reply.drupal.org>"
```
This differs slightly for each Drupal user, and the code can be found
on their Drupal.org profile page.
If this is added to the end of the commit message, the resulting commit will
show that it was committed by the maintainer but authored by a different user.
This will then display on Drupal.org that you've made a commit to that project.
![A screenshot of a commit that was authored by rli but committed by opdavies](/images/blog/git-format-patch.png)
The problem is that some project maintainers either don't know about this option
or occasionally forget to add it. [Dreditor](http://dreditor.org) can suggest a
commit message and assign an author, but it is optional and, of course, not all
maintainers use Dreditor (although they probably should).
The `git format-patch` command seems to be the answer, and will be my preferred
method for generating patch files in the future rather than `git diff`.
## What does it do differently?
From the [manual page](http://git-scm.com/docs/git-format-patch):
> Prepare each commit with its patch in one file per commit, formatted to
> resemble UNIX mailbox format. The output of this command is convenient for
> e-mail submission or for use with git am.
Here is a section of a patch that I created for the
[Metatag module](http://drupal.org/project/metatag) using `git format-patch`:
```
From 80c8fa14de7f4a83c2e70367aab0aedcadf4f3b0 Mon Sep 17 00:00:00 2001
From: Oliver Davies <oliver@oliverdavies.co.uk>
Subject: [PATCH] Exclude comment entities when checking if this is the page,
 otherwise comment_fragment.module will break metatag
---
```
As mentioned above, the patch is structured in an email format. The commit
message is used as the subject line, and the date that the commit was made
locally is used for the date. What we're interested in is the "From" value. This
contains your name and email address from your `~/.gitconfig` file and is used
to author the patch automatically.
Everything below this is the same as a standard patch file, the same as if it was
generated with `git diff`.
The full patch file can be found at
<https://drupal.org/files/issues/metatag-comment-fragment-conflict-2265447-4.patch>.
## The Process
How did I create this patch? Here are the steps that I took:
1. Clone the source repository using
`$ git clone --branch 7.x-1.x http://git.drupal.org/project/metatag.git` and
move into that directory.
2. Create a branch for this patch using
`$ git checkout -b 2265447-comment-fragment-conflict`.
3. Add and commit any changes as normal.
4. Generate the patch file using
`$ git format-patch 7.x-1.x --stdout > metatag-comment-fragment-conflict-2265447-4.patch`.
_Note:_ I am defining 7.x-1.x in the last step as the original branch to compare
against (i.e. the branch that we forked to make our issue branch). This will
change depending on the project that you are patching and its version number.
Also, commits should always be made against the development branch and not the
stable release.
By default, a separate patch file will be created for each commit that we've
made. This is overridden by the `--stdout` option, which combines all of the
patches into a single file. This is the recommended approach when uploading to
Drupal.org.
The resulting patch file can be uploaded onto a Drupal.org issue queue, reviewed
by the Testbot and applied by a module maintainer, and you automatically get the
commit attributed. Problem solved.
## Committing the Patch
If you need to commit a patch that was created using `git format-patch`, the
best command to do this with is the `git am` command.
For example, within your repository, run:
```bash
$ git am /path/to/file
$ git am ~/Code/metatag-comment-fragment-conflict-2265447-4.patch
```
You should end up with some output similar to the following:
```bash
Applying: #2272799 Added supporters section
Applying: #2272799 Added navigation tabs
Applying: #2272799 Fixed indentation
Applying: #2272799 Replaced URL
```
Each line is the commit message associated with that patch.
Assuming that there are no errors, you can go ahead and push your updated code
into your remote repository.

View file

@ -0,0 +1,11 @@
---
title: Coloured output with PHPUnit and GitHub Actions
excerpt: How to have colours in your PHPUnit output when running with GitHub Actions.
date: 2020-08-12
tags:
- github-actions
- phpunit
- testing
---
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">If you&#39;re using GitHub Actions to run tests for your PHP projects and want colours in the output, append `--colors=always` to your phpunit command. <a href="https://t.co/0AVwxCP4Bv">pic.twitter.com/0AVwxCP4Bv</a></p>&mdash; Oliver Davies (@opdavies) <a href="https://twitter.com/opdavies/status/1260608152225157121?ref_src=twsrc%5Etfw">May 13, 2020</a></blockquote>
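In a workflow file, that looks something like this (a sketch; the file path, job, and step names are illustrative):

```yaml
# .github/workflows/ci.yml (illustrative)
name: CI

on: push

jobs:
  phpunit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - run: composer install

      # Without --colors=always, PHPUnit detects the non-interactive
      # terminal and disables coloured output in the Actions log.
      - run: vendor/bin/phpunit --colors=always
```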

View file

@ -0,0 +1,16 @@
---
title: Going to DrupalCon
date: 2013-07-26
excerpt: Precedent are sending myself and two of our other Drupal Developers to Drupalcon Prague.
tags:
- drupalcon
- precedent
---
[Precedent](http://www.precedent.co.uk) are sending myself and two of our other
Drupal Developers to [Drupalcon Prague](http://prague2013.drupal.org).
Having wanted to attend the last few Drupalcons (London, especially) but not
being able to, I'm definitely looking forward to this one.
See you there!

View file

@ -0,0 +1,50 @@
---
title: Going "Full Vim" for my development work
excerpt: I've recently been using Neovim for all of my coding, as well as for my blog posts and slide decks.
tags:
- vim
date: 2021-07-08
---
For the past few months, I've gone "full Vim" ([Neovim][], to be exact) when writing any code - including anything for work or freelance projects, this blog post, and any presentation slide decks.
I've been a long-time casual Vim user, enabling Vi or Vim mode within other editors, including Sublime Text, PhpStorm, and VS Code, and using Vim to write Git commit messages or to edit single files before closing it again. I remember searching for how to add Vim features like relative line numbers to other editors, and trying things that would work within Vim but not when using a plugin or extension.
I've seen people and companies like [Lorna Jane Mitchell][], Suz Hinton ([noopkat][]), and [Thoughtbot][] using Vim in their presentations and videos for a long time, but I hadn't tried to switch myself before.
Inspired by them and others including [Robin Malfait][], [TheAltF4Stream][], [Codico][], [Michael Dyrynda][], [ThePrimeagen][] with their recent live streams, videos, podcasts, and courses, I decided to give it a try.
## Plugins
You can see the [full list of plugins on GitHub](https://github.com/opdavies/dotfiles/blob/main/.config/nvim/plugins.vim), but here are some of the main ones that I've been using so far:
- [fzf](https://github.com/junegunn/fzf.vim) - a fuzzy-finder to easily locate and open files.
- [CoC](https://github.com/neoclide/coc.nvim) and [Intelephense](https://intelephense.com) - adds auto-completion and code snippet support, including refactorings such as renaming symbols.
- [NERDTree](https://github.com/preservim/nerdtree) - a tree explorer, though I usually use the fuzzy finder so this isn't used that often.
- [Git gutter](https://github.com/airblade/vim-gitgutter) - displays Git diff information in the gutter of the current file.
- [Blamer](https://github.com/APZelos/blamer.nvim) - inspired by the GitLens plugin for VS Code, displays `git blame` information for the current line.
- [Nord](https://github.com/arcticicestudio/nord-vim), [jellybeans](https://github.com/nanotech/jellybeans.vim), and [ayu](https://github.com/ayu-theme/ayu-vim) - different themes that I'm trying and switching between.
## Configuration
If you'd like to see my full Neovim configuration, see the `.config/nvim` directory and the `init.vim` file in my [Dotfiles repository on GitHub](https://github.com/opdavies/dotfiles/tree/main/.config/nvim).
## Conclusion
I'm enjoying my first few months of using Vim full-time, and so far, I haven't looked back. I've had no issues using it in a Windows/WSL 2 environment either, which was great.
I have a [cheat sheet on GitHub Gists](https://gist.github.com/opdavies/f944261b54f70b43f2297cab6779cf59) where I note the current things that I'm trying to learn and commit to memory.
As I use it and learn more, I'm sure that I'll be posting more Vim-related content here too.
Have any Vim/Neovim suggestions, tips, or tricks? Let me know on Twitter.
[codico]: https://www.twitch.tv/codico
[lorna jane mitchell]: https://lornajane.net
[michael dyrynda]: https://dyrynda.com.au
[neovim]: https://neovim.io
[noopkat]: https://www.twitch.tv/noopkat
[robin malfait]: https://twitter.com/malfaitrobin
[thealtf4stream]: https://www.twitch.tv/thealtf4stream
[theprimeagen]: https://twitter.com/theprimeagen
[thoughtbot]: https://thoughtbot.com

View file

@ -0,0 +1,44 @@
---
title: How to add a date popup calendar onto a custom form
date: 2012-05-23
excerpt: How to use a date popup calendar within your custom module.
tags:
- calendar
- date
- drupal
- drupal-7
- drupal-planet
- form-api
- forms
---
How to use a date popup calendar within your custom module.
First, I need to download the
[Date](http://drupal.org/project/date 'Date module on Drupal.org') module, and
make my module dependent on date_popup by adding the following line into my
module's .info file.
```ini
dependencies[] = date_popup
```
Within my form builder function:
```php
$form['date'] = array(
  '#title' => t('Arrival date'),
  // Provided by the date_popup module.
  '#type' => 'date_popup',
  // Uses the PHP date() format - http://php.net/manual/en/function.date.php
  '#date_format' => 'j F Y',
  // Limits the year range to the next two upcoming years.
  '#date_year_range' => '0:+2',
  // The default value must be in 'Y-m-d' format.
  '#default_value' => date('Y-m-d', time()),
);
```

View file

@ -0,0 +1,50 @@
---
title: How to Create and Apply Patches
date: 2010-10-10
excerpt: How to create and apply patches, ready for the Drupal.org issue queues.
tags:
- drupal-6
- drupal-planet
- modules
- patches
---
Earlier this year, I posted a solution to
[an issue](http://drupal.org/node/753898) on the Drupal.org issue queue.
Originally, I just posted the code back onto the issue, but have now created a
patch that can easily be applied to any Drupal 6 installation. Here is a
run-through of the process of creating and applying a patch. In this case, I
made changes to the `user_pass_validate()` function that's found within
`modules/user/user.pages.inc`.
To begin with, I downloaded a fresh copy of Drupal 6.19 and created a copy of the
original user.pages.inc file. Within the duplicate file, I made the same changes
to the function that I did in my earlier code, and saved the changes. Now, within
my Terminal, I can navigate to Drupal's root directory and create the patch.
```bash
diff -rup modules/user/user.pages.inc modules/user/user.pages2.inc > /Users/oliver/Desktop/different_messages_for_blocked_users.patch
```
This command compares the differences between the two files, and creates the
specified patch file.
To apply the patch to my Drupal installation, I go back to Terminal and run the
following code:
```bash
patch -p0 < /Users/oliver/Desktop/different_messages_for_blocked_users.patch
```
If, for some reason, I need to reverse the patch, I can run this code:
```bash
patch -p0 -R < /Users/oliver/Desktop/different_messages_for_blocked_users.patch
```
And that's it!
There is also a Git patch creation workflow, which is described at
<http://groups.drupal.org/node/91424>. Thanks to
[Randy Fay](http://randyfay.com) for making me aware of this, and suggesting a
slight change to my original patch creation command.
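As a rough sketch of that Git-based workflow, using a throwaway repository rather than a real Drupal checkout, the create-and-apply cycle looks like this:

```shell
set -e

# Work in a throwaway repository so nothing touches a real checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name 'Example User'

# A stand-in for the file being patched.
echo 'original line' > user.pages.inc
git add user.pages.inc
git commit -qm 'Initial commit'

# Make the change and export it as a patch - the Git equivalent of diff.
echo 'changed line' > user.pages.inc
git diff > "$repo/example.patch"

# Undo the local change, then apply the patch - the equivalent of patch -p0.
git checkout -- user.pages.inc
git apply "$repo/example.patch"

cat user.pages.inc
```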

View file

@ -0,0 +1,26 @@
---
title: How to fix Vagrant Loading the Wrong Virtual Machine
date: 2014-10-06
excerpt: Here are the steps that I took to fix Vagrant and point it back at the correct VM.
tags:
- vagrant
- virtualbox
---
A few times recently, I've had instances where
[Vagrant](https://www.vagrantup.com) seems to have forgotten which virtual
machine it's supposed to load, probably due to renaming a project directory or
the .vagrant directory being moved accidentally.
Here are the steps that I took to fix this and point Vagrant back at the correct
VM.
1. Stop the machine from running using the `$ vagrant halt` command.
1. Use the `$ VBoxManage list vms` command to view a list of the virtual
machines on your system. Note the ID of the correct VM that should be
loading. For example,
`"foo_default_1405481857614_74478" {e492bfc3-cac2-4cde-a396-e81e37e421e2}`.
The number within the curly brackets is the ID of the virtual machine.
1. Within the .vagrant directory in your project (it is hidden by default),
update the ID within the machines/default/virtualbox/id file.
1. Start the new VM with `$ vagrant up`.
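Steps 2 and 3 can be sketched as shell commands, run from the project directory and using the illustrative VM ID from above:

```shell
# The ID from within the curly brackets in the `VBoxManage list vms` output.
vm_id="e492bfc3-cac2-4cde-a396-e81e37e421e2"

# Point Vagrant back at the correct VM by overwriting the stored ID
# inside the project's hidden .vagrant directory.
mkdir -p .vagrant/machines/default/virtualbox
printf '%s' "$vm_id" > .vagrant/machines/default/virtualbox/id
```

After writing the file, `vagrant up` should boot the correct machine again.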

View file

@ -0,0 +1,178 @@
---
title: How to Install and Configure Subversion (SVN) Server on Ubuntu
date: 2011-10-19
excerpt: How to install and configure your own SVN server.
tags:
- svn
- ubuntu
- version-control
---
Recently, I needed to set up a Subversion (SVN) server on a Ubuntu Linux server.
This post is going to outline the steps taken, and the commands used, to install
and configure the service.
Note: As I was using Ubuntu, I used the `apt-get` command to download and
install the software packages. If you're using a different distribution of
Linux, this command may be different. I'm also assuming that Apache is
already installed.
Firstly, I'm going to update the package lists so that the latest available
versions of the packages will be installed.
```bash
$ sudo apt-get update
```
Now, I need to download the subversion, subversion-tools and libapache2
packages.
```bash
$ sudo apt-get install subversion subversion-tools libapache2-svn
```
These are all of the packages that are needed to run a Subversion server.
## Create subversion directory
Now, I need to create the directory where my repositories are going to sit. I've
chosen this directory as I know that it's one that is accessible to my managed
backup service.
```bash
$ sudo mkdir /home/svn
```
## Create a test repository
First, I'll create a new folder in which I'll create my test project, and then
I'll create a repository for it.
```bash
$ sudo mkdir ~/test
$ sudo svnadmin create /home/svn/test
```
This will create a new repository containing the base file structure.
## Adding files into the test project
```bash
$ cd ~/test
$ mkdir trunk tags branches
```
I can now import these new directories into the test repository.
```bash
$ sudo svn import ~/test file:///home/svn/test -m 'Initial project directories'
```
This both adds and commits these new directories into the repository.
In order for Apache to access the SVN repositories, the `/home/svn` directory
needs to be owned by the same user and group that Apache runs as. In Ubuntu,
this is usually www-data. To change the owner of a directory, use the chown
command.
```bash
$ sudo chown -R www-data:www-data /home/svn
```
## Configuring Apache
The first thing that I need to do is enable the dav_svn Apache module, using the
a2enmod command.
```bash
$ sudo a2enmod dav_svn
```
With this enabled, now I need to modify the Apache configuration file.
```bash
$ cd /etc/apache2
$ sudo nano apache2.conf
```
At the bottom of the file, add the following lines, and then save the file by
pressing Ctrl+X.
```
<Location "/svn">
  DAV svn
  SVNParentPath /home/svn
</Location>
```
With this saved, restart the Apache service for the changes to be applied.
```bash
sudo service apache2 restart
```
I can now browse through my test repository by opening Firefox, and navigating
to `http://127.0.0.1/svn/test`. Here, I can now see my three directories,
although they are currently all empty.
## Securing my SVN repositories
Before I start committing any files to the test repository, I want to ensure
that only authorised users can view it - currently anyone can view the
repository and its contents, as well as being able to check out and commit
files. To do this, I'm going to require the user to enter a username and a
password before viewing or performing any actions with the repository.
Re-open apache2.conf, and replace the SVN Location information with this:
```
<Location "/svn">
  DAV svn
  SVNParentPath /home/svn

  AuthType Basic
  AuthName "My SVN Repositories"
  AuthUserFile /etc/svn-auth
  Require valid-user
</Location>
```
Now I need to create the password file.
```bash
$ sudo htpasswd -cm /etc/svn-auth oliver
```
I'm prompted to enter and confirm my password, and then my details are saved.
The Apache service will need to be restarted again, and then the user will need
to authenticate themselves before viewing the repositories.
## Checking out the repository and committing files
For example, I now want to check out the files within my repository into a new
directory called 'test2' within my home directory. Firstly, I need to create the
new directory, and then I can issue the checkout command.
```bash
$ cd ~
$ mkdir test2
$ svn checkout http://127.0.0.1/svn/test/trunk test2
```
I'm passing the command two arguments - the first is the URL of the repository's
trunk directory, and the second is the directory where the files are to be
placed. As no files have been committed into the trunk yet, it appears to be
empty - but if you perform an `ls -la` command, you'll see that there is a hidden
`.svn` directory.
Now you can start adding files into the directory. Once you've created your
files, perform an `svn add` command, passing in the individual filenames as further
arguments.
```bash
$ svn add index.php
$ svn add *
```
With all the required files added, they can be committed using the
`svn commit -m 'commit message'` command, and the working copy can be updated
from the server using the `svn up` command.

View file

@ -0,0 +1,29 @@
---
title: How to put your PHP application in a subdirectory of another site with Nginx
date: 2018-03-12
excerpt: How to configure Nginx to serve a PHP application from within a subdirectory of another.
tags:
- nginx
- php
---
In January, [Chris Fidao][0] posted a video to [Servers for Hackers][1] showing
how to put different PHP applications in different subdirectories and have them
served on different paths with Nginx. I've had to do this a few times
previously, and it's great to have this video as a reference.
> In this video, we work through how to put your PHP application in a
> subdirectory of another site.
>
> For example, we may have an application running at example.org but need a
> second application running at example.org/blog.
>
> This feels like it should be simple, but it turns out to be more complex and
> fraught with confusing Nginx configurations! To make matter worse (or,
> perhaps, to illustrate this point), a quick Google search reveals a TON of
> confusing, non-working examples.
<https://serversforhackers.com/c/nginx-php-in-subdirectory>
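For reference, the shape of configuration the video works towards is roughly this - a sketch only, assuming a main application in `/var/www/app/public`, a blog in `/var/www/blog/public`, and PHP-FPM listening on `127.0.0.1:9000`:

```nginx
server {
    listen 80;
    server_name example.org;

    root /var/www/app/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Serve the second application from /blog. `alias` maps the URL prefix
    # onto a different directory to the main root.
    location /blog {
        alias /var/www/blog/public;
        try_files $uri $uri/ @blog;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass 127.0.0.1:9000;
        }
    }

    # Rewrite pretty URLs under /blog to the blog's front controller.
    location @blog {
        rewrite /blog/(.*)$ /blog/index.php last;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

The key pieces are `alias`, which maps the `/blog` URL prefix onto a different directory, and the named location that rewrites pretty URLs to the blog's front controller.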
[0]: https://twitter.com/fideloper
[1]: https://serversforhackers.com

View file

@ -0,0 +1,217 @@
---
title: How to run Drupal 8 PHPUnit Tests within Docksal from PhpStorm
date: 2018-07-19
excerpt: How to configure PhpStorm to run automated tests within Docksal.
tags:
- docksal
- drupal
- drupal-8
- phpstorm
- phpunit
- testing
promoted: true
---
I've recently re-watched [A Clean PHPUnit Workflow in PHPStorm][0] on
[Laracasts][1], where Jeffrey configures PhpStorm to run tests from within the
IDE. With Drupal 8 using PHPUnit too, I decided to try and do the same with a
local D8 site.
However, because I'm using [Docksal][4] for my local development environment -
which, at least on a Mac, runs Docker containers within a virtual machine - there
were some additional steps needed to achieve this and to have the tests run
within the Docksal virtual machine using the correct containers.
In this post, I'll be using my [Drupal Testing Workshop codebase][2] as an
example, which is based on the [Drupal Composer project][3] with some
pre-configured Docksal configuration.
This post is separated into a few different sections:
- [Allow PhpStorm to connect to the CLI container](#allow-phpstorm-to-connect-to-the-cli-container)
- [Add a new deployment server](#add-a-new-deployment-server)
- [Configure PHP interpreter](#configuring-the-php-interpreter)
- [Set up PHPUnit in PhpStorm](#set-up-phpunit-in-phpstorm)
- [Running tests](#running-tests)
## Allow PhpStorm to connect to the CLI container
The first thing to do is to allow PhpStorm to connect to Docksal's CLI container
to allow it to run the tests. We can do this by exposing the container's SSH
port so that it's available to the host machine and PhpStorm.
As this is going to be unique to my environment, I'm going to add this to
`.docksal/docksal-local.yml`, which I have in `.gitignore`, rather than
committing it into the repository and enforcing the same port number for
everyone else, potentially causing conflicts.
In this case, I'll expose port 22 in the container to port 2225 locally.
```yaml
version: '2.1'

services:
  cli:
    ports:
      - '2225:22'
```
Once added, run `fin start` to rebuild the projects containers.
You can verify the change by running `fin ps` and you should see something like
`0.0.0.0:2225->22/tcp` under Ports for the CLI container.
## Add a new deployment server
Now PhpStorm can connect to Docksal, I can configure it to do so by adding a new
deployment server.
- Open PhpStorms preferences, and go to 'Build, Execution, Deployment' and
'Deployment'.
- Click 'Add' to configure a new deployment server.
- Enter a name like 'Docksal', and select SFTP as the server type.
![Adding a new deployment server](/images/blog/phpstorm-phpunit-docksal/deployment-1.png){.with-border
.sm:max-w-sm}
### Connection settings
On the Connection tab:
- Enter your domain name - e.g. `drupaltest.docksal` as the SFTP host. This will
resolve to the correct local IP address.
- Enter the exposed port for the CLI container that was entered in the previous
step.
- Enter "docker" as both the username and password.
You should now be able to click "Test SFTP connection" and get a successfully
connected confirmation message.
![Configuring a new deployment server](/images/blog/phpstorm-phpunit-docksal/deployment-2.png)
### Mapping settings
On the Mappings tab, add `/var/www` as the deployment path so that PhpStorm is
looking in the correct place for the project code.
![Add mappings to the deployment server](/images/blog/phpstorm-phpunit-docksal/deployment-3.png){.with-border}
## Configuring the PHP Interpreter
In Preferences, search for 'PHP' within 'Languages & Frameworks', and add a new
CLI interpreter.
![The PHP preferences in PhpStorm](/images/blog/phpstorm-phpunit-docksal/cli-interpreter-1.png){.with-border}
In this case Ive called it 'Docksal PHP 7.1', used the Docksal deployment
configuration, and set the path to the PHP executable to `/usr/local/bin/php`
(the same path that we would get if we ran `fin run which php`). You should see
both the deployment host URL displayed as well as the remote PHP version and
configuration filenames.
![Configuring a new CLI interpreter](/images/blog/phpstorm-phpunit-docksal/cli-interpreter-2.png){.with-border}
This can now be selected as the CLI interpreter for this project.
![Selecting the new CLI interpreter in the PHP preferences](/images/blog/phpstorm-phpunit-docksal/cli-interpreter-3.png){.with-border}
## Set up PHPUnit in PhpStorm
In Preferences, search for 'Test Frameworks' and add a new framework.
![Adding a new test framework (PHPUnit) in PHPStorm](/images/blog/phpstorm-phpunit-docksal/phpunit-1.png){.with-border}
Select 'PHPUnit by Remote Interpreter' and then the 'Docksal PHP 7.1' that we
created in the last step.
Select 'Use Composer autoloader' for the PHPUnit library setting so that
PhpStorm uses the version required by Drupal core, and set the path to
`/var/www/vendor/autoload.php`.
Also specify the path to the default (phpunit.xml) configuration file. This
will depend on how your project is structured; in this case, its at
`/var/www/web/core/phpunit.xml`.
![Configuring PHPUnit in PHPstorm](/images/blog/phpstorm-phpunit-docksal/phpunit-4.png){.with-border}
## Running tests
With PHPUnit configured, next to each test class and method, you can see a green
circle (or a red one if the test failed the last run). You can click the circle
and select to run that test class or method. You can also right-click
directories in the project sidebar to run all of the tests within that
directory.
![Running a test within PhpStorm](/images/blog/phpstorm-phpunit-docksal/running-tests-1.png){.with-border}
When the tests start running, a new tool window will open that shows you all of
the selected tests, how long each test took to run and whether it passed or
failed. You can also see the CLI output from PHPUnit itself next to it.
![The tests results being displayed](/images/blog/phpstorm-phpunit-docksal/running-tests-2.png){.with-border}
From here, you also have the ability to re-run all of the tests, as well as a
single test method or a specific test class.
Any test failures are shown here too, and for some failures like differences
between two arrays you can use PhpStorms internal comparison tools to view the
difference rather than needing to do so on the command line.
![Showing a failing test](/images/blog/phpstorm-phpunit-docksal/test-failure-1.png){.with-border}
![Displaying the difference between two arrays](/images/blog/phpstorm-phpunit-docksal/test-failure-2.png){.with-border
.sm:max-w-md}
### Keyboard shortcuts
As per the video, Ive also added some keyboard shortcuts to my keymap, so I can
press ⌘T to run the current test method or class that Im in, and ⇧⌘T to re-run
the last test.
![Adding a keyboard shortcut to run the current test](/images/blog/phpstorm-phpunit-docksal/keyboard-shortcuts-1.png){.with-border}
![Adding a keyboard shortcut to re-run the last test](/images/blog/phpstorm-phpunit-docksal/keyboard-shortcuts-2.png){.with-border}
### Database issues
When running functional tests that require a database, I was getting a database
error like the one below:
> Drupal\Core\Installer\Exception\InstallerException : Resolve all issues below
> to continue the installation. For help configuring your database server, see
> the <a href="https://www.drupal.org/getting-started/install">installation
> handbook</a>, or contact your hosting provider.
In `settings.php`, I check for the presence of `/.dockerenv` to ensure that
were inside a Docker container, as well as the presence of a
`docksal.settings.php` file. The latter contains the database credentials for
Drupal to connect to the MySQL database.
```php
if (file_exists('/.dockerenv') && file_exists(__DIR__ . '/docksal.settings.php')) {
  include __DIR__ . '/docksal.settings.php';
}
```
In order to get the tests to run, I had to prevent this file from being loaded
during the tests. I can do this by checking that `SIMPLETEST_DB`, an
environment variable set in phpunit.xml, is not present.
```php
// settings.php
if (file_exists('/.dockerenv') && file_exists(__DIR__ . '/docksal.settings.php') && !getenv('SIMPLETEST_DB')) {
  include __DIR__ . '/docksal.settings.php';
}
```
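For reference, `SIMPLETEST_DB` is defined in the `<php>` section of the
phpunit.xml file. The value below is an example for a Docksal setup like this
one - your database credentials may differ:

```xml
<php>
  <!-- Example credentials; match these to your Docksal database settings. -->
  <env name="SIMPLETEST_DB" value="mysql://user:user@db/default"/>
</php>
```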
With this extra condition, the database credentials are loaded correctly and the
functional tests run properly.
Happy testing!
[0]: https://laracasts.com/series/php-bits/episodes/2
[1]: https://laracasts.com
[2]: https://github.com/opdavies/drupal-testing-workshop
[3]: https://github.com/drupal-composer/drupal-project
[4]: https://docksal.io

View file

@ -0,0 +1,92 @@
---
title: How to Use Environment Variables for your Drupal Settings with Docksal
date: 2018-06-04
excerpt: How to leverage environment variables with Drupal and Docksal.
tags:
- docksal
- drupal
- drupal-planet
---
Within the [Docksal documentation for Drupal settings][0], the example database
settings include hard-coded credentials to connect to the Drupal database. For
example, within a `settings.php` file, you could add this:
```php
$databases['default']['default'] = [
  'driver' => 'mysql',
  'host' => 'db',
  'database' => 'myproject_db',
  'username' => 'myproject_user',
  'password' => 'myproject_pass',
];
```
Whilst this is fine, it does mean that there is duplication in the codebase, as
the database credentials can also be added as environment variables within
`.docksal/docksal.env` - this is definitely the case if you want to use a custom
database name, for example.
Also if one of these values were to change, then Drupal wouldn't be aware of
that and would no longer be able to connect to the database.
It also means that the file cant simply be re-used on another project as it
contains project-specific credentials.
We can improve this by using the environment variables within the settings file.
The relevant environment variables are `MYSQL_DATABASE` for the database name,
and `MYSQL_USER` and `MYSQL_PASSWORD` for the MySQL username and password. These
can be set in `.docksal/docksal.env`, and will need to be present for this to
work.
For example:
```
DOCKSAL_STACK=default
MYSQL_DATABASE=myproject_db
MYSQL_USER=myproject_user
MYSQL_PASSWORD=myproject_pass
```
With these in place, they can be referenced within the settings file using the
`getenv()` function.
```php
$databases['default']['default'] = [
  'driver' => 'mysql',
  'host' => 'db',
  'database' => getenv('MYSQL_DATABASE'),
  'username' => getenv('MYSQL_USER'),
  'password' => getenv('MYSQL_PASSWORD'),
];
```
Now the credentials are no longer duplicated, and the latest values from the
environment variables will always be used.
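If the same settings file is reused outside of Docksal, you could also fall
back to default values when the variables are absent. This is a sketch - the
fallback values are assumptions based on Docksal's default stack, not part of
the original setup:

```php
$databases['default']['default'] = [
  'driver' => 'mysql',
  'host' => 'db',
  // Fall back to assumed defaults if the environment variables are not set.
  'database' => getenv('MYSQL_DATABASE') ?: 'default',
  'username' => getenv('MYSQL_USER') ?: 'user',
  'password' => getenv('MYSQL_PASSWORD') ?: 'user',
];
```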
However, you may see a message like this when you try and load the site:
> Drupal\Core\Database\DatabaseAccessDeniedException: SQLSTATE[HY000][1045]
> Access denied for user ''@'172.19.0.4' (using password: NO) in
> /var/www/core/lib/Drupal/Core/Database/Driver/mysql/Connection.php on line 156
If you see this, the environment variables arent being passed into Docksals
`cli` container, so the values are not being populated. To enable them, edit
`.docksal/docksal.yml` and add `MYSQL_DATABASE`, `MYSQL_PASSWORD` and
`MYSQL_USER` to the `environment` section of the `cli` service.
```yaml
version: '2.1'

services:
  cli:
    environment:
      - MYSQL_DATABASE
      - MYSQL_PASSWORD
      - MYSQL_USER
```
After changing this file, run `fin start` to rebuild the project containers and
try to load the site again.
[0]: https://docksal.readthedocs.io/en/master/advanced/drupal-settings

View file

@ -0,0 +1,77 @@
---
title: Ignoring PHPCS sniffs for PHPUnit tests
excerpt: How to exclude certain PHPCS sniffs within your PHPUnit tests, so that you can write your tests methods how you'd like without getting coding standards errors.
tags:
- drupal
- drupal-planet
- php
- phpunit
date: 2021-01-04
---
**Note:** This post is written with a Drupal context, but applies to any PHP project.
This is a test that I wrote recently, which uses the camel case method name that is recommended by the Drupal and PSR-2 coding standards:
```php
public function testThatPathAliasesAreNotTransferredToTheNewLanguageWhenOneIsAdded(): void {
  // ...
}
```
It has a long method name that describes the test that is being run. However, it's quite hard to read. Generally, I prefer to write tests like this, using the `@test` annotation (so that I can remove the `test` prefix) and snake case method names:
```php
/** @test */
public function path_aliases_are_not_transferred_to_the_new_language_when_one_is_added(): void {
  // ...
}
```
This to me is a lot easier to read, particularly for long and descriptive test method names, and is commonly used within parts of the PHP community.
This approach, however, can result in some errors from PHPCS:
- The open comment tag must be the only content on the line
- Public method name "DefinedLanguageNodeTest::path_aliases_are_not_transferred_to_the_new_language_when_one_is_added" is not in lowerCamel format
We can avoid the errors by excluding the files when running PHPCS, or by modifying rules within the phpcs.xml (or phpcs.xml.dist) file to change the severity value for the rules. These approaches would mean either ignoring all PHPCS sniffs within the test files or ignoring some checks within all files, neither of which is an ideal approach.
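For reference, this is roughly what lowering a sniff's severity within phpcs.xml would look like, using one of the sniffs reported above - it disables the sniff in all files, which is why it isn't ideal here:

```xml
<rule ref="Drupal.NamingConventions.ValidFunctionName">
  <severity>0</severity>
</rule>
```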
## Ignoring whole or partial files
We can tell PHPCS to ignore whole or partial files by adding comments - there's [an example of this](https://git.drupalcode.org/project/drupal/-/blob/ad34608ab0bb115c53f4aaa0573c30dd8dc5b23a/sites/default/default.settings.php#L3 "Drupal's default.settings.php file with a 'coding standards ignore' comment") at the top of `default.settings.php` file:
```php
// @codingStandardsIgnoreFile
```
The `@codingStandards` syntax, however, is deprecated and will be removed in PHP_CodeSniffer version 4.0. The new syntax to do this is:
```php
// phpcs:ignoreFile
```
As well as `phpcs:ignoreFile` which ignores all of the sniffs in an entire file, there are also commands to disable and re-enable PHPCS at different points within the same file:
```php
// Stop PHPCS checking.
// phpcs:disable
// Start PHPCS checking.
// phpcs:enable
```
## Disabling specific rules in a file
As well as excluding a section of code from checks, with `phpcs:ignore` you can also specify a list of sniffs to ignore. For example:
```php
// phpcs:disable Drupal.Commenting.DocComment, Drupal.NamingConventions.ValidFunctionName
```
By adding this to the top of the test class, these specific sniffs will be ignored so no errors will be reported, and any other sniffs will continue to work as normal.
If you're unsure what the names of the sniffs are that you want to ignore, add `-s` to the PHPCS command to have it include the sniff names in its output.
For more information on ignoring files, folders, part of files, and limiting results, see the [Advanced Usage page for the PHP CodeSniffer project](https://github.com/squizlabs/PHP_CodeSniffer/wiki/Advanced-Usage) on GitHub.
You can also see this being used in [some of the tests for this website](https://github.com/opdavies/oliverdavies-uk/tree/production/web/modules/custom/blog/tests/src/Kernel).

View file

@ -0,0 +1,35 @@
---
title: Imagefield Import Archive
date: 2011-05-23
excerpt: I've finally uploaded my first module onto Drupal.org!
tags:
- drupal-planet
- imagefield-import
---
I've finally uploaded my first module onto Drupal.org!
I've written many custom modules, although the vast majority of them are either
small tweaks for my own sites or company/site-specific modules that wouldn't be
of use to anyone else, so there would be nothing achieved by contributing them
back to the community. Previously, I've blogged about the
[Imagefield Import](http://drupal.org/project/imagefield_import) module - a
module that I use on a number of sites, both personal and for clients - and what
I've been looking for lately is a way to speed up the process again. Gathering
my images together and manually copying them into the import directory takes
time - especially if I'm working somewhere with a slow Internet connection and
I'm FTP-ing the images into the directory. Also, it's not always the easiest
solution for site users - especially non-technical ones.
So, I wrote the Imagefield Import Archive module. Including comments, the module
contains 123 lines, and builds upon the existing functionality of the Imagefield
Import module by adding the ability for the user to upload a .zip archive of
images. The archive is then moved into the specified import directory and
unzipped before being deleted, and the user is directed straight to the standard
Imagefield Import page where their images are waiting to be imported, just as
usual.
The module is currently a
[sandbox project](http://drupal.org/sandbox/opdavies/1165110) on Drupal.org,
although I have applied for full project access so that I can be added as a
fully-fledged downloadable module.

View file

@ -0,0 +1,25 @@
---
title: Improve JPG Quality in Imagecache and ImageAPI
date: 2010-06-02
excerpt: How to fix the quality of uploaded images in Drupal.
tags:
- drupal-planet
- drupal-6
- imagecache
---
Whilst uploading images for my Projects and Testimonials sections, I noticed
that the Imagecache-scaled images weren't as high a quality as the originals on
my Mac. I did some searching online and found out that, by default, Drupal
resamples uploaded JPGs to 75% of their original quality.
To increase the quality of your images, change the setting in the two following
places:
- admin/settings/imageapi/config
- admin/settings/image-toolkit
The first one is for ImageAPI. Primarily, this means Imagecache presets. The
second one is for core's image.inc. This is used for resizing profile pictures
in core, and some contrib modules. Once changed, I did have to flush each of the
Imagecache presets (admin/dist/imagecache) for the changes to take effect.

View file

@ -0,0 +1,67 @@
---
title: Include CSS Fonts by Using a SASS each Loop
date: 2014-11-18
excerpt: How to use an SASS each loop to easily add multiple fonts to your CSS.
tags:
- compass
- drupal-planet
- fonts
- sass
---
How to use an @each loop in SASS to quickly include multiple font files within
your stylesheet.
Using a file structure similar to this, organise your font files into
directories, using the font name for both the directory name and the file
names.
```bash
.
├── FuturaBold
│   ├── FuturaBold.eot
│   ├── FuturaBold.svg
│   ├── FuturaBold.ttf
│   └── FuturaBold.woff
├── FuturaBoldItalic
│   ├── FuturaBoldItalic.eot
│   ├── FuturaBoldItalic.svg
│   ├── FuturaBoldItalic.ttf
│   └── FuturaBoldItalic.woff
├── FuturaBook
│   ├── FuturaBook.eot
│   ├── FuturaBook.svg
│   ├── FuturaBook.ttf
│   └── FuturaBook.woff
├── FuturaItalic
│   ├── FuturaItalic.eot
│   ├── FuturaItalic.svg
│   ├── FuturaItalic.ttf
│   └── FuturaItalic.woff
```
Within your SASS file, start an `@each` loop, listing the names of the fonts. In
the same way as PHP's `foreach` loop, each font name will get looped through
using the `$family` variable and then compiled into CSS.
```scss
@each $family in FuturaBook, FuturaBold, FuturaBoldItalic, FuturaItalic {
  @font-face {
    font-family: #{$family};
    src: url('../fonts/#{$family}/#{$family}.eot');
    src: url('../fonts/#{$family}/#{$family}.eot?#iefix') format('embedded-opentype'),
      url('../fonts/#{$family}/#{$family}.woff') format('woff'),
      url('../fonts/#{$family}/#{$family}.ttf') format('truetype'),
      url('../fonts/#{$family}/#{$family}.svg##{$family}') format('svg');
    font-weight: normal;
    font-style: normal;
  }
}
```
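For reference, the first iteration of the loop compiles to a standard `@font-face` rule:

```css
@font-face {
  font-family: FuturaBook;
  src: url('../fonts/FuturaBook/FuturaBook.eot');
  src: url('../fonts/FuturaBook/FuturaBook.eot?#iefix') format('embedded-opentype'),
    url('../fonts/FuturaBook/FuturaBook.woff') format('woff'),
    url('../fonts/FuturaBook/FuturaBook.ttf') format('truetype'),
    url('../fonts/FuturaBook/FuturaBook.svg#FuturaBook') format('svg');
  font-weight: normal;
  font-style: normal;
}
```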
When the CSS has been compiled, you can then use the fonts in your CSS in the
standard way.

```css
font-family: "FuturaBook";
```

View file

@ -0,0 +1,100 @@
---
title: Include environment-specific settings files on Pantheon
date: 2014-11-27
excerpt: How to load a different settings file per environment on Pantheon.
tags:
- drupal
- drupal-planet
- pantheon
- settings.php
---
I was recently doing some work on a site hosted on
[Pantheon](http://getpantheon.com) and came across an issue, for which part of
the suggested fix was to ensure that the `$base_url` variable was explicitly
defined within settings.php (this is also best practice on all Drupal sites).
The way that was recommended was by using a `switch()` function based on
Pantheon's environment variable. For example:
```php
switch ($_SERVER['PANTHEON_ENVIRONMENT']) {
  case 'dev':
    // Development environment.
    $base_url = 'http://dev-my-site.gotpantheon.com';
    break;

  case 'test':
    // Testing environment.
    $base_url = 'http://test-my-site.gotpantheon.com';
    break;

  case 'live':
    // Production environment.
    $base_url = 'http://live-my-site.gotpantheon.com';
    break;
}
```
Whilst this works, it doesn't conform to the DRY (don't repeat yourself)
principle, and means that you might also end up with a rather long and
complicated settings file, especially when you start using multiple switches
and checking the value of the environment multiple times.
My alternative solution to this is to include an environment-specific settings
file.
To do this, add the following code to the bottom of settings.php:
```php
if (isset($_SERVER['PANTHEON_ENVIRONMENT'])) {
  if ($_SERVER['PANTHEON_ENVIRONMENT'] != 'live') {
    // You can still add things here, for example to apply to all sites apart
    // from production. Mail reroutes, caching settings etc.
  }

  // Include an environment-specific settings file, for example
  // settings.dev.php, if one exists.
  $environment_settings = __DIR__ . '/settings.' . $_SERVER['PANTHEON_ENVIRONMENT'] . '.php';

  if (file_exists($environment_settings)) {
    include $environment_settings;
  }
}
```
This means that rather than having one long file, each environment has its own
dedicated settings file that contains its own additional configuration. This is
much easier to read and make changes to, and also means that less code is
loaded and parsed by PHP. Settings that apply to all environments are still
added to settings.php.
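For example, a settings.dev.php file for the Dev environment might contain just
its environment-specific values (the values here are illustrative):

```php
<?php

// Only loaded on the Dev environment.
$base_url = 'http://dev-my-site.gotpantheon.com';
```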
Below this, I also include a
[similar piece of code](/blog/include-local-drupal-settings-file-environment-configuration-and-overrides/)
to include a settings.local.php file. The settings.php file then gets committed
into the [Git](http://git-scm.com) repository.
Within the sites/default directory, I also include an example file
(example.settings.env.php) for reference. This is duplicated, renamed and
populated accordingly.
```php
<?php

/**
 * This is a specific settings file, just for the x environment. Any settings
 * defined here will be included after those in settings.php.
 *
 * If you have also added a settings.local.php file, that will override any
 * settings stored here.
 *
 * No database credentials should be stored in this file as these are included
 * automatically by Pantheon.
 */

$base_url = '';
```
The environment specific files are also committed into Git and pushed to
Pantheon, and are then included automatically on each environment.

View file

@ -0,0 +1,62 @@
---
title: Include a Local Drupal Settings file for Environment Configuration and Overrides
date: 2014-12-20
excerpt: How to create and include a local settings file to define and override environment-specific variables.
tags:
- drupal
- drupal-6
- drupal-7
- drupal-8
- drupal-planet
- settings.php
---
How to create and include a local settings file to define and override
environment-specific variables, and keep sensitive things like your database
credentials and API keys safe.
At the bottom of settings.php, add the following code:
```php
$local_settings = __DIR__ . '/settings.local.php';

if (file_exists($local_settings)) {
  include $local_settings;
}
```
This allows you to create a new file called settings.local.php within a
sites/\* directory (the same place as settings.php), and this will be included
as an extension of settings.php. You can see the same technique being used
within Drupal 8's
[default.settings.php](http://cgit.drupalcode.org/drupal/tree/sites/default/default.settings.php#n621)
file.
Environment specific settings like `$databases` and `$base_url` can be placed
within the local settings file. Other settings like
`$conf['locale_custom_strings_en']` (string overrides) and
`$conf['allow_authorize_operations']` that would apply to all environments can
still be placed in settings.php.
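A minimal settings.local.php might then look something like this - the
credentials are placeholders:

```php
<?php

$base_url = 'http://drupal.local';

$databases['default']['default'] = [
  'driver' => 'mysql',
  'host' => 'localhost',
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'secret',
];
```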
settings.php, though, is ignored by Git by default via a .gitignore file, so it
won't show up as a file available to be committed. There are two ways to fix
this. The first is to use the `--force` option when adding the file, which
overrides the ignore file:
```bash
git add --force sites/default/settings.php
```
The other option is to update the .gitignore file itself so that settings.php is
no longer ignored. An updated .gitignore file could look like:
```bash
# Ignore configuration files that may contain sensitive information.
sites/*/settings.local*.php
# Ignore paths that contain user-generated content.
sites/*/files
sites/*/private
```
This will allow for settings.php to be added to Git and committed, but not
settings.local.php.

View file

@ -0,0 +1,85 @@
---
title: Install and Configure the Nomensa Accessible Media Player in Drupal
date: 2012-07-14
excerpt: This week I released the first version of the Nomensa Accessible Media Player module for Drupal 7. Here's some instructions of how to install and configure it.
tags:
- accessibility
- drupal
- drupal-planet
- nomensa
---
This week I released the first version of the Nomensa Accessible Media Player
module for Drupal 7. Here's some instructions of how to install and configure
it.
_The official documentation for this module is now located at
<https://www.drupal.org/node/2383447>. This post was accurate at the time of
writing, whereas the documentation page will be kept up to date with any future
changes._
## Initial configuration
### Download the Library
The library can be downloaded directly from GitHub, and should be placed within
your _sites/all/libraries/nomensa_amp_ directory.
```bash
drush dl libraries nomensa_amp
git clone https://github.com/nomensa/Accessible-Media-Player sites/all/libraries/nomensa_amp
cd sites/all/libraries/nomensa_amp
rm -rf Accessible-media-player_2.0_documentation.pdf example/ README.md
drush en -y nomensa_amp
```
### Configure the Module
Configure the module at _admin/config/media/nomensa-amp_ and enable the
players that you want to use.
## Adding videos
Within your content add links to your videos. For example:
### YouTube
```html
<a href="http://www.youtube.com/watch?v=Zi31YMGmQC4">Checking colour contrast</a>
```
### Vimeo
```html
<a href="http://vimeo.com/33729937">Screen readers are strange, when you're a stranger by Leonie Watson</a>
```
## Adding captions
The best way that I can suggest to do this is to use a File field to upload your
captions file:
1. Add a File field to your content type.
1. On your page, upload the captions file.
1. Right-click the uploaded file, copy the link location, and use this for the
   path to your captions file.
For example:
```html
<a href="http://www.youtube.com/watch?v=Zi31YMGmQC4">Checking colour contrast</a> <a class="captions" href="http://oliverdavies.co.uk/sites/default/files/checking-colour-contrast-captions.xml">Captions for Checking Colour Contrast</a>
```
## Screencast
<div class="embed-container">
<iframe
src="https://player.vimeo.com/video/45731954"
width="500"
height="313"
frameborder="0"
webkitallowfullscreen
mozallowfullscreen
allowfullscreen>
</iframe>
</div>

View file

@ -0,0 +1,15 @@
---
title: Installing Nagios on CentOS
date: 2012-04-17
excerpt: How to install Nagios on CentOS.
tags:
- nagios
- centos
- linux
---
A great post that details the steps needed to install
[Nagios](http://nagios.org) - a popular open-source system and network
monitoring application - on CentOS.
<http://saylinux.net/story/009506/how-install-nagios-centos-55>

View file

@ -0,0 +1,13 @@
---
title: Interview with a Drupal Expert (with Code Enigma)
excerpt: I recently did an interview with Code Enigma for their blog.
tags:
- drupal
- interview
- personal
date: 2020-08-31
---
I recently did an interview with Drupal and PHP agency [Code Enigma](https://www.codeenigma.com) for their blog, in which we talked about getting started in the Drupal community, [working for 10 years full-time with Drupal and PHP](/blog/10-years-working-full-time-drupal-php), companies adopting open source technologies, and my favourite Drupal events.
To read it, go to [the Code Enigma blog](https://blog.codeenigma.com/interview-with-a-drupal-expert-9fcd8e0fad28 "'Interview with a Drupal Expert' on the Code Enigma blog").

View file

@ -0,0 +1,29 @@
---
title: Introducing a Drupal distribution for meetup websites
excerpt: I'm starting development on a new Drupal distribution for building meetup group websites.
tags:
- drupal
- drupal-9
- drupal-distribution
- drupal-meetup-distribution
- personal
- php
- php-south-wales
date: 2021-10-07
---
I'm the current organiser of the [PHP South Wales user group](https://www.phpsouthwales.uk) and built the current website with Drupal 8, which I started in 2019.
There are some basic pages, but also functionality to display upcoming and past events, show current sponsors, and populate event data from Meetup.com - functionality that could be needed by other meetup groups for their websites, such as other PHP and Drupal user groups that I've organised and attended.
Inspired by other Drupal distributions like [LocalGov](https://www.drupal.org/project/localgov), I've decided to refactor the current site into a reusable distribution that other meetup groups can use. It's not intended to be a clone of Meetup.com, but to be used for a website for a single meetup group to show their events and showcase their own community.
This also means that any new functionality can be added straight to the distribution and immediately available for everyone.
I've created a [project page on Drupal.org][drupalorg] and a [Drupal Meetup organisation on GitHub][github] which contains repositories for the distribution as well as a project template. These are pushed to [Packagist][packagist] so that they can be installed with Composer - e.g. `composer create-project --stability dev drupal-meetup/drupal-meetup-project my-new-meetup`.
This seems like a good opportunity to do some more Drupal contribution and may benefit others too who want to build their own meetup group websites.
[drupalorg]: https://www.drupal.org/project/meetup
[github]: https://github.com/drupal-meetup
[packagist]: https://packagist.org/packages/opdavies/?query=drupal-meetup

View file

@ -0,0 +1,29 @@
---
title: Introducing the Drupal Meetups Twitterbot
date: 2017-06-09
excerpt: Ive written a twitterbot for promoting Drupal meetups.
tags:
- php
- twitter
---
<p class="text-center" markdown="1">![](/images/blog/drupal-meetups-twitterbot.png)</p>
The [Drupal Meetups Twitterbot][0] is a small project that I worked on a few
months ago, but hadn't got around to promoting yet. Its intention is to provide
[one Twitter account][1] where people can get the up to date news from various
Drupal meetups.
It works by having a whitelist of [Twitter accounts and hashtags][2] to search
for, uses [Codebird][3] to query the Twitter API and retweets any matching
tweets on a scheduled basis.
If you would like your meetup group to be added to the list of searched
accounts, please [open an issue][4] on the GitHub repo.
[0]: https://github.com/opdavies/drupal-meetups-twitterbot
[1]: https://twitter.com/drupal_meetups
[2]:
https://github.com/opdavies/drupal-meetups-twitterbot/blob/master/bootstrap/config.php
[3]: https://www.jublo.net/projects/codebird/php
[4]: https://github.com/opdavies/drupal-meetups-twitterbot/issues/new

View file

@ -0,0 +1,32 @@
---
title: Leaving Nomensa, Joining Precedent
date: 2013-04-20
excerpt: Yesterday was my last day working at Nomensa. Next week, I'll be starting as a Senior Developer at Precedent.
tags:
- nomensa
- personal
- precedent
---
Yesterday was my last day working at
[Nomensa](http://www.nomensa.com 'Nomensa'). Next week, I'll be starting as a
Senior Developer at [Precedent](http://www.precedent.co.uk 'Precedent').
The last 14 months that I've been working at Nomensa have been absolutely
fantastic, and have allowed me to work on some great projects for great clients -
mainly [unionlearn](http://www.unionlearn.org 'unionlearn') and
[Digital Theatre Plus](http://www.digitaltheatreplus.com 'Digital Theatre Plus').
I've learned so much about accessibility and web standards, and have pretty much
changed my whole approach to front-end development to accommodate best
practices. I've also been involved with the Drupal Accessibility group since
starting at Nomensa, and have written several accessibility-focused Drupal
modules, including the
[Nomensa Accessible Media Player](http://drupal.org/project/nomensa_amp 'The Nomensa Accessible Media Player Drupal module')
module and the
[Accessibility Checklist](http://drupal.org/project/a11y_checklist 'The accessibility checklist for Drupal').
I'll definitely be continuing my interest in accessibility, championing best
practices, and incorporating it into my future work wherever possible.
With that all said, I'm really looking forward to starting my new role at
Precedent, tackling some new challenges, and I'm sure that it'll be as great a
place to work as Nomensa was.

View file

@ -0,0 +1,713 @@
---
title: Live Blogging From SymfonyLive London 2019
date: 2019-09-13
tags:
- conference
- php
- symfony
- symfonylive
---
Inspired by [Matt Stauffer](https://twitter.com/stauffermatt)'s
[live blogging of the keynote](https://mattstauffer.com/blog/introducing-laravel-vapor)
at Laracon US, Im going to do the same for the sessions that Im attending at
[SymfonyLive London 2019](https://london2019.live.symfony.com)...
## Keynote (Back to the basics)
**Embrace the Linux philosophy**
- How we grow the Symfony ecosystem. Built abstracts.
- HttpFoundation, HttpKernel
- Moved to infrastructure
- A few abstractions on top of PHP. Improved versions of PHP functions (`dump`
and `var_dump`)
- Started to add higher-level abstractions (e.g. Mailer), built on the lower
  ones.
- Recently worked on PHPUnit assertions. Mailer in Symfony 4.4. Can test if an
  email is sent or queued
**Building flexible high-level abstractions on top of low-level ones**
### What's next?
- Mailer announced in London last year. New component.
- System emails? e.g. new customer, new invoice.
- Symfony Mailer = Built-in responsive, flexible, and generic system emails
- Twig with TwigExtraBundle
- Twig `inky-extra` package (Twig 1.12+)
- Zurb Foundation for Emails CSS stylesheet
- Twig `cssinliner-extra` package (Twig 1.12+)
- Optimised Twig layouts
- `SystemEmail` class extends templated email
- Can set importance,
- Customisable
- Always trying to keep flexible, so things can be overridden and customised
### Sending SMS messages
- new `Texter` and `SmsMessage` class for sending SMS messages
- Same abstraction as emails, but for SMS messages
- Based on HttpClient + Symfony Messenger and third-party providers (Twilio and
Nexmo) `twilio://` and `nexmo://`
- Can set via transport `$sms->setTransport('nexmo')`
- Extend the `SystemEmail` and do what you want
- Failover
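As a rough sketch, sending an SMS with the new classes might look like this (using the API as presented in the talk; the component was experimental at the time, so names may have changed since, and the phone number is illustrative):

```php
use Symfony\Component\Notifier\Message\SmsMessage;

// $texter is a Texter instance, typically injected by the framework
$sms = new SmsMessage('+447700900123', 'Your flight has been booked.');
$sms->setTransport('nexmo'); // or 'twilio', or rely on failover

$texter->send($sms);
```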
### Sending Messages
- Create `ChatMessage`
- Telegram and Slack
- `$message->setTransport('telegram')`, `$bus->dispatch($message)`
- Send to Slack **and** Telegram
- `SlackOptions` and `TelegramOptions` for adding emojis etc
- Common transport layer `TransportInterface`, `MessageInterface`
- Failover - e.g. if Twilio is down, send to Telegram
### New component - SymfonyNotifier
- Channels - email, SMS, chat
- Transport, slack, telegram, twilio
- Create a notification, arguments are message and transports (array)
- Receiver
- Customise notifications, `InvoiceNotification` extends `Notification`.
`getChannels`
- Override default rendering
- `ChatNotificationInterface` - `asChatMessage()`
- Semantic configuration
- `composer req twilio-notifier telegram-notifier`
- Channels
- Mailer
- Chatter
- Texter
- Browser
- Pusher (iOS, Android, Desktop native notifications)
- Database (web notification centre)
- **A unified way to notify Users via a unified Transport layer**
- Each integration is only 40 lines of code
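A minimal sketch of that unified API, based on the stable 5.x Notifier component (which may differ slightly from the experimental version shown in the talk):

```php
use Symfony\Component\Notifier\Notification\Notification;
use Symfony\Component\Notifier\Recipient\Recipient;

// $notifier is a NotifierInterface instance, injected by the framework.
// The second argument lists the channels to deliver on.
$notification = new Notification('Invoice #123 is overdue', ['email', 'sms']);

$notifier->send($notification, new Recipient('user@example.com', '+447700900123'));
```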
### What about a SystemNotification?
- Autoconfigured channels
- `new SystemNotification`, `Notifier::getSystemReceivers`
- Importance, automatically configures channels
- Different channels based on importance
- `ExceptionNotification` - get email with stack trace attached
Notifier
- send messages via a unified api
- send to one or many receivers
- Default config or a custom one
### How can we leverage this new infrastructure?
- `Monolog NotifierHandler` - triggered on `Error` level logs
- Uses notified channel configuration
- Converts Error level logs to importance levels
- Configurable like other Notifications
- 40 lines of code
- Failed Messages Listener - 10 lines of glue code
- **Experimental component in 5.0**
- Can't be in 4.4 as it's an LTS version
- First time an experimental component is added
- Stable in 5.1
## Queues, busses and the Messenger component (Tobias Nyholm)
- Stack is top and bottom - last in, first out
- Queue is back and front - first in, first out
### 2013
- Using Symfony, used 40 or 50 bundles in a project - too much information!
- Used to copy and paste, duplicate a lot of code
- Testing your controllers - controllers as services?
- Controllers are 'comfortable'
- Tried adding `CurrentUserProvider` service to core, should be passed as an
argument. Cannot test.
- 'Having Symfony all over the place wasn't the best thing' - when to framework
(Matthias Noback)
- Hexagonal architecture
- Keep your kernel away from infrastructure. Let the framework handle the
infrastructure.
- Controller -> Command -> Command Bus -> `CommandHandler`
#### What did we win?
- Can leverage Middleware with a command bus
- Queues as a service (RabbitMQ)
- Work queue - one producer, multiple consumers
- Queues should be durable - messages are also stored on disk, consumers should
acknowledge a message once a message is handled
- Publish/subscribe
- Producer -> Fanout/direct with routing (multiple queues) -> multiple
consumers
- Topics - wildcards
### 2016
- New intern. Understand everything, 'just PHP'. Plain PHP application, not
'scary Symfony'
### Symfony Messenger
- `composer req symfony/messenger` - best MessageBus implementation
- Message -> Message bus -> Message handler
- Message is a plain PHP class
- Handler is a normal PHP class which is invokable
- `messenger.message_handler` tag in config
- Autowire with `MessageHandlerInterface`
- What if it takes 20 seconds to send a message? Use asynchronous.
- Transports as middleware (needs sender, receiver, configurable with DSN,
encode/decode). `MESSENGER_DSN` added to `.env`
- Start consumer with `bin/console messenger:consume-messages`. Time limit with
  `--time-limit 300`
- PHP Enqueue - production ready, battle-tested messaging solution for PHP
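The message/handler pair described above can be sketched as two plain PHP classes (the class names here are hypothetical):

```php
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

// The message is a plain PHP class - just data, no framework coupling
final class SendWelcomeEmail
{
    private $userId;

    public function __construct(string $userId)
    {
        $this->userId = $userId;
    }

    public function getUserId(): string
    {
        return $this->userId;
    }
}

// The handler is invokable; implementing MessageHandlerInterface
// lets Symfony autowire it without manual tagging
final class SendWelcomeEmailHandler implements MessageHandlerInterface
{
    public function __invoke(SendWelcomeEmail $message)
    {
        // send the email for $message->getUserId()...
    }
}
```

Dispatching is then a one-liner, `$bus->dispatch(new SendWelcomeEmail('42'));`, where `$bus` is an injected `MessageBusInterface`.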
### Issues
- Transformers, takes an object and transforms into an array -
`FooTransformer implements TransformerInterface`.
- Don't break other apps by changing the payload.
#### Multiple buses
- Command bus, query bus, event bus
- Separate actions from reactions
#### Envelope
- Stamps for metadata - has the item been on the queue already?
#### Failures
- Requeue, different queue or same queue after a period of time
- Failed queue 1 every minute, failed queue 2 every hour - temporary glitches or
a bug?
#### Creating entities
- What if two users register at the same time? Use UUIDs rather than IDs.
- Symfony validation - can be used on messages, not just forms.
- Cache everything
- Option 1: HTTP request -> Thin app (gets responses from Redis) -> POST to
queue. Every GET request would warm cache
- Option 2: HTTP request -> Thin app -> return 200 response -> pass to workers
- Tip: put Command and CommandHandlers in the same directory
## HttpClient (Nicolas Grekas)
- New Symfony component, released in May
- HttpClient contracts - a separate package that contains interfaces
- Symfony
- PHP-FIG
- Httplug
- `HttpClient::create()`. `$client->get()`
- JSON decoded with error handling
- Used on symfony.com website (#1391). Replaces Guzzle `Client` for
`HttpClientInterface`
- Object is stateless, Guzzle is not. Doesn't handle cookies, cookies are state
- Remove boilerplate - use `toArray()`
- Options as third argument - array of headers, similar to Guzzle
### What can we do with the Response?
- `getStatusCode(): int`
- `getHeaders(): array`
- `getContent(): string`
- `toArray(): array`
- `cancel(): void`
- `getInfo(): array` - metadata
- Everything is lazy!
- 80% of use-cases covered
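A minimal example of the boilerplate-free style mentioned above (the URL is illustrative):

```php
use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();

// Nothing happens on the network yet - responses are lazy
$response = $client->request('GET', 'https://api.github.com/repos/symfony/symfony');

$statusCode = $response->getStatusCode();

// JSON-decoded, with error handling built in
$data = $response->toArray();
```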
### What about PSR-18?
- Decorator/adapter to change to PSR compatible
- Same for Httplug
### What about the remaining 20%?
- Options are part of the abstraction, not the implementation
#### Some of the options
- `timeout` - control inactivity periods
- `proxy` - get through a http proxy
- `on_progress` - display a progress bar / build a scoped client
- `base_url` - resolve relative URLS / build a scoped client
- `resolve` - protect webhooks against calls to internal endpoints
- `max_redirects` - disable or limit redirects
- Robust and failsafe by default
- Streamable uploads - `$mimeParts->toIterable()`.
- Download a file
```php
foreach ($client->stream($response) as $chunk) {
    // ...
}
```
* Responses are lazy, requests are concurrent
* Asynchronous requests. Reading in network order
```php
foreach ($client->stream($responses) as $response => $chunk) {
    if ($chunk->isLast()) {
        // a $response completed
    } else {
        // a $response got network activity or a timeout
    }
}
```
- 379 requests completed in 0.4s!
- `stream()` has a second argument, the max number of seconds to wait before
  yielding a timeout chunk
- `ResponseInterface::getInfo()` - get response headers, redirect count and URL,
start time, HTTP method and code, user data and URL
- `getInfo('debug')` - displays debug information
### The components
- `NativeHttpClient` and `CurlHttpClient`
- both provide
- 100% contracts
- secure redirects
- extended (time) info
- transparent HTTP compression and (de)chunking
- automatic HTTP proxy configuration via env vars
#### `NativeHttpClient`
- is most portable, works for everyone
- based on HTTP stream wrapper with fixed redirect logic
- blocking until response headers arrive
#### `CurlHttpClient`
- Requires ext-curl with fixed redirection logic
- Multiplexing response headers and bodies
- Leverages HTTP/2 and PUSH when available
- Keeps connections open also between synchronous requests, no DNS resolution so
things are faster
#### Decorators
- ScopingHttpClient - auto-configure options based on request URL
- MockHttpClient - for testing, doesn't make actual HTTP requests
- CachingHttpClient - adds caching on a HTTP request
- Psr18Client
- HttplugClient
- TraceableHttpClient
### Combining
#### FrameworkBundle/Autowiring
```yaml
framework:
    http_client:
        max_host_connections: 4
        default_options:
            # ...
        scoped_client:
            # ...
```
#### HttpBrowser
- HttpClient + DomCrawler + CssSelector + HttpKernel + BrowserKit
- RIP Goutte!
### Coming in 4.4...
- `max_duration`
- `buffer` based on a callable
- `$chunk->isInformational()`
- `$response->toStream()`
- Async-compatible extensibility, when decoration is not enough
`composer req symfony/http-client`
## Symfony Checker is coming (Valentine Boineau)
- Static analysis tool for Symfony
- Does a method exist?
- Is it deprecated?
- insight.symfony.com
- @symfonyinsight
- Released soon
### Differences
- Specialise in Symfony - can see more relevant things
- Different interface to other services
## Feeling unfulfilled by SPA promises? Go back to Twig (Dan Blows)
### Why SPAs?
- A way of working where the front end loads JS, CSS and images at the beginning
  of the request, then sends HTTP requests (XHR/AJAX) to the back-end
- no full page refresh
- Supposed to be much quicker
- 'Right tool for the job' - JS on the front-end, PHP on the back-end
- Division of responsibility == faster development
- Reusable API - Api -> Mobile App and SPA - easy to add another consumer
- Easier to debug?
### Why not SPAs?
- Lots of HTTP requests (400 to load the initial page on one project) == slow
front end
- Blurred responsibilities == tightly coupled teams
- Harder to debug - bugs fall between systems and teams. Huge gap between
  front-end and back-end, passing responsibilities.
- You can fix these problems in SPAs, but is it worth it?
- Examples of good SPAs - Trello, Flickr
### Using Twig as an alternative to an SPA?
#### Faster UI
If you're trying to speed things up, find out where the problem is.
- Browser tools
- Web Debug Toolbar
- Blackfire
- Optimise and monitor
#### Speed up Twig
- Speeding up Symfony
- ext/twig (PHP5 only, not PHP 7)
- Store compiled templates in Opcache, make sure it's enabled
- Render assets through the webserver (assetic not running all the time)
#### Edge side includes
- Component cached differently to the rest of the page
- Varnish/Nginx
- `render_esi`
- News block that caches frequently, rest of the page
#### HTTP/2 with Weblink
- slow finding CSS files to load - 'push' over CSS files, doesn't need to wait
- `preload()` - https://symfony.com/doc/current/web_link.html
#### Live updating pages
- Instantly update when sports results are updated, news articles are added
- Mercure - https://github.com/symfony/mercure
- LiveTwig - whole block or whole section, and live update `render_live`
- Turbolinks - replace whole body, keeps CSS and JS in memory. Merges new stuff
in. `helthe/turbolinks`
- ReactPHP - shares kernel between requests
### Writing better code with Twig
- Keep templates simple. Avoid spaghetti code, only about UI. HTML or small
amounts of Twig.
- Avoid delimiter chains
  - Bad: `blog_post.authors.first.user_account.email_address`
  - Good: `{{ blog_post.authors_email_address }}`
  - Chains are brittle and slow
* Filters
- Use filters to be precise
- Custom filters
- Avoid chains. Can cause odd results. Create a new filter in PHP
* Functions
- Write your own functions
- Simpler templates
- Get data, can use boolean statements
* Components
- Break a page into components rather than one large page
- `include()`
- Use `only` to pass only that data. Less tightly coupled.
* `render` calls the whole of Symfony, boots Kernel, can be expensive and slow
* Loosely couple templates and controllers
- Keep responses simple
- What makes sense
- if you need extra data in the template, get it in the template
* View models
- Mixed results
- `BlogPostViewModel`
- Can result in boilerplate code
- Can be useful if the view model is different to the Entity
* DRY
- "Don't repeat yourself"
- Faster development
- Separate UI tests from back-end tests. Different layers for different teams.
People don't need to run everything if they are only changing certain
things.
* Help your front end
- Webpack - Encore
- Type hinting in functions and filters, easier to debug
- Logging
- Friendly exceptions - help front-end devs by returning meaningful, readable
errors
- Web Debug Toolbar and Profiler, provide training for toolbar and profilers
- Twig-friendly development environment - Twig support in IDEs and text
editors
SPAs are sometimes the right solution. Why do they want to use one, and can the
same benefits be achieved with Twig?
3 most important points:
- Profile, identify, optimise, monitor
- Loosely couple templates to your app code
- Help your front ends - put your front end developers first
- You don't need to use a SPA for single pages, use JavaScript for that one
page. It doesn't need to be all or nothing.
## BDD Your Symfony Application (Kamil Kokot)
- Applying BDD to Sylius
- 2 years since release of Sylius (Symfony 2 alpha)
- The business part is more important than the code part
### What is BDD?
- Behaviour driven development. Combines TDD and DDD, into an agile methodology
- Encourages communication and creates shared understanding
- Living, executable documentation that non-programmers understand. Always
correct.
- Feature file
- Feature
- Scenario - example of the behaviour for this feature. Simple, atomic. (e.g.
I need a product in order to add it to a cart)
- In order to...
- Who gets the benefit?
### BDD in practice
- Feature: booking flight tickets
- Scenario: booking flight ticket for one person
- Given there are the following flights...
- When I visit '/flight/LTN-WAW'
- Then I should be on '/flight/LTN-WAW'
- And I should see "Your flight has been booked." in "#result"
- In the BDD way - what is the business logic? What is the value for this
scenario? What is the reason 'why', and who benefits from this?
- We just need to know that there are 5 seats left on a flight
- Talk and communicate about how the feature is going to work - not just
developers
- BDD aids communication
- Questions we can ask
- Can we get a different outcome when the context changes?
- When there was only one seat available
- When there were no available seats
- Can we get the same outcome when the event changes? Can we change 'When' and
'Then stays the same'
- When it is booked for an adult and a child
- When it is booked for an adult
- Does anything else happen that is not mentioned?
- Generate an invoice if a seat is booked
- A pilot would like to get a notification that a seat was booked.
* Figuring out the rules
- Adults are 15+ years old
- Children are 2-14 years old
- Infants and children can only travel with an adult
- We don't allow for overbooking
- Translating rules into examples
- Add a new scenario for each rule - e.g. don't allow over booking
- "And the flight should be no longer available..."
### Behat
- Used to automate and execute BDD tests, also SpecBDD
- maps steps to PHP code
- Given a context, when an event, then an outcome
- Domain Context, API context
- Class implements `Context`, with annotations for `@Given`, `@When`, `@Then`.
  Allows for arguments and regular expressions
- Suites: change what code is executed, and what scenarios are executed. context
and tags
- FriendsOfBehat SymfonyExtension - integrates Behat with Symfony
- Contexts registered as Symfony services - inject dependencies, service as a
context in Behat. Need to be 'public' for it to work
- Reduces boilerplate code. Supports autowiring.
- Zero configuration
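A context class along these lines might look like the following sketch (the step definitions are hypothetical, based on the flight-booking example above):

```php
use Behat\Behat\Context\Context;

final class FlightBookingContext implements Context
{
    /**
     * @Given there are :count seats left on flight :flightNumber
     */
    public function thereAreSeatsLeft(int $count, string $flightNumber): void
    {
        // arrange: set up a flight with the given availability...
    }

    /**
     * @When I book a ticket on flight :flightNumber
     */
    public function iBookATicket(string $flightNumber): void
    {
        // act: perform the booking...
    }

    /**
     * @Then the flight should no longer be available
     */
    public function theFlightShouldNoLongerBeAvailable(): void
    {
        // assert: check the remaining availability...
    }
}
```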
### Domain context
- `Given` verb matches `@Given` annotation. Same for `When` and `Then`.
- Transformers, type hint name string, return Client instance
### API context
- inject `FlightBookingService` and `KernelBrowser`
- Use `$this->kernelBrowser->request()`
- Use the `assert()` function within `@Then`
### Back to reality - how it's done with Sylius
- Business part applies to all context. Start talking about what needs to be
done, start communicating
- Implement contexts for UI and API
- 12716 steps, 1175 scenarios, 8 min 8 sec, 2.4 scenarios /sec
- 12x faster than JS (17 min 48 sec, 0.19 scenario / sec)
- Treat test CI environment like production
- Turn off debug settings, add caching
- Enable OPcache
- Write features in a natural way
- Too many setup steps - merge steps. less visual debt. e.g. Create currency,
zone and locale when creating a store
- Avoid scenarios that are too detailed. You should specify only what's
important to this scenario.
## Migrating to Symfony one route at a time (Steve Winter)
- New client with an old application, built in an old version of another
framework with unusual dependency management, no tests, no version control and
deploying via FTP. Done over a ~3 month period.
- Subscription based index of suppliers
- New requirements to implement by the client
- Our requirements: Needed a deployment process, make it testable, fix the build
chain
- Solution attempt 1: Migrate to a new version of the current framework
- Minor template and design changes were fine
- Modify features, add new dependencies.
- Solution attempt 2: Upgrade to the latest version - same outcome due to
multiple BC breaks (no semver), lots of manual steps
- Solution attempt 3: Symfony!
- Semver! Backwards compatibility promise
- Symfony app to run in parallel, Apache proxy rules and minor changes to the
legacy app, added data transfer mechanisms
- Anything new done in Symfony
- Installed on the same server with its own vhost but not publicly accessible
- Deployed independently of legacy app
### Apache proxy rules
Proxy `/public` to symfony app
### Legacy app
- Shared cookie for single login between apps - user account details (name etc),
session details (login time)
### Added functionality
- Built in Symfony
- new proxy rules for new routes
- Add menu links to legacy app menu
- How do we show how many reminders are active?
- Symfony based API called from the front-end
### Migrating routes
- Rebuilt or extend in Symfony app
- Test and deploy, then update the apache config to add new proxy rules
### A gotcha
- Legacy app uses CSRF
- Needed to track the token, added to shared cookie and pass through to the
Symfony side
### Storing data
- Both apps using the same data with different credentials
- Some shared tables, some tables are specific to each app
### Remaining challenges
- User session management, still handled by legacy app
- Templating/CSS - two versions of everything
- Next step: move all CSS to Symfony
### Summary
- Add Symfony app, Apache proxy rules for routes
- User transfer mechanisms
- New functionality added in Symfony
### Is this right for you?
It depends. Fine for a 'modest' size. Use a real proxy for larger scale apps,
use different servers with database replication.
## Closing Keynote: The fabulous World of Emojis and other Unicode symbols (Nicolas Grekas)
- ASCII. Still used today. Map between the first 128 numbers to characters. OK
for UK and US.
- 256 numbers in Windows-1252 (character sets). Each country had their own set.
- It's legacy. 0.2% for Windows-1252. 88.8% for UTF-8 (Feb 2017)
- Unicode: 130k characters, 135 scripts (alphabets)
- Validation errors using native alphabet - e.g. invalid last name when
submitting a form
- 17 planes, each square is 255 code points
- Emojis are characters, not images
- A glyph is a visual representation of a character
- From code points to bytes
- UTF-8: 1,2,3 or 4 bytes
- UTF16: 2 or 4 bytes
- UTF-32: 4 bytes
- UTF-8 is compatible with ASCII
- Case sensitivity - 1k characters are concerned. One uppercase letter, two
lower case variants. Turkish exception (similar looking letters that are
different letters with different meanings). Full case folding.
- Collations - ordering depends on the language. 'ch' in Spanish is a single
character.
- Single number in unicode to represent accents. Combining characters.
- Composed (NFC) and decomposed (NFD) forms - normalisation for comparison
- Grapheme clusters - multiple characters, but one letter as you write it
(separate characters for letters and accent)
- Emojis - combining characters, e.g. combine face with colour. Different codes
and character names. Also applies to ligatures. A way to combine several
images together into one single visual representation.
### Unicode fundamentals
- uppercase, lowercase, folding
- compositions, ligatures
- comparisons - normalisations and collations
- segmentation: characters, words, sentences and hyphens
- locales: cultural conventions, transliterations
- identifiers & security, confusables
- display: direction, width
### Unicode in practice
- MySQL - `utf*_*`. `SET NAMES utf8mb4` for security and storing emojis. Cannot
store emojis with `utf8`
### In PHP
- `mb_*()`
- `iconv_*()`
- `preg_*()`
- `grapheme_*()` `normalizer_*()`
- `symfony/polyfill-*` - pure PHP implementation
- Made a component - **symfony/string** -
https://github.com/symfony/symfony/pull/33553
- Object-oriented API for strings. Immutable value objects
- `AbstractString`
- `GraphemeString`
- `Utf8String`
- `BinaryString`
* AbstractString - Methods to serialize, get length, to binary or grapheme or
utf8
- Methods for starts with, ends with, is empty, join, prepend, split, trim,
title etc

---
title: Looking forward to DrupalCamp London
date: 2018-02-27
excerpt: This weekend is DrupalCamp London 2018. Ill be there along with a number of my Microserve colleagues.
tags:
- drupal
- drupalcamp
- drupalcamp-london
- speaking
---
This weekend is [DrupalCamp London 2018][1]. Ill be there along with a number
of my [Microserve][2] colleagues.
I look forward to DrupalCamp London every year, partly because it was the first
DrupalCamp that I attended back in 2014. It was also the first DrupalCamp that I
[gave a talk][3] at, when I presented a session about Git Flow having given only
one user group talk before.
Ive presented sessions at every DrupalCamp London since (including two last
year), and Im lucky enough to be [speaking again this year][4] due to one of
the originally announced speakers no longer being able to make it to the event.
Here are some other sessions that Im hoping to see (in no particular order):
- Keynote by [Ryan Szrama][5] from [Commerce Guys][6]
- [Drupal 8 Services And Dependency Injection](https://drupalcamp.london/session/drupal-8-services-and-dependency-injection)
by Phil Norton
- [Growing developers with Drupal](https://drupalcamp.london/session/growing-developers-drupal)
by Fran Garcia-Linares (fjgarlin)
- [How to make it easier for newcomers to get involved in Drupal](https://drupalcamp.london/session/how-make-it-easier-newcomers-get-involved-drupal)
by heather
- [Lets take the best route - Exploring Drupal 8 Routing System](https://drupalcamp.london/session/lets-take-best-route-exploring-drupal-8-routing-system)
by surbhi
- [New recipe of Decoupling: Drupal 8, Symfony and Slim Framework](https://drupalcamp.london/session/new-recipe-decoupling-drupal-8-symfony-and-slim-framework)
by Jyoti Singh
- [Plugin API by examples](https://drupalcamp.london/session/plugin-api-examples)
by Gabriele (gambry)
- [Value of mentorship in the community](https://drupalcamp.london/session/value-mentorship-community)
by Hussain Abbas (hussainweb)
- [Warden - Helping Drupal Agencies Sleep at Night](https://drupalcamp.london/session/warden-helping-drupal-agencies-sleep-night)
by Mike Davis
Unfortunately there are some time slots where Id like to see more than one of
the talks (including when Im going to be speaking). This regularly happens at
conferences, but Ill look forward to watching those on [YouTube][7] after the
event.
Im also looking forward to catching up with former colleagues, spending some
time in the "hallway track" and hopefully doing some sprinting too!
## Finally
For nostalgia, [heres the blog post][0] that I wrote before I attended my first
DrupalCamp London.
See everyone this weekend!
[0]: {{site.url}}/blog/2014/02/09/drupalcamp-london-2014
[1]: https://drupalcamp.london
[2]: {{site.companies.microserve.url}}
[3]: {{site.url}}/talks/git-flow
[4]: {{site.url}}/talks/deploying-drupal-fabric
[5]: http://ryanszrama.com
[6]: https://commerceguys.com
[7]: https://www.youtube.com/channel/UCsaB96zszIP4Y3czs-ndiIA

---
title: Yay, the Mediacurrent Contrib Half Hour is Back!
date: 2018-03-02
excerpt: Mediacurrent's "contrib half hour sessions" are back.
tags:
- contribution
- drupal
- open-source
has_tweets: true
---
Back in November, [Mediacurrent introduced][1] the contrib half hour - a weekly
online meeting to provide guidance and assistance on contributing to Drupal and
Drupal projects. A range of topics were covered in the first few sessions,
including finding and testing bug fixes, Composer, Drush, and how to re-roll
patches.
From Damien's [introductory blog post][2]:
> Not sure what this whole "patch" thing is? Have a core change that you can't
> quite finish? Running into a problem with a contrib module, or a theme, or a
> 3rd party library, and not sure how to fix it? New to maintaining a module and
> unsure of what to do next? Wondering how to get your module through the
> security opt-in process? Is your project's issue queue getting you down? Join
> us every Thursday at noon EST for the Mediacurrent Contrib Half Hour where
> we'll be available to help solve contrib challenges.
>
> Each week we'll host a live meeting to give step-by-step guidance on some best
> practices for contributing to Drupal, and provide Q and A assistance for our
> favorite open source (OSS) content management system (CMS). The meetings will
> be lead by yours truly, Damien McKenna, a prolific contributor to the Drupal
> community, and my coworkers here at Mediacurrent.
There is also an [updates blog post][3] that continues to show the latest
information, and the video recordings are [uploaded to YouTube][0] after the
session. Here is the first one from November:
<!-- <div class="mb-4 talk-video">
<iframe width="678" height="408" src="//www.youtube.com/embed/8xHE5y1rA1g" frameborder="0" allowfullscreen></iframe>
</div> -->
I enjoyed watching the first few videos, as I’m always interested in
contributing to Drupal and open-source and how to encourage it, but then no new
videos were uploaded for a while and I hoped that it hadnt faded away.
Im glad to see today that its back and that all of the previous videos have
been uploaded and added to the [YouTube playlist][0], and that [on the update
post][3] there are scheduled topics for the rest of this month including
documentation and automated testing.
<div class="mb-4">
<blockquote class="twitter-tweet" data-cards="hidden" data-lang="en"><p lang="en" dir="ltr">All of the <a href="https://twitter.com/mediacurrent?ref_src=twsrc%5Etfw">@mediacurrent</a> <a href="https://twitter.com/hashtag/ContribHalfHour?src=hash&amp;ref_src=twsrc%5Etfw">#ContribHalfHour</a> videos have been uploaded to our Youtube channel: <a href="https://t.co/1sWZT5sRSN">https://t.co/1sWZT5sRSN</a><br>Note: I accidentally forgot to save the Feb 22nd video, sorry :-\</p>&mdash; Damien McKenna (@DamienMcKenna) <a href="https://twitter.com/DamienMcKenna/status/969668677980315649?ref_src=twsrc%5Etfw">March 2, 2018</a></blockquote>
</div>
I do enjoy watching these, and I like both the presentation and Q&A format that
they alternate between. Ill look forward to catching up over the next few days,
and to hopefully seeing them continue to be uploaded after future meetings.
Thanks Damien and Mediacurrent!
[0]: https://www.youtube.com/playlist?list=PLu-MxhbnjI9rHroPvZO5LEUhr58Yl0j_F
[1]:
https://www.mediacurrent.com/blog/introducing-mediacurrent-contrib-half-hour
[2]:
https://www.mediacurrent.com/blog/introducing-mediacurrent-contrib-half-hour
[3]:
https://www.mediacurrent.com/blog/updates-mediacurrent-contrib-half-hour-weekly-meeting

---
title: 'Migrating to Drupal 8: Introduction'
excerpt: An introduction to the 'Migrating to Drupal 8' blog post series.
date: 2020-08-12
tags:
- drupal
- drupal-8
- drupal-planet
---
I recently finished porting this website from a static site generator to Drupal 8, meaning that this site has now been powered by three different major versions of Drupal (6, 7 and 8) as well as by two static site generators since it was first launched in early 2010.
The majority of the content was imported using migrations from JSON feeds that I created. This included:
- Blog tags
- Blog posts
- Talks
- Redirects
In some follow-up posts, I'll be looking at each migration separately, describing any issues and look at how it was used to import its respective content.
I'll update this post with the links to the follow-up posts, and they are also available from the [blog series' page](/taxonomy/term/165).

---
title: How to Define a Minimum Drupal Core Version
date: 2015-04-03
excerpt: How to define a minimum Drupal core version for your module or theme.
tags:
- drupal
- drupal-7
- drupal-planet
meta:
og:
title: 'How to Define a Minimum Drupal Core Version'
excerpt: 'How to define a minimum Drupal core version for your module or theme.'
type: article
---
This week, my first code patch was
[committed to Drupal core](https://www.drupal.org/node/2394517#comment-9773143).
The patch adds the `user_has_role()` function to the user module, to simplify
the way to check whether a user in Drupal has been assigned a specific role.
This is something that I normally write a custom function for each project, but
it's now available in Drupal core as of
[7.36](https://www.drupal.org/drupal-7.36-release-notes).
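For example, checking whether the current user has a given role becomes a short, readable check (using `user_role_load_by_name()` to look up the role ID; the "editor" role name is illustrative):

```php
// Load the role to get its ID, then check the current user.
// user_has_role() accepts a role ID and, optionally, an account
// object; it defaults to the currently logged-in user.
$role = user_role_load_by_name('editor');

if (user_has_role($role->rid)) {
  // The current user has the "editor" role.
}
```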
But what if someone is using a core version less than 7.36 and tries using the
function? The site would return an error because that function wouldn't exist.
If you're building a new Drupal site, then I'd assume that you're using the latest
version of core, or you have the opportunity to update it when needed. But what
if you're writing a contrib module? How can you be sure that the correct minimum
version of core is present?
## Setting Dependencies
What I'm going to be doing for my contrib projects is defining a minimum version
of Drupal core that the module is compatible with. If this dependency isn't met,
the module won't be able to be enabled. This is done within your module's .info
file.
### Adding a Simple Dependency
You can define a simple dependency for your module by adding a line like this to
your project's .info file:
```ini
dependencies[] = views
```
This would make your module dependent on having the
[Views](https://www.drupal.org/project/views) module present and enabled, which
you'd need if you were including views as part of your module, for example.
### Adding a Complex Dependency
In the previous example, our module would enable if _any_ version of Views was
enabled, but we need to specify a specific version. We can do this by including
version numbers within the dependencies field in the following format:
```ini
dependencies[] = modulename (major.minor)
```
This can be for a specific module release or a branch name:
```ini
dependencies[] = modulename (1.0)
dependencies[] = modulename (1.x)
```
We can also use the following as part of the field for extra granularity:
- `=` or `==` - equals (this is the default)
- `>` - greater than
- `<` - less than
- `>=` - greater than or equal to
- `<=` - less than or equal to
- `!=` - not equal to
In the original scenario, we want to specify that the module can only be enabled
on Drupal core 7.36 or later. To do this, we can use the "greater than or equal
to" option.
```ini
dependencies[] = system (>=7.36)
```
Because we need to check for Drupal's core version, we're using the system
module as the dependency and specifying that it needs to be either equal to or
greater than 7.36. If this dependency is not met, e.g. Drupal 7.35 is being
used, then the module cannot be enabled rather than showing a function not found
error for `user_has_role()` when it is called.
![A screenshot of the modules page showing System as a dependency for a custom module.](/images/blog/minimum-drupal-version-d7.png)
## External Links
- [Writing module .info files (Drupal 7.x)](https://www.drupal.org/node/542202#dependencies)

---
title: My first blog post published for Inviqa
excerpt: My first blog post has been published on the inviqa.com website.
date: 2020-04-29
tags:
- drupal
- testing
---
My first blog post was published on the Inviqa website last week. It's an introduction to automated testing in Drupal, which also includes a recap of the workshop that I recently gave at DrupalCamp London.
The blog post can be found at <https://inviqa.com/blog/drupal-automated-testing-introduction>, and there's more information about the workshop specifically at <https://github.com/opdavies/workshop-drupal-automated-testing>.
