fix: remove unwanted URLs for daily emails

This commit is contained in:
Oliver Davies 2022-10-13 08:40:06 +01:00
parent 69112f3976
commit 18a1dd0e03
53 changed files with 11 additions and 11 deletions


@ -0,0 +1,75 @@
---
permalink: archive/2022/08/12/git-worktrees-docker-compose
title: Git Worktrees and Docker Compose
pubDate: 2022-08-12
---
I've recently started trialling Git worktrees again as part of my development workflow.
If you're unfamiliar with Git worktrees, they allow you to have multiple branches of a repository checked out at the same time in different directories.
For example, this is what I see within my local checkout of my website repository:
```
.
├── config
├── HEAD
├── main
│   ├── ansible
│   ├── nginx
│   ├── README.md
│   └── website
├── new-post
│   ├── ansible
│   ├── nginx
│   ├── README.md
│   └── website
├── objects
│   ├── info
│   └── pack
├── packed-refs
├── refs
│   ├── heads
│   └── tags
└── worktrees
├── main
└── new-post
```
The first thing that you'll notice is that, because it's a bare clone, it looks a little different from what you usually see in a Git repository.
Each worktree has its own directory, so my `main` branch is inside the `main` directory.
If I need to work on a different branch, such as `new-post`, then I can create a new worktree, move into that directory and start working. I don't need to commit or stash any in-progress work or switch branches.
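As a rough sketch, this layout can be created with a bare clone and a worktree per branch (the repository URL is a placeholder):
```bash
# Clone the repository as a bare repo, then add one worktree per branch.
git clone --bare <repository-url> website
cd website

# Check out the existing "main" branch into ./main, and create a new
# "new-post" branch in ./new-post.
git worktree add main
git worktree add new-post
```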
## Complications with Docker Compose
I use Docker and Docker Compose for my projects, and this caused some issues for me the last time that I tried using worktrees.
By default, Docker Compose will use the name of the directory that the Compose file is in to name its containers. If the directory name is "oliverdavies-uk", then the containers will be `oliverdavies-uk-web_1`, `oliverdavies-uk-db_1` etc.
This doesn't work so well if the directory is a worktree called "main" or "master" as you'll have containers called `main_web_1` or `master_db_1`.
The way to solve this is to use the `COMPOSE_PROJECT_NAME` environment variable.
If you prefix Docker Compose commands with `COMPOSE_PROJECT_NAME=your-project`, or add it to an `.env` file (Docker Compose will load this automatically), then this will override the prefix in the container names to be `your-project-{service}`.
## Container names per worktree
Whilst you could use the same Compose project name within all of your worktrees, I prefer to include the worktree name as a suffix - something like `my-project-main` or `my-project-staging` - and keep these stored in an `.env` file in each worktree's directory.
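For example, the `.env` file in a worktree's directory could contain a single line like this:
```
COMPOSE_PROJECT_NAME=my-project-main
```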
As each worktree now has unique container names, I can have multiple instances of a project running at the same time, and each worktree will have its own separate data - meaning that I can make changes and test something in one worktree without affecting any others.
You can also use the `COMPOSE_PROJECT_NAME` variable inside Docker Compose files.
For example, if you use Traefik and need to override the host URL for a service, the string will be interpolated and the project name injected as you'd expect.
```yaml
labels:
  - "traefik.http.routers.${COMPOSE_PROJECT_NAME}.rule=Host(
      `${COMPOSE_PROJECT_NAME}.docker.localhost`,
      `admin.${COMPOSE_PROJECT_NAME}.docker.localhost`
    )"
```
This means that Traefik would continue to use a different URL for each worktree without you needing to make any changes to your Docker Compose file.


@ -0,0 +1,47 @@
---
permalink: archive/2022/08/13/i-wrote-a-neovim-plugin
pubDate: 2022-08-13
title: I wrote a Neovim plugin
tags:
- neovim
- open-source
---
I enjoy writing and working with open-source software, going back to when I started working with PHP and Drupal in 2007.
Since then, I've written and maintained a number of Drupal modules and themes, PHP libraries, npm packages, Ansible roles and Docker images - all of which are available on my GitHub and Drupal.org pages.
Just over a year ago, [I switched to using Neovim full-time](/blog/going-full-vim) for my development and DevOps work, and last week, I wrote my first Neovim plugin, written in Lua.
I've used Lua to configure Neovim but this is the first time that I've written and open-sourced a standalone Neovim plugin.
It's called [toggle-checkbox.nvim](https://github.com/opdavies/toggle-checkbox.nvim) and is used to toggle checkboxes in Markdown files - something that I use frequently for to-do lists.
For example, this is a simple list containing both checked and unchecked checkboxes:
```markdown
- [x] A completed task
- [ ] An incomplete task
```
To toggle a checkbox, the `x` character needs to be either added or removed, depending on whether we're checking or unchecking it.
This is done by calling the `toggle()` function within the plugin.
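As a rough sketch (simplified, and not the plugin's actual implementation), the toggle logic could look something like this:
```lua
-- Toggle a Markdown checkbox on the current line (illustrative only).
local function toggle()
  local line = vim.api.nvim_get_current_line()

  if line:match("%[ %]") then
    line = line:gsub("%[ %]", "[x]", 1)
  else
    line = line:gsub("%[[xX]%]", "[ ]", 1)
  end

  vim.api.nvim_set_current_line(line)
end
```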
In my Neovim configuration, I've added a keymap to do this:
```lua
vim.keymap.set("n", "<leader>tt", function()
  require("toggle-checkbox").toggle()
end)
```
This means that I can press `<leader>tt` to either check or uncheck a checkbox. I could use Vim's replace mode to do this, but I really wanted to have one keymap that I could use for both.
As it's my first Neovim plugin, I decided to keep it simple.
The main `toggle-checkbox.lua` file is currently only 41 lines of code, and whilst there is an existing Vim plugin that I could have used, I was excited to write my own plugin for Neovim, to start contributing to the Neovim ecosystem, and add a Neovim plugin to my portfolio of open-source projects.
You can view the plugin at <https://github.com/opdavies/toggle-checkbox.nvim>, as well as my Neovim configuration (which is also written in Lua) as part of [my Dotfiles repository](https://github.com/opdavies/dotfiles/tree/main/roles/neovim/files).


@ -0,0 +1,36 @@
---
permalink: archive/2022/08/14/why-i-write-tests
pubDate: 2022-08-14
title: "Why I write automated tests"
tags: [testing]
---
In February 2012, I saw a tweet from Tim Millwood asking if anyone wanted to maintain or co-maintain a Drupal module called [Override Node Options](https://www.drupal.org/project/override_node_options).
It had more than 9,200 active installations at that time, with versions for Drupal 5, 6 and 7.
I said yes and became the module's maintainer.
The module now has versions for Drupal 7, 8 and 9, with (at the latest count, according to Drupal.org) 32,292 active installations - which makes it currently the 197th most installed module.
There are two main things that come to mind with this module, both related to automated testing.
Before I became the maintainer, a feature request had been created, along with a large patch file, to add some new permissions to the module. There were some large merge conflicts that stopped me from just committing the changes, but I was able to fix them manually and, because the tests still passed, ensure that the original functionality still worked. There weren't tests for the new permissions, but I committed the patch and added the tests later.
Without the tests to ensure that the original functionality still worked, I probably wouldn't have committed the patch and would have just closed the issue.
More recently, a friend and ex-colleague and I decided to refactor some of the module's code.
We wanted to split the `override_node_options.module` file so that each override was in its own file and its own class. This would make them easier to edit and maintain, and if anyone wanted to add a new one, they'd just need to create a new file for it and add it to the list of overrides.
Without the tests ensuring that the module still worked after the refactor, we probably wouldn't have done it, as the module was used on over 30,000 sites that I didn't want to break.
When I was learning about testing, I was working on projects where I was writing the code during the day and the tests in the evening on my own time.
I remember once when my manual testing had been fine, but when writing the test, I found that I'd used an incorrect permission name in the code, which was causing the test to fail. This was a bug that, rather than waiting for a QA Engineer or the client to discover and report it, I was able to fix locally before I'd even committed the code.
I also worked on an event booking and management website, where we had code responsible for calculating the number of available spaces for an event based on orders, determining the correct price based on the customer's status and the time until the event, creating voucher codes for new members and event leaders, and bulk messaging event attendees. All of the custom functionality was covered by automated tests.
The great thing about testing is that it gives you confidence that everything still works how you expect - not only when you wrote the code, but also in the future.
I've talked about this, and how to get started with automated testing in Drupal, in a presentation called [TDD - Test-Driven Drupal]({{site.url}}/talks/tdd-test-driven-drupal). If you want to find out more, the slides and a video recording are embedded there.


@ -0,0 +1,84 @@
---
permalink: archive/2022/08/15/using-run-file-simplify-project-tasks
pubDate: 2022-08-15
title: Using a "run" file to simplify project tasks
tags: ["php"]
---
Every project has its own set of commands that need to be run regularly.
From starting a local server or the project's containers with Docker or Docker Compose, to running tests, clearing a cache, or generating the CSS and JavaScript assets, these commands can get quite complicated, time-consuming, and error-prone to type over and over again.
One common way to simplify these commands is to use a `Makefile`.
A Makefile contains a number of named targets that you can reference, and each has one or more commands that it executes.
For example:
```makefile
# Start the project.
start:
	docker-compose up -d

# Stop the project.
stop:
	docker-compose down

# Run a Drush command.
drush:
	docker-compose exec php-fpm drush $(ARGS)
```
With this Makefile, I can run `make start` to start the project, and `make stop` to stop it.
Makefiles work well, but I don't use the full functionality that they offer, such as dependencies between targets, and passing arguments to a command - like the arguments for a Drush, Symfony Console, or Artisan command - doesn't work as I originally expected.
In the example above, to pass arguments to the `drush` command, I'd have to type `ARGS="cache:rebuild" make drush` for them to be added and for the command to work as expected.
An agency that I worked for created and open-sourced their own Makefile-like tool, written in PHP and built on Symfony Console. I gave a talk on it called [Working with Workspace]({{site.url}}/talks/working-with-workspace) and used it on some of my own personal and client projects.
## What I'm using now
The solution that I'm using now is a `run` file, which is something that I learned from Nick Janetakis' blog and YouTube channel.
It's a simple Bash file where you define your commands (or tasks) as functions, and then execute them by typing `./run test` or `./run composer require something`.
Here's the Makefile example, but as a `run` script:
```bash
#!/usr/bin/env bash

function help() {
  # Display some default help text.
  # See examples on GitHub of how to list the available tasks.
  echo "Usage: ./run <task> [arguments]"
}

function start {
  # Start the project.
  docker-compose up -d
}

function stop {
  # Stop the project.
  docker-compose down
}

function drush {
  # Run a Drush command with any additional arguments.
  # e.g. "./run drush cache:rebuild"
  docker-compose exec php-fpm drush "${@}"
}

# Execute the given task, or run "help" if none was given.
eval "${@:-help}"
```
As it's Bash, I can just use `$1`, `$2` etc to get specific arguments, or `$@` to get them all, so `./run drush cache:rebuild` works as expected and any additional arguments are included.
You can group tasks by having functions like `test:unit` and `test:commit`, and tasks can run other tasks. I use this for running groups of commands within a CI pipeline, and to extract helper functions for tasks like running `docker-compose exec` within the PHP container that other commands like `drush`, `console` or `composer` could re-use.
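As a sketch, a grouped task and a shared helper could look something like this (the function names and commands are only examples):
```bash
function _exec_php {
  # Helper: run a command inside the php-fpm container.
  docker-compose exec php-fpm "${@}"
}

function composer {
  _exec_php composer "${@}"
}

function test:commit {
  # The checks run by the pre-commit Git hook.
  composer validate
  _exec_php vendor/bin/phpunit --testsuite unit
}
```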
As well as running ad-hoc commands during development, I also use the run file to create functions that run Git pre-commit or pre-push hooks, deploy code with Ansible, or build, push or pull the project's latest Docker images.
I also use one within my Talks repository to generate PDF files using rst2pdf, present them using pdfpc, and generate thumbnail images.
For examples of `run` files that I use in my open-source code, [you can look in my public GitHub repositories](https://github.com/search?l=Shell&q=user%3Aopdavies+filename%3Arun&type=Code), and for more information, here is [Nick's blog post where I first found the idea](https://nickjanetakis.com/blog/replacing-make-with-a-shell-script-for-running-your-projects-tasks).


@ -0,0 +1,42 @@
---
permalink: archive/2022/08/16/what-are-git-hooks-why-are-they-useful
pubDate: 2022-08-16
title: "What are Git hooks and why are they useful?"
tags: ["git"]
---
In yesterday's email, I mentioned Git hooks but didn't go into any detail. So, what are they?
Git hooks are Bash scripts within your repository that are executed when certain events happen, such as before a commit is made or before a push to a remote.
By default, the script files need to be within the `.git/hooks` directory, have executable permissions, and be named to exactly match the name of the hook - e.g. `pre-push` - with no file extension.
If a hook script returns a non-zero exit code, then the process is stopped and the action doesn't complete.
This is useful if, for example, you or your team use a specified format for commit messages and you want to prevent the commit if the message doesn't match the requirements.
But the main benefit that I get from Git hooks is from the `pre-push` hook.
I use it to run a subset of the checks from the project's CI pipeline, to limit failures in the CI tool and fix simple errors before I push the code.
Typically, these are the quicker tasks such as ensuring the Docker image builds, running linting and static analysis, validating lock files, and some of the automated tests if they don't take too long to run.
If a build is going to fail because of something simple like a linting error, then I'd rather find that out and fix it locally than wait for a CI tool to fail.
Also, if you're utilising trunk-based development and continuous integration where team members are pushing changes regularly, then you want to keep the pipeline in a passing, deployable state as much as possible and prevent disruption.
But what have Git hooks got to do with the "run" file?
Firstly, I like to keep the scripts as minimal as possible and move the majority of the code into functions within the `run` file. This means that the scripts are only responsible for running functions like `./run test:commit` and returning the appropriate exit code, but also means that it's easy to iterate and test them locally without making fake commits or trying to push them to your actual remote repository (and hoping that they don't get pushed).
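For illustration, a hook script in this style can be as short as this (assuming the `run` file defines a `test:commit` task, as above):
```bash
#!/usr/bin/env bash

# Delegate to the run file; its exit code decides whether Git continues.
exec ./run test:commit
```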
Secondly, I like to simplify the setup of Git hooks with their own functions.
For security reasons, the `.git/hooks` directory cannot be committed and pushed to your remote, so hooks need to be enabled by each user within their own clone of the repository.
A common workaround is to put the scripts in a directory like `.githooks` and either symlink them to where Git expects them to be, or to use the `core.hooksPath` configuration option and change where Git is going to look.
I like to lower the barrier for any team members by creating `git-hooks:on` and `git-hooks:off` functions which either set or unset the `core.hooksPath`. If someone wants to enable the Git hooks, then they only need to run one of those commands rather than having to remember the name of the configuration option or manually create or remove symlinks.
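A sketch of what those functions might look like, assuming the hook scripts live in a `.githooks` directory:
```bash
function git-hooks:on {
  # Use the committed .githooks directory for this repository's hooks.
  git config core.hooksPath .githooks
}

function git-hooks:off {
  # Revert to Git's default .git/hooks directory.
  git config --unset core.hooksPath
}
```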
There are other Git hooks that can be used, but just using `pre-commit` and `pre-push` has saved me and the teams that I've worked on both Developer time and build minutes, provided quicker feedback and fewer disruptions in our build pipelines, and I like how simple it can be with custom functions in a `run` file.
Lastly, I've created <https://github.com/opdavies/git-hooks-scratch> as an example with a minimal `run` file and some example hooks.


@ -0,0 +1,39 @@
---
permalink: archive/2022/08/17/one-more-run-command-git-worktrees
pubDate: 2022-08-17
title: One more "run" command, for Git worktrees
tags: ["drupal", "git"]
---
Here's another `run` file example, this time relating to Git worktrees...
One project that I work on is a multilingual Drupal application that needs to work in both English and Welsh. As I'm cloning a fresh version today, I'm doing it as a bare repository so I can use worktrees.
To work on it locally, just like in production, I need to use a different URL for each language so that Drupal can identify it and load the correct content and configuration.
For fixed environments like production or staging, the URLs are set in configuration files, but for ad-hoc environments such as local worktrees, I thought that the best approach was to override them as needed per worktree using Drush (a Drupal CLI tool).
I could do this manually each time or I could automate it in a `run` command. :)
Here's the function that I came up with:
```bash
function drupal:set-urls-for-worktree {
  # Set the site URLs based on the current Git worktree name.
  local worktree_name="$(basename "$PWD")"

  local cy_url="cy-projectname-${worktree_name}.docker.localhost"
  local en_url="projectname-${worktree_name}.docker.localhost"

  # Update the URLs.
  drush config:set language.negotiation url.domains.cy -y "$cy_url"
  drush config:set language.negotiation url.domains.en -y "$en_url"

  # Display the domains configuration to ensure that they were set correctly.
  drush config:get language.negotiation url.domains
}
```
It builds the worktree URL for each language based on the directory name, executes the configuration change, and finally displays the updated configuration so I can confirm that it's been set correctly.
This is a good example of why I like using `run` files and how I use them to automate and simplify parts of my workflow.


@ -0,0 +1,27 @@
---
permalink: archive/2022/08/18/talking-drupal-tailwind-css
pubDate: 2022-08-18
title: "'Talking Drupal' and Tailwind CSS"
tags:
- css
- tailwind-css
- twig
---
In March, I was a guest again on the Talking Drupal podcast. This time I was talking about utility CSS and, in particular, the Tailwind CSS framework.
I've become a big fan of this approach to styling websites and was an early adopter of Tailwind, and have released [a starter-kit theme](https://www.drupal.org/project/tailwindcss) for building custom Drupal themes with Tailwind CSS based on what I was using for my own client projects.
## Rebuilding Talking Drupal with Tailwind
Usually when I give a Tailwind CSS talk at a conference or user group, I rebuild something familiar - maybe a page of their website - as an example and to explain some of the concepts and anything that was particularly interesting during the build. (I have [a blog post]({{site.url}}/blog/uis-ive-rebuilt-tailwind-css) that lists the ones that I've done before).
After this podcast episode, I built a [Tailwind version of the Talking Drupal homepage](https://talking-drupal-tailwindcss.oliverdavies.uk).
But, given that Drupal uses Twig and that we'd talked about best practices around using a templating engine to use loops and extract components to organise code and reduce duplication, I definitely wanted to build this example using Twig templates.
Drupal seemed like too much for a single page example, and Symfony or Sculpin could distract from the main focus of the demo, so I decided to start from scratch with an empty PHP file and add Twig and any other dependencies myself.
[The code repository](https://github.com/opdavies/talking-drupal-tailwindcss) is publicly viewable on my GitHub profile so people can look at the code and see some of the things that I talked about during the episode in practice, and not just the resulting HTML in a browser.
You can [listen to the episode](https://talkingdrupal.com/338), and if you want any more information, the slides and video from my [Taking Flight with Tailwind CSS talk]({{site.url}}/talks/taking-flight-with-tailwind-css) are on my website.


@ -0,0 +1,25 @@
---
permalink: archive/2022/08/19/pair-programming-or-code-reviews
pubDate: 2022-08-19
title: Pair programming or code reviews?
---
It's been almost a year and a half since I last pushed a feature branch, created a pull request, and waited for it to be reviewed and (hopefully) merged and deployed.
On the majority of teams and projects that I've worked on, this was how things were done.
Tasks would be worked on in separate branches which would need to be reviewed by one or more other Developers before being merged.
I'm an advocate for continuous integration and trunk-based development (both of which I plan to write about in more depth), in which there is no formal code review step; instead, I encourage people to pair program as much as possible.
Pair or mob (group) programming, for me, is like a real-time code review where you can discuss and make changes instantly, rather than waiting until the work is complete and someone reviewing it after the fact. If a bug is spotted as you're typing it or something could be named better, you can update it there and then.
But there are other benefits too.
Instead of one person writing some code, and others reviewing it after the fact, multiple people have written it together and the knowledge is shared amongst those people.
As you've worked together, you don't need to ask or wait for someone to set time aside to review your changes, so it's quicker for them to be merged and deployed. It's already been reviewed, so as long as any automated checks pass, the code can be merged.
I've worked in pairs where I've taught someone how to write automated tests and do test-driven development, which I suspect wouldn't have been quite the same if they'd just read the finished code afterwards.
Of course, some Developers and teams will prefer the typical code review process - it's worked well for me and projects that I've worked on in the past - but personally, I like the speed, agility, mentoring and learning, and social benefits that I can get more easily from pair programming.


@ -0,0 +1,22 @@
---
pubDate: 2022-08-20
title: "A return to offline meetups and conferences"
permalink: "archive/2022/08/20/return-to-offline-meetups-conferences"
tags: ["community"]
---
Yesterday, I dusted off our Meetup page and posted our next [PHP South Wales meetup](https://www.meetup.com/php-south-wales) event.
We've had online meetups and code practice sessions throughout the pandemic and during lockdowns, but this will be our first offline/in person/IRL meetup since February 2020.
As well as organising our online meetups during COVID, I attended a lot of other online events, [usually giving various talks or workshops]({{site.url}}/blog/speaking-remotely-during-covid-19), and whilst they were good for a while, I eventually started to get burned out by them.
I've been an organiser of various meetups and conferences for a long time, and attending events has been a very large part of my career so far - providing opportunities to learn, to network and socialise with other attendees, and pass knowledge on through talks, workshops and mentoring.
It's been great to see some offline events returning, from local user groups to conferences such as DevOpsDays, DrupalCon and SymfonyLive.
I've given one talk this year - a lot less than this time last year - but it was in front of an audience instead of a screen, and whilst it seemed strange, I'm sure that it's something that will feel normal again in time.
I'm thinking of attending a conference next month, I've submitted some talk suggestions to some other conferences which I'm waiting to hear from, and am considering travelling to some of the other UK user groups as they restart - some of which I joined or spoke at online but it would be great to meet them in person.
For next week, I'll be glad to have PHP South Wales events running again and to see our community back together in person - and then we'll do it all again as we start getting ready for next month's event.


@ -0,0 +1,29 @@
---
permalink: archive/2022/08/21/2022-08-21
pubDate: 2022-08-21
title: "Why I use Docker and Docker Compose for my projects"
tags:
- docker
---
For the last few years, I've used Docker and Docker Compose exclusively on all of my projects. When I start a new project or onboard a new client, usually one of the first things that I need to do is get an application running in Docker so that I can work on it.
<!-- Since I started programming, I've used a number of different local environments. Starting with WAMP and XAMPP on Windows, MAMP on macOS, Laravel Valet, the Symfony local server, and various open-source Docker-based solutions. -->
I've inherited projects with no environment configuration or documentation at all, where I've needed to start from scratch to get them running. Ideally, each project would have its local environment configuration in the same Git repository as the application code.
For my own projects, these days I prefer to use Docker and Docker Compose - creating my own Dockerfiles for each project so that the correct dependencies are present and the required build steps are executed, as well as acting as documentation.
It's lean, as the environment is built specifically for each project, and easy to configure with Docker and Docker Compose directly, using native patterns such as override files, environment variables and interpolation, and multi-stage builds.
The configuration can be as simple or complicated as it needs to be for each project rather than using "a one size fits all" approach. If I'm working with Drupal, Fractal, Vue.js, a PHP library, a Go command line tool, or something else entirely, I can use the most appropriate starting point.
As well as local development, it's easy to use Docker and Docker Compose in CI environments with tools like GitHub Actions and Bitbucket Pipelines. They will either be present by default or be easy to install, and it's simple to run a `docker-compose build` or `docker-compose run` command within a pipeline to check that the project builds correctly and to execute tasks such as automated tests or static analysis.
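As a rough sketch, a GitHub Actions workflow using Docker Compose might look something like this (the service name and test command are placeholders rather than from a real project):
```yaml
name: CI

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Build the images and run the test suite inside the PHP container.
      - run: docker-compose build
      - run: docker-compose run --rm php-fpm vendor/bin/phpunit
```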
As well as using it for projects, Docker has been useful for me in other situations where I need to run small tools such as rst2pdf for generating presentation slides, and ADR Tools for working with architectural decision records.
For some situations like an open-source contribution day, using an off-the-shelf solution would probably be a better option, and some teams will have their own preferences, but I prefer to use Docker and Docker Compose when I can.
Personally, I like to invest time into learning tools that provide reusable knowledge, such as Docker and Docker Compose. I'd prefer to spend time learning something, even if it may take longer compared to other tools, if it's going to give me a return on that investment in the medium- to long-term.
For some examples of how I work with Docker and Docker Compose, you can [see my public GitHub repositories](https://github.com/opdavies?tab=repositories&q=docker) and how things are put together there.


@ -0,0 +1,25 @@
---
permalink: archive/2022/08/22/2022-08-22
pubDate: 2022-08-22
title: "Being a T-shaped Developer"
---
A blog post appeared on my feed this morning, titled [How to be T-Shaped](https://www.nomensa.com/blog/how-to-be-t-shaped).
"T-shaped Developers" is a term that I've also used before. Being T-shaped means that you have a deep knowledge in one particular area and a breadth of knowledge in other areas.
I would say that I'm T-shaped.
My main area of knowledge is PHP and Drupal software development - they're the programming language and content management system that I've used throughout most of my career so far, since I started in 2007.
As I worked on my own personal and client projects, I needed to learn more complementary skills.
I needed to learn how to style websites and build themes so I started to learn front-end development with CSS and frameworks like Bootstrap, Bulma and Tailwind CSS, and JavaScript frameworks like Angular, Vue.js and Alpine, as well as TypeScript.
I also needed to host these projects somewhere, which introduced me to Linux servers, virtual hosts, (S)FTP and SSL, web servers like Apache, Nginx and Caddy, MySQL and MariaDB databases, and as projects got more complicated, I started using tools like Vagrant and Puppet, Ansible, and Docker for configuring environments to work in.
I don't use Drupal for every project. I've used static site generators and frameworks like Symfony based on the project's requirements, and have projects that use several different technologies at the same time.
The main benefits are that I can either deliver entire projects or projects with more complicated architectures, or work across different teams - mentoring a team of Front-End Developers in Drupal theming, or working with System Administrators to start hosting PHP applications. Having these additional skills is definitely valuable to employers and clients.
I've said that one of the best and worst things about software development is that there's always something new to learn!


@ -0,0 +1,31 @@
---
pubDate: 2022-08-23
title: "Git: GUI or command-line?"
permalink: "archive/2022/08/23/git-gui-command-line"
tags:
- "git"
---
I've been using Git for a long time. My first full-time Developer role in 2010 was working on an in-house team, and that project used Git as its version control system.
I remember typing commands into an Ubuntu terminal and trying to wrap my head around the process of adding and staging files, (sometimes) pulling, and then pushing to a remote. I think the remote was a simple bare repository on a server, so there was no UI like there is in GitHub and similar tools today.
In fact, GitHub only started two years earlier in 2008, and GitLab wasn't around until 2014.
Looking back, my introduction to Git as a Junior Developer wasn't easy and I remember starting to get frustrated until it eventually "clicked" and made sense.
I don't remember if there were GUIs at that time (I remember using gitk but I can't think when), but having a tool like GitHub where I could see the code, branches and commits, would probably have been helpful with my initial learning.
Whilst working locally, I've tried some of the desktop GUI tools like Sourcetree, GitKraken and Tower, but I always come back to using Git on the command line.
While a Git GUI tool may make it easier to learn Git initially as a Junior Developer, I'd recommend trying to learn the command line too.
In my opinion, understanding what's happening "under the hood" is important when working with a GUI - just in case you find yourself unexpectedly having to use the command line. I've seen an error in a Git GUI that suggests running commands in the terminal to debug or fix the issue. If you aren't familiar with the terminal commands or what they do, then I'd expect this to be intimidating and confusing.
If you're working as part of a team or contributing to an open-source project, then the consistency that the command line provides will make it easier when working with colleagues or getting help from project maintainers. You're also learning Git itself rather than a tool that may add its own terminology or change how Git works, which can also cause confusion.
There's a lot of Git functionality and concepts that I wouldn't have explored if I'd relied on a GUI instead of using the command line, such as adding and removing code in chunks using patch mode, using bisect to find when a bug was introduced, worktrees for local code organisation, and understanding merging vs rebasing, interactive and non-interactive rebases, and merge commits and fast-forward merges.
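For reference, these are some of the commands behind the features mentioned above (the branch names are only examples):
```bash
git add --patch                          # stage changes hunk by hunk
git bisect start                         # begin a binary search for the commit that introduced a bug
git worktree add ../new-post new-post    # check out another branch alongside the current one
git rebase --interactive origin/main     # reorder, squash or reword local commits
git merge --ff-only new-post             # merge only if the branch can be fast-forwarded
```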
Of course, if you prefer to use a GUI and it works for you, then that's fine. Personally, I like to dig deep when learning tools, to know them inside-out and understand how to use them well, and I think that the time that I've spent learning Git and optimising my workflow paid for itself a long time ago.
How do you like to use Git? Do you prefer to use the command line or a GUI tool? Reply to this email and let me know.


@ -0,0 +1,51 @@
---
permalink: archive/2022/08/24/2022-08-24
pubDate: 2022-08-24
title: "How I've configured Git"
tags:
- "git"
---
After yesterday's post on why I prefer using Git on the command line rather than using a GUI tool, today I thought that I'd post about how I've configured Git.
First, I rarely ever run the `git` command - I usually run a `g` function that I've created within my zsh configuration.
Rather than being a simple alias, it's a shell function that runs `git status -sb` to show the current status of the repository if there are no additional arguments. If there are, such as when running `g add`, then it's executed as a normal Git command. (This is something that I first saw from Thoughtbot, if I remember correctly.)
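A simplified version of that function might look something like this (not my exact configuration):
```bash
function g {
  if [[ $# -gt 0 ]]; then
    git "$@"
  else
    git status -sb
  fi
}
```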
## Using .gitconfig
The main part of my configuration is within Git's `~/.gitconfig` file, where I can configure Git to work how I want.
For example, I like to avoid merge conflicts, so I always want to use fast-forward merges whilst pulling and also to rebase by default. I can do this by adding `ff = only` and `rebase = true` to the `[pull]` section of my `~/.gitconfig` file.
I can do this manually, or by running `git config --global pull.rebase true`, which will set the option and update the file automatically.
Some of the tweaks that I've made are to only allow fast-forward merges by adding `merge.ff = only`, to automatically squash commits when rebasing by setting `rebase.autosquash = true`, and to automatically prune branches by adding `fetch.prune = true`.
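For reference, the same options can be set from the command line, which updates `~/.gitconfig` for you:
```bash
git config --global pull.ff only
git config --global pull.rebase true
git config --global merge.ff only
git config --global rebase.autosquash true
git config --global fetch.prune true
```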
### Simple aliases
Another way that I configure Git is using aliases, which are also within the `~/.gitconfig` file.
For example, if I ran `git config --global alias.b "branch"`, then running `git b` would just run `git branch` which shortens the command and saves some time and keystrokes.
I have similar one- or two-letter "short" aliases for pushing and pulling code, and some that also set additional arguments, such as `aa` for `add --all` and `worktrees` for `worktree list`.
### More complicated aliases
Aliases can be more complex if needed by prefixing them with a `!`, meaning that the alias is executed as a shell command.
This means that I can have `repush = !git pull --rebase && git push` to chain two separate Git commands and combine them into one, and `ureset = !git reset --hard $(git upstream)` which executes the full command, including another alias as part of it.
I also have `issues = !gh issue list --web` and `pulls = !gh pr list --web` to open the current repository's GitHub issues or pull requests respectively, which can be done as it's not limited to just running `git` commands.
### Custom functions
Finally, if an alias is getting too long or complex, then it can be extracted into its own file.
Any executable file within your `$PATH` that starts with `git-` will automatically become a Git command.
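For example, a hypothetical `git-hello` script on your `$PATH` becomes a `git hello` command:
```bash
#!/usr/bin/env bash
# Saved as "git-hello" and made executable, this runs via "git hello".
echo "Hello from a custom Git command"
```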
One example that I have is [git-cm](https://github.com/opdavies/dotfiles/blob/2b20cd1e59ae3b1fa81074077e855cbdfa02f146/bin/bin/git-cm) which, similar to the `g` function, is a Bash script that checks for any arguments passed to it and runs a slightly different command. It achieves the same thing as if it were an alias, but it's easier to write and maintain as it's in a separate file.
These are just some examples. If you want to see my entire configuration, then check out [my dotfiles repository on GitHub](https://github.com/opdavies/dotfiles/tree/2b20cd1e59ae3b1fa81074077e855cbdfa02f146/roles/git/files).
How have you configured Git for your workflow? Reply to this email and let me know.


@ -0,0 +1,24 @@
---
pubDate: 2022-08-25
title: "Why I work in Neovim"
tags: ["vim", "neovim"]
permalink: "archive/2022/08/25/why-i-work-in-neovim"
---
Over a year ago, I posted that I was [switching to using Neovim full-time]({{site.url}}/blog/going-full-vim) for my development work.
I'd used Vim one file at a time on remote servers, and added Vim plugins in other IDEs and editors, so I was already familiar with a lot of the key bindings and motions before I decided to use it full-time.
Still, it was tough to begin with, but once I'd learned how to configure Neovim, I also learned that being able to customise and extend it as much as you need to is one of its main advantages compared to other IDEs and code editors.
TJ DeVries - a Neovim core team member - has recently coined the term "PDE" (a personalised development environment) which, for me, describes Neovim perfectly.
Currently, I have a fuzzy finder to quickly open files (as well as many other things), an LSP client to add code intelligence, auto-completion, refactoring tools, custom snippets and, very recently, a database client and an HTTP client.
Just as important to me, I've found a growing community of other Neovim users who stream on Twitch, post YouTube videos, write blog posts, or publish their dotfiles for others to see and reference.
I've learned Lua. Not just for my own Neovim configuration, but I recently wrote and open-sourced my own simple plugin.
Like Git, I enjoy and prefer using tools that I can configure and adapt to my workflow.
Given Neovim's flexibility and configurability, its expanding feature set both in core and community plugins, and the growing community, I think that Neovim is going to be something that I continue to use and adapt for a long time.


@ -0,0 +1,19 @@
---
pubDate: 2022-08-26
title: "Always be learning"
permalink: "archive/2022/08/26/always-be-learning"
---
I've been a Developer for 15 years and one thing that I've always focussed on is to always keep learning.
From starting as a self-taught Developer, initially learning HTML and CSS, to later learning PHP and Drupal as well as other languages, frameworks and tools.
For the last couple of days, I've been experimenting with Next.js - a React-based web framework. I hadn't used React before and have typically reached for Vue.js or sometimes Alpine.js based on what I needed to do. However, I'm always looking for opportunities to learn and implement new things, and see how I can use them in any of my projects.
This afternoon, I started a new Next.js and TypeScript project, and refactored a small codebase that used a static site generator to create a small number of landing pages from Markdown files.
It took me a short time to set up a Docker environment for it based on some of my Vue.js projects, port the application across to recreate the pages, and finally update the CI pipeline that generated the static pages and uploaded them to an S3 bucket.
The end result is the same - the same HTML pages are generated and uploaded - but, for me, trying and experimenting with new things keeps my work interesting and my knowledge fresh, which benefits me as well as my colleagues and clients.
As I said in a previous email, one of the great things about software development is that there's always something new to learn.


@ -0,0 +1,15 @@
---
pubDate: 2022-08-27
title: "Giving back"
permalink: "archive/2022/08/27/giving-back"
---
Today, I've been at an event run by a local animal rescue charity. It's one that we attend often as my children like to enter the dog show, but this year, I've also sponsored one of the categories.
As well as organising the PHP South Wales user group, I'm also now a sponsor - donating books and elePHPant plushies for raffle prizes and paying the group's Meetup.com subscription costs.
Giving back and supporting open-source maintainers and content creators is a big priority of mine. If I use some open-source software or find someone's Twitch or YouTube channel useful, and that person or organisation is on GitHub or Patreon, then I'll sponsor them or subscribe to their channel.
If I find a useful blog post or video, I'll add a comment or link to it on Twitter, thanking them and letting them know that it helped me.
Especially if it's something that I've used within my projects, it makes sense to support it and its maintainers, so that they keep working on and improving the software, continue streaming, and keep writing blog posts and recording videos for me to learn from.


@ -0,0 +1,27 @@
---
pubDate: 2022-08-28
title: "How I started programming"
permalink: "archive/2022-08-28/how-started-programming"
---
In 2007, I was working in the IT sector in a Desktop Support role but hadn't done any coding professionally.
In my spare time, I was a black belt in Tae Kwon-Do and enjoyed training at a few different schools. Because of my IT experience, I was asked if I could create a website for one of the schools - somewhere that we could post information and class times for new starters, as well as news articles and competition results.
This would be my introduction to programming.
I started learning what I needed to know, starting with HTML and CSS - experimenting with a template that I found online and was able to tweak to match the school's colours.
I was able to complete the first version of the website with static HTML pages and CSS but had to manually create a new HTML page for every new news article and edit existing pages manually.
I wanted to make it more dynamic, and started to learn about PHP and MySQL from video courses and online forums.
After posting a question about some PHP code that I'd written, someone suggested that I look at content management systems - namely Drupal, which was used for that forum (I have [a screenshot of the reply](https://twitter.com/opdavies/status/1185456825103241216)). This was a new concept to me as until that point, I'd written everything so far myself whilst learning it.
I remember evaluating Drupal alongside some others - rebuilding the same website a few different times, but stuck with Drupal and relaunched it on Drupal 6 and a custom theme that I'd created from the original templates.
I signed up for a Drupal.org account, started to do some freelance work for a local web design agency, and built a new website for a local cattery.
I started blogging, attending meetups, and when an opportunity to switch careers to software development came along, I applied for and got the job.
That job was also using Drupal and, in another email, I'll write more about why I still like and use Drupal years later.


@ -0,0 +1,22 @@
---
pubDate: 2022-08-29
title: "Why I like Drupal"
permalink: "archive/2022/08/29/why-like-drupal"
tags: ["drupal"]
---
As I said in yesterday's email, I developed my first website project on Drupal. It allowed me to take a static HTML and CSS website and convert it into something that was much easier and quicker for me to update, and allowed me to create more users with permissions to do those tasks too.
I worked on various Drupal projects, and my first full-time job was on an in-house team where we maintained and enhanced a Drupal 6 website.
I've since used Drupal for projects of all shapes and sizes with different levels of complexity. Everything from a simple brochure website to large and complex, multilingual, API-driven projects.
I've been able to build eCommerce websites with Drupal using Ubercart and Drupal Commerce. I've built traditional stores where customers purchase physical products, a photography competition website with custom judging functionality, a site for purchasing commercial and residential property and land searches, and a fully-fledged events booking and management platform.
Whatever the size and complexity of the project, Drupal is flexible enough to fit it.
I've loved some of the ecosystem improvements within the last few years. Moving to object-orientated code by default, integrating code from other projects like Symfony, shipping new features every six months as part of the new release cycle, and embracing tools like Composer, PHPStan and Rector.
I also love being part of the Drupal community. Collaborating on tasks, speaking on Slack, and attending events like DrupalCon where I've been lucky enough to attend, speak and mentor.
Although Drupal is my specialty and the tool that I've used the most, I don't use it exclusively. I'll talk more about this in tomorrow's email.


@ -0,0 +1,24 @@
---
pubDate: 2022-08-30
title: "Why I don't only use Drupal"
permalink: "archive/2022/08/30/why-dont-only-use-drupal"
tags: ["drupal"]
---
Yesterday, [I shared some of the reasons]({{site.url}}/archive/2022/08/29/why-like-drupal) why I like Drupal and why I use it for the majority of my projects. But, as I said, I don't use it exclusively, and for some projects I use various other tools.
Essentially, I always try to recommend and use the best tool for the job.
I previously interviewed for a job and was asked to complete a coding test. The role was mostly Drupal-focussed, but as the test asked for a command-line application, I completed it using Symfony and Symfony Console, and was able to discuss why I'd made that decision. In my opinion, it was the best choice based on the requirements.
This is the same approach that I use when making recommendations for a new project.
I've delivered projects using other tools like the Symfony framework or a static site generator, as long as it fitted the requirements.
If there's a SaaS solution that can be used instead, or an off-the-shelf tool that can be integrated instead of writing a custom solution, then that should be evaluated.
There may be other constraints like budgets or deadlines to consider - maybe something can be delivered faster or cheaper using a particular technology, even if it's not the final solution.
There are situations though where a tool may be the best choice even though it's not the ideal fit based purely on the technical requirements. Maybe the client is already familiar with publishing content in Drupal, or an in-house development team is used to working with a certain tool or language. In that case, those things should be considered too.
Also, for me, having a chance to evaluate other technologies and explore what's happening outside of the Drupal ecosystem is a good opportunity. A lot of what I've learned about automated testing, for example, is from the wider PHP and JavaScript communities, as well as tools like [Tailwind CSS]({{site.url}}/talks/taking-flight-with-tailwind-css) and [Illuminate Collections]({{site.url}}/talks/using-illuminate-collections-outside-laravel) that I've been able to bring back into my other Drupal projects.


@ -0,0 +1,40 @@
---
pubDate: 2022-09-01
title: "Conventional commits and CHANGELOGs"
tags: []
permalink: "archive/2022/09/01/conventional-commits-changelogs"
---
One of the things that I've done since joining my current team is to implement a standard approach for our commit messages.
We're using the [Conventional Commits specification](https://www.conventionalcommits.org), which gives some additional rules to follow when writing commit messages.
For example:
```
build(deps): update Drupal to 9.4.5

Updated Drupal's `drupal/core-*` packages to 9.4.5.

See https://www.drupal.org/project/drupal/releases/9.4.5.

Refs: #123
```
We can see that this is a `build` task that relates to our project dependencies - in this example, updating Drupal core. We can also see this in the subject line.
In the commit body, I add as much information as possible about the change and include any relevant links, just in case I need to refer to them again, and list the names of anyone else who worked on it with me. I also typically include any ticket numbers or links in the commit footer.
So far, I've mostly used the `build`, `chore`, `ci`, `docs` and `refactor` commit types, which are types that are recommended and used by [the Angular convention](https://github.com/angular/angular/blob/22b96b9/CONTRIBUTING.md#-commit-message-guidelines).
Following this standard means that it's very easy to look at the Git log and see what types of changes are going to be included within a release and, if you're using scopes, which parts of the application are affected.
Conventional Commits also works nicely with something else that we've introduced: a CHANGELOG file.
There are tools that can generate and update CHANGELOGs automatically from conventional commits, but so far, we've been following the [Keep a Changelog](https://keepachangelog.com) format.
It's easy to match the commits to the `Added`, `Changed` or `Fixed` types, and although it needs to be updated manually, it's easy to add to the `Unreleased` section of the file and re-organise everything within the appropriate headings as needed as part of a release.
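As an illustration, an `Unreleased` section in that format might look like this (the entries are made up):
```markdown
## [Unreleased]

### Added
- A "related content" block on article pages.

### Changed
- Updated Drupal core to 9.4.5.

### Fixed
- Incorrect page titles on search result pages.
```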
What I like about this format is that it's more human-friendly and gives a higher level overview of the changes rather than a reformatted Git log.
As we do trunk-based development and continuous integration on our projects, there can be numerous commits related to the same change, so I'd rather only see a single line in the CHANGELOG for each change. This also makes it easier to share the CHANGELOG file with others, and we can still view and grep the Git log to see the individual commits if we need to.


@ -0,0 +1,22 @@
---
title: "Automating all the things with Ansible"
pubDate: "2022-09-02"
permalink: "archive/2022/09/02/automating-all-the-things-with-ansible"
tags: ["ansible"]
---
Ansible is a tool for automating IT tasks. It's one of my preferred tools to use, and one that I've written about and [presented talks on]({{site.url}}/talks/deploying-php-ansible-ansistrano) previously.
It's typically thought of as a tool for managing configuration on servers. For example, you have a new VPS that you want to use as a web server, so it needs Nginx, MySQL, PHP, etc. to be installed - or whatever your application uses. You define the desired state and run Ansible, which will perform whatever tasks are needed to get to that state.
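For example, a fragment of that desired state, expressed as a small playbook, might look something like this (the host group and package are placeholders):
```yaml
---
- hosts: webservers
  become: true

  tasks:
    - name: Ensure Nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure Nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```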
Ansible does, though, include modules for interacting with services like Amazon AWS and DigitalOcean to create servers and resources, not just configure them.
It also doesn't just work on servers. I use Ansible to configure my local development environment, to ensure that dependencies and tools are installed, and requirements like my SSH keys and configuration are present and correct.
Lastly, I use Ansible to deploy application code onto servers and automatically run any required steps, ensuring that deployments are simple, robust and repeatable.
In the next few emails, I'll explain how I've been able to utilise Ansible for each of these situations.
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).


@ -0,0 +1,57 @@
---
pubDate: 2022-09-03
title: Creating infrastructure with Ansible
permalink: archives/2022/09/03/creating-infrastructure-with-ansible
tags: ["ansible"]
---
Let's start at the beginning.
If we want to automate our infrastructure then we first need to create it. This could be done manually or we can automate it.
Popular tools for this include Terraform and Pulumi, but Ansible also includes modules to interface with hosting providers such as Amazon Web Services, Microsoft Azure, DigitalOcean, and Linode.
By using one of these tools, you can programmatically provision a new, blank server that is ready for you to configure.
For example, to [create a DigitalOcean droplet](https://docs.ansible.com/ansible/latest/collections/community/digitalocean/digital_ocean_module.htm):
```yaml
---
- community.digitalocean.digital_ocean_droplet:
    image: ubuntu-20-04-x64
    name: mydroplet
    oauth_token: "..."
    region: sfo3
    size: s-1vcpu-1gb
    ssh_keys: [ .... ]
    state: present
    wait_timeout: 500
  register: my_droplet
```
Running this playbook will create a new Droplet with the specified name, size, and operating system, and within the specified region.
If you needed to create a separate database server or another server for a new environment, then the file can be updated and re-run.
[Creating an Amazon EC2 instance](https://docs.ansible.com/ansible/latest/collections/amazon/aws/ec2_instance_module.html#ansible-collections-amazon-aws-ec2-instance-module) looks very similar:
```yaml
---
- amazon.aws.ec2_instance:
    image_id: ami-123456
    instance_type: c5.large
    key_name: "prod-ssh-key"
    name: "public-compute-instance"
    network:
      assign_public_ip: true
    security_group: default
    vpc_subnet_id: subnet-5ca1ab1e
```
This doesn't apply just to servers - you can also use Ansible to create security groups and S3 buckets, manage SSH keys, firewalls, and load balancers.
Once we have our infrastructure in place, we can start using Ansible to set and manage its configuration, which we'll do in tomorrow's email.
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).


@ -0,0 +1,23 @@
---
title: "Using Ansible for server configuration"
pubDate: "2022-09-04"
permalink: "archive/2022/09/04/using-ansible-for-server-configuration"
---
[In yesterday's email]({{site.url}}/archives/2022/09/03/creating-infrastructure-with-ansible), I described how to set up a blank server with Ansible.
Now that we've done that, it needs to be configured.
Once the server's IP address or hostname has been added to a `hosts.ini` file, you can run ad-hoc commands against it - such as `ansible all -i hosts.ini -m ping` to run Ansible's `ping` module on all of the hosts in your inventory and check that you can connect to them.
Another useful module is `shell`, which runs ad-hoc shell commands on each host. If you need to check the uptime of each of your servers, run `ansible all -i hosts.ini -m shell -a uptime`. You can replace the last argument with any other shell command that you need to run, like `df` or `free`.
Running commands in this way is great for getting started, for routine maintenance, or an emergency free disk space check, but for more complex tasks like configuration management, using playbooks is the better option. They are YAML files that contain lists of tasks that Ansible will run through and execute in order.
If you have a group of related tasks, such as for installing a piece of software, then you can combine them into roles. In fact, Ansible Galaxy has thousands of pre-built collections and roles that you can download, include in your playbooks, configure, and run.
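As an illustration, applying roles downloaded from Ansible Galaxy can be as short as this (the role names are examples of community roles):
```yaml
---
- hosts: webservers
  become: true

  roles:
    - geerlingguy.nginx
    - geerlingguy.php
    - geerlingguy.mysql
```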
Very quickly, you can get a full stack installed and configured - ready to serve your application.
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).


@ -0,0 +1,25 @@
---
title: "Using Ansible for local environment configuration"
pubDate: "2022-09-05"
permalink: "archive/2022/09/05/using-ansible-for-local-configuration"
---
As well as [configuring servers]({{site.url}}/archive/2022/09/04/using-ansible-for-server-configuration), you can use Ansible to configure your own local machine and development environment.
The change that you need to make is within the `hosts.ini` file:
```
127.0.0.1 ansible_connection=local
```
Instead of the server's IP address or hostname, use the localhost IP address and set `ansible_connection` to `local` to tell Ansible to run locally instead of using an SSH connection.
Another way to do this is to set `hosts: 127.0.0.1` and `connection: local` in your playbook.
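For example, the top of a playbook targeting the local machine might look like this (the task is only an illustration):
```yaml
---
- hosts: 127.0.0.1
  connection: local

  tasks:
    - name: Ensure a projects directory exists
      ansible.builtin.file:
        path: ~/Code
        state: directory
```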
Once this is done, you can run tasks, roles, and collections to automate tasks such as installing software, adding your SSH keys, configuring your project directories, and anything else that you need to do.
For an example of this, you can see [my dotfiles repository on GitHub](https://github.com/opdavies/dotfiles).
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).


@ -0,0 +1,26 @@
---
title: "Deploying applications with Ansible"
pubDate: "2022-09-06"
permalink: "archive/2022/09/06/deploying-applications-with-ansible"
---
The last few days' emails have been about using Ansible to create and configure infrastructure, but it can also be used to deploy application code.
The simplest approach is to build an artifact locally - e.g. a directory of static HTML pages from a static site generator - and upload it onto the server. For this, you could use Ansible's `synchronize` module.
It's a wrapper around the `rsync` command and makes the upload as simple as specifying `src` and `dest` values for the local and remote paths.
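As a sketch, such a task might look like this (the paths are placeholders):
```yaml
- name: Upload the built site
  ansible.posix.synchronize:
    src: ./output/
    dest: /var/www/example.com/
```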
For more complicated deployments, I like to use a tool called Ansistrano - an Ansible port of a deployment tool called Capistrano.
It creates a new directory for each release and updates a `current` symlink to identify and serve the current release, and can share files and directories between releases.
As well as being able to configure settings such as the deployment strategy, how many old releases to keep, and even the directory and symlink names, there are a number of hooks that you can listen for and add your own steps as playbooks, so you can install dependencies, generate assets, run migrations, or rebuild a cache as part of each deployment.
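As a rough sketch of how a playbook using it can look - the variable names here are from memory, so check the Ansistrano documentation for the definitive list:

```yaml
- hosts: webservers

  vars:
    ansistrano_deploy_to: /var/www/example.com
    ansistrano_deploy_via: git
    ansistrano_git_repo: git@github.com:example/app.git
    ansistrano_keep_releases: 5
    # A playbook of extra tasks to run after the `current` symlink is updated.
    ansistrano_after_symlink_tasks_file: deploy/after-symlink.yml

  roles:
    - ansistrano.deploy
```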
If you're running your applications in Docker, you could use Ansible to pull the latest images and restart your applications.
For more information and examples, I've given a talk on Ansible at various PHP events, which covers some Ansible basics before moving on to [deploying applications with Ansistrano]({{site.url}}/talks/deploying-php-ansible-ansistrano).
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).

View file

@ -0,0 +1,30 @@
---
title: "My Tailwind CSS origin story"
pubDate: "2022-09-07"
permalink: "archive/2022/09/07/my-tailwind-css-origin-story"
tags: ["tailwind-css"]
---
Tomorrow night, I'm attending one of Simon Vrachliotis (simonswiss)'s Pro Tailwind workshops, so I thought that it would be a good time, as Simon has done himself recently on the Navbar podcast, to describe how I started using Tailwind CSS.
I remember watching a lot of Adam Wathan's live streams on YouTube before Tailwind CSS, and I remember when he started a new project - a SaaS product called KiteTail.
It was a Laravel and Vue.js project, and although I'm not a Laravel Developer primarily, I got a lot of other information from Adam's streams about automated testing, test-driven development, and Vue.js as I was learning Vue at the time.
One of the episodes was about styling a card component using some styles that Adam was copying between projects - which would eventually be the starting point for Tailwind CSS.
In fact, I think I watched some of the episode and stopped as I was happy with the Sass and BEM or SMACSS approach that I was using at the time, and didn't initially see the value of the utility CSS approach that I was seeing for the first time (everyone has a similar reaction initially).
After a while, I did revisit it but because Tailwind CSS hadn't been released as its own project yet, I (like Simon) started to experiment with Tachyons - another utility CSS library.
I rebuilt a particularly tricky component that I'd just finished working on and had caused me some issues, and managed to re-do it in only a few minutes.
I started to use Tachyons on some personal and client projects as a layer on top of other frameworks like Bootstrap and Bulma, and later moved on to Tailwind CSS once it had been released.
I was working in this way on a project when I realised that I could use Tailwind for all of the styling instead of just adding small sprinklings of utilities here and there. I refactored everything and removed the other framework that I'd been using - leaving just Tailwind CSS.
With the exception of some legacy projects, now I use Tailwind CSS exclusively and have used it for a number of projects. I've given lunch and learn sessions to teams that I've worked on, [presented a Tailwind CSS talk]({{site.url}}/talks/taking-flight-tailwind-css) at a number of PHP, Drupal, WordPress, and JavaScript events, and maintain [a starter-kit theme](https://www.drupal.org/project/tailwindcss) for using Tailwind in custom Drupal themes.
I've also rebuilt a [number of existing sites]({{site.url}}/blog/uis-ive-rebuilt-tailwind-css) as examples and written some [Tailwind CSS related blog posts]({{site.url}}/blog/tags/tailwind-css).
I'm looking forward to attending Simon's workshop tomorrow and quickly putting that knowledge to use in the next phase of a project that I'm currently working on.

View file

@ -0,0 +1,34 @@
---
title: "Keeping secrets with Ansible Vault"
pubDate: "2022-09-08"
permalink: "archive/2022/09/08/keeping-secrets-with-ansible-vault"
tags: ["ansible"]
---
In the last few posts, I've talked about using Ansible for configuring servers and local environments. In both cases, you're likely to have some sensitive or secret values - these could be database credentials within your application and on your server, or your SSH private keys within your local environment.
Rather than committing these to a code repository in plain text, Ansible includes the `ansible-vault` command to encrypt values.
To see this working, run `ansible-vault encrypt_string my-secret-password`, enter a password, and then you should see something like this:
```
!vault |
$ANSIBLE_VAULT;1.1;AES256
33353031663366313132333831343930643830346531666564363562666136383838343235646661
6336326637333230396133393936646636346230623932650a333035303265383437633032326566
38616262653933353033376161633961323666366132633033633933653763373539613434333039
6132623630643261300a346438636332613963623231623161626133393464643634663735303664
66306433633363643561316362663464646139626533323363663337363361633333
```
This is the encrypted version of that password, and this could be committed and pushed to a code repository.
You can use it within a playbook, and you'll be prompted to re-enter the password so that Ansible can decrypt and use it.
Rather than a single string, you could have a file of variables that you want to encrypt. You can do this by running `ansible-vault encrypt vault.yml` and including the file as before. Again, you'll be prompted by Ansible so that it can decrypt and use the values.
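In practice, the commands look something like this (assuming a `vault.yml` variables file and a `deploy.yml` playbook):

```
# Encrypt an entire variables file.
ansible-vault encrypt vault.yml

# Edit it later without storing a decrypted copy on disk.
ansible-vault edit vault.yml

# Run a playbook and be prompted for the Vault password.
ansible-playbook deploy.yml --ask-vault-pass

# Or read the password from a file, e.g. in CI.
ansible-playbook deploy.yml --vault-password-file=.vault-pass.txt
```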
For an example of how I'm using Ansible Vault, see [the Dransible repository](https://github.com/opdavies/dransible/tree/986ba5097d62ff4cd0e637d40181bab2c4417f2e/tools/ansible) on GitHub or my [Deploying PHP applications with Ansible, Ansible Vault and Ansistrano]({{site.url}}/talks/deploying-php-ansible-ansistrano) talk.
---
Want to learn more about how I use Ansible? [Register for my upcoming free email course]({{site.url}}/ansible-course).

View file

@ -0,0 +1,20 @@
---
title: "Refactoring a Tailwind CSS component"
pubDate: "2022-09-09"
permalink: "archive/2022/09/09/refactoring-tailwind-component"
tags: ["tailwind-css"]
---
After last night's Pro Tailwind theming workshop, I decided to revisit and refactor some similar code that I'd worked on before.
It was a demo for a presentation on utility-first CSS and Tailwind whilst I was at Inviqa.
I'd taken one of the components from the website that we'd launched and rebuilt it - in particular to show how Tailwind could be used for responsive and themeable components.
[The original version](https://play.tailwindcss.com/Yfmw8O5UNN) was written in Tailwind 1 and used custom CSS with `@apply` rules to include text or background colours to elements based on the theme being used on that page or component.
As well as moving it into a Next.js application, [the new version](https://github.com/opdavies/inviqa-tailwindcss-example) uses techniques covered in Simon's workshop - using CSS custom properties (aka variables) to override the colours, and writing custom plugins to generate the required styles. It doesn't include everything from the workshop, but enough for this refactor.
I also moved the `flex-basis` classes into their own standalone plugin and might release that as its own open-source plugin.
I'm working on a client project at the moment which will need switchable themes so I'm looking forward to putting these techniques to use again in the near future.

View file

@ -0,0 +1,38 @@
---
title: "Automating Ansible deployments in CI"
pubDate: "2022-09-10"
permalink: "archive/2022/09/10/automating-ansible-deployments-ci"
tags: ["ansible"]
---
Once you have a deployment that's run using Ansible, rather than running it manually, it's easy to automate it as part of a continuous integration pipeline and have your changes pushed automatically by tools like GitHub Actions and GitLab CI.
You'll need to configure SSH by adding a known hosts file and a private key so the tool can connect to your server, but after that, it's just running the same Ansible commands.
If you're using Ansistrano or other roles, you can install dependencies by using `ansible-galaxy`, and `ansible-vault` to decrypt and use any encrypted variables - securely storing the Vault password and any other secrets as environment variables within your pipeline.
Here's an example using GitHub Actions:
```yaml
- name: Download Ansible roles
  run: ansible-galaxy install -r requirements.yml

- name: Export the Ansible Vault password
  run: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
  env:
    ANSIBLE_VAULT_PASS: ${{ secrets.ANSIBLE_VAULT_PASS }}

- name: Deploy the code
  run: >
    ansible-playbook deploy.yml
    -i inventories/$INVENTORY_FILE.ini
    -e "project_git_branch=$GITHUB_SHA"
    --vault-password-file=.vault-pass.txt

- name: Remove the Ansible Vault password file
  run: rm .vault-pass.txt
```
Before these steps, I've added the SSH key and determined which inventory file to use based on the updated branch. The Vault password is exported and then removed once it has been used.
Automated tests and other code quality checks can be run in a prior job, ensuring that the deployment only happens if those checks pass, but assuming that all is good, the playbook will be run and the changes will be deployed automatically.

View file

@ -0,0 +1,62 @@
---
title: "Custom styles in Tailwind CSS: `@apply`, `theme` or custom plugins"
pubDate: "2022-09-11"
permalink: "archive/2022/09/11/custom-styles-tailwind-css-apply-theme-custom-plugins"
tags: ["tailwind-css"]
---
There are three ways to add custom styles to a Tailwind CSS project. As there have been [some recent tweets](https://twitter.com/adamwathan/status/1559250403547652097) around one of them - the `@apply` directive - I'd like to look at and give examples for each.
## What is `@apply`?
`@apply` is a PostCSS directive, provided by Tailwind, to allow re-using its classes - either when extracting components or overriding third-party styles.
The CSS file is the same as if you were writing traditional CSS, but rather than adding declarations to a ruleset, you use the `@apply` directive and specify the Tailwind CSS class names that you want to apply.
For example:
```css
fieldset {
  @apply bg-primary-dark;
}
```
This is a simple example but it's easy to see how this could be used in ways that weren't intended and how edge-cases can be found.
Adam said in another tweet:
> I estimate that we spend at least $10,000/month trying to debug extremely edge-case issues people run into by using `@apply` in weird ways.
## Using the `theme` function
As well as `@apply`, Tailwind also provides a `theme` function that you can use in your CSS file. This removes the abstraction of using the class names and adds the ability to retrieve values from the `theme` section of your `tailwind.config.js` file.
```css
fieldset {
  background-color: theme('colors.primary.dark');
}
```
This seems to be the preferred approach over using `@apply`.
## Creating a custom plugin
The `theme` function is also available if you write a custom Tailwind CSS plugin:
```javascript
const plugin = require('tailwindcss/plugin')

// Exported so it can be added to the `plugins` array in tailwind.config.js.
module.exports = plugin(({ addBase, theme }) => {
  addBase({
    fieldset: {
      backgroundColor: theme('colors.primary.dark'),
    },
  })
})
```
This is an approach that I've used for [generic, open-source plugins](https://github.com/opdavies?tab=repositories&q=%23tailwindcss-plugin) but for project-specific styling, I've mostly used `@apply` or the `theme` function.
That said, I like the modular architecture of having different custom plugins - especially if they're separated into their own files - and being able to easily toggle plugins by simply adding to or removing from the `plugins` array.
I usually don't write many custom styles in a Tailwind project but I think that I'll focus on using the `theme` function going forward, either in a stylesheet or a custom plugin.

View file

@ -0,0 +1,15 @@
---
title: "A month of daily emails"
pubDate: "2022-09-12"
permalink: "archive/2022/09/12/month-daily-emails"
---
It's already been a month since I started my email list and writing daily emails.
Since then, I've written emails on various development and workflow-based topics, including Drupal, Git, Docker, Neovim, Ansible and Tailwind CSS.
The first email was written on Thursday the 12th of August and after initially wondering whether I should start on the upcoming Monday, or how often to post, I decided to jump in with both feet and wrote the first daily post that day. The first few weren't actually emailed as I waited to see if I could sustain writing a daily post (I was just posting them to my website), but after a few days, I set up the email list and started sending the posts.
I can confirm what [Jonathan Stark](https://jonathanstark.com) and [Jonathan Hall](https://jhall.io) have said - that it's easier to write daily and that you start to see topic ideas everywhere. I started with a list of between 20 and 25 ideas and still have most of them as I've pivoted on a day's topic based on an article or tweet that I saw, some code that I'd written, or some approach that I took.
If you're considering starting a daily email list, I'd recommend it.

View file

@ -0,0 +1,67 @@
---
title: "The simplest Drupal test"
pubDate: "2022-09-14"
permalink: "archive/2022/09/14/simpletest-drupal-test"
---
Most of my work uses the Drupal framework, and I've given talks and workshops on automated testing and building custom Drupal modules with test-driven development. Today, I wanted to see how quickly I could get a working test suite on a new Drupal project.
I cloned a fresh version of my [Docker Examples repository](https://github.com/opdavies/docker-examples) and started the Drupal example.
I ran `mkdir -p web/modules/custom/example/tests/src/Functional` to create the directory structure that I needed, and then `touch web/modules/custom/example/tests/src/Functional/ExampleTest.php` to create a new test file and populated it with some initial code:
```php
<?php

namespace Drupal\Tests\example\Functional;

use Drupal\Tests\BrowserTestBase;
use Symfony\Component\HttpFoundation\Response;

class ExampleTest extends BrowserTestBase {

  protected $defaultTheme = 'stark';

}
```
For the simplest test, I decided to test some existing Drupal core functionality - that an anonymous user can view the front page:
```php
/** @test */
public function the_front_page_loads_for_anonymous_users() {
  $this->drupalGet('<front>');

  $this->assertSession()->statusCodeEquals(Response::HTTP_OK);
}
```
To execute the test, I ran `SIMPLETEST_DB=sqlite://localhost//dev/shm/test.sqlite SIMPLETEST_BASE_URL=http://web phpunit -c web/core web/modules/custom`. The environment variables could be added to a `phpunit.xml.dist` file but I decided to add them to the command and use Drupal core's PHPUnit configuration file.
As this is existing functionality, the test passes. I can change either the path or the response code to ensure that it also fails when expected.
With the first test working, it's easy to add more for other functionality, such as whether different users should be able to access administration pages:
```php
/** @test */
public function the_admin_page_is_not_accessible_to_anonymous_users() {
  $this->drupalGet('admin');

  $this->assertSession()->statusCodeEquals(Response::HTTP_FORBIDDEN);
}

/** @test */
public function the_admin_page_is_accessible_by_admin_users() {
  $adminUser = $this->createUser([
    'access administration pages',
  ]);

  $this->drupalLogin($adminUser);

  $this->drupalGet('admin');

  $this->assertSession()->statusCodeEquals(Response::HTTP_OK);
}
```
Hopefully, this shows how quickly you can get tests running for a Drupal module. If you'd like to see more, the slides and video recording of my [Test-Driven Drupal talk]({{site.url}}/talks/tdd-test-driven-drupal) are online.

View file

@ -0,0 +1,111 @@
---
title: "Why I mostly write functional and integration tests"
pubDate: "2022-09-16"
permalink: "archive/2022/09/16/why-mostly-write-functional-and-integration-tests"
tags: ["drupal"]
---
In [Wednesday's email]({{site.url}}/archive/2022/09/14/simpletest-drupal-test), I showed how quick it is to get started writing automated tests for a new Drupal module, starting with a functional test.
I prefer the outside-in style (or London approach) of test-driven development, where I start with the highest-level test that I can for a task. If the task needs me to make an HTTP request, then I'll use a functional test. If not, I'll use a kernel (or integration) test.
I find that these higher-level types of tests are easier and quicker to set up compared to starting with lower-level unit tests, cover more functionality, and make it easier to refactor.
## An example
For example, take this `Device` class, which is a data transfer object around Drupal's `NodeInterface`. It ensures that the correct type of node is provided, and includes a named constructor and a helper method to retrieve a device's asset ID from a field:
```php
final class Device {

  private NodeInterface $node;

  public function __construct(NodeInterface $node) {
    if ($node->bundle() != 'device') {
      throw new \InvalidArgumentException();
    }

    $this->node = $node;
  }

  public function getAssetId(): string {
    return $this->node->get('field_asset_id')->getString();
  }

  public static function fromNode(NodeInterface $node): self {
    return new self($node);
  }

}
```
## Testing getting the asset ID using a unit test
As the `Node::create()` method (what I'd normally use to create a node) interacts with the database, I need to create a mock node to wrap with my DTO.
I need to specify what value is returned from the `bundle()` method as well as getting the asset ID field value.
I need to mock the `get()` method and specify the field name that I'm getting the value for, which also returns its own mock for `FieldItemListInterface` with a value set for the `getString()` method.
```php
/** @test */
public function should_return_an_asset_id(): void {
  // Arrange.
  $fieldItemList = $this->createMock(FieldItemListInterface::class);
  $fieldItemList
    ->method('getString')
    ->willReturn('ABC');

  $deviceNode = $this->createMock(NodeInterface::class);
  $deviceNode
    ->method('bundle')
    ->willReturn('device');
  $deviceNode
    ->method('get')
    ->with('field_asset_id')
    ->willReturn($fieldItemList);

  // Act.
  $device = Device::fromNode($deviceNode);

  // Assert.
  self::assertSame('ABC', $device->getAssetId());
}
```
This is quite a long 'arrange' section for this test, and it could be confusing for those new to automated testing.
If I was to refactor from using the `get()` and `getString()` methods to a different implementation, it's likely that the test would fail.
## Refactoring to a kernel test
This is how I could write the same test using a kernel (integration) test:
```php
/** @test */
public function should_return_an_asset_id(): void {
  // Arrange.
  $node = Node::create([
    'field_asset_id' => 'ABC',
    'type' => 'device',
  ]);

  // Assert.
  self::assertSame('ABC', Device::fromNode($node)->getAssetId());
}
```
I can create a real `Node` object, pass that to the `Device` DTO, and call the `getAssetId()` method.
As I can interact with the database, there's no need to create mocks or define return values.
The 'arrange' step is much smaller, and I think that this is easier to read and understand.
### Trade-offs
Even though the test is cleaner, because there are no mocks there's other setup to do, including having the required configuration available, enabling modules, and installing schemas and configuration as part of the test - and having test-specific modules to store the needed configuration files.
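As a rough sketch, the setup for a kernel test like the one above might look something like this - the exact modules, schemas and configuration to install will vary by project, and `mymodule` here is a placeholder:

```php
// Modules needed for the test - `mymodule` is a placeholder.
protected static $modules = [
  'field',
  'node',
  'system',
  'text',
  'user',
  'mymodule',
];

protected function setUp(): void {
  parent::setUp();

  // Install the schemas and configuration that creating nodes relies on.
  $this->installEntitySchema('node');
  $this->installEntitySchema('user');
  $this->installConfig(['node']);
}
```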
Because of this, functional and kernel tests will take more time to run than unit tests, but an outside-in approach could be worth considering, depending on your project and team.

View file

@ -0,0 +1,21 @@
---
title: "Thoughts on automated code formatting"
pubDate: "2022-09-17"
permalink: "archive/2022/09/17/thoughts-automated-code-formatting"
---
For a long time, I've been focused on writing code that complies with defined coding standards, either to pass an automated check from a tool like PHP Code Sniffer (PHPCS) or eslint, or a code review from a team member.
Complying with the standards though is something that I've done manually.
As well as automated tools for linting the code, there are tools like PHP Code Beautifier and Fixer, and Prettier for formatting the code based on the same standards, which I've started to use more recently.
These tools can be run on the command line, VS Code has a "Format on save" option, and I can do the same in Neovim using an auto-command that runs after writing a file if an LSP is attached. I typically use a key mapping for this, though, so I can run it when I need to, rather than it running automatically every time a file is saved.
One of my concerns with automated code formatting is what to do when working with existing code that doesn't already follow the standards. If I need to make a change to a file, with automated formatting, the rest of the file can change due to formatting being applied when I save my change.
I recently introduced a PHPCS step to a CI pipeline for an existing project. I knew that it was going to fail initially, but I was able to see the list of errors. I ran the code formatter on each of the files to fix the errors, committed and pushed the changes, and watched the pipeline run successfully.
This meant that I had a commit reformatting all of the affected files, but it was good to combine these together rather than having them separate, and not mixed with any other changes like a new feature or a bug fix.
Since doing this, it's been nice when working in this codebase to not have to worry about code style violations, and I can focus on writing the code that I need to, knowing that I can rely on the automated formatting to fix any issues before I commit them.

View file

@ -0,0 +1,26 @@
---
title: "Useful Git configuration"
pubDate: "2022-09-19"
permalink: "archive/2022/09/19/useful-git-configuration"
tags: ["git"]
---
Here are some snippets from my Git configuration file.
These days, I use a much simpler workflow and configuration since doing more trunk-based development, but in general, I rebase instead of merging by default, and prefer fast-forward merges that don't create a merge commit.
`branch.autosetuprebase = always` and `pull.rebase = true` configure Git to always rebase rather than merge when pulling. This applies to all branches, though I might override it for `main` branches.
`pull.ff = only` and `merge.ff = only` prevent creating a merge commit, and will block the merge if it would create one. If I needed to override this, I could use the `--no-ff` option on the command line.
I use `checkout.defaultRemote = origin` to ensure that the `origin` remote is used if I have multiple remotes configured, and `push.default = upstream` to set the default remote to push to.
`merge.autoStash` allows for running merges on a dirty worktree by automatically creating and re-applying a stash of the changes, and `fetch.prune` will automatically prune branches on fetch - keeping things tidy.
I also have and use a number of aliases.
Some, like `pl = pull` and `ps = push`, are shorter versions of existing commands, and some, like `aa = add --all` and `fixup = commit --fixup`, add extra arguments to existing commands.
I also have some like `current-branch = rev-parse --abbrev-ref HEAD` and `worktrees = worktree list` which add simple additional commands, and some like `repush = !git pull --rebase && git push` which execute shell commands to run more complex or combined commands.
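Put together, the relevant parts of my `.gitconfig` look roughly like this:

```
[branch]
    autosetuprebase = always

[checkout]
    defaultRemote = origin

[fetch]
    prune = true

[merge]
    autoStash = true
    ff = only

[pull]
    ff = only
    rebase = true

[push]
    default = upstream

[alias]
    aa = add --all
    current-branch = rev-parse --abbrev-ref HEAD
    fixup = commit --fixup
    pl = pull
    ps = push
    repush = !git pull --rebase && git push
    worktrees = worktree list
```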
This is a snapshot of my Git configuration. The [full version is on GitHub](https://github.com/opdavies/dotfiles/blob/7e935b12c09358adad480a566988b9cbfaf5999e/roles/git/files/.gitconfig).

View file

@ -0,0 +1,26 @@
---
title: "Why I like trunk-based development"
pubDate: "2022-09-20"
permalink: "archive/2022/09/20/why-like-trunk-based-development"
tags: ["git"]
---
For the majority of my software development career, I've worked with version control in a very similar way.
There are one or two long-lived branches, usually a combination of `develop`, `master` or `main`, that contain the production version of the code. When starting work on a new feature or bug fix, a new branch is created where the changes are made in isolation, and is submitted for review once complete. This is typically referred to as "Git Flow" or "GitHub Flow".
Whilst those changes are awaiting review, a new task is started and the process is repeated.
## Trunk-based development
Something that I've been practicing and advocating for lately is trunk-based development, where there's only one branch that everyone works on, and commits and pushes to instead of creating separate per-task branches.
Even on a client project where I was the only Developer, I was used to creating per-task branches, and I can recall trying to demo two features to a client and the application breaking when switching between branches.
The vast majority of the time, whether working individually or on a team, I've found that the per-task branches weren't needed and working on a single branch was easier and simpler.
There are still occasions when a temporary branch is needed, but in general, all changes are made to the single branch.
Trunk-based development ties in nicely with the continuous integration approach, where everyone commits and pushes their work at least once a day - ideally, multiple times a day. This eliminates long-running feature or bug fix branches that get out of sync with the main branch as well as conflicting with each other.
It seemed scary to begin with, having been used to per-task branches and asynchronous peer reviews via pull or merge requests, but trunk-based development has made things simpler and encourages other best practices such as pair and mob programming, having a good CI pipeline to identify regressions, using feature flags to separate code deployments from feature releases, and frequent code integration and deployment via continuous commits and pushes.

View file

@ -0,0 +1,34 @@
---
title: "Being a Drupal contribution mentor"
pubDate: "2022-09-21"
permalink: "archive/2022/09/21/being-drupal-contribution-mentor"
tags: ["drupal"]
---
This week is DrupalCon Prague, and although I'm not at this event, I'd like to write about some of my experiences at DrupalCon - in particular about being a contribution mentor.
## My first DrupalCon
The first DrupalCon that I attended was in 2013, also in Prague.
I was enjoying the session days when I stopped at the mentoring table to find out more about the contribution sprints that were happening on the Friday.
I didn't have any commits in Drupal core but had already worked on and released some of my own contributed modules, so I was familiar with the tools and the Drupal.org contribution workflow. In short, I was signed up to be a mentor during the sprints.
I remember being involved in the preparation too, sitting in a hotel lobby, identifying potential issues for new contributors to work on, alongside people who I'd previously interacted with in the issue queues on Drupal.org.
On the day, I helped new contributors get their local environments up and running, select issues to work on, and perform tasks like creating and re-rolling patch files and submitting them for review.
One of my highlights at the end of the day was the live commit, when a patch that a new contributor had worked on that day was committed to Drupal core live on stage!
Whenever I've attended DrupalCon events since, I've always volunteered to be a contribution mentor, as well as mentoring and organising sprints at other Drupal events.
## The Five Year Issue
One of the most memorable times mentoring was whilst working with a group of contributors at DrupalCon in May 2015.
Someone was working on a Drupal core issue that was very similar to [one that I'd looked at](https://www.drupal.org/project/drupal/issues/753898) a few years before.
We focused on the original issue that I'd commented on, reviewed, tested, and re-rolled the patch, fixed a failing test, and marked it as "reviewed and tested by the community".
A few days after the conference, and just over five years after my original comment, the patch was committed - giving those contributors their first commits to Drupal 8 core, and also [one of mine](https://git.drupalcode.org/project/drupal/-/commits/9.5.x?search=opdavies).

View file

@ -0,0 +1,20 @@
---
title: "Releasing a Drupal module template"
pubDate: "2022-09-22"
permalink: "archive/2022/09/22/releasing-drupal-module-template"
tags: ["drupal"]
---
Today, I had the idea to create a reusable template for new Drupal modules, based on how I like to build modules and how I've shown others to do so in my Drupal testing workshop.
So I did, and released it for free [on my GitHub account](https://github.com/opdavies/drupal-module-template).
Like my Tailwind CSS starter theme on Drupal.org, it's not intended to be added as a module directly, but something that can be cloned and used as a base for people's own modules.
It includes an example route and Controller that load a basic page, and has a test to ensure that the page exists and loads correctly.
The Controller is defined as a service and uses autowiring to automatically inject its dependencies, the same as in my workshop example code.
It's the initial release so it's rough around the edges still. I'll use it tomorrow to create a new module and document the steps to add to the README as well as other pieces of documentation.
If you're creating a new Drupal module and try it out, start a discussion on the GitHub repository or [let me know on Twitter](https://twitter.com/opdavies). If you have questions, create a discussion or just reply to this email and I'll get back to you.

View file

@ -0,0 +1,44 @@
---
title: "ADRs and Technical Design Documents"
pubDate: "2022-09-23"
permalink: "archive/2022/09/23/adrs-technical-design-documents"
tags: []
---
## Architectural Decision Records
Architectural Decision Records (ADRs) are documents to record software design choices. They could be saved in your code repository as plain-text or Markdown files, or stored in Confluence or a wiki - wherever your team stores its documentation.
They usually consist of the following sections:
* Status - is it proposed, accepted, rejected, deprecated, superseded, etc.?
* Context - what is the issue that is causing the decision or change?
* Decision - what is the change that's being done or proposed?
* Consequences - what becomes easier or more difficult to do?
Any change that is architecturally significant should require an ADR to be written, after which it can be reviewed and potentially actioned.
These will remain in place to form a decision log, with specific ADRs being marked as superseded if a newer ADR replaces it.
## Technical Design Documents
A similar type of document is the Technical Design Document (TDD), which I first saw on TheAltF4Stream. I like to think of these as lightweight ADRs.
The first heading is always "What problem are we trying to solve?", or sometimes just "The problem".
Similar to the Context heading in an ADR, this should include a short paragraph describing the issue.
Unlike ADRs, there are no other set headings but these are some suggested ones:
- What is the current process?
- What are any requirements?
- How do we solve this problem?
- Alternative approaches
I like being able, after describing the problem, to move straight into what's appropriate and relevant for the task and ignore sections that aren't needed.
When I started writing ADRs, they all had the 'Accepted' status as I was either writing them for myself or in a pair or mob. As it wasn't adding any value, I've removed it since switching to writing TDDs.
Whether you use ADRs, TDDs or another approach, it's very useful to have a log of all of your architectural design decisions - both for looking back in the future to remember why something was done in a certain way, and for reviewing the problem before you start implementing a solution - evaluating the requirements and the potential solutions, and documenting which one was selected and why.
[Find out more about ADRs](https://adr.github.io) or [find out more about TDDs](https://altf4.wiki/t/how-do-i-write-a-tdd/21).

View file

@ -0,0 +1,28 @@
---
title: "Using a component library for front-end development"
pubDate: "2022-09-25"
permalink: "archive/2022/09/25/using-component-library-for-front-end-development"
tags: []
---
On a current project, I've decided to use a component library as the first place to do front-end development.
I'm using [Fractal](https://fractal.build) as I can use Twig for templates. As Drupal also uses Twig templates, I have more reusability between the components in Fractal and Drupal compared to converting them from a different templating language like Handlebars or Nunjucks.
Rather than developing directly within the custom Drupal theme, I've been creating new components and pages initially within Fractal.
I've been able to create new components quickly and easily, with the views using Twig templates, and inject data into them using a context file - a YAML file for each component whose data is injected automatically into the view.
This meant that I've been able to develop new components from scratch without needing to set up content types or paragraphs within Drupal, validate and confirm my data model, and present the templates to the client for review in Fractal. If a change is needed, it's quick to do.
I've also moved my asset generation step into Fractal. No CSS or JavaScript is compiled within the Drupal theme; it is created within Fractal and copied over with the Twig templates.
In most cases, I've been able to copy the Twig templates into Drupal and replace the static context data with dynamic data from Drupal without needing to make any further changes.
In a couple of situations, I've needed to change my implementation slightly when moving a template into Drupal, so in this workflow, I've made the changes in Fractal and re-exported them to keep things in sync between the two systems.
In situations where there is existing markup and/or styles from the Drupal side, I've copied those into Fractal so that they match before adding the additional styling and any markup changes.
In general, I like the approach as it gives me more flexibility upfront to make changes before needing to configure Drupal. I can see how things could get out of sync between the two systems, but hopefully having the assets compiled in Fractal and needing to copy them into Drupal will keep things synced up.
I don't think that I'd use this approach for all projects, but for this one, where I'm working with multiple themes and will need to later add different variants of pages and components, it's worked well so far.

View file

@ -0,0 +1,20 @@
---
title: "Experimenting with the Nix package manager"
pubDate: "2022-09-26"
permalink: "archive/2022/09/26/experimenting-with-the-nix-package-manager"
tags: ["nix"]
---
After seeing it on some recent live streams and YouTube videos, I've recently been trying out the Nix package manager and looking into how I might use it for my local environment setup - potentially replacing some of my current Ansible configuration.
Separate from the NixOS operating system, Nix is a cross-platform package manager, so instead of using `apt` on Ubuntu and `brew` on macOS, you could run Nix on both and install from the 80,000 packages listed on https://search.nixos.org/packages.
There is a community project called Home Manager which can be installed alongside Nix and, similar to Stow or what I'm doing with Ansible, can manage your dotfiles or even create them from your Home Manager configuration, as well as managing plugins for other tools such as ZSH and tmux.
There's also a Nix feature called "Flakes" which allows you to separate configuration for different operating systems. I currently have a flake for Pop!\_OS which installs all of my packages and a minimal flake for my WSL2 environment, as some of the packages are installed in Windows instead of Linux.
I can see Ansible still being used to handle post-setup tasks such as cloning my initial projects, but I think the majority of my current Ansible setup - where I'm installing and configuring packages - could be moved to Nix.
I have a work-in-progress Nix-based version [in my dotfiles repository](https://github.com/opdavies/dotfiles/tree/7c3436c553f8b81f99031e6bcddf385d47b7e785) where you can also see [how I've configured Git with Home Manager](https://github.com/opdavies/dotfiles/blob/7c3436c553f8b81f99031e6bcddf385d47b7e785/home-manager/modules/git.nix).
I may install NixOS on an old laptop to test that out too.

View file

@ -0,0 +1,16 @@
---
title: "Mentoring with Drupal Career Online"
pubDate: "2022-09-27"
permalink: "archive/2022/09/27/mentoring-with-drupal-career-online"
tags: ["drupal"]
---
Today, I met my new mentee from the Drupal Career Online program.
[As well as mentoring at events like DrupalCamps and DrupalCons]({{site.url}}/archive/2022/09/21/being-drupal-contribution-mentor), I enjoy mentoring and working with new Developers going through bootcamps and training programmes like Drupal Career Online, some who are experienced Developers who are learning a new skill, and some who are learning how to code and are taking their first steps into programming.
I've talked about [how I got started programming]({{site.url}}/archive/2022-08-28/how-started-programming), but as a self-taught Developer, it would have been great to have had a mentor to ask questions of, to help get me started, and to make sure that I was going down the right track and learning the correct things.
Maybe this is more applicable these days with more people learning and working from home since COVID-19?
Similar to helping mentees at a contribution sprint work towards their first commits to Drupal, it's great to be able to introduce new Developers to an open-source project and community such as Drupal, help develop their skills, and hopefully enable them to get the new job and career that they want.

View file

@ -0,0 +1,20 @@
---
title: "Mob programming at PHP South Wales"
pubDate: "2022-09-28"
permalink: "archive/2022/09/28/mob-programming-php-south-wales"
tags: []
---
Tonight was our September meetup for the PHP South Wales user group, where I ran a hands-on session on mob programming.
I created [a small slide deck](https://speakerdeck.com/opdavies/an-introduction-to-mob-programming) before we started a mob session with the group.
We worked on the FizzBuzz kata in PHP, using Pest for our automated tests.
We followed the Driver and Navigator model, with one person responsible for the typing and interpreting the instructions from the Navigators, and switched roles every ten minutes.
You can [see the code that we wrote](https://github.com/opdavies/code-katas/blob/1da5dd5a79bc7ca083c0c4216fc3b4b0854f623d/php/tests/FizzBuzzTest.php) on my code katas GitHub repository.
It was a fun experience and nice to code with some people who I hadn't coded with before.
We did some code kata sessions during our online meetups which also seemed to go well, so coding nights on katas or personal or open-source projects might be something that we do more of in the future.

View file

@ -0,0 +1,83 @@
---
title: "Store Wars: different state management in Vue.js"
pubDate: "2022-09-30"
permalink: "archive/2022/09/30/store-wars-vuejs"
tags: ["vue"]
---
I'm currently working on a Vue.js application that I started building in Vue 2 before starting to use the Composition API, and then moved it to Vue 3.
In the original version, I was using Vuex for state management within the application, and interacting with Vuex directly within my Vue components - calling `getters` and `dispatch` to retrieve and update data.
As part of moving to Vue 3, I wanted to evaluate any new options, like Pinia which is now the default state management library for Vue.
But because I was integrating with Vuex directly, switching to an alternative would mean changing code within my components.
## Defining a Store interface
This is a situation that often occurs in back-end development - where you may need to switch to a different type of database or a different payment provider in an eCommerce application.
In that situation, you need a generic interface that can be used by different implementations. Because they have consistent methods, one implementation can be replaced with another, or multiple can exist at the same time. This is called the Strategy design pattern, and it relates to the open-closed principle in SOLID.
This is what I did by adding a `Store` interface:
```javascript
export default interface Store {
  actions: {
    addRow(): void;
    init(): void;
    removeRow(index: Number): void;
  };

  state: {
    isLoading: boolean;
    selection: {
      items: [];
    };
  };
}
```
Any store that I want to work with needs to have these defined actions and state values, so I can use them within my components knowing that they will always be available.
## Creating a native Vue store
This is one implementation of the `Store` interface, using just Vue's `reactive` function from the Composition API:
```javascript
let state = reactive({
  isLoading: false,
  selection: {
    items: [],
  },
});

let actions = {
  addRow(): void {
    state.selection.items.push({
      // ...
    });
  },

  init(): void {
    state.isLoading = true;

    // ...
  },

  removeRow(index: number): void {
    state.selection.items.splice(index, 1);
  },
};

const vueStore: Store = {
  actions,
  state: readonly(state),
};

export default vueStore;
```
If I needed to add a Pinia version or another library, I could create another implementation that complies with the same interface.
Each implementation is responsible for any specifics of that library - extracting that logic from the component code and making it more flexible and reusable.

View file

@ -0,0 +1,34 @@
---
title: Why do code katas?
pubDate: "2022-10-01"
permalink: archive/2022/10/01/code-katas
tags: []
---
## What are code katas?
Code katas are programming exercises which, like katas in martial arts, use practice and repetition to improve your skills.
Common katas are Fizzbuzz, the Bowling score calculator, and the Gilded Rose.
Each gives you the criteria of what the kata should do before it can be considered complete along with any specific information, and some websites will also give you a suite of failing tests to make pass - though I prefer to write my own and follow a test-driven development approach.
Once you have completed the solution and the criteria is satisfied, the kata is complete.
## Why I do code katas
As I said, doing code katas improves your skills by solving problems and identifying patterns that you may see when working in your project code.
Different katas focus on different patterns. For example, the Fibonacci Number kata focuses on recursion, whereas the Gilded Rose kata is all about refactoring complex legacy code.
Doing code katas keeps your skills sharp and gives you different perspectives as you work through different katas. You can then use and apply these within your main projects.
If you want to learn a new programming language then working on a kata that you've already solved in a language that you're familiar with allows you to focus on the syntax and features of the new language. I've been working on some code katas in TypeScript as I've been working with that recently, and would like to do some in Go.
If you work as part of a team or a part of a meetup, code katas can be worked on as a group and can introduce new skills like automated testing and test-driven development as well as providing some opportunities for team-building and socialising. If you're trying to introduce pair or mob programming, then working on code katas could be a good first step.
If you're just getting started with programming, working on code katas will help you learn the fundamentals and problem solving, but I'd also encourage you to put the code on GitHub and blog about each kata that you complete. Doing so will help and encourage others and also look good when applying for roles.
P.S. There are lists of code katas at https://github.com/gamontal/awesome-katas and https://codingdojo.org/kata, and online versions at https://www.codewars.com/join and https://exercism.org/tracks. There are many others - if you have a favourite, reply to this email and let me know.
I have [some GitHub repositories for my code kata solutions](https://github.com/opdavies?tab=repositories&q=katas) and will continue to build these as I do more.

View file

@ -0,0 +1,32 @@
---
title: Minimum viable CI pipelines
pubDate: "2022-10-02"
permalink: archive/2022/10/02/minimum-viable-pipelines
tags: []
---
When I start a new project, and sometimes when I join an existing project, there are no CI (continuous integration) pipelines, no automated tests, and sometimes no local environment configuration.
In that case, where should you start when adding a CI pipeline?
I like to start with the simplest solution to get a passing build and to prove the concept - even if it's a "Hello, world" message. I know that the pipeline is configured correctly and runs when expected, and gives the output that I expect.
I like to use Docker for my development environments, partly because it's very easy to reuse the same set up within a CI pipeline just by running `docker image build` or `docker compose build`.
Having a task that ensures the project builds correctly is a great next step.
Within a Dockerfile, I run commands to validate my lock files, download and install dependencies from public and private repositories, and often apply patch files to third-party code. If a lock file is no longer in sync with its composer.json or package.json file, or a patch no longer applies, this would cause Docker and the CI pipeline to fail and the error can be caught and fixed within the pipeline.
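As a minimal sketch, a GitHub Actions workflow for this stage could be as small as this (the job and image names are placeholders):

```yaml
name: CI

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build the Docker image
        run: docker image build -t my-app .
```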
Next, I'd look to run the automated tests. If there aren't any tests, I'd create an example test that will pass to prove the concept, and expect to see the number of tests grow as new features are added and as bugs are fixed.
The big reason to have automated tests running in a pipeline is that all the tests are run every time, ensuring that the test suite is always passing and preventing regressions across the codebase. If any test fails, the pipeline fails. This is known as continuous delivery - ensuring that code is always in a releasable state.
From there, I'd look to add additional tasks such as static analysis and code linting, as well as anything else to validate, build or deploy the code and grow confidence that a passing CI pipeline means that the code is releasable.
As more tasks are added to the pipeline, and the more of the code the tasks cover (e.g. test coverage), the more it can be relied upon.
If there is a failure that wasn't caught in the CI pipeline, then the pipeline itself should be iterated on and improved.
Having a CI pipeline allows you to identify issues sooner and fix them quicker, encourages best practices like automated testing and test-driven development, and enables continuous deployment where code is automatically deployed after a passing build.
If you have a project without a CI pipeline, I'd encourage you to add one, to start small, and continuously iterate on it over time - adding tasks that are useful and valuable, and that build confidence that you can safely release when you need to.

View file

@ -0,0 +1,75 @@
---
title: Refactoring to value objects
pubDate: "2022-10-03"
permalink: archive/2022/10/03/refactoring-value-objects
tags: [php]
---
Here's a snippet of some Drupal code that I wrote last week. It's responsible for converting an array of nodes into a Collection of one of its field values.
```php
return Collection::make($stationNodes)
  ->map(fn (NodeInterface $station): string => $station->get('field_station_code')->getString())
  ->values();
```
There are two issues with this code.
First, whilst I'm implicitly saying that it accepts a certain type of node, because of the `NodeInterface` typehint this could accept any type of node. If that node doesn't have the required field, the code will error - but I'd like to know sooner if an incorrect type of node is passed and make it explicit that only a certain type of node can be used.
Second, the code for getting the field values is quite verbose and is potentially repeated in other places within the codebase. I'd like to have a simple way to access these field values that I can reuse anywhere else. If the logic for getting these particular field values changes, then I'd only need to change it in one place.
## Introducing a value object
This is the value object that I created.
It accepts the original node but checks to ensure that the node is the correct type. If not, an Exception is thrown.
I've added a helper method to get the field value, encapsulating that logic in a reusable function whilst making the code easier to read and its intent clearer.
```php
namespace Drupal\mymodule\ValueObject;

use Drupal\node\NodeInterface;

final class Station implements StationInterface {

  private NodeInterface $node;

  private function __construct(NodeInterface $node) {
    if ($node->bundle() != 'station') {
      throw new \InvalidArgumentException();
    }

    $this->node = $node;
  }

  public function getStationCode(): string {
    return $this->node->get('field_station_code')->getString();
  }

  public static function fromNode(NodeInterface $node): self {
    return new self($node);
  }

}
```
## Refactoring to use the value object
This is what my code now looks like:
```php
return Collection::make($stationNodes)
  ->map(fn (NodeInterface $node): StationInterface => Station::fromNode($node))
  ->map(fn (StationInterface $station): string => $station->getStationCode())
  ->values();
```
I've added an additional `map` to convert the nodes to the value object, but the second map can now use the new typehint - ensuring better type safety and also giving us auto-completion in IDEs and text editors. If an incorrect node type is passed in, then the Exception will be thrown and a much clearer error message will be shown.
Finally, I can use the helper method to get the field value, encapsulating the logic within the value object and making its intention clearer and easier to read.

View file

@ -0,0 +1,20 @@
---
title: First impressions of Astro
pubDate: "2022-10-08"
permalink: archive/2022/10/08/first-impressions-astro
tags: [astro]
---
This week I attended another of Simon Vrachliotis' Pro Tailwind workshops.
The workshop again was great, teaching us about multi-style Tailwind components, such as a button that has props for variants like size, shape and impact, and how to create them in a flexible and maintainable way as well as making use of Headless UI.
For this workshop though, the examples and challenges used a tool that I wasn't familiar with - the Astro web framework.
I've seen a lot of blog posts and streams mentioning it but I hadn't tried it out for myself until the workshop.
What I find interesting is that it comes with a number of available integrations - from Tailwind CSS to Vue, React, and Alpine.js - and you can use them all within the same project, or even on the same page. Installing an integration is as simple as `yarn astro add tailwindcss`.
The templates feel familiar and make use of front matter within Astro components, and regular YAML front matter works within Markdown files - which are supported out of the box.
I've been thinking of redoing my personal website and evaluating options, but I think that Astro might be a new one to add to the list.

View file

@ -0,0 +1,52 @@
---
title: Coding defensively, and Implicit vs explicit coding
pubDate: "2022-10-09"
permalink: archive/2022/10/09/coding-defensively-implicit-explicit
tags: [tailwindcss, php]
---
As well as [being introduced to Astro](https://www.oliverdavies.uk/archive/2022/10/08/first-impressions-astro) in Simon's most recent Pro Tailwind workshop, something else that we discussed was implicit vs explicit coding, and coding defensively.
For example, if you had this code:
```javascript
const sizeClasses = {
  small: 'px-3 py-1 text-sm',
  medium: 'px-5 py-2',
  large: 'px-7 py-2.5 text-lg',
}

const shapeClasses = {
  square: '',
  rounded: 'rounded',
  pill: 'rounded-full',
}
```
Both the `medium` size and `square` shape have an implicit value.
The `small` size has a text size class of `text-sm` and the `large` size has `text-lg`. As there isn't a text size added for `medium`, it is implicitly `text-base` - the default text size.
Likewise, the `rounded` shape has a class of `rounded` and the `pill` shape has `rounded-full`. As a square button doesn't have any rounding, it has an empty string but it is implicitly `rounded-none` - the default border radius value.
If we were to code this explicitly, `text-base` and `rounded-none` would be added to their respective size and shape classes.
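Written out, that would look something like this:

```javascript
const sizeClasses = {
  small: 'px-3 py-1 text-sm',
  medium: 'px-5 py-2 text-base',
  large: 'px-7 py-2.5 text-lg',
}

const shapeClasses = {
  square: 'rounded-none',
  rounded: 'rounded',
  pill: 'rounded-full',
}
```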
It's mostly personal preference, but explicitly adding the additional classes could potentially future-proof the components if there was a situation where the text size or border radius was being overridden.
It also makes it more obvious to anyone reading the code that these values are being set, rather than them needing to make that assumption - assuming that they're aware of the default values at all.
It's similar to having this example PHP code:
```php
function __invoke(string $type, int $limit): void {};
```
Whilst I'm using type hints for the parameters to ensure that the values are a string and an integer respectively, it's also safe to assume that the type shouldn't be an empty string, so do we check for that?
I'd also suggest that the limit shouldn't be a negative integer, so we'd want to check that the value is not less than zero, or if zero isn't being used as an "all" value, then we'd want to check that the limit is greater than one.
In this case, the type hints add some explicitness to the parameters, but checking for these additional conditions adds another defensive layer to the code - forcing it to return earlier with an explicit error message rather than causing a vaguer error elsewhere in the application.
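As a sketch of what that could look like (the exception messages and the lower bound are just examples):

```php
function __invoke(string $type, int $limit): void {
  // Guard against an empty type before doing any work.
  if ($type === '') {
    throw new \InvalidArgumentException('The type cannot be an empty string.');
  }

  // Guard against a zero or negative limit.
  if ($limit < 1) {
    throw new \InvalidArgumentException('The limit must be at least 1.');
  }

  // ...
}
```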
Personally, I like to be explicit and code defensively, making sure that I try and cover as many edge cases as possible and writing test cases for them.
Coming back to the Tailwind example, the majority of us decided to add in extra classes after the exercise and it was an interesting discussion and part of the workshop.

View file

@ -0,0 +1,24 @@
---
title: "To monorepo, or not to monorepo?"
permalink: "archive/2022/08/31/monorepo-or-not"
pubDate: "2022-08-31"
tags: ["git"]
---
I listened to a podcast episode recently which talked about monorepos - i.e. code repositories that contain multiple project codebases rather than a single repository for each codebase - and this got me thinking about whether I should be using these more.
It's something that I've been trialling recently in my [Docker examples](https://github.com/opdavies/docker-examples) and [Docker images](https://github.com/OliverDaviesLtd/docker-images) repositories, where one repository contains and builds multiple Docker images.
I'm not suggesting that I put all of my client projects into one repository, but at least combining the different parts of the same project into the same repository.
For example, I'm working for one client on their current Drupal 7 websites whilst developing the new Drupal 9 versions, which are currently in two separate repositories. I'm also developing an embeddable Vue.js application as part of the Drupal 9 website, and using Fractal as a component library. These are also in their own repositories.
Using a monorepo approach, all of these projects would be in the same repository.
I can see advantages to being able to see cross-project changes in the same place - such as an API change in Drupal that needs an update to be made in Vue.js, or vice-versa - rather than needing to look at separate repositories. This could also make versioning easier as everything will be stored and tagged inside the same repository.
Each project has its own CI pipeline, so it would require some changes where I set a specific pipeline to run only when a directory is changed.
I can see how deployments may be trickier if I need to push an update within a directory to another Git repository, which makes me wonder if I'll need to look into using subtree splits to create separate deployment repositories - similar to how the Symfony project has one main repository with each component split into its own repository.
I'll keep trialling it in my open-source projects and maybe test it with some client projects, but if you have experience with monorepos that you'd like to share, then please reply to this email - I'd love to hear about it.