diff --git a/content/node.00497277-4b40-4d36-a473-8d8e1a187c18.json b/content/node.00497277-4b40-4d36-a473-8d8e1a187c18.json
index e5682b586..639e393e6 100644
--- a/content/node.00497277-4b40-4d36-a473-8d8e1a187c18.json
+++ b/content/node.00497277-4b40-4d36-a473-8d8e1a187c18.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n A side effect of using a tool to generate build configuration files<\/a> with templates is the consistency that it introduces.<\/p>\n\n The majority of my projects use a PHP-FPM or PHP CLI container. In my Docker Compose file, the service was mostly named Some projects would use As well as being easier to switch between projects and not having to think about which names are used in each codebase, it's also much easier to write tools and automation when the names are consistent.<\/p>\n\n For example, I'd always write a long-ish command to import a database file - reading and unzipping it, and importing it by connecting to the database running in its container. The command would essentially be the same with slight changes based on that project - such as the database service name.<\/p>\n\n Now the command is the same for all projects, and I can automate it by writing a script that works on any project meaning I no longer need to write the long command at all.<\/p>\n\n ",
+ "value": "\n A side effect of using a tool to generate build configuration files<\/a> with templates is the consistency that it introduces.<\/p>\n\n The majority of my projects use a PHP-FPM or PHP CLI container. In my Docker Compose file, the service was mostly named Some projects would use As well as being easier to switch between projects and not having to think about which names are used in each codebase, it's also much easier to write tools and automation when the names are consistent.<\/p>\n\n For example, I'd always write a long-ish command to import a database file - reading and unzipping it, and importing it by connecting to the database running in its container. The command would essentially be the same with slight changes based on that project - such as the database service name.<\/p>\n\n Now the command is the same for all projects, and I can automate it by writing a script that works on any project meaning I no longer need to write the long command at all.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n A side effect of using a tool to generate build configuration files<\/a> with templates is the consistency that it introduces.<\/p>\n\n The majority of my projects use a PHP-FPM or PHP CLI container. In my Docker Compose file, the service was mostly named Some projects would use As well as being easier to switch between projects and not having to think about which names are used in each codebase, it's also much easier to write tools and automation when the names are consistent.<\/p>\n\n For example, I'd always write a long-ish command to import a database file - reading and unzipping it, and importing it by connecting to the database running in its container. The command would essentially be the same with slight changes based on that project - such as the database service name.<\/p>\n\n Now the command is the same for all projects, and I can automate it by writing a script that works on any project meaning I no longer need to write the long command at all.<\/p>\n\n ",
+ "processed": "\n A side effect of using a tool to generate build configuration files<\/a> with templates is the consistency that it introduces.<\/p>\n\n The majority of my projects use a PHP-FPM or PHP CLI container. In my Docker Compose file, the service was mostly named Some projects would use As well as being easier to switch between projects and not having to think about which names are used in each codebase, it's also much easier to write tools and automation when the names are consistent.<\/p>\n\n For example, I'd always write a long-ish command to import a database file - reading and unzipping it, and importing it by connecting to the database running in its container. The command would essentially be the same with slight changes based on that project - such as the database service name.<\/p>\n\n Now the command is the same for all projects, and I can automate it by writing a script that works on any project meaning I no longer need to write the long command at all.<\/p>\n\n ",
"summary": null
}
],
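A minimal sketch of the kind of import script the post above describes. Everything here is an assumption rather than taken from the post: the `db` service name, the `DB_*` credential variables and the MySQL client invocation are illustrative.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the database import script described above.
# The "db" service name and DB_* variables are assumptions.
set -euo pipefail

dump_file="${1:?usage: import-db <dump.sql.gz>}"

# Read and unzip the dump, then import it by connecting to the database
# running in its container. With consistent service names, this same
# command works on every project.
gunzip -c "${dump_file}" \
  | docker compose exec -T db \
      mysql --user="${DB_USER:-drupal}" --password="${DB_PASSWORD:-drupal}" "${DB_NAME:-drupal}"
```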
diff --git a/content/node.0100be71-79ef-44ea-922e-e75fcc26ae16.json b/content/node.0100be71-79ef-44ea-922e-e75fcc26ae16.json
index 89aff230f..a6484dfb1 100644
--- a/content/node.0100be71-79ef-44ea-922e-e75fcc26ae16.json
+++ b/content/node.0100be71-79ef-44ea-922e-e75fcc26ae16.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n A common reason why environments aren't updated and get out of sync is because it's a time-consuming or complex task.<\/p>\n\n The process should be simple to run, quick, reliable and reproducible.<\/p>\n\n It's the same as deploying a change to a staging or production environment.<\/p>\n\n You want the same result on every time on every environment.<\/p>\n\n You want every environment - including local development environments<\/a> to be as consistent as possible to minimise bugs and errors.<\/p>\n\n To do this, I automate things to make them as simple as possible.<\/p>\n\n I use run files<\/a> with commands to import databases, perform updates and run pre-update and post-update tasks.<\/p>\n\n I use tools like Nix and devenv<\/a> to create identical and reproducible environments.<\/p>\n\n The simpler and quicker is it, the more it can and will be done.<\/p>\n\n You can also use automation to perform long or complex tasks outside of working hours such as sanitising and importing large databases.<\/p>\n\n The more you can automate, the better.<\/p>\n\n ",
+ "value": "\n A common reason why environments aren't updated and get out of sync is because it's a time-consuming or complex task.<\/p>\n\n The process should be simple to run, quick, reliable and reproducible.<\/p>\n\n It's the same as deploying a change to a staging or production environment.<\/p>\n\n You want the same result on every time on every environment.<\/p>\n\n You want every environment - including local development environments<\/a> to be as consistent as possible to minimise bugs and errors.<\/p>\n\n To do this, I automate things to make them as simple as possible.<\/p>\n\n I use run files<\/a> with commands to import databases, perform updates and run pre-update and post-update tasks.<\/p>\n\n I use tools like Nix and devenv<\/a> to create identical and reproducible environments.<\/p>\n\n The simpler and quicker is it, the more it can and will be done.<\/p>\n\n You can also use automation to perform long or complex tasks outside of working hours such as sanitising and importing large databases.<\/p>\n\n The more you can automate, the better.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n A common reason why environments aren't updated and get out of sync is because it's a time-consuming or complex task.<\/p>\n\n The process should be simple to run, quick, reliable and reproducible.<\/p>\n\n It's the same as deploying a change to a staging or production environment.<\/p>\n\n You want the same result on every time on every environment.<\/p>\n\n You want every environment - including local development environments<\/a> to be as consistent as possible to minimise bugs and errors.<\/p>\n\n To do this, I automate things to make them as simple as possible.<\/p>\n\n I use run files<\/a> with commands to import databases, perform updates and run pre-update and post-update tasks.<\/p>\n\n I use tools like Nix and devenv<\/a> to create identical and reproducible environments.<\/p>\n\n The simpler and quicker is it, the more it can and will be done.<\/p>\n\n You can also use automation to perform long or complex tasks outside of working hours such as sanitising and importing large databases.<\/p>\n\n The more you can automate, the better.<\/p>\n\n ",
+ "processed": "\n A common reason why environments aren't updated and get out of sync is because it's a time-consuming or complex task.<\/p>\n\n The process should be simple to run, quick, reliable and reproducible.<\/p>\n\n It's the same as deploying a change to a staging or production environment.<\/p>\n\n You want the same result on every time on every environment.<\/p>\n\n You want every environment - including local development environments<\/a> to be as consistent as possible to minimise bugs and errors.<\/p>\n\n To do this, I automate things to make them as simple as possible.<\/p>\n\n I use run files<\/a> with commands to import databases, perform updates and run pre-update and post-update tasks.<\/p>\n\n I use tools like Nix and devenv<\/a> to create identical and reproducible environments.<\/p>\n\n The simpler and quicker is it, the more it can and will be done.<\/p>\n\n You can also use automation to perform long or complex tasks outside of working hours such as sanitising and importing large databases.<\/p>\n\n The more you can automate, the better.<\/p>\n\n ",
"summary": null
}
],
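As a rough sketch of the run file pattern mentioned above, assuming a Bash task runner and Drush for the update steps; the task names and commands are illustrative assumptions, not the author's actual file.

```bash
#!/usr/bin/env bash
# run - sketch of a task runner; task names and commands are assumptions.
set -euo pipefail

function pre_update {
  # e.g. put the site into maintenance mode before updating.
  docker compose exec -T php drush state:set system.maintenance_mode 1
}

function post_update {
  # e.g. apply database updates, import config, leave maintenance mode.
  docker compose exec -T php drush updatedb -y
  docker compose exec -T php drush config:import -y
  docker compose exec -T php drush state:set system.maintenance_mode 0
}

function update {
  pre_update
  composer update --with-all-dependencies
  post_update
}

# Dispatch to the function named by the first argument, e.g. "./run update".
"$@"
```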
diff --git a/content/node.01893784-8f45-4466-8586-17df23e7e4b5.json b/content/node.01893784-8f45-4466-8586-17df23e7e4b5.json
index 9673804cb..6c10fd87a 100644
--- a/content/node.01893784-8f45-4466-8586-17df23e7e4b5.json
+++ b/content/node.01893784-8f45-4466-8586-17df23e7e4b5.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n PHPStan is a static analysis tool for PHP.<\/p>\n\n It finds potential issues in PHP code without needing to run it, so Developers can find and resolve potential issues sooner.<\/p>\n\n I use it on all my projects including existing ones I've inherited.<\/p>\n\n But how can you add a static analysis tool to a codebase without getting a lot of errors from the existing code?<\/p>\n\n PHPStan has different levels of strictness.<\/p>\n\n Level 0 is the least strict and each level adds more rules and strictness, resulting in more errors.<\/p>\n\n Most of the time, people will start by running PHPStan on level 0, fixing any errors and committing the changes.<\/p>\n\n Then repeat the process as many times as needed until you reach the level you want to achieve.<\/p>\n\n I don't think this is the right approach.<\/p>\n\n This could mean that you need to edit the same files multiple times as you work through the levels.<\/p>\n\n There's also a period of time where Developers can still write suboptimal code whilst you work your way up to your desired level.<\/p>\n\n Another approach is to use a feature of PHPStan called the baseline.<\/p>\n\n The baseline is a way of capturing and saving all the existing errors up to the selected level so they are no longer reported.<\/p>\n\n If you did this for an existing project, it would return no errors as everything would be included in the baseline.<\/p>\n\n Once you decide what level you want your project to run, you can start as soon as the baseline is generated and without needing to change files multiple times.<\/p>\n\n Instead of spending time working through the levels one at a time, commit some time to pruning the baseline and reducing the errors in it.<\/p>\n\n This I think is a better approach and how I add PHPStan to existing codebases.<\/p>\n\n To learn more about static analysis and PHPStan, listen to episode 22 of the Beyond Blocks podcast<\/a> with Dave Liddament.<\/p>\n\n ",
+ "value": "\n PHPStan is a static analysis tool for PHP.<\/p>\n\n It finds potential issues in PHP code without needing to run it, so Developers can find and resolve potential issues sooner.<\/p>\n\n I use it on all my projects including existing ones I've inherited.<\/p>\n\n But how can you add a static analysis tool to a codebase without getting a lot of errors from the existing code?<\/p>\n\n PHPStan has different levels of strictness.<\/p>\n\n Level 0 is the least strict and each level adds more rules and strictness, resulting in more errors.<\/p>\n\n Most of the time, people will start by running PHPStan on level 0, fixing any errors and committing the changes.<\/p>\n\n Then repeat the process as many times as needed until you reach the level you want to achieve.<\/p>\n\n I don't think this is the right approach.<\/p>\n\n This could mean that you need to edit the same files multiple times as you work through the levels.<\/p>\n\n There's also a period of time where Developers can still write suboptimal code whilst you work your way up to your desired level.<\/p>\n\n Another approach is to use a feature of PHPStan called the baseline.<\/p>\n\n The baseline is a way of capturing and saving all the existing errors up to the selected level so they are no longer reported.<\/p>\n\n If you did this for an existing project, it would return no errors as everything would be included in the baseline.<\/p>\n\n Once you decide what level you want your project to run, you can start as soon as the baseline is generated and without needing to change files multiple times.<\/p>\n\n Instead of spending time working through the levels one at a time, commit some time to pruning the baseline and reducing the errors in it.<\/p>\n\n This I think is a better approach and how I add PHPStan to existing codebases.<\/p>\n\n To learn more about static analysis and PHPStan, listen to episode 22 of the Beyond Blocks podcast<\/a> with Dave Liddament.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n PHPStan is a static analysis tool for PHP.<\/p>\n\n It finds potential issues in PHP code without needing to run it, so Developers can find and resolve potential issues sooner.<\/p>\n\n I use it on all my projects including existing ones I've inherited.<\/p>\n\n But how can you add a static analysis tool to a codebase without getting a lot of errors from the existing code?<\/p>\n\n PHPStan has different levels of strictness.<\/p>\n\n Level 0 is the least strict and each level adds more rules and strictness, resulting in more errors.<\/p>\n\n Most of the time, people will start by running PHPStan on level 0, fixing any errors and committing the changes.<\/p>\n\n Then repeat the process as many times as needed until you reach the level you want to achieve.<\/p>\n\n I don't think this is the right approach.<\/p>\n\n This could mean that you need to edit the same files multiple times as you work through the levels.<\/p>\n\n There's also a period of time where Developers can still write suboptimal code whilst you work your way up to your desired level.<\/p>\n\n Another approach is to use a feature of PHPStan called the baseline.<\/p>\n\n The baseline is a way of capturing and saving all the existing errors up to the selected level so they are no longer reported.<\/p>\n\n If you did this for an existing project, it would return no errors as everything would be included in the baseline.<\/p>\n\n Once you decide what level you want your project to run, you can start as soon as the baseline is generated and without needing to change files multiple times.<\/p>\n\n Instead of spending time working through the levels one at a time, commit some time to pruning the baseline and reducing the errors in it.<\/p>\n\n This I think is a better approach and how I add PHPStan to existing codebases.<\/p>\n\n To learn more about static analysis and PHPStan, listen to episode 22 of the Beyond Blocks podcast<\/a> with Dave Liddament.<\/p>\n\n ",
+ "processed": "\n PHPStan is a static analysis tool for PHP.<\/p>\n\n It finds potential issues in PHP code without needing to run it, so Developers can find and resolve potential issues sooner.<\/p>\n\n I use it on all my projects including existing ones I've inherited.<\/p>\n\n But how can you add a static analysis tool to a codebase without getting a lot of errors from the existing code?<\/p>\n\n PHPStan has different levels of strictness.<\/p>\n\n Level 0 is the least strict and each level adds more rules and strictness, resulting in more errors.<\/p>\n\n Most of the time, people will start by running PHPStan on level 0, fixing any errors and committing the changes.<\/p>\n\n Then repeat the process as many times as needed until you reach the level you want to achieve.<\/p>\n\n I don't think this is the right approach.<\/p>\n\n This could mean that you need to edit the same files multiple times as you work through the levels.<\/p>\n\n There's also a period of time where Developers can still write suboptimal code whilst you work your way up to your desired level.<\/p>\n\n Another approach is to use a feature of PHPStan called the baseline.<\/p>\n\n The baseline is a way of capturing and saving all the existing errors up to the selected level so they are no longer reported.<\/p>\n\n If you did this for an existing project, it would return no errors as everything would be included in the baseline.<\/p>\n\n Once you decide what level you want your project to run, you can start as soon as the baseline is generated and without needing to change files multiple times.<\/p>\n\n Instead of spending time working through the levels one at a time, commit some time to pruning the baseline and reducing the errors in it.<\/p>\n\n This I think is a better approach and how I add PHPStan to existing codebases.<\/p>\n\n To learn more about static analysis and PHPStan, listen to episode 22 of the Beyond Blocks podcast<\/a> with Dave Liddament.<\/p>\n\n ",
"summary": null
}
],
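For illustration, the baseline workflow described above only needs a couple of commands. The level number is an arbitrary example; `--generate-baseline` writes `phpstan-baseline.neon`, which then needs to be listed under `includes:` in `phpstan.neon`.

```bash
# Record every existing error at the target level in a baseline file
# (level 6 here is just an example).
vendor/bin/phpstan analyse --level=6 --generate-baseline

# With the baseline included from phpstan.neon, this now passes and only
# new errors are reported.
vendor/bin/phpstan analyse --level=6

# Over time, fix some of the baselined errors and regenerate the (smaller)
# baseline.
vendor/bin/phpstan analyse --level=6 --generate-baseline
```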
diff --git a/content/node.0231b26c-ac28-44e8-a2bf-cc7c86984e7e.json b/content/node.0231b26c-ac28-44e8-a2bf-cc7c86984e7e.json
index 5e3e3b09c..11714b3c5 100644
--- a/content/node.0231b26c-ac28-44e8-a2bf-cc7c86984e7e.json
+++ b/content/node.0231b26c-ac28-44e8-a2bf-cc7c86984e7e.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n When reviewing a pull or merge request, tools like GitHub and GitHub offer the option to squash the commits before merging.<\/p>\n\n If the request had twenty commits, they'd be combined into a single commit before being merged.<\/p>\n\n But should you do it?<\/p>\n\n The answer will be \"it depends\" based on the project or team, but I'm personally not a fan of squashing commits.<\/p>\n\n Even though I commit small changes often, I put quite a bit of effort into crafting commits and writing detailed commit messages<\/a> that capture the reason for each change. If the commits are squashed, either the messages will be combined into one extra-long commit message or I've seen them be deleted completely.<\/p>\n\n One large commit message would be very difficult to read and connect specific messages with their changes, and deleting the commit body would lose the history completely and waste the time it took to write the messages and craft the commits. It may be available within the pull or merge request page but there's no guarantee that you'll continue to use the same repository hosting service in the future.<\/p>\n\n One large commit would also be difficult to debug if there was an error. If the whole feature was added in a single commit, tools like git bisect<\/a> would no longer work and a single commit couldn't be simply reverted if it contained a bug.<\/p>\n\n I prefer to keep the original small commits and instead prefer to use rebasing and only fast-forward merges to avoid merge commits and keep a simple, linear history in my Git log, and be able to easily read, find and, if needed, fix the code that's been committed.<\/p>\n\n ",
+ "value": "\n When reviewing a pull or merge request, tools like GitHub and GitHub offer the option to squash the commits before merging.<\/p>\n\n If the request had twenty commits, they'd be combined into a single commit before being merged.<\/p>\n\n But should you do it?<\/p>\n\n The answer will be \"it depends\" based on the project or team, but I'm personally not a fan of squashing commits.<\/p>\n\n Even though I commit small changes often, I put quite a bit of effort into crafting commits and writing detailed commit messages<\/a> that capture the reason for each change. If the commits are squashed, either the messages will be combined into one extra-long commit message or I've seen them be deleted completely.<\/p>\n\n One large commit message would be very difficult to read and connect specific messages with their changes, and deleting the commit body would lose the history completely and waste the time it took to write the messages and craft the commits. It may be available within the pull or merge request page but there's no guarantee that you'll continue to use the same repository hosting service in the future.<\/p>\n\n One large commit would also be difficult to debug if there was an error. If the whole feature was added in a single commit, tools like git bisect<\/a> would no longer work and a single commit couldn't be simply reverted if it contained a bug.<\/p>\n\n I prefer to keep the original small commits and instead prefer to use rebasing and only fast-forward merges to avoid merge commits and keep a simple, linear history in my Git log, and be able to easily read, find and, if needed, fix the code that's been committed.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n When reviewing a pull or merge request, tools like GitHub and GitHub offer the option to squash the commits before merging.<\/p>\n\n If the request had twenty commits, they'd be combined into a single commit before being merged.<\/p>\n\n But should you do it?<\/p>\n\n The answer will be \"it depends\" based on the project or team, but I'm personally not a fan of squashing commits.<\/p>\n\n Even though I commit small changes often, I put quite a bit of effort into crafting commits and writing detailed commit messages<\/a> that capture the reason for each change. If the commits are squashed, either the messages will be combined into one extra-long commit message or I've seen them be deleted completely.<\/p>\n\n One large commit message would be very difficult to read and connect specific messages with their changes, and deleting the commit body would lose the history completely and waste the time it took to write the messages and craft the commits. It may be available within the pull or merge request page but there's no guarantee that you'll continue to use the same repository hosting service in the future.<\/p>\n\n One large commit would also be difficult to debug if there was an error. If the whole feature was added in a single commit, tools like git bisect<\/a> would no longer work and a single commit couldn't be simply reverted if it contained a bug.<\/p>\n\n I prefer to keep the original small commits and instead prefer to use rebasing and only fast-forward merges to avoid merge commits and keep a simple, linear history in my Git log, and be able to easily read, find and, if needed, fix the code that's been committed.<\/p>\n\n ",
+ "processed": "\n When reviewing a pull or merge request, tools like GitHub and GitHub offer the option to squash the commits before merging.<\/p>\n\n If the request had twenty commits, they'd be combined into a single commit before being merged.<\/p>\n\n But should you do it?<\/p>\n\n The answer will be \"it depends\" based on the project or team, but I'm personally not a fan of squashing commits.<\/p>\n\n Even though I commit small changes often, I put quite a bit of effort into crafting commits and writing detailed commit messages<\/a> that capture the reason for each change. If the commits are squashed, either the messages will be combined into one extra-long commit message or I've seen them be deleted completely.<\/p>\n\n One large commit message would be very difficult to read and connect specific messages with their changes, and deleting the commit body would lose the history completely and waste the time it took to write the messages and craft the commits. It may be available within the pull or merge request page but there's no guarantee that you'll continue to use the same repository hosting service in the future.<\/p>\n\n One large commit would also be difficult to debug if there was an error. If the whole feature was added in a single commit, tools like git bisect<\/a> would no longer work and a single commit couldn't be simply reverted if it contained a bug.<\/p>\n\n I prefer to keep the original small commits and instead prefer to use rebasing and only fast-forward merges to avoid merge commits and keep a simple, linear history in my Git log, and be able to easily read, find and, if needed, fix the code that's been committed.<\/p>\n\n ",
"summary": null
}
],
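As a sketch of the rebase and fast-forward-only flow described above; the branch and tag names are assumptions.

```bash
# Rebase the feature branch onto main, keeping its small commits intact
# instead of squashing them.
git switch my-feature
git rebase main

# A fast-forward-only merge refuses to create a merge commit, keeping the
# history linear.
git switch main
git merge --ff-only my-feature

# Because each small commit is preserved, git bisect can later pinpoint
# the exact commit that introduced a bug (bad revision first, then good).
git bisect start HEAD v1.0.0
```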
diff --git a/content/node.0300b4ea-e71b-4f9c-b4f2-f04f98e92d6c.json b/content/node.0300b4ea-e71b-4f9c-b4f2-f04f98e92d6c.json
index 1e60b4adf..925ab1f83 100644
--- a/content/node.0300b4ea-e71b-4f9c-b4f2-f04f98e92d6c.json
+++ b/content/node.0300b4ea-e71b-4f9c-b4f2-f04f98e92d6c.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n Yesterday's email<\/a> explained why your company should contribute to open-source software, but why should you contribute as an individual?<\/p>\n\n Most of the same reasons apply, such as gaining experience and improved knowledge from contributing.<\/p>\n\n As an individual, you can build your own reputation and personal brand.<\/p>\n\n You'll get exposure from contributions and involvement with initiatives, such as the Drupal admin UI improvements and other core initiatives, which look great on your CV and LinkedIn profile.<\/p>\n\n This could lead to better career opportunities and potential projects.<\/p>\n\n I've had paid development work directly from my open-source code contributions, as well as public speaking and event organising, so I can vouch for this.<\/p>\n\n Like companies, if you make money from open-source software - either a salary or from paid projects or courses - it's in your interest to contribute so the software you use is maintained and improved so it's the best it can be.<\/p>\n\n ",
+ "value": "\n Yesterday's email<\/a> explained why your company should contribute to open-source software, but why should you contribute as an individual?<\/p>\n\n Most of the same reasons apply, such as gaining experience and improved knowledge from contributing.<\/p>\n\n As an individual, you can build your own reputation and personal brand.<\/p>\n\n You'll get exposure from contributions and involvement with initiatives, such as the Drupal admin UI improvements and other core initiatives, which look great on your CV and LinkedIn profile.<\/p>\n\n This could lead to better career opportunities and potential projects.<\/p>\n\n I've had paid development work directly from my open-source code contributions, as well as public speaking and event organising, so I can vouch for this.<\/p>\n\n Like companies, if you make money from open-source software - either a salary or from paid projects or courses - it's in your interest to contribute so the software you use is maintained and improved so it's the best it can be.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n Yesterday's email<\/a> explained why your company should contribute to open-source software, but why should you contribute as an individual?<\/p>\n\n Most of the same reasons apply, such as gaining experience and improved knowledge from contributing.<\/p>\n\n As an individual, you can build your own reputation and personal brand.<\/p>\n\n You'll get exposure from contributions and involvement with initiatives, such as the Drupal admin UI improvements and other core initiatives, which look great on your CV and LinkedIn profile.<\/p>\n\n This could lead to better career opportunities and potential projects.<\/p>\n\n I've had paid development work directly from my open-source code contributions, as well as public speaking and event organising, so I can vouch for this.<\/p>\n\n Like companies, if you make money from open-source software - either a salary or from paid projects or courses - it's in your interest to contribute so the software you use is maintained and improved so it's the best it can be.<\/p>\n\n ",
+ "processed": "\n Yesterday's email<\/a> explained why your company should contribute to open-source software, but why should you contribute as an individual?<\/p>\n\n Most of the same reasons apply, such as gaining experience and improved knowledge from contributing.<\/p>\n\n As an individual, you can build your own reputation and personal brand.<\/p>\n\n You'll get exposure from contributions and involvement with initiatives, such as the Drupal admin UI improvements and other core initiatives, which look great on your CV and LinkedIn profile.<\/p>\n\n This could lead to better career opportunities and potential projects.<\/p>\n\n I've had paid development work directly from my open-source code contributions, as well as public speaking and event organising, so I can vouch for this.<\/p>\n\n Like companies, if you make money from open-source software - either a salary or from paid projects or courses - it's in your interest to contribute so the software you use is maintained and improved so it's the best it can be.<\/p>\n\n ",
"summary": null
}
],
diff --git a/content/node.0316dfcf-b709-47e1-9622-9355b5ece6d9.json b/content/node.0316dfcf-b709-47e1-9622-9355b5ece6d9.json
index b5c687db0..faf8a4586 100644
--- a/content/node.0316dfcf-b709-47e1-9622-9355b5ece6d9.json
+++ b/content/node.0316dfcf-b709-47e1-9622-9355b5ece6d9.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n Does your team have a \"No deploy Friday\" policy?<\/p>\n\n What about not deploying after a certain time in the afternoon?<\/p>\n\n These approaches are attempts to minimise risk when deploying.<\/p>\n\n If there is an issue, will someone be available during the evening or weekend to resolve it?<\/p>\n\n To me, this indicates the deployment process is too complicated, possibly due to a lack of automation, or deployments aren't happening frequently enough.<\/p>\n\n Having a robust and passing CI pipeline<\/a> that runs automated checks and tests is crucial to know the code is deployable.<\/p>\n\n Feature flags are a great way<\/a> to separate deploying code from releasing changes to users, which means you don't need to avoid pushing some code until the change is complete. It can be done incrementally and released over several deployments.<\/p>\n\n Too much time between deployments is a smell.<\/p>\n\n The more time there is between a deployment and the larger the changeset, the riskier the deployment will be.<\/p>\n\n There is more to go wrong and it'll be harder to diagnose and resolve any issues.<\/p>\n\n I always advocate for many smaller releases than larger less frequent ones.<\/p>\n\n Ideally, a production release every day - even if the changes are small or everything is hidden behind feature flags.<\/p>\n\n Deploying on Friday is easy if you last deployed on Thursday.<\/p>\n\n ",
+ "value": "\n Does your team have a \"No deploy Friday\" policy?<\/p>\n\n What about not deploying after a certain time in the afternoon?<\/p>\n\n These approaches are attempts to minimise risk when deploying.<\/p>\n\n If there is an issue, will someone be available during the evening or weekend to resolve it?<\/p>\n\n To me, this indicates the deployment process is too complicated, possibly due to a lack of automation, or deployments aren't happening frequently enough.<\/p>\n\n Having a robust and passing CI pipeline<\/a> that runs automated checks and tests is crucial to know the code is deployable.<\/p>\n\n Feature flags are a great way<\/a> to separate deploying code from releasing changes to users, which means you don't need to avoid pushing some code until the change is complete. It can be done incrementally and released over several deployments.<\/p>\n\n Too much time between deployments is a smell.<\/p>\n\n The more time there is between a deployment and the larger the changeset, the riskier the deployment will be.<\/p>\n\n There is more to go wrong and it'll be harder to diagnose and resolve any issues.<\/p>\n\n I always advocate for many smaller releases than larger less frequent ones.<\/p>\n\n Ideally, a production release every day - even if the changes are small or everything is hidden behind feature flags.<\/p>\n\n Deploying on Friday is easy if you last deployed on Thursday.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n Does your team have a \"No deploy Friday\" policy?<\/p>\n\n What about not deploying after a certain time in the afternoon?<\/p>\n\n These approaches are attempts to minimise risk when deploying.<\/p>\n\n If there is an issue, will someone be available during the evening or weekend to resolve it?<\/p>\n\n To me, this indicates the deployment process is too complicated, possibly due to a lack of automation, or deployments aren't happening frequently enough.<\/p>\n\n Having a robust and passing CI pipeline<\/a> that runs automated checks and tests is crucial to know the code is deployable.<\/p>\n\n Feature flags are a great way<\/a> to separate deploying code from releasing changes to users, which means you don't need to avoid pushing some code until the change is complete. It can be done incrementally and released over several deployments.<\/p>\n\n Too much time between deployments is a smell.<\/p>\n\n The more time there is between a deployment and the larger the changeset, the riskier the deployment will be.<\/p>\n\n There is more to go wrong and it'll be harder to diagnose and resolve any issues.<\/p>\n\n I always advocate for many smaller releases than larger less frequent ones.<\/p>\n\n Ideally, a production release every day - even if the changes are small or everything is hidden behind feature flags.<\/p>\n\n Deploying on Friday is easy if you last deployed on Thursday.<\/p>\n\n ",
+ "processed": "\n Does your team have a \"No deploy Friday\" policy?<\/p>\n\n What about not deploying after a certain time in the afternoon?<\/p>\n\n These approaches are attempts to minimise risk when deploying.<\/p>\n\n If there is an issue, will someone be available during the evening or weekend to resolve it?<\/p>\n\n To me, this indicates the deployment process is too complicated, possibly due to a lack of automation, or deployments aren't happening frequently enough.<\/p>\n\n Having a robust and passing CI pipeline<\/a> that runs automated checks and tests is crucial to know the code is deployable.<\/p>\n\n Feature flags are a great way<\/a> to separate deploying code from releasing changes to users, which means you don't need to avoid pushing some code until the change is complete. It can be done incrementally and released over several deployments.<\/p>\n\n Too much time between deployments is a smell.<\/p>\n\n The more time there is between a deployment and the larger the changeset, the riskier the deployment will be.<\/p>\n\n There is more to go wrong and it'll be harder to diagnose and resolve any issues.<\/p>\n\n I always advocate for many smaller releases than larger less frequent ones.<\/p>\n\n Ideally, a production release every day - even if the changes are small or everything is hidden behind feature flags.<\/p>\n\n Deploying on Friday is easy if you last deployed on Thursday.<\/p>\n\n ",
"summary": null
}
],
diff --git a/content/node.0318e249-a9a8-476f-9bd4-432544369917.json b/content/node.0318e249-a9a8-476f-9bd4-432544369917.json
index fdd215ac2..c51f1edeb 100644
--- a/content/node.0318e249-a9a8-476f-9bd4-432544369917.json
+++ b/content/node.0318e249-a9a8-476f-9bd4-432544369917.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n Applying patch files<\/a> is a common way to customise and extend open source software, and how we used to submit changes to Drupal before issue forks and merge requests were added to Drupal.org.<\/p>\n\n Some software, such as dwm and st from suckless.org are released as minimal versions that you patch to add features to.<\/p>\n\n If you find a line of code that you want to add, edit or delete, a patch file describes the changes so you can re-apply them whenever the source file changes.<\/p>\n\n Patching offers unlimited customisation and flexibility.<\/p>\n\n Whatever changes you want to make, you can.<\/p>\n\n The downside is you need to maintain any patches you've written.<\/p>\n\n If a change is made that causes your patch to no longer apply, you'll need to update the patch.<\/p>\n\n There are some patches I commonly apply to Drupal projects, but I'll try to either contribute the changes back to the Drupal so I no longer need the patch or make the change in a custom module.<\/p>\n\n Sometimes, though, patching is the only option<\/a>.<\/p>\n\n ",
+ "value": "\n Applying patch files<\/a> is a common way to customise and extend open source software, and how we used to submit changes to Drupal before issue forks and merge requests were added to Drupal.org.<\/p>\n\n Some software, such as dwm and st from suckless.org are released as minimal versions that you patch to add features to.<\/p>\n\n If you find a line of code that you want to add, edit or delete, a patch file describes the changes so you can re-apply them whenever the source file changes.<\/p>\n\n Patching offers unlimited customisation and flexibility.<\/p>\n\n Whatever changes you want to make, you can.<\/p>\n\n The downside is you need to maintain any patches you've written.<\/p>\n\n If a change is made that causes your patch to no longer apply, you'll need to update the patch.<\/p>\n\n There are some patches I commonly apply to Drupal projects, but I'll try to either contribute the changes back to the Drupal so I no longer need the patch or make the change in a custom module.<\/p>\n\n Sometimes, though, patching is the only option<\/a>.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n Applying patch files<\/a> is a common way to customise and extend open source software, and how we used to submit changes to Drupal before issue forks and merge requests were added to Drupal.org.<\/p>\n\n Some software, such as dwm and st from suckless.org are released as minimal versions that you patch to add features to.<\/p>\n\n If you find a line of code that you want to add, edit or delete, a patch file describes the changes so you can re-apply them whenever the source file changes.<\/p>\n\n Patching offers unlimited customisation and flexibility.<\/p>\n\n Whatever changes you want to make, you can.<\/p>\n\n The downside is you need to maintain any patches you've written.<\/p>\n\n If a change is made that causes your patch to no longer apply, you'll need to update the patch.<\/p>\n\n There are some patches I commonly apply to Drupal projects, but I'll try to either contribute the changes back to the Drupal so I no longer need the patch or make the change in a custom module.<\/p>\n\n Sometimes, though, patching is the only option<\/a>.<\/p>\n\n ",
+ "processed": "\n Applying patch files<\/a> is a common way to customise and extend open source software, and how we used to submit changes to Drupal before issue forks and merge requests were added to Drupal.org.<\/p>\n\n Some software, such as dwm and st from suckless.org are released as minimal versions that you patch to add features to.<\/p>\n\n If you find a line of code that you want to add, edit or delete, a patch file describes the changes so you can re-apply them whenever the source file changes.<\/p>\n\n Patching offers unlimited customisation and flexibility.<\/p>\n\n Whatever changes you want to make, you can.<\/p>\n\n The downside is you need to maintain any patches you've written.<\/p>\n\n If a change is made that causes your patch to no longer apply, you'll need to update the patch.<\/p>\n\n There are some patches I commonly apply to Drupal projects, but I'll try to either contribute the changes back to the Drupal so I no longer need the patch or make the change in a custom module.<\/p>\n\n Sometimes, though, patching is the only option<\/a>.<\/p>\n\n ",
"summary": null
}
],
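A brief sketch of that patch workflow using plain Git; the file names are hypothetical.

```bash
# Capture local, uncommitted changes as a patch file.
git diff > patches/my-change.patch

# Check whether the patch still applies after the source files change.
git apply --check patches/my-change.patch

# Re-apply the patch; if upstream has changed too much, this fails and
# the patch needs updating - the maintenance cost mentioned above.
git apply patches/my-change.patch
```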
diff --git a/content/node.033f0b46-69f9-4ea0-9706-d930e67dd5d8.json b/content/node.033f0b46-69f9-4ea0-9706-d930e67dd5d8.json
index a6d1a96fc..5e473f299 100644
--- a/content/node.033f0b46-69f9-4ea0-9706-d930e67dd5d8.json
+++ b/content/node.033f0b46-69f9-4ea0-9706-d930e67dd5d8.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n My website is built with Sculpin - a static site generator written in PHP.<\/p>\n\n It uses a some of the same Symfony components as Drupal, uses Twig for templating and YAML for configuration, and has similar features like content types and taxonomies for structuring content.<\/p>\n\n When I first created my website it was on Drupal 6 and upgraded to Drupal 7 before I started to take an interest in static site generators and later using Jekyll, Sculpin and Astro (and Sculpin, again).<\/p>\n\n I enjoyed learning Sculpin and took it as an opportunity to learn Twig before Drupal 8, which I spoke about in the first Sculpin talk I gave<\/a>, at DrupalCamp North in July 2015.<\/p>\n\n I had three Git repositories, the current Sculpin one, the Astro version, and the original Sculpin version with its first commit in March 2015 - a few months before DrupalCamp North.<\/p>\n\n Static site generators keep the files in text files intead of a database, so I was wondering if it was possible to merge the repositories together and combine the information whilst keeping the same commit history so existing tags and contribtions would still apply to the original commits.<\/p>\n\n In short, I was able to do it by adding the old repositories as additional remotes and using the After fixing some minor merge conflicts, everything was merged successfully and I have [one repository containing 5,272 all commits][2], going back to 2015.<\/p>\n\n This makes it older than my dotfiles repository<\/a>, which I started in July 2015.<\/p>\n\n Similar to why I use Linux<\/a>, I believe in owning your own content rather than relying on third-party platforms, so having all my content and history in one repository is great.<\/p>\n\n And I learned something new about Git at the same time.<\/p>\n\n ",
+ "value": "\n My website is built with Sculpin - a static site generator written in PHP.<\/p>\n\n It uses a some of the same Symfony components as Drupal, uses Twig for templating and YAML for configuration, and has similar features like content types and taxonomies for structuring content.<\/p>\n\n When I first created my website it was on Drupal 6 and upgraded to Drupal 7 before I started to take an interest in static site generators and later using Jekyll, Sculpin and Astro (and Sculpin, again).<\/p>\n\n I enjoyed learning Sculpin and took it as an opportunity to learn Twig before Drupal 8, which I spoke about in the first Sculpin talk I gave<\/a>, at DrupalCamp North in July 2015.<\/p>\n\n I had three Git repositories, the current Sculpin one, the Astro version, and the original Sculpin version with its first commit in March 2015 - a few months before DrupalCamp North.<\/p>\n\n Static site generators keep the files in text files intead of a database, so I was wondering if it was possible to merge the repositories together and combine the information whilst keeping the same commit history so existing tags and contribtions would still apply to the original commits.<\/p>\n\n In short, I was able to do it by adding the old repositories as additional remotes and using the After fixing some minor merge conflicts, everything was merged successfully and I have [one repository containing 5,272 all commits][2], going back to 2015.<\/p>\n\n This makes it older than my dotfiles repository<\/a>, which I started in July 2015.<\/p>\n\n Similar to why I use Linux<\/a>, I believe in owning your own content rather than relying on third-party platforms, so having all my content and history in one repository is great.<\/p>\n\n And I learned something new about Git at the same time.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n My website is built with Sculpin - a static site generator written in PHP.<\/p>\n\n It uses a some of the same Symfony components as Drupal, uses Twig for templating and YAML for configuration, and has similar features like content types and taxonomies for structuring content.<\/p>\n\n When I first created my website it was on Drupal 6 and upgraded to Drupal 7 before I started to take an interest in static site generators and later using Jekyll, Sculpin and Astro (and Sculpin, again).<\/p>\n\n I enjoyed learning Sculpin and took it as an opportunity to learn Twig before Drupal 8, which I spoke about in the first Sculpin talk I gave<\/a>, at DrupalCamp North in July 2015.<\/p>\n\n I had three Git repositories, the current Sculpin one, the Astro version, and the original Sculpin version with its first commit in March 2015 - a few months before DrupalCamp North.<\/p>\n\n Static site generators keep the files in text files intead of a database, so I was wondering if it was possible to merge the repositories together and combine the information whilst keeping the same commit history so existing tags and contribtions would still apply to the original commits.<\/p>\n\n In short, I was able to do it by adding the old repositories as additional remotes and using the After fixing some minor merge conflicts, everything was merged successfully and I have [one repository containing 5,272 all commits][2], going back to 2015.<\/p>\n\n This makes it older than my dotfiles repository<\/a>, which I started in July 2015.<\/p>\n\n Similar to why I use Linux<\/a>, I believe in owning your own content rather than relying on third-party platforms, so having all my content and history in one repository is great.<\/p>\n\n And I learned something new about Git at the same time.<\/p>\n\n ",
+ "processed": "\n My website is built with Sculpin - a static site generator written in PHP.<\/p>\n\n It uses a some of the same Symfony components as Drupal, uses Twig for templating and YAML for configuration, and has similar features like content types and taxonomies for structuring content.<\/p>\n\n When I first created my website it was on Drupal 6 and upgraded to Drupal 7 before I started to take an interest in static site generators and later using Jekyll, Sculpin and Astro (and Sculpin, again).<\/p>\n\n I enjoyed learning Sculpin and took it as an opportunity to learn Twig before Drupal 8, which I spoke about in the first Sculpin talk I gave<\/a>, at DrupalCamp North in July 2015.<\/p>\n\n I had three Git repositories, the current Sculpin one, the Astro version, and the original Sculpin version with its first commit in March 2015 - a few months before DrupalCamp North.<\/p>\n\n Static site generators keep the files in text files intead of a database, so I was wondering if it was possible to merge the repositories together and combine the information whilst keeping the same commit history so existing tags and contribtions would still apply to the original commits.<\/p>\n\n In short, I was able to do it by adding the old repositories as additional remotes and using the After fixing some minor merge conflicts, everything was merged successfully and I have [one repository containing 5,272 all commits][2], going back to 2015.<\/p>\n\n This makes it older than my dotfiles repository<\/a>, which I started in July 2015.<\/p>\n\n Similar to why I use Linux<\/a>, I believe in owning your own content rather than relying on third-party platforms, so having all my content and history in one repository is great.<\/p>\n\n And I learned something new about Git at the same time.<\/p>\n\n ",
"summary": null
}
],
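The exact command is cut off in the post above, but the standard Git approach for combining repositories with separate histories looks something like this; the remote name, path and branch are assumptions.

```bash
# Add the old repository as an additional remote and fetch its history.
git remote add old-site /path/to/old-repository
git fetch old-site

# Histories with no common ancestor can only be merged with
# --allow-unrelated-histories; resolve any conflicts, then commit.
git merge --allow-unrelated-histories old-site/main
```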
diff --git a/content/node.037796b8-42d8-4ea4-abec-f8c8de0d97e8.json b/content/node.037796b8-42d8-4ea4-abec-f8c8de0d97e8.json
index 19f652c7a..632536ae6 100644
--- a/content/node.037796b8-42d8-4ea4-abec-f8c8de0d97e8.json
+++ b/content/node.037796b8-42d8-4ea4-abec-f8c8de0d97e8.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n This week's episode<\/a> of the Beyond Blocks podcast is live, where I speak with Panagiotis Moutsopoulos (vensires on Drupal.org) - Drupal Backend Developer at E-Sepia.<\/p>\n\n We discuss his first time DrupalCon and, more specifically, his session \"Drupal's Alternate Realities\" - a \"Birds of a Feather\" (BoF) session presenting some history, but mainly the different ways to tackle a problem in Drupal using different methodologies.<\/p>\n\n ",
+ "value": "\n This week's episode<\/a> of the Beyond Blocks podcast is live, where I speak with Panagiotis Moutsopoulos (vensires on Drupal.org) - Drupal Backend Developer at E-Sepia.<\/p>\n\n We discuss his first time DrupalCon and, more specifically, his session \"Drupal's Alternate Realities\" - a \"Birds of a Feather\" (BoF) session presenting some history, but mainly the different ways to tackle a problem in Drupal using different methodologies.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n This week's episode<\/a> of the Beyond Blocks podcast is live, where I speak with Panagiotis Moutsopoulos (vensires on Drupal.org) - Drupal Backend Developer at E-Sepia.<\/p>\n\n We discuss his first time DrupalCon and, more specifically, his session \"Drupal's Alternate Realities\" - a \"Birds of a Feather\" (BoF) session presenting some history, but mainly the different ways to tackle a problem in Drupal using different methodologies.<\/p>\n\n ",
+ "processed": "\n This week's episode<\/a> of the Beyond Blocks podcast is live, where I speak with Panagiotis Moutsopoulos (vensires on Drupal.org) - Drupal Backend Developer at E-Sepia.<\/p>\n\n We discuss his first time DrupalCon and, more specifically, his session \"Drupal's Alternate Realities\" - a \"Birds of a Feather\" (BoF) session presenting some history, but mainly the different ways to tackle a problem in Drupal using different methodologies.<\/p>\n\n ",
"summary": null
}
],
diff --git a/content/node.03c0a713-f32d-445b-9397-add68fbf2fe5.json b/content/node.03c0a713-f32d-445b-9397-add68fbf2fe5.json
index ad2fe1d13..724881b2a 100644
--- a/content/node.03c0a713-f32d-445b-9397-add68fbf2fe5.json
+++ b/content/node.03c0a713-f32d-445b-9397-add68fbf2fe5.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n As well as my laptop configuration<\/a>, local development environments<\/a> and production server<\/a>, I've also been using Nix for something else recently.<\/p>\n\n Setting up and configuring a Homelab on an old laptop.<\/p>\n\n I've been able to install and configure services like Jellyfin for playing video files, Immich for photo hosting and management, Gitea as my own Git server, Vaultwarden for securely storing my passwords and Traefik as a reverse proxy.<\/p>\n\n Each of these was very easy to configure with only a few lines of the Nix language and avoided a heavy use of Docker which seems common in most Homelab setups.<\/p>\n\n Next, I'd like to add home automation with Home Assistant and explore running a local DNS server within my network.<\/p>\n\n I'm looking forward to these, and seeing what else I can add to this setup using Nix and NixOS.<\/p>\n\n ",
+ "value": "\n As well as my laptop configuration<\/a>, local development environments<\/a> and production server<\/a>, I've also been using Nix for something else recently.<\/p>\n\n Setting up and configuring a Homelab on an old laptop.<\/p>\n\n I've been able to install and configure services like Jellyfin for playing video files, Immich for photo hosting and management, Gitea as my own Git server, Vaultwarden for securely storing my passwords and Traefik as a reverse proxy.<\/p>\n\n Each of these was very easy to configure with only a few lines of the Nix language and avoided a heavy use of Docker which seems common in most Homelab setups.<\/p>\n\n Next, I'd like to add home automation with Home Assistant and explore running a local DNS server within my network.<\/p>\n\n I'm looking forward to these, and seeing what else I can add to this setup using Nix and NixOS.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n As well as my laptop configuration<\/a>, local development environments<\/a> and production server<\/a>, I've also been using Nix for something else recently.<\/p>\n\n Setting up and configuring a Homelab on an old laptop.<\/p>\n\n I've been able to install and configure services like Jellyfin for playing video files, Immich for photo hosting and management, Gitea as my own Git server, Vaultwarden for securely storing my passwords and Traefik as a reverse proxy.<\/p>\n\n Each of these was very easy to configure with only a few lines of the Nix language and avoided a heavy use of Docker which seems common in most Homelab setups.<\/p>\n\n Next, I'd like to add home automation with Home Assistant and explore running a local DNS server within my network.<\/p>\n\n I'm looking forward to these, and seeing what else I can add to this setup using Nix and NixOS.<\/p>\n\n ",
+ "processed": "\n As well as my laptop configuration<\/a>, local development environments<\/a> and production server<\/a>, I've also been using Nix for something else recently.<\/p>\n\n Setting up and configuring a Homelab on an old laptop.<\/p>\n\n I've been able to install and configure services like Jellyfin for playing video files, Immich for photo hosting and management, Gitea as my own Git server, Vaultwarden for securely storing my passwords and Traefik as a reverse proxy.<\/p>\n\n Each of these was very easy to configure with only a few lines of the Nix language and avoided a heavy use of Docker which seems common in most Homelab setups.<\/p>\n\n Next, I'd like to add home automation with Home Assistant and explore running a local DNS server within my network.<\/p>\n\n I'm looking forward to these, and seeing what else I can add to this setup using Nix and NixOS.<\/p>\n\n ",
"summary": null
}
],
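For context, enabling a service on NixOS is often a single option such as `services.jellyfin.enable = true;`, after which the machine is rebuilt from its configuration. A hedged example, assuming a flake-based setup with a hypothetical `homelab` attribute:

```bash
# Rebuild the system from its Nix configuration and switch to it.
# The ".#homelab" flake attribute is a hypothetical name.
sudo nixos-rebuild switch --flake .#homelab
```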
diff --git a/content/node.050a5113-59bc-461e-ac78-420e16055303.json b/content/node.050a5113-59bc-461e-ac78-420e16055303.json
index 2f5d46831..f35e3b833 100644
--- a/content/node.050a5113-59bc-461e-ac78-420e16055303.json
+++ b/content/node.050a5113-59bc-461e-ac78-420e16055303.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n This week on the Beyond Blocks podcast<\/a>, I'm joined by Dan Leech - a PHP Developer and open-source project creator.<\/p>\n\n He and I recently gave talks at the PHP South West meetup, where Dan introduced a new project - PHP-TUI - for building terminal user interfaces (TUIs) with PHP.<\/p>\n\n I use one of Dan's other open-source projects - Phpactor - within Neovim, and he also presented at PHP South Wales about PHPBench, so it was great to discuss and learn more about these in this episode.<\/p>\n\n Listen to the episode now<\/a>, and I'll be back with more in the New Year.<\/p>\n\n ",
+ "value": "\n This week on the Beyond Blocks podcast<\/a>, I'm joined by Dan Leech - a PHP Developer and open-source project creator.<\/p>\n\n He and I recently gave talks at the PHP South West meetup, where Dan introduced a new project - PHP-TUI - for building terminal user interfaces (TUIs) with PHP.<\/p>\n\n I use one of Dan's other open-source projects - Phpactor - within Neovim, and he also presented at PHP South Wales about PHPBench, so it was great to discuss and learn more about these in this episode.<\/p>\n\n Listen to the episode now<\/a>, and I'll be back with more in the New Year.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n This week on the Beyond Blocks podcast<\/a>, I'm joined by Dan Leech - a PHP Developer and open-source project creator.<\/p>\n\n He and I recently gave talks at the PHP South West meetup, where Dan introduced a new project - PHP-TUI - for building terminal user interfaces (TUIs) with PHP.<\/p>\n\n I use one of Dan's other open-source projects - Phpactor - within Neovim, and he also presented at PHP South Wales about PHPBench, so it was great to discuss and learn more about these in this episode.<\/p>\n\n Listen to the episode now<\/a>, and I'll be back with more in the New Year.<\/p>\n\n ",
+ "processed": "\n This week on the Beyond Blocks podcast<\/a>, I'm joined by Dan Leech - a PHP Developer and open-source project creator.<\/p>\n\n He and I recently gave talks at the PHP South West meetup, where Dan introduced a new project - PHP-TUI - for building terminal user interfaces (TUIs) with PHP.<\/p>\n\n I use one of Dan's other open-source projects - Phpactor - within Neovim, and he also presented at PHP South Wales about PHPBench, so it was great to discuss and learn more about these in this episode.<\/p>\n\n Listen to the episode now<\/a>, and I'll be back with more in the New Year.<\/p>\n\n ",
"summary": null
}
],
diff --git a/content/node.05232b81-d945-4112-99bf-d1adb2552428.json b/content/node.05232b81-d945-4112-99bf-d1adb2552428.json
index c416e5fc6..a210e7b52 100644
--- a/content/node.05232b81-d945-4112-99bf-d1adb2552428.json
+++ b/content/node.05232b81-d945-4112-99bf-d1adb2552428.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n I've noticed a lot of Developers recently adopting SQLite for their database and I wonder why this is.<\/p>\n\n Laravel changed their default database to SQLite for local development.<\/p>\n\n It simplifies the development environment as there's no need for a separate database like MySQL or MariaDB but, if you'll be using one of those in production, won't that cause more issues when you migrate your local application?<\/p>\n\n Drupal supports using SQLite, but, other than for my automated testing course<\/a>, or when running automated tests, I've always used a MySQL or MariaDB database.<\/p>\n\n Maybe this is something to keep an eye on and potentially use more for some scenarios in the future.<\/p>\n\n ",
+ "value": "\n I've noticed a lot of Developers recently adopting SQLite for their database and I wonder why this is.<\/p>\n\n Laravel changed their default database to SQLite for local development.<\/p>\n\n It simplifies the development environment as there's no need for a separate database like MySQL or MariaDB but, if you'll be using one of those in production, won't that cause more issues when you migrate your local application?<\/p>\n\n Drupal supports using SQLite, but, other than for my automated testing course<\/a>, or when running automated tests, I've always used a MySQL or MariaDB database.<\/p>\n\n Maybe this is something to keep an eye on and potentially use more for some scenarios in the future.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n I've noticed a lot of Developers recently adopting SQLite for their database and I wonder why this is.<\/p>\n\n Laravel changed their default database to SQLite for local development.<\/p>\n\n It simplifies the development environment as there's no need for a separate database like MySQL or MariaDB but, if you'll be using one of those in production, won't that cause more issues when you migrate your local application?<\/p>\n\n Drupal supports using SQLite, but, other than for my automated testing course<\/a>, or when running automated tests, I've always used a MySQL or MariaDB database.<\/p>\n\n Maybe this is something to keep an eye on and potentially use more for some scenarios in the future.<\/p>\n\n ",
+ "processed": "\n I've noticed a lot of Developers recently adopting SQLite for their database and I wonder why this is.<\/p>\n\n Laravel changed their default database to SQLite for local development.<\/p>\n\n It simplifies the development environment as there's no need for a separate database like MySQL or MariaDB but, if you'll be using one of those in production, won't that cause more issues when you migrate your local application?<\/p>\n\n Drupal supports using SQLite, but, other than for my automated testing course<\/a>, or when running automated tests, I've always used a MySQL or MariaDB database.<\/p>\n\n Maybe this is something to keep an eye on and potentially use more for some scenarios in the future.<\/p>\n\n ",
"summary": null
}
],
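As a rough illustration of how small the switch is in Drupal, this is a minimal sketch of pointing a site at SQLite in settings.php - the class name and database path follow common Drupal conventions, but the path is an assumption and any writable location would work:

    // settings.php - a minimal sketch, not taken from the post.
    // Switch the default connection to SQLite; no separate server needed.
    $databases['default']['default'] = [
      'driver' => 'sqlite',
      'database' => 'sites/default/files/.ht.sqlite',
    ];

The equivalent MySQL or MariaDB connection needs a host, credentials and a running server, which is exactly the moving part the SQLite default removes - and also the difference that can surface when the application later runs against the production database.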
diff --git a/content/node.066297b1-ab00-4769-afe1-d3831ed2a654.json b/content/node.066297b1-ab00-4769-afe1-d3831ed2a654.json
index 1d1993b95..8910b9fc7 100644
--- a/content/node.066297b1-ab00-4769-afe1-d3831ed2a654.json
+++ b/content/node.066297b1-ab00-4769-afe1-d3831ed2a654.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n As well as writing comments first<\/a>, when writing tests, I sometimes like to write my tests backwards and start by writing the assertions first.<\/p>\n\n I know what I want to assert in the test, so it's an easy place to start.<\/p>\n\n I'll run it, see the error, fix it and continue working backwards.<\/p>\n\n For example, I could start with this:<\/p>\n\n This test will fail when I run it, but it makes me think about what I need to do to fix the error and how to do so in the best way.<\/p>\n\n In this case, I need to make a request to the page that should render the text:<\/p>\n\n This will still fail, as I also need to create the required posts:<\/p>\n\n Now the test passes.<\/p>\n\n Doing test-driven development keeps my code clean and minimal, and I find this approach keeps my test clean, too.<\/p>\n\n ",
+ "value": "\n As well as writing comments first<\/a>, when writing tests, I sometimes like to write my tests backwards and start by writing the assertions first.<\/p>\n\n I know what I want to assert in the test, so it's an easy place to start.<\/p>\n\n I'll run it, see the error, fix it and continue working backwards.<\/p>\n\n For example, I could start with this:<\/p>\n\n This test will fail when I run it, but it makes me think about what I need to do to fix the error and how to do so in the best way.<\/p>\n\n In this case, I need to make a request to the page that should render the text:<\/p>\n\n This will still fail, as I also need to create the required posts:<\/p>\n\n Now the test passes.<\/p>\n\n Doing test-driven development keeps my code clean and minimal, and I find this approach keeps my test clean, too.<\/p>\n\n ",
"format": "full_html",
- "processed": "\n As well as writing comments first<\/a>, when writing tests, I sometimes like to write my tests backwards and start by writing the assertions first.<\/p>\n\n I know what I want to assert in the test, so it's an easy place to start.<\/p>\n\n I'll run it, see the error, fix it and continue working backwards.<\/p>\n\n For example, I could start with this:<\/p>\n\n This test will fail when I run it, but it makes me think about what I need to do to fix the error and how to do so in the best way.<\/p>\n\n In this case, I need to make a request to the page that should render the text:<\/p>\n\n This will still fail, as I also need to create the required posts:<\/p>\n\n Now the test passes.<\/p>\n\n Doing test-driven development keeps my code clean and minimal, and I find this approach keeps my test clean, too.<\/p>\n\n ",
+ "processed": "\n As well as writing comments first<\/a>, when writing tests, I sometimes like to write my tests backwards and start by writing the assertions first.<\/p>\n\n I know what I want to assert in the test, so it's an easy place to start.<\/p>\n\n I'll run it, see the error, fix it and continue working backwards.<\/p>\n\n For example, I could start with this:<\/p>\n\n This test will fail when I run it, but it makes me think about what I need to do to fix the error and how to do so in the best way.<\/p>\n\n In this case, I need to make a request to the page that should render the text:<\/p>\n\n This will still fail, as I also need to create the required posts:<\/p>\n\n Now the test passes.<\/p>\n\n Doing test-driven development keeps my code clean and minimal, and I find this approach keeps my test clean, too.<\/p>\n\n ",
"summary": null
}
],
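The PostBuilder class used in the final test isn't shown in the post. This is a minimal sketch of what such a builder could look like - the class name and methods come from the test above, while the 'post' node type, the published status and everything else are assumptions:

    <?php

    use Drupal\node\Entity\Node;
    use Drupal\node\NodeInterface;

    // A minimal sketch of the PostBuilder used in the test above.
    // Assumes a 'post' node type; anything not in the test is a guess.
    final class PostBuilder {

      /** @var array<string, mixed> */
      private array $values = [
        'type' => 'post',
        'status' => NodeInterface::PUBLISHED,
      ];

      public static function create(): self {
        return new self();
      }

      public function setTitle(string $title): self {
        $this->values['title'] = $title;

        return $this;
      }

      public function getPost(): NodeInterface {
        // Create and save the node so it appears on the page under test.
        $node = Node::create($this->values);
        $node->save();

        return $node;
      }

    }

Keeping sensible defaults inside the builder is what lets the test create a valid, published post in a single line.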
diff --git a/content/node.079f2e09-0827-458e-91d3-6dd5b8b80c56.json b/content/node.079f2e09-0827-458e-91d3-6dd5b8b80c56.json
index f1147425c..6646cbdcd 100644
--- a/content/node.079f2e09-0827-458e-91d3-6dd5b8b80c56.json
+++ b/content/node.079f2e09-0827-458e-91d3-6dd5b8b80c56.json
@@ -82,9 +82,9 @@
],
"body": [
{
- "value": "\n It's shown in the examples of the conventional commits specification<\/a> as part of the optional footer data.<\/p>\n\n But is it useful?<\/p>\n\n It can be if your issue tracker is linked to your Git repository and you can click the issue ID in a commit message and see the issue.<\/p>\n\n But, how often do teams change issue-tracking software or the project is passed to a different company that uses a different issue tracker?<\/p>\n\n That makes the issue IDs that reference the old IDs useless as no one has access to the issues it references.<\/p>\n\n I'd recommend putting as much information in the commit message itself and not relying on it being in an external source, like an issue tracker.<\/p>\n\n The Git log and commit messages will remain even if a different issue tracker is used, or a different team starts working on the project, and that additional information isn't lost.<\/p>\n\n I'm not against putting the issue ID in the commit message but don't do it instead of writing a descriptive commit message.<\/p>\n\n ",
+ "value": "\n php<\/code> but sometimes it was
php-fpm<\/code>. In the templated file, it's always named
php<\/code>.<\/p>\n\n
mysql<\/code> or
mariadb<\/code> for the database service and
nginx<\/code> or
caddy<\/code> depending on which server was being used. These are now always
database<\/code> and
web<\/code> respectively.<\/p>\n\n
php<\/code> but sometimes it was
php-fpm<\/code>. In the templated file, it's always named
php<\/code>.<\/p>\n\n
mysql<\/code> or
mariadb<\/code> for the database service and
nginx<\/code> or
caddy<\/code> depending on which server was being used. These are now always
database<\/code> and
web<\/code> respectively.<\/p>\n\n
php<\/code> but sometimes it was
php-fpm<\/code>. In the templated file, it's always named
php<\/code>.<\/p>\n\n
mysql<\/code> or
mariadb<\/code> for the database service and
nginx<\/code> or
caddy<\/code> depending on which server was being used. These are now always
database<\/code> and
web<\/code> respectively.<\/p>\n\n
php<\/code> but sometimes it was
php-fpm<\/code>. In the templated file, it's always named
php<\/code>.<\/p>\n\n
mysql<\/code> or
mariadb<\/code> for the database service and
nginx<\/code> or
caddy<\/code> depending on which server was being used. These are now always
database<\/code> and
web<\/code> respectively.<\/p>\n\n
--allow-unrelated-histories<\/code> option for git merge<\/a>.<\/p>\n\n
--allow-unrelated-histories<\/code> option for git merge<\/a>.<\/p>\n\n
--allow-unrelated-histories<\/code> option for git merge<\/a>.<\/p>\n\n
--allow-unrelated-histories<\/code> option for git merge<\/a>.<\/p>\n\n
public function testOnlyPostNodesAreShown(): void {\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n PostBuilder::create()->setTitle('Post one')->getPost();\n PostBuilder::create()->setTitle('Post two')->getPost();\n\n $this->createNode([\n 'title' => 'This is not a post',\n 'type' => 'page',\n ]);\n\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n PostBuilder::create()->setTitle('Post one')->getPost();\n PostBuilder::create()->setTitle('Post two')->getPost();\n\n $this->createNode([\n 'title' => 'This is not a post',\n 'type' => 'page',\n ]);\n\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n PostBuilder::create()->setTitle('Post one')->getPost();\n PostBuilder::create()->setTitle('Post two')->getPost();\n\n $this->createNode([\n 'title' => 'This is not a post',\n 'type' => 'page',\n ]);\n\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
public function testOnlyPostNodesAreShown(): void {\n PostBuilder::create()->setTitle('Post one')->getPost();\n PostBuilder::create()->setTitle('Post two')->getPost();\n\n $this->createNode([\n 'title' => 'This is not a post',\n 'type' => 'page',\n ]);\n\n $this->drupalGet('\/blog');\n\n $assert = $this->assertSession();\n $assert->pageTextContains('Post one');\n $assert->pageTextContains('Post two');\n $assert->pageTextNotContains('This is not a post');\n}\n<\/code><\/pre>\n\n
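For example, a message can carry the context itself and still keep the issue ID as an optional footer, as the conventional commits specification shows - the details below are hypothetical:

    fix(orders): prevent duplicate order submissions

    Orders could be created twice if the form was submitted quickly in
    succession. Disable the submit button after the first click and add a
    server-side check for an existing order with the same reference.

    Refs: #1234

If the tracker is ever replaced, the footer stops resolving, but the message still explains what changed and why.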