{
    "uuid": [
        {
            "value": "3c01a9ce-5e01-4edb-9a43-2fd1a02ee8aa"
        }
    ],
    "langcode": [
        {
            "value": "en"
        }
    ],
    "type": [
        {
            "target_id": "daily_email",
            "target_type": "node_type",
            "target_uuid": "8bde1f2f-eef9-4f2d-ae9c-96921f8193d7"
        }
    ],
    "revision_timestamp": [
        {
            "value": "2025-05-11T09:00:36+00:00"
        }
    ],
    "revision_uid": [
        {
            "target_type": "user",
            "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
        }
    ],
    "revision_log": [],
    "status": [
        {
            "value": true
        }
    ],
    "uid": [
        {
            "target_type": "user",
            "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
        }
    ],
    "title": [
        {
            "value": "Working backwards\n"
        }
    ],
    "created": [
        {
            "value": "2023-07-25T00:00:00+00:00"
        }
    ],
    "changed": [
        {
            "value": "2025-05-11T09:00:36+00:00"
        }
    ],
    "promote": [
        {
            "value": false
        }
    ],
    "sticky": [
        {
            "value": false
        }
    ],
    "default_langcode": [
        {
            "value": true
        }
    ],
    "revision_translation_affected": [
        {
            "value": true
        }
    ],
    "path": [
        {
            "alias": "\/daily\/2023\/07\/25\/working-backwards",
            "langcode": "en"
        }
    ],
    "body": [
        {
"value": "\n <p>Today, I did a show-and-tell session with my team where I demonstrated an integration I've been working on for a few months and recently released to production.<\/p>\n\n<p>The simplified workflow is we collate some data, send it to a third-party system for translation, receive the translated file and import the translations into Drupal's translation system.<\/p>\n\n<h2 id=\"where-did-i-start%3F\">Where did I start?<\/h2>\n\n<p>The first thing I did was not to collate the data and generate the file but to send a minimal, hard-coded version of the contents to the third-party system.<\/p>\n\n<p>I'd have started with the code to import the translated strings if I hadn't already done this in an earlier spike.<\/p>\n\n<p>This allowed me to send the file, check the response from the third party and ensure they could work with that file type and my proposed content structure.<\/p>\n\n<p>If needed, I could have changed direction and avoided investing much time. This wouldn't have been the case if I'd left this until the end of the process.<\/p>\n\n<p>I also have a working end-to-end test, and I can send a file and get the response I need.<\/p>\n\n<p>What if I'd written all the code and discovered something wouldn't work?<\/p>\n\n<h2 id=\"what-next%3F\">What next?<\/h2>\n\n<p>Now, I can work backwards and start to make the content dynamic.<\/p>\n\n<p>I can introduce more authentic and complicated data, remove the hard-coded test data, and check that things still work.<\/p>\n\n<p>I still have the quick feedback loop, as I can always send the data to the third-party system and verify things work as I iterate on my implementation.<\/p>\n\n<p>With the main pieces of the puzzle in place, I can continue building and filling in the others.<\/p>\n\n<p>Once I have a complete feature with all the pieces in place, I can refactor as needed.<\/p>\n\n<p>I still have the same finished puzzle - I just built it in a different order.<\/p>\n\n ",
            "format": "full_html",
"processed": "\n <p>Today, I did a show-and-tell session with my team where I demonstrated an integration I've been working on for a few months and recently released to production.<\/p>\n\n<p>The simplified workflow is we collate some data, send it to a third-party system for translation, receive the translated file and import the translations into Drupal's translation system.<\/p>\n\n<h2 id=\"where-did-i-start%3F\">Where did I start?<\/h2>\n\n<p>The first thing I did was not to collate the data and generate the file but to send a minimal, hard-coded version of the contents to the third-party system.<\/p>\n\n<p>I'd have started with the code to import the translated strings if I hadn't already done this in an earlier spike.<\/p>\n\n<p>This allowed me to send the file, check the response from the third party and ensure they could work with that file type and my proposed content structure.<\/p>\n\n<p>If needed, I could have changed direction and avoided investing much time. This wouldn't have been the case if I'd left this until the end of the process.<\/p>\n\n<p>I also have a working end-to-end test, and I can send a file and get the response I need.<\/p>\n\n<p>What if I'd written all the code and discovered something wouldn't work?<\/p>\n\n<h2 id=\"what-next%3F\">What next?<\/h2>\n\n<p>Now, I can work backwards and start to make the content dynamic.<\/p>\n\n<p>I can introduce more authentic and complicated data, remove the hard-coded test data, and check that things still work.<\/p>\n\n<p>I still have the quick feedback loop, as I can always send the data to the third-party system and verify things work as I iterate on my implementation.<\/p>\n\n<p>With the main pieces of the puzzle in place, I can continue building and filling in the others.<\/p>\n\n<p>Once I have a complete feature with all the pieces in place, I can refactor as needed.<\/p>\n\n<p>I still have the same finished puzzle - I just built it in a different order.<\/p>\n\n ",
            "summary": null
        }
    ],
    "feeds_item": [
        {
            "imported": "1970-01-01T00:32:50+00:00",
            "guid": null,
            "hash": "4f3f3eed089d1a4af0d75429f2731bfe",
            "target_type": "feeds_feed",
            "target_uuid": "90c85284-7ca8-4074-9178-97ff8384fe76"
        }
    ]
}