{ "uuid": [ { "value": "01f3b831-e981-471f-aef9-076b054e3495" } ], "langcode": [ { "value": "en" } ], "type": [ { "target_id": "daily_email", "target_type": "node_type", "target_uuid": "8bde1f2f-eef9-4f2d-ae9c-96921f8193d7" } ], "revision_timestamp": [ { "value": "2025-05-11T09:00:36+00:00" } ], "revision_uid": [ { "target_type": "user", "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849" } ], "revision_log": [], "status": [ { "value": true } ], "uid": [ { "target_type": "user", "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849" } ], "title": [ { "value": "Testing is all about confidence\n" } ], "created": [ { "value": "2023-07-24T00:00:00+00:00" } ], "changed": [ { "value": "2025-05-11T09:00:36+00:00" } ], "promote": [ { "value": false } ], "sticky": [ { "value": false } ], "default_langcode": [ { "value": true } ], "revision_translation_affected": [ { "value": true } ], "path": [ { "alias": "\/daily\/2023\/07\/24\/testing-is-all-about-confidence", "langcode": "en" } ], "body": [ { "value": "\n
Testing - manual or automated - is about building confidence.

If we deploy this change or release this feature, are we confident it will work as expected and not cause regressions elsewhere?

What if someone asked you to rate that confidence on a scale of one to ten?

From an automated testing perspective, have you written enough tests for the feature to be confident it works?

If you're fixing a bug, do you have a test that reproduces it - one that was failing before and now passes since you added the fix?

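As a minimal sketch of what that looks like (in Python with pytest, using a hypothetical slugify() helper that mishandled empty titles before the fix):

```python
def slugify(title: str) -> str:
    # The fix: handle empty titles explicitly (this path was missing before).
    if not title:
        return ""
    return title.strip().lower().replace(" ", "-")


def test_slugify_handles_empty_title():
    # This test failed before the fix and passes now, so it guards
    # against the bug quietly coming back in a future change.
    assert slugify("") == ""


def test_slugify_converts_title_to_slug():
    # A couple of assertions covering the expected scenarios.
    assert slugify("Testing Is All About Confidence") == "testing-is-all-about-confidence"
    assert slugify("  Padded Title  ") == "padded-title"
```
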
Do the tests have enough assertions, and have you covered enough use cases and scenarios?

You can utilise code coverage metrics, but there's no hard rule that says a feature will work once a certain percentage of it is covered. Something with 100% coverage can still contain bugs.

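To illustrate that last point, a contrived Python sketch: the test below executes every line of the function, so line coverage reports 100%, yet a bug survives untouched:

```python
def apply_discount(price: float, percent: float) -> float:
    # Bug: nothing stops percent from exceeding 100, which
    # produces a negative price.
    discounted = price - (price * percent / 100)
    return round(discounted, 2)


def test_apply_discount():
    # This single assertion executes every line above, so a coverage
    # report shows 100% - but apply_discount(100.0, 150) still
    # returns -50.0, a scenario no assertion ever checks.
    assert apply_discount(100.0, 10) == 90.0
```
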
For me, it's about the answer to the question:

If we deploy this change, how confident are you that it will work as expected?