uuid:
  - value: 01f3b831-e981-471f-aef9-076b054e3495
langcode:
  - value: en
type:
  - target_id: daily_email
    target_type: node_type
    target_uuid: 8bde1f2f-eef9-4f2d-ae9c-96921f8193d7
revision_timestamp:
  - value: '2025-05-11T09:00:36+00:00'
revision_uid:
  - target_type: user
    target_uuid: b8966985-d4b2-42a7-a319-2e94ccfbb849
revision_log: { }
status:
  - value: true
uid:
  - target_type: user
    target_uuid: b8966985-d4b2-42a7-a319-2e94ccfbb849
title:
  - value: |
      Testing is all about confidence
created:
  - value: '2023-07-24T00:00:00+00:00'
changed:
  - value: '2025-05-11T09:00:36+00:00'
promote:
  - value: false
sticky:
  - value: false
default_langcode:
  - value: true
revision_translation_affected:
  - value: true
path:
  - alias: /daily/2023/07/24/testing-is-all-about-confidence
    langcode: en
body:
  - value: |

Testing - manual or automated - is about building confidence.

If we deploy this change or release this feature, are we confident it will work as expected and not cause regressions elsewhere?

What if someone asked you to rate that confidence on a scale of one to ten?

From an automated perspective, have you written enough tests for the feature to be confident it works?

If you're fixing a bug, do you have a test that reproduces it - one that failed before the fix and passes now that you've added it?
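
As a rough illustration, here's what that kind of test can look like - a minimal sketch in Python with pytest, where parse_price and its empty-string edge case are hypothetical stand-ins for whatever the real bug is:

```python
def parse_price(value: str) -> int:
    """Hypothetical function that had the bug: it used to call int()
    directly, so an empty string raised a ValueError."""
    if value.strip() == "":
        return 0  # the fix: treat an empty string as a zero price
    return int(value)


def test_parse_price_handles_empty_string():
    # Reproduces the reported bug: this assertion failed before the fix
    # above existed and passes now, so the bug can't quietly come back
    # in a later change.
    assert parse_price("") == 0
```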

Do the tests have enough assertions, and have you covered enough use cases and scenarios?
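
One way to cover more scenarios without duplicating test code is to parametrise the cases - again a hypothetical sketch in Python with pytest, where is_valid_username stands in for whatever feature you're testing:

```python
import pytest


def is_valid_username(name: str) -> bool:
    """Hypothetical rule under test: 3-20 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 20


@pytest.mark.parametrize(
    ("name", "expected"),
    [
        ("alice", True),       # a typical, valid name
        ("ab", False),         # too short
        ("a" * 21, False),     # too long
        ("bob smith", False),  # contains a space
        ("", False),           # empty input
    ],
)
def test_username_validation_scenarios(name, expected):
    # Each tuple above is a separate scenario with its own assertion.
    assert is_valid_username(name) is expected
```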

Here's the thing:

You can utilise code coverage metrics, but there's no hard rule that says a feature will work once a certain percentage of the code is covered. Something with 100% coverage can still contain bugs.
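
To make that concrete, here's a small hypothetical sketch in Python: the single test below executes every line of apply_discount, so line coverage reports 100%, yet the bug survives because the one case tested happens to hide it.

```python
def apply_discount(price: float, percentage: float) -> float:
    # Bug: the percentage should be divided by 100 but isn't, so a
    # "20" percent discount actually removes 20x the intended amount.
    return price - price * percentage


def test_apply_discount():
    # This runs every line of apply_discount, so line coverage is 100%,
    # but a 0% discount is the one case where the bug is invisible.
    assert apply_discount(100.0, 0) == 100.0
```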

For me, it's about the answer to the question:

If we deploy this change, how confident are you that it will work as expected?

    format: full_html
    summary: null
field_daily_email_cta: { }