oliverdavies.uk/content/node.01f3b831-e981-471f-aef9-076b054e3495.yml

uuid:
- value: 01f3b831-e981-471f-aef9-076b054e3495
langcode:
- value: en
type:
- target_id: daily_email
  target_type: node_type
  target_uuid: 8bde1f2f-eef9-4f2d-ae9c-96921f8193d7
revision_timestamp:
- value: '2025-05-11T09:00:36+00:00'
revision_uid:
- target_type: user
  target_uuid: b8966985-d4b2-42a7-a319-2e94ccfbb849
revision_log: { }
status:
- value: true
uid:
- target_type: user
  target_uuid: b8966985-d4b2-42a7-a319-2e94ccfbb849
title:
- value: 'Testing is all about confidence'
created:
- value: '2023-07-24T00:00:00+00:00'
changed:
- value: '2025-05-11T09:00:36+00:00'
promote:
- value: false
sticky:
- value: false
default_langcode:
- value: true
revision_translation_affected:
- value: true
path:
- alias: /daily/2023/07/24/testing-is-all-about-confidence
  langcode: en
body:
- value: |
    <p>Testing - manual or automated - is about building confidence.</p>
    <p>If we deploy this change or release this feature, are we confident it will work as expected and not cause regressions elsewhere?</p>
    <p>What if someone asked you to rate that confidence on a scale of one to ten?</p>
    <p>From an automated perspective, have you written enough tests for the feature to be confident it works?</p>
    <p>If you're fixing a bug, do you have a test that reproduces it - one that failed before your fix and passes now that you've applied it?</p>
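    <p>As a rough sketch of what that looks like - in Python with pytest, though the language doesn't matter, and the slugify() helper and its bug are made up for illustration:</p>
    <pre><code>import re


def slugify(title: str) -> str:
    """Build a URL slug from a post title."""
    # The fix: drop apostrophes instead of turning them into hyphens.
    title = title.replace("'", "")
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_drops_apostrophes():
    # Reproduces the bug: before the fix above, this failed with
    # "here-s-the-thing"; it passes now the fix is in place.
    assert slugify("Here's the thing") == "heres-the-thing"
</code></pre>
    <p>A test like this documents the bug, proves the fix, and guards against the regression coming back.</p>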
    <p>Do the tests have enough assertions, and have you covered enough use cases and scenarios?</p>
    <h2 id="here%27s-the-thing">Here's the thing</h2>
    <p>You can utilise code coverage metrics, but no hard rule says that a feature will work once X percent of it is covered. Something with 100% coverage can still contain bugs.</p>
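    <p>A made-up example: the test below executes every line of a hypothetical average() function, so a coverage tool reports 100% - yet average([]) still crashes with a ZeroDivisionError.</p>
    <pre><code>def average(numbers: list[float]) -> float:
    # The single statement below is executed by the test, so line
    # coverage is 100% - but the empty-list case is still a bug.
    return sum(numbers) / len(numbers)


def test_average():
    assert average([2.0, 4.0]) == 3.0
</code></pre>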
    <p>For me, it's about the answer to the question:</p>
    <p>If we deploy this change, how confident are you that it will work as expected?</p>
  format: full_html
  processed: |
    <p>Testing - manual or automated - is about building confidence.</p>
    <p>If we deploy this change or release this feature, are we confident it will work as expected and not cause regressions elsewhere?</p>
    <p>What if someone asked you to rate that confidence on a scale of one to ten?</p>
    <p>From an automated perspective, have you written enough tests for the feature to be confident it works?</p>
    <p>If you're fixing a bug, do you have a test that reproduces it - one that failed before your fix and passes now that you've applied it?</p>
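    <p>As a rough sketch of what that looks like - in Python with pytest, though the language doesn't matter, and the slugify() helper and its bug are made up for illustration:</p>
    <pre><code>import re


def slugify(title: str) -> str:
    """Build a URL slug from a post title."""
    # The fix: drop apostrophes instead of turning them into hyphens.
    title = title.replace("'", "")
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_drops_apostrophes():
    # Reproduces the bug: before the fix above, this failed with
    # "here-s-the-thing"; it passes now the fix is in place.
    assert slugify("Here's the thing") == "heres-the-thing"
</code></pre>
    <p>A test like this documents the bug, proves the fix, and guards against the regression coming back.</p>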
    <p>Do the tests have enough assertions, and have you covered enough use cases and scenarios?</p>
    <h2 id="here%27s-the-thing">Here's the thing</h2>
    <p>You can utilise code coverage metrics, but no hard rule says that a feature will work once X percent of it is covered. Something with 100% coverage can still contain bugs.</p>
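    <p>A made-up example: the test below executes every line of a hypothetical average() function, so a coverage tool reports 100% - yet average([]) still crashes with a ZeroDivisionError.</p>
    <pre><code>def average(numbers: list[float]) -> float:
    # The single statement below is executed by the test, so line
    # coverage is 100% - but the empty-list case is still a bug.
    return sum(numbers) / len(numbers)


def test_average():
    assert average([2.0, 4.0]) == 3.0
</code></pre>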
    <p>For me, it's about the answer to the question:</p>
    <p>If we deploy this change, how confident are you that it will work as expected?</p>
  summary: null
field_daily_email_cta: { }