{
  "uuid": [
    {
      "value": "da4dbacd-671b-45ee-b70e-58aa16de7d8f"
    }
  ],
  "langcode": [
    {
      "value": "en"
    }
  ],
  "type": [
    {
      "target_id": "daily_email",
      "target_type": "node_type",
      "target_uuid": "8bde1f2f-eef9-4f2d-ae9c-96921f8193d7"
    }
  ],
  "revision_timestamp": [
    {
      "value": "2025-05-11T09:00:32+00:00"
    }
  ],
  "revision_uid": [
    {
      "target_type": "user",
      "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
    }
  ],
  "revision_log": [],
  "status": [
    {
      "value": true
    }
  ],
  "uid": [
    {
      "target_type": "user",
      "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
    }
  ],
  "title": [
    {
      "value": "Increasing test coverage with regression tests\n"
    }
  ],
  "created": [
    {
      "value": "2023-09-18T00:00:00+00:00"
    }
  ],
  "changed": [
    {
      "value": "2025-05-11T09:00:32+00:00"
    }
  ],
  "promote": [
    {
      "value": false
    }
  ],
  "sticky": [
    {
      "value": false
    }
  ],
  "default_langcode": [
    {
      "value": true
    }
  ],
  "revision_translation_affected": [
    {
      "value": true
    }
  ],
  "path": [
    {
      "alias": "\/daily\/2023\/09\/18\/increasing-test-coverage-with-regression-tests",
      "langcode": "en"
    }
  ],
  "body": [
    {
      "value": "\n  <p>Automated test suites don't tell you everything works - they tell you what you've tested isn't broken.<\/p>\n\n<p>Having tests doesn't mean your code is bug-free. There could be edge cases or scenarios you haven't tested for that contain bugs, even though your test suite is passing.<\/p>\n\n<h2 id=\"what-do-we-do%3F\">What do we do?<\/h2>\n\n<p>When you find a bug, try replicating it within an automated test before attempting to fix it.<\/p>\n\n<p>Once you have a failing test and can replicate the issue, go ahead and fix it.<\/p>\n\n<p>If the test passes, you know you've fixed the bug and solved the issue.<\/p>\n\n<h2 id=\"here%27s-the-thing\">Here's the thing<\/h2>\n\n<p>Now you have this test, you cannot re-add the bug again without the test failing. You've prevented anyone from accidentally re-introducing it in the future and increased your test coverage.<\/p>\n\n  ",
      "format": "full_html",
      "processed": "\n  <p>Automated test suites don't tell you everything works - they tell you what you've tested isn't broken.<\/p>\n\n<p>Having tests doesn't mean your code is bug-free. There could be edge cases or scenarios you haven't tested for that contain bugs, even though your test suite is passing.<\/p>\n\n<h2 id=\"what-do-we-do%3F\">What do we do?<\/h2>\n\n<p>When you find a bug, try replicating it within an automated test before attempting to fix it.<\/p>\n\n<p>Once you have a failing test and can replicate the issue, go ahead and fix it.<\/p>\n\n<p>If the test passes, you know you've fixed the bug and solved the issue.<\/p>\n\n<h2 id=\"here%27s-the-thing\">Here's the thing<\/h2>\n\n<p>Now you have this test, you cannot re-add the bug again without the test failing. You've prevented anyone from accidentally re-introducing it in the future and increased your test coverage.<\/p>\n\n  ",
      "summary": null
    }
  ]
}