{
    "uuid": [
        {
            "value": "ad8a6a5d-2c70-4bc5-9ffc-1c932dd09e2d"
        }
    ],
    "langcode": [
        {
            "value": "en"
        }
    ],
    "type": [
        {
            "target_id": "daily_email",
            "target_type": "node_type",
            "target_uuid": "8bde1f2f-eef9-4f2d-ae9c-96921f8193d7"
        }
    ],
    "revision_timestamp": [
        {
            "value": "2025-04-16T14:13:03+00:00"
        }
    ],
    "revision_uid": [
        {
            "target_type": "user",
            "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
        }
    ],
    "revision_log": [],
    "status": [
        {
            "value": true
        }
    ],
    "uid": [
        {
            "target_type": "user",
            "target_uuid": "b8966985-d4b2-42a7-a319-2e94ccfbb849"
        }
    ],
    "title": [
        {
            "value": "Why it's important to see the test fail\n"
        }
    ],
    "created": [
        {
            "value": "2023-05-06T00:00:00+00:00"
        }
    ],
    "changed": [
        {
            "value": "2025-04-16T14:13:03+00:00"
        }
    ],
    "promote": [
        {
            "value": false
        }
    ],
    "sticky": [
        {
            "value": false
        }
    ],
    "default_langcode": [
        {
            "value": true
        }
    ],
    "revision_translation_affected": [
        {
            "value": true
        }
    ],
    "path": [
        {
            "alias": "\/daily\/2023\/05\/06\/why-its-important-to-see-the-test-fail",
            "langcode": "en"
        }
    ],
    "body": [
        {
            "value": "\n <p>With automated testing and test-driven development, it's important to see a test fail.\nIf a test passes straight away, how do you know that you're testing the right thing? You could be accidentally testing a different piece of functionality, or it could be a false positive.<\/p>\n\n<p>If the functionality already exists, do you need another test for it?<\/p>\n\n<p>When you see a test fail, you know that the functionality hasn't been implemented, that you're testing the correct thing, and you have a clear goal to work towards.<\/p>\n\n<p>If you're fixing a bug, writing a test and seeing it fail verifies the bug exists and that, once the bug is fixed, the test will pass.<\/p>\n\n<p>Usually, you can anticipate why a test will fail as it evolves and know when it will pass. If a test passes before I expect, I'm immediately sceptical and will look into why rather than assuming it passed for the right reasons.<\/p>\n\n ",
            "format": "full_html",
            "processed": "\n <p>With automated testing and test-driven development, it's important to see a test fail.\nIf a test passes straight away, how do you know that you're testing the right thing? You could be accidentally testing a different piece of functionality, or it could be a false positive.<\/p>\n\n<p>If the functionality already exists, do you need another test for it?<\/p>\n\n<p>When you see a test fail, you know that the functionality hasn't been implemented, that you're testing the correct thing, and you have a clear goal to work towards.<\/p>\n\n<p>If you're fixing a bug, writing a test and seeing it fail verifies the bug exists and that, once the bug is fixed, the test will pass.<\/p>\n\n<p>Usually, you can anticipate why a test will fail as it evolves and know when it will pass. If a test passes before I expect, I'm immediately sceptical and will look into why rather than assuming it passed for the right reasons.<\/p>\n\n ",
            "summary": null
        }
    ],
    "feeds_item": [
        {
            "imported": "2025-04-16T14:13:03+00:00",
            "guid": null,
            "hash": "dea295dc493bba491afadc1db32d16fe",
            "target_type": "feeds_feed",
            "target_uuid": "90c85284-7ca8-4074-9178-97ff8384fe76"
        }
    ]
}