Software Test Automation: The Functional Checks
Can we improve our understanding of a system, and our confidence in it, by combining functional automation tests at different steps within the development lifecycle?
We look at the fundamental disciplines of unit, integration, API, UI and infrastructure automation, and how various distributed and centralised approaches can lower the barrier to entry, provide faster feedback and reduce risk.
What might an ideal automation distribution look like if we split a percentage of functional checks across each part of the SDLC?
And if we had the opportunity to run different automated tests across the development lifecycle, perhaps it would look something like this:
Performance testing, which runs across four cycles (Development, Test, Deploy, Operate), and pen testing are not included, given they are more non-functional in focus. For more on performance engineering, you can check out the blog here
A look at each automated functional check:
- Unit
- Integration
- Infrastructure
- API (Application Programming Interface)
- UI (User Interface)
- Security
Tests are generally categorised as low, medium or high level. The higher the level, the more complicated and expensive a test is, and the longer it takes to execute, implement, troubleshoot and maintain.
The Unit Test
A fast-running test against a method or function that validates its behaviour. Due to their quick feedback, unit tests are ideal for running locally, in the CI pipeline and as the first check in the CD pipeline.
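As a minimal sketch (the function and test below are hypothetical examples, not from the article), a unit test exercises one function in isolation:

```python
# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The unit test: validates the function's behaviour with no external dependencies,
# so it runs in milliseconds locally or as the first CI/CD check.
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

In practice a runner such as pytest would discover and execute tests like this automatically.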
The Integration Test
Used to confirm integration with other dependencies (APIs, databases, message hubs). They provide fast feedback and are useful for determining that you are interacting correctly with the required dependencies.
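As a sketch of the idea, the integration test below uses an in-memory SQLite database as a stand-in for a real database dependency; the `save_user` function and table schema are hypothetical, and in practice the connection would point at a test instance of the real dependency:

```python
import sqlite3

# Code under test: persists a user and returns the generated id.
def save_user(conn: sqlite3.Connection, name: str) -> int:
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

# The integration test: verifies we interact correctly with the dependency
# by writing through our code and reading back through the database itself.
def test_save_user_roundtrip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    user_id = save_user(conn, "ada")
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    assert row == ("ada",)

test_save_user_roundtrip()
```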
The Infrastructure Test
Used to verify infrastructure behaviour and can include checks on directory permissions, running processes and services, open ports, node counts, storage accounts etc.
These are often underutilised, and can help round off a well-orchestrated automation approach.
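A couple of the checks mentioned above (directory permissions, open ports) can be sketched with the standard library; the paths and ports here are placeholders for whatever your infrastructure actually requires:

```python
import os
import socket
import tempfile

# Infrastructure check: is a required directory present and writable?
def dir_is_writable(path: str) -> bool:
    return os.path.isdir(path) and os.access(path, os.W_OK)

# Infrastructure check: is an expected port accepting connections?
def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example assertions against placeholder targets.
assert dir_is_writable(tempfile.gettempdir())
```

Similar checks for running services, node counts or storage accounts would typically come from a dedicated tool, but the pattern is the same: assert on the observed state of the environment.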
The API Test
An API test often triggers a sequence of actions: you send a request and expect a particular response code with the right payload.
It can give you good feedback that a number of parts of the system are working as expected (APIs, databases, hubs, caches, load balancers).
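To keep the example self-contained, the sketch below spins up a tiny in-process HTTP server as a stand-in for the real API; the `/health` endpoint and its payload are hypothetical. The test itself is the last few lines: send a request, assert on the status code and payload.

```python
import http.client
import http.server
import json
import threading

# Stand-in for the real API: a minimal server returning a JSON payload.
class StatusHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The API test: request an endpoint, assert on response code and payload.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/health")
resp = conn.getresponse()
payload = json.loads(resp.read())
assert resp.status == 200
assert payload == {"status": "ok"}
server.shutdown()
```

Against a deployed environment the same assertions would be made with an HTTP client pointed at the real service, which is what exercises the load balancers, caches and databases behind it.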
The UI Test
Used to validate user actions, and the application's reactions, through a browser.
The Security Test
A complex topic; however, at a high level we want to know whether we have exposed ourselves to vulnerabilities in our code, containers and infrastructure.
We want to understand how all of our different suites of automation are performing across all environments at any one time. To do this we need to collate the data from each source and present it back as something useful, such as a dashboard.
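A sketch of that collation step, assuming each suite reports a simple list of pass/fail results (the suite names and result shape are illustrative):

```python
# Collate per-suite results into a dashboard-ready summary:
# totals, pass counts and a pass rate per suite.
def summarise(suites: dict[str, list[bool]]) -> dict[str, dict]:
    summary = {}
    for name, results in suites.items():
        passed = sum(results)
        summary[name] = {
            "total": len(results),
            "passed": passed,
            "pass_rate": round(passed / len(results) * 100, 1),
        }
    return summary

# Example: results gathered from two suites across environments.
results = {
    "unit": [True] * 48 + [False] * 2,
    "api": [True] * 18 + [False] * 2,
}
summary = summarise(results)
print(summary)
```

In a real pipeline the inputs would come from each suite's report files (JUnit XML, for example) and the summary would feed a dashboarding tool rather than `print`.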
We introduce TLOs (test level objectives), TLAs (test level agreements) and TLIs (test level indicators), which are defined at design time to align with the team and business objectives.
They look to bring more clarity, accountability and transparency to the automation being executed. They also open communication channels and help to frame objectives.
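The TLO/TLI terms are the article's own (by analogy with SLOs and SLIs); as one hypothetical way to encode them, a TLO defined at design time can be checked against a measured indicator:

```python
from dataclasses import dataclass

# A TLO as data: an objective agreed at design time, checked against a
# measured indicator (the TLI) fed from the collated suite results.
@dataclass
class TestLevelObjective:
    name: str
    target_pass_rate: float  # the objective, e.g. "95% of API tests pass"

    def is_met(self, measured_pass_rate: float) -> bool:
        return measured_pass_rate >= self.target_pass_rate

tlo = TestLevelObjective(name="api-suite", target_pass_rate=95.0)
print(tlo.is_met(96.5))  # objective met
print(tlo.is_met(91.0))  # objective breached, surface on the dashboard
```

Making the objective explicit like this is what enables the accountability and transparency described above: a breach is a concrete, visible event rather than a judgement call.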
The goal is a distributed approach to automation, where tests execute at each stage of the development lifecycle and where their data is collated centrally and exposed through a series of dashboards.
This leads to a more sustainable, resilient automation solution that detects problems early, when they are easier to fix.