IT:AD:Continuous Delivery:SAD:11
Section Purpose
This document section describes the document's purpose, intended audience, structure, conventions, glossary, maintenance, and location within the Organisation's datastores.
Document Location
The locations of this description's various artefacts within the Organisation’s common document repositories are listed in the Appendices.
Document Purpose
This document describes – to technical stakeholders, and to non-technical stakeholders interested in understanding the technical features – a solution to a business need, its compliance with the organisation's standard requirements, its development, and its subsequent operation, maintenance, and support once brought into service.
The complexity of the solution is broken down and described in a series of curated Views, prepared from the point of view of key Stakeholder role groups.
Document Structure
The solution’s model is documented using industry standards conventions, demonstrating that it meets both specific business requirements and common organisation specifications.
The structure of this document follows recommended industry standards to present curated views of the solution's model, from the viewpoint of appropriate stakeholder roles.
For a description of each view, refer to the Appendices.
Document Conventions
Document Audience
This document is intended for multiple audiences.
With both business domain and technical domain sections, this document is intended for various project stakeholder roles, including the following:
- Business:
- Business Owners
- Product Owners
- Project Managers
- Delivery and Support:
- Development Lead and Developers
- Test Lead and Testers
- Infrastructure Service Engineers
- Application Service Engineers
- Customer Support Services
- Accreditation Services
Standard Related Documents
This document builds on other organisation documentation.
Living Document
This document is a Living Document, periodically updated to reflect stakeholder requirements as they emerge and/or are re-prioritized.
Document Lifespan
This document is expected to be reviewed and endorsed at several points during the life cycle of the solution.
## Section Purpose ##
The purpose of this document section is to provide a comprehensive summary of the Project's Motivation, High Level Functional Requirements, Services required and the Systems needed to deliver them.
## Solution Synopsis ##
The on-premise Software Development LifeCycle (SDLC) processes used within this Organisation are expensive, time consuming, and error prone, bringing the IT Services into disrepute as a source of value to business services.
Current practices around engagement, requirement definition, source code access and management, development, testing, deployment, and ongoing maintenance are implemented differently depending on the project, and are mostly performed in manual, cumbersome, expensive, and error prone ways.
The “Government ICT Strategy and Action Plan to 2017” introduced, in 2012, a “Cloud First” policy which seeks to improve service delivery and deliver savings, with cloud computing as a key enabler. The Ministry's published “Education System Digital Strategy” reaffirms this objective and policy.
In order to gain the expected benefits from cloud computing, the Organisation must at the same time improve its development and operations culture, practices, and tools used to develop green field applications.
This Solution Architecture Description documents evidence based findings summarizing this Organisation's current processes and – in order to maximise the value of cloud services for Stakeholders – recommends an industry and evidence backed adoption of a DevOps culture, practices and associated tooling – including an extended Application Lifecycle Management (ALM) Service to facilitate the adoption without sacrificing Quality – to manage a “PaaS First” approach to green field development.
## Problem Definition ##
As evidenced under the Assessments section of this document, this organisation's on-premise Software Development LifeCycle (SDLC) processes are characterised by being:
- Cumbersome to use: they rely on largely manual processes and are not designed to promote collaboration and the integration of different views into the delivery – an increasingly important requirement for more complex systems.
- Costly to maintain: changes to requirements and performance demands require extensive redevelopment costs, and since operations are involved late in the delivery, operational considerations are not fully understood.
- Error prone: due to disparate systems, frequent handovers and slow feedback loops, the resulting systems and products tend to have a high level of errors and inconsistencies. This also forces costly testing and security reviews at the end of the delivery, rather than building quality into the development process.
With customers demanding more responsive IT and, increasingly, delivery of more value for less, this organisation's IT Services' credibility to deliver services is being questioned. An application lifecycle has been developed that focuses on improving quality, reducing costs, and improving delivery responsiveness by addressing the following areas:
- Development automation practices
- Development quality practices
- Manual infrastructure provisioning processes
- Application deployment
- Testing handover
- Test definition processes
- Manual test execution processes
- Manual security testing
- Manual penetration testing
## Solution Summary ##
This Solution advocates the use of Azure in order to adhere to stated “Cloud First” objectives, at the lowest cost while offering the most Services.
A comparison between Azure, Google and AWS showed a one-year cost of running an IaaS instance as $832.20 for AWS, $699.05 for Azure and $1,594.20 for Google. Across several scenarios, “Azure was the lowest price for seven scenarios, the highest price for two. Azure tended to match or be lower than AWS”4).
This solution emphasises that this organisation should adhere to a “PaaS First” design approach where possible for the majority of green-field development projects. Hosting on PaaS is even less expensive than the IaaS figures provided above. In addition, Forrester reports that the use of PaaS provides a 466% improvement in ROI, an 80% reduction in IT administration time required to manage apps deployed on the platform, a 25-hour average reduction in development and testing time required to develop or update Azure PaaS application deliveries, and a 50% reduction in time required to help deploy a new application solution to a client5). We expect our savings to be even more significant.
The development of an Azure based policy to use Cloud Services provides a timely Opportunity that should be taken advantage of.
- Technology is available and others are already reaping the benefits.
- The Organisation is ready to transition.
- Current legacy processes are neither aligned with Government Strategy nor sustainable.
The Organisation's move to Cloud Infrastructure Services provides a timely opportunity to implement a bimodal SDLC process: introducing DevOps based culture, processes and tools to manage green field PaaS hosted services, in parallel with existing ITIL based processes that continue to manage legacy applications intended for legacy on-prem infrastructure, along with their mostly manual processes.
- The organisation has significant drivers to adopt DevOps based on others' experience. Rackspace6) reported that, of those who had implemented DevOps, 52% reported increased Customer Satisfaction, 49% reported reduced IT Spend, 44% reported reduced downtime and failures, 43% reported improved customer engagement, and 32% reported improved employee engagement.
Businesses recognize software services as a key business differentiator, yet 49% of organisations complain that largely manual testing phases remain a significant bottleneck to speeding up development cycle times7) [2015 World Quality Report], and 77% agree that “ITIL does not have all the answers”, leaving them largely unable to respond to changing market demands and software issues. It is therefore no wonder that 63% of over 4000 respondents to the 2014 Puppet Labs and IT Revolution Press survey8) are already implementing DevOps practices, and that 88% of Organisations have adopted, or plan to adopt, DevOps in the near future9).
Organisations that have adopted DevOps report between 18%10) and 34%11) faster time to market, between 19%12) and 36%13) improved quality and performance, between 46%14) and 300%15) increased software/service deployment frequency, and up to 40% increases in productivity16).
Note:
Many more data points regarding the benefits of adopting DevOps are summarized in the Appendices.
To gain similar benefits from adopting DevOps it is imperative that organisations select appropriate automation and management services to facilitate its adoption in order to unify traditionally distinct departments around automation that benefits all groups.
This Solution proposes the use of Gartner's recommended market leading, enterprise-ready, cloud hosted Application LifeCycle Management (ALM) Service to provide the most coherent set of services to ease the uptake of DevOps.
The specified ALM Service provides the necessary tools and services for a continuous backbone of communication, automation, delivery, and measurement between traditionally disparate groups.
To show the various organisation departments how to use a common ALM Service for maximum value, without sacrificing Quality, the User View of this Solution Architecture Description document outlines processes and Use Cases which facilitate a change of culture from blocking to feedback.
Using cheap and plentiful cloud hosted resources, adopting DevOps culture, processes and tools, and using an appropriate ALM Service suite to facilitate communication and automation are important steps, but they are not enough.
The key finding of multiple studies [CapGemini, TD] is that “after accelerating other aspects of the delivery pipeline, organisations typically find that traditional Testing and overall Quality and Accreditation processes remain problematic, preventing organisations from achieving the benefits of their expected SDLC acceleration initiative.”
“Automation of the quality activities is not only required but is the core enabler of increasing throughput and velocity” [CapGemini]. The alternative to not only automating but also enlarging the scope of what is considered testing activities “is testing will not be done adequately, therefore putting the organisation reputation at risk.”
Reasons given by other Organisations include:
- Automating GUI testing has proved problematic, due to requiring frequent adjustments.
- Traditional test script based testing can be developed without an understanding of what stakeholders consider to be an acceptable level of risk, leading to release candidates that pass all Testing Services checks but are unacceptable to stakeholders.
- Traditional test scripts focus on business functionality requirements, relegating security, performance, reliability, and compliance testing to out-of-band, costly specialists.
- Even if testing is automated and effectively measures the level of stakeholder risk, teams without a coordinated end-to-end quality process tend to have trouble satisfying stakeholder expectations within today's compressed delivery cycles. Trying to remove risks at the end of each delivery iteration has been shown to be significantly slower and more resource-intensive than building quality into the product through defect prevention strategies such as development testing and subsequent continuous delivery pipeline testing.
Therefore, to address this risk to the business, this Solution proposes a technical solution that holds all projects, at all times, to an Accreditable level.
A core technical deliverable of this Solution is the development of a custom Build Step Extension for the specified Build Service – Visual Studio Team Services (VSTS) – which all projects will implement in their build pipeline.
The Build Step Extension's purpose is to minimize setup costs per project while ensuring automated quality testing is implemented, and not skipped.
The primary functionality of the Build Step Extension is to build a project's source code into artefacts, run a series of static tests on both the source code and the built assembly, deploy the solution to a Build Test (BT) environment, and run a series of dynamic tests – including automated functionality, performance and security penetration tests – before accepting the submitted code for integration.
The results of the various tests are assembled into an Accreditation Report which is made available to Accreditation Services as a basis for their assessments.
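The gating behaviour described above can be sketched as follows. This is a minimal illustration only, using hypothetical stage names and stubbed checks; the actual extension would be implemented against the VSTS build task APIs.

```python
# Minimal sketch of the Build Step Extension's gating logic.
# Stage names and check callables are illustrative, not the real VSTS task API.

def run_pipeline(stages):
    """Run named quality stages in order; stop at the first failure.

    `stages` is a list of (name, callable) pairs. Each callable returns
    True (pass) or False (fail). Returns the Accreditation Report: a
    per-stage result map plus an overall accept/reject decision.
    """
    report = {"stages": {}, "accepted": True}
    for name, check in stages:
        passed = bool(check())
        report["stages"][name] = "passed" if passed else "failed"
        if not passed:
            # Integration is rejected and later stages are not run,
            # so quality testing cannot be silently skipped.
            report["accepted"] = False
            break
    return report


# Example: the dynamic security test fails, so the code is not integrated.
stages = [
    ("build", lambda: True),
    ("static-analysis", lambda: True),
    ("deploy-to-BT", lambda: True),
    ("dynamic-functional", lambda: True),
    ("dynamic-security", lambda: False),   # simulated penetration-test failure
]
report = run_pipeline(stages)
```

The key property is that every executed stage leaves a verdict in the report, which is the basis of the Accreditation Report made available to Accreditation Services.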
Using DevOps to manage a Quality focused delivery to Cloud Services enables, as Chris Jackson, CTO DevOps Services, Rackspace, stated17), IT operations that “are no longer solely focused on risk mitigation and compliance but on getting the best out of the development side of the business.”
Scope
The scope of this solution includes and excludes the following:
Inclusion
The scope of this project includes:
- The investment into using Visual Studio Team Services as the Organisation's default ALM Service.
- This choice implies ceasing the use of Testing Services' JIRA for new green-field PaaS based services.
- The development of a custom Build Step Extension hosted in the Visual Studio Marketplace to provide ministry projects a cost effective way of implementing Automated Quality Testing, and produce a Continuous Accreditation Report of use to Accreditation Services.
Exclusion
At this point in time, the scope of the Solution described within the rest of this Section and the rest of the Solution Architecture Description (SAD) excludes the following Goals, Services and Systems.
Excluded Goals:
- Reengineer current and legacy applications to be manageable via an ALM Service. Specifically, the following actions are deemed to be too costly to be of benefit with any current legacy applications:
- Reengineer to be hostable on cloud based PaaS infrastructure, or – less drastically – move to cloud based IaaS infrastructure, while:
- updating the systems to be secure, independent of any on-premise network boundary security they may have relied upon previously,
- uncoupling any cross-system tight coupling – eg: ETL operations – that is not best practice for cloud hosted development.
- Migrate to an ALM Service, including:
- building an automated development pipeline for legacy applications
- automating all manual test scripts,
- migrating management artefacts (requirements, work items tracking, issue tracking, reports etc.)
Motivation Context
The Solution is composed of Systems that provide Services to meet a number of Principled Goals, defined to address Assessments (Problems/Opportunities) which impact Stakeholders and their Concerns (internal and external Drivers).
Goals
Key Goals of the Solution, based on Assessments that are listed later in this section, are listed below:
- Faster Responsiveness: automation of quality testing can ensure artefacts are always kept deployment ready.
- Better informed Planning:
- Minimize Non-Productive Work:
Supporting Goals are directly related to the above Goals:
- Iterative Delivery: providing tools for managing Agile iterative deliveries allows planned effort to be regularly reassessed and optionally re-prioritized to deliver value to stakeholders.
- Improve Communication: an Agile communication system shared by groups can replace time consuming consensus-building and scheduling meetings.
- Improve Automation: automation can be used to replace time consuming building, testing, deployment and documentation manual operations.
- Improve Quality: automation can be used to test more, regularly.
- Empower Stakeholders: provide better tools, more transparency, and more options.
- Appropriate Security: providing a more flexible environment to host services transparently without sacrificing security when appropriate.
The above supporting Goals can be summarized as:
- Use Cloud Services for Infrastructure
- Implement DevOps Culture, Processes, Tooling
- Automate Auditing Processes appropriate to Cloud Infrastructure and DevOps
Concerns and Drivers
Key Concerns (internal and external Drivers relevant to Stakeholders) addressed by the above Goals are listed below.
As per most Organisations, the following Concerns are of importance to key Stakeholders:
- Preparedness and Adaptability: change is constant. The speed of change is not – it is increasing. Being ready, and able to adapt, is a key organisational concern.
- Better Value for Money: the organisation, although a monopoly, does not have infinite resources. Delivering solutions more rapidly, at less cost is a Driver for all within the organisation.
- Strategic Alignment: strategies have been developed to meet the above concerns and are expected to be aligned to.
The above primary key Concerns encompass several secondary Concerns:
- IT Reputation:
- improve the IT Group's reputation so that it is perceived by business services as trusted advisors who enable, rather than as costly deployment blockers that cannot be worked around.
- changing from manual, time and cost inefficient operations to automated services – where IT Service resources act as Trusted Advisers on impact – improves the reputation of IT Services.
- Cloud First: alignment with the Government's stated Cloud First objectives.
- Digital Strategy: alignment with the long term goals of the sector by the organisation, its departments and various projects is a driver that minimizes the cost to achieve agreed strategic goals.
- The Digital Strategy states: “Smart tools and common IT systems make delivering service improvements and implementing policy changes simpler and less expensive than they used to be, freeing up investment to improve outcomes for students and educators.”
- Software as a Differentiator: it is nearly a decade since it was accepted that the use of software is a key service differentiator for most businesses. The use of more services, delivered initially more rapidly to stakeholders, and iteratively re-released with more functionality is a key strategic driver.
- Minimize Cost: Although savings can come from buying new solutions, savings start by not wasting time and expenses on cumbersome low value processes.
- Deliver Value: essential to the key concern of improving Reputation is delivering what was requested
- Deliver Quality: to be valuable, what is delivered must be tested to ensure it is not defective.
- Innovate to Meet New Opportunities: the organisation, although a monopoly, is mandated to assess and take advantage of new opportunities, while remaining Principled.
Stakeholders
Concerned Stakeholders
Key Stakeholders impacted or driven by the above Concerns are:
- Deputy Secretary
- Chief Financial Officer (CFO)
- Chief Financial Performance Officer (CFPO)
- Chief Information Officer (CIO)
- Chief Security and Privacy Officer (CSPO)
- Business Owner(s)
- Product Owner(s)
Additional Stakeholders
A Stakeholder is anyone with a Concern in this Solution (eg: Development, Delivery, Change Management, Operations, Users, etc.), the Solution itself being designed to address the Concerns of the previously defined Stakeholders.
- IT Group:
- ICT Strategy, Planning and Architecture (SPA) Senior Manager
- ICT Project Services Senior Manager
- Chief Information Security Officer (CISO)
- Customer Services Manager
- Service Desk Manager
- Training Services Manager
- Operations Infrastructure Services Senior Manager
- Infrastructure Services Manager
- Application Support Services Manager
- Web and Application Services Senior Manager
- Web Services Manager
- Development Services Manager
- Testing Services Manager
Assessments
The Goals listed above were developed in order to address SWOT Assessments that affected key Concerns of Stakeholders.
These Assessments are listed below.
Current State Internal Assessments
An Assessment of current Weaknesses is as follows:
- Testing Services processes could be improved to deliver more value more rapidly:
- Testing remains costly in terms of budget and time, as it is repeated – mostly manually – for each deployment. This cost and delay is a direct contributor to project managers preferring to roll out new features at such large intervals.
- Manual Functional Testing is commissioned from internal Testing Services near to the scheduled delivery date. The unexpected failure to satisfactorily pass manual UAT testing leads to subsequent negotiations, and potentially costly re-scheduling, resource engagement extensions, and subsequent re-testing.
- According to the CapGemini report “DevOps with Quality”, “automation of the quality activities is not only required but it is the core enabler of increasing throughput and velocity”.
- 49% of organisations complain that still largely manual testing phases are a bottleneck in speeding up the development cycle time [2015 World Quality Report].
- According to Cap Gemini's “DevOps with Quality” report “Without maximizing automation, the speed at which a team can deploy features is limited by the speed at which the quality activities can be successfully completed. In other words, testing activities, done traditionally will become the constraining factor or the alternative is testing will not be done adequately, therefore putting the organisation reputation at risk.”
- “After accelerating other aspects of the delivery pipeline, organisations typically find that traditional Testing and overall Quality and Accreditation processes remain problematic, preventing organisations from achieving the benefits of their expected SDLC acceleration initiative.”23). Reasons given by other Organisations include:
- Initiatives to automate GUI testing have proved problematic, due to requiring frequent adjustments.
- Traditional test script based testing can be developed without an understanding of what stakeholders consider to be an acceptable level of risk, leading to release candidates that pass all Testing Services checks but that stakeholders do not consider ready for release, as they exceed stakeholders' tolerance for risk.
- Traditional test scripts often focus on business functionality requirements, relegating security, performance, reliability, and compliance testing to out-of-band, and costly, specialists.
- Even if testing is automated and effectively measures the level of stakeholder risk, teams without a coordinated end-to-end quality process tend to have trouble satisfying stakeholder expectations within today's compressed delivery cycles. Trying to remove risks at the end of each delivery iteration has been shown to be significantly slower and more resource-intensive than building quality into the product through defect prevention strategies such as development testing and subsequent continuous delivery pipeline testing24).
- Infrastructure Services processes could be improved to deliver more value more rapidly:
- Releases to production incorporating stakeholder feedback are few and far between, often in year-plus intervals, rather than fortnightly (see NSI).
- Due to several factors (meetings, scarcity of resources, communication issues) commissioning of a new environment to host solutions can take weeks. This delay is largely repeated for each environment needed (ST, UAT, TEST, PROD).
- Deployment operations are performed manually, and take on average a week due to meetings to schedule deployments, manual step documentation, manual deployment. This lengthy process is repeated for each deployment environment (ST, UAT, TRAIN, PROD).
- Note: in partial defence of Infrastructure Services, several of the cumbersome manual processes were put in place partly to compensate for the quality of the developed artefacts received. However, although code quality is a concern outside the control of Infrastructure Services, the compensating processes should not have been manual.
- Lengthy manual deployments (eg: 2 days) are disruptive to end users, or must be performed after hours at greater cost.
- Accreditation Services processes could be improved:
- Accreditation Performance and Security Testing is commissioned from external resources near to the scheduled delivery date. The unexpected failure to satisfactorily pass these external reviews leads to costly re-scheduling, resource engagement extensions, and repeated costly re-testing. To our knowledge, to date no project has passed these tests on its first attempt.
- Existing security practices are insufficient to control access to Confidential data by contractors. There is never a Use Case for development resources to access production data – yet there are several cases of whole databases of Confidential data being taken off site and even off country by developers, with no controls as to their subsequent disposal.
- Development Services and Development quality could be improved to deliver more value more quickly:
- Code quality has impacted downstream. Several of the slow, manual processes implemented by Infrastructure Services were put in place to compensate for the quality of the artefacts received.
- The maintainability of the code base the organisation must support – whether developed in house, or received from vendors – creates hiring difficulties.
- Development Services have no peer or automated review of deliverables. Outmoded and inappropriate development patterns continue to be used and accepted, causing long term accessibility, availability, security, modularity, infrastructure, and support costs borne by both IT Services and business users. The associated resource lock-in maintaining outmoded and inappropriate artefacts is another real liability to the organisation.
- General Tooling could be improved:
- The current Issue Management System (JIRA) used by Test Services is IaaS based on internal infrastructure, and configured for use only by the Test Services (change requests for firewall rules are needed to allow temporary unmonitored remote access by vendors).
- The Organisation does not have a common and continuous method of managing project work items, artefacts, deployments, testing and support, from inception through to decommissioning. Instead, artificial and management, process and physical environment boundaries have been put up between development, testing, and support. These boundaries add complexity and cost, while decreasing transparency and ability to respond to end user requests for value.
- Without a common communication and resource planning service shared by all stakeholders, Business Project Managers decommission functionality without understanding the negative impact on other stakeholders' Deployment, Configuration, Monitoring, Support, and Maintenance functions.
- The organisation does not have in place an automation platform that can be used both internally and by vendors over the duration of a project (often a decade).
- The organisation does not yet use an iterative approach to delivering value – even when 61% of organisations rate time to market as a very important part of their corporate strategy.
- Contrary to Organisational Behaviours (We work together for maximum impact, “Ka mahi ngātahi mo te tukinga nui tonu”), the Ministry uses several different – siloed – systems to manage the development of User Stories, Acceptance Tests, Test Scripts, Issues and Documentation between groups and vendors. Transparency remains problematic:
- Even when more current development patterns (eg: Service based UIs) and related test automation practices are added to existing test process, decision makers still lack adequate insight into the level of risk associated with releasing an application at any given point in time.
The above issues in turn contribute to project scheduling and cost issues:
- Missed delivery dates cause costly negotiation meetings, re-planning, rescheduling, communication changes.
- Delivery expectations versus available resources and cost lead to compromises on security, privacy, features, performance, availability, usability and/or modularity(see Helios Project).
- Compromises on features, availability, usability and modularity, relative to the cost expended, negatively affect the reputation of the organisation.
- The cost of individual projects remains unmeasured and unoptimised, as the costs of the infrastructure to support applications are not directly attributable on a per project basis.
- Due to the gated and slow nature of the current delivery process, small errors cascade to large disruptions, scheduling, resource needs and associated costs to individual projects, the IT department and the Organisation as a whole.
The above issues in turn contribute to strategic issues:
- The slow process employed to request and gain approval – followed by the delay and norm compliance required to spin up new servers – is so onerous that it is very rarely employed beyond absolute necessity. The process stifles investigation of new products and development techniques, innovation, and potential efficiency gains.
External/Future State Assessments
The following is a set of Assessments, mostly of Opportunities obtained by other Organisations moving to a more efficient SDLC process:
- ITIL is insufficient:
- 77% respondents to Noel Bruton's 2004 survey either agreed or strongly agreed that “ITIL does not have all the answers”.
- DevOps provides higher value to stakeholders:
- 88% of Organisations already have or are planning to adopt DevOps in the near future25), because
- DevOps delivers 18% faster time to market26),
- 19% better quality and performance27), and
- addresses the issue that 49% of organisations complain about: that largely manual testing phases remain the bottleneck to speeding up development cycle times28).
Note:
Many more Assessments of DevOps are available in the Appendices.
Although other organisations have reported benefits from moving to DevOps, there are several key points worth remembering when planning a move to it:
- Companies do not migrate their whole organisation to it in one go (less than 3% have applied all processes to all legacy and future apps). Most companies have taken a reduced-risk approach and deployed DevOps in a staged manner29) – exactly as we are suggesting, using DevOps for new green field projects.
- It was reported that Automation of Testing needs careful design:
- 31% of organisations have difficulty to determine the right coverage of quality validation checks [2015 World Quality Report].
Recommendations
Beyond focusing on the Assessments that directly support the development and operations of Services which support the stated Goals, some additional recommendations can be made.
- Limit investment in legacy system infrastructure and tools to manage legacy processes and legacy system infrastructure.
- Document some of this organisation's most costly current processes (see Appendix) in order to make more people aware of the poor practices we currently employ, so that we do not repeat them in the future, nor design new infrastructure environments in which similar poor practices become the norm again.
Constraints
The Project's Goals are constrained by real and decided limitations.
Key Constraints and Decisions are listed below.
- Cloud First: it is both a Government and a Ministry strategy that new systems be hosted in the Cloud.
- Being IaaS based, and not in alignment with either Government or Ministry strategy, no further CAPEX investment should be made in the current Test Service's JIRA service, or extensions thereof.
- Bimodal Deployment: the cost of rehosting existing applications on cloud infrastructure is too onerous to achieve successfully while addressing the specified Concerns and stated Goals.
Principles
The Project set Goals in a Principled manner, reflecting both the Organisation's Core Values and the Project's Desired Qualities.
High Level Requirements
The following requirements were mapped from the stated Goals within the Motivation Content above:
Services
The key Requirements above define the Services needed to meet the stated Goals.
These Services are:
* Cloud Platform Services, including:
- PaaS Hosting Services: green field applications commissioned by this organisation should be commissioned to take advantage of the cost savings of PaaS infrastructure30).
- IaaS Hosting Services: to host existing 3rd party products that were not designed to take advantage of PaaS' cost savings, an IaaS infrastructure service is required.
* A SaaS based Agile/DevOps capable Work Item Management Service: an all-organisation, cross-team service, available from within the organisation as well as to external services (3rd party consultants, development and support services, etc.).
- note: The system must specifically be capable of managing Acceptance Test Definitions as distinct work Items.
* Automated Artefact Build Service: a build service that can automate builds and invoke the Automated Testing Service defined below.
- note: the Build Service is in charge of running the tests that form the basis of any automated accreditation report.
* Automated Delivery Service: an automated delivery service that can be triggered – either automatically or manually – to deploy artefacts to target environments.
* Automated Testing Service: a service invoked by the Automated Build Service to automate the running of static tests, deploy to build environments for further dynamic testing, produce a report, and accept/reject code integration.
* Test Script Management Service: a management system for managing manual test scripts that cannot yet be automated (to be used as sparingly as feasible).
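The interaction between the Automated Build Service and the Automated Testing Service described above can be sketched as follows. This is a minimal illustration only: all function names are hypothetical, not actual VSTS/ALM APIs.

```python
# Sketch of the Build Service invoking the Automated Testing Service
# and producing the accept/reject result that feeds the accreditation
# report. All names here are illustrative.

def run_static_tests(artefact):
    """Run static analysis; return a list of failure messages."""
    return []  # assume the artefact passes static analysis in this sketch

def run_dynamic_tests(artefact, environment):
    """Deploy to a build environment and run dynamic tests."""
    return []  # assume the artefact passes dynamic testing in this sketch

def build_and_test(artefact, environment="build"):
    """Invoke the testing stages and accept/reject code integration."""
    failures = run_static_tests(artefact) + run_dynamic_tests(artefact, environment)
    report = {
        "artefact": artefact,
        "failures": failures,
        "accepted": not failures,  # basis of the automated accreditation report
    }
    return report

result = build_and_test("webapp-1.0.0")
```

The key design point is that acceptance or rejection is a pure function of the recorded test failures, so the same report can drive both the integration gate and the accreditation evidence.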
Systems
The Services defined above are provided by the following Systems.
- An Enterprise-grade, DevOps-appropriate ALM Service providing all of the services defined earlier.
- Azure Cloud Services.
Market Context
The 3rd Party Services chosen to deliver the required Services are all market leaders in their respective fields.
Microsoft's Azure is a leader in the PaaS space:
Microsoft's Azure is a leader in the Cloud IaaS space:
Microsoft's Visual Studio Team Services is a leader in the ALM Service space:
The assessment of the ALM Service as a market leader remains consistent whether performed formally (Gartner) or based on a crowd assessment31).
Of specific note, considering historical choices by this organisation, is Gartner's rating of the service above Atlassian's (developers of JIRA and Confluence) offerings in both Ability to Execute and Completeness of Vision.
In addition – as per findings outlined in the Accreditation View of this document – VSTS has achieved stringent Certifications, whereas the Atlassian products have not, and have also failed multiple internal security audits.
DevOps
The solution's defined systems and services rely heavily on DevOps being used to manage them.
This implies a basic understanding of what DevOps is, and is not.
Some evidence as to what DevOps can provide was given in the Summary and Assessments, and more information is available in the Appendices, but a brief description is provided below.
DevOps is the union of people, Agile processes, and tools to enable continuous delivery of value to end users, by removing barriers between Development, Operations and Quality Assurance, emphasizing communication, collaboration, and continuous automated integration, quality assurance and delivery.
A primary goal of DevOps is to establish an environment where more reliable evolving applications can be released more frequently.
DevOps is an Enterprise reaction to the documented benefits of Agile delivery, extending it beyond the development phase to the whole application lifecycle.
Of specific note is what DevOps is not: it is not DevOnly, replacing Ops (NoOps). It is all groups working to deliver the benefits of DevOps on an equal footing.
Services and Systems
The previous document section described the Services required to meet defined Goals.
The Systems to provide these Services are outlined below.
Architectural Constraints
The previous document section listed Constraints and limiting Decisions which impacted the Goals and informed the High Level Business Requirements.
Below are listed Architectural and Technical Constraints and Decisions which limited architectural options.
- Cloud First.
- The ALM Service must be a web service accessible to both Organisation staff and external vendors.
- The ALM Service must provide the necessary Communication and Automation Services required for DevOps.
- The ALM Service must not limit the Organisation to one platform, and must instead be able to compile common languages (.NET, Java, Python, Node, etc.).
- The build engine must be extensible.
- The DevOps procedures and tooling must provide automated Accreditation mechanisms and reports.
Further detail as to investigated and discounted design alternatives is available in the Appendices.
Architectural Principles
The solution was developed to meet requirements derived from Principled business Goals.
The architecture itself of the solution was informed by a set of Architectural Principles.
These are listed below.
- Cloud First
- SaaS before PaaS before IaaS before OnPrem
- See also:
Standard Architectural View Descriptions
The above solution is described as a set of curated Architectural Views.
The following sections outline this solution's system(s) in a series of industry recommended curated views.
The Rozanski and Woods View/Viewpoint document structure breaks down the complexity of the solution's systems into a series of curated views, prepared from the point of view – the viewpoint – of key stakeholders.
Descriptions of the standard architectural views are as follows:
- System Context View: the relationships, dependencies, and interactions between the system and its environment (the people, systems, and external entities with which it interacts). Includes the system’s runtime context and its scope and requirements.
- System Functional View: the system’s functional elements, their responsibilities, interfaces, and primary interactions; drives the shape of other system structures such as the information structure, concurrency structure, deployment structure, and so on.
- System Information View: the way that the architecture stores, manipulates, manages, and distributes information. This viewpoint develops a complete but high-level view of static data structure and information flow to answer the big questions around content, structure, ownership, latency, references, and data migration.
- System Concurrency View: the concurrency structure of the system and maps functional elements to concurrency units to clearly identify the parts of the system that can execute concurrently and how this is coordinated and controlled.
- System Development View: the architecture that supports the software development process. Development views communicate the aspects of the architecture of interest to those stakeholders involved in building, testing, maintaining, and enhancing the system.
- System Deployment View: the environment into which the system will be deployed, and the dependencies the system has on its runtime environment. Deployment views capture the system’s hardware environment, technical environment requirements, and the mapping of the software to hardware elements.
- System Operational View: how the system will be operated, administered, and supported when it is running in its production environment, by identifying system-wide strategy.
Note:
A summary of the industry recommended Views and Viewpoints is available in the Appendices.
- See also Appendices
Section Purpose
The purpose of this View is to describe the solution's context in terms of architecture, technologies and licenses – and the people, devices and systems it interacts with.
Stakeholder Context
An Application Lifecycle Management (ALM) Service is a cross-cutting service that affects several stakeholders.
Stakeholders affected by this solution include:
- Business:
- STAKE-0001: Business Owner
- STAKE-0002: Business Product Owner
- STAKE-0003: Business System Manager
- Technical:
- STAKE-1000: CIO
- STAKE-1100: Chief Information Security Officer (CISO)
- STAKE-1200: Support Services Manager
- STAKE-1210: Service Desk Manager
- STAKE-1220: Training Services Manager
- STAKE-1300: Operations Infrastructure Services Manager
- STAKE-1310: Infrastructure Services Manager
- STAKE-1320: Application Support Services Manager
- STAKE-1400: Web and Application Services Manager
- STAKE-1410: Web Services Manager
- STAKE-1420: Development Services Manager
- STAKE-1430: Testing Services Manager
- STAKE-2000: External Vendor Development Representative
Organisation Context
The following lists Organisations and their Departments impacted by the current Solution:
- This organisation
- IT Services
- ICT Accreditation Services
- Customer Services
- Service Desk
- Training Services
- Web and Application Services
- Web Services
- Operations and Infrastructure Services
- Infrastructure Services
- Application Support Services
- Change Management
- Partner organisations in the sector
- Vendor custom development services
The ICT Accreditation Services is the business owner and steward of the Continuous Accredited Delivery Service (CADS) ALM Extension and the Documentation used by other Stakeholders.
The CADS Service is accessed by several other Organisation departments, including:
* Customer Services
* Operations Infrastructure Services
* Web and Application Services
Organisation Principles Context
Organisation Specifications Context
Below is a high level summary of the organisational Specifications which this Solution abides by.
Design Principles:
* SaaS before PaaS before IaaS before Physical: prefer enterprise grade cloud hosted SaaS services.
* Cloud before Concrete: avoid hosting services on corporate managed infrastructure, whether virtual or physical.
* Ensure 8A Access: select services that are appropriately available to all stakeholders (8A stands for Available, by Anyone, from Anywhere, at Anytime, Anyhow/any device, Appropriate, Audited, Accounted access).
* Secure by Design: secure systems are designed from the ground up to expect and mitigate malicious practices and invalid inputs. Access to the design is open to all because the security relies on the design being secure in the first place.
* Practice Defense in Depth.
* Loosely Coupled, Highly Cohesive: minimize coupling between units so either party can change without breaking existing relationships. Reduce the number and diversity of tasks individual units are designed for.
* Don't Repeat Yourself (DRY): avoid duplication of effort, artefacts, services, processes, and code.
* Develop using SOLID Principles: create a system that is easy to maintain and extend over time.
* Open Source before Closed Source: when comparable in terms of functionality, prefer open source cloud based services.
* Assure CIA: ensure, measure, improve, assure and accredit Confidentiality, Integrity, and Availability over the lifespan of services.
System Context
A Decision was made to use a cloud hosted SaaS ALM Service in order to meet:
* Organisation Design Principles:
- “Cloud before Concrete”
- “SaaS before PaaS before IaaS before Physical”
- “Ensure 8A Access”
* Strategic Organisation Objectives.
* Strategic Sector Objectives (Digital Roadmap) of using Cloud based Services.
* Requirements that the Service be usable by both internal development and external development services, in order to establish a consistent quality baseline across all new cloud hosted systems used by the ministry.
The ALM Service suite is multi-functional, providing key software delivery services, including the following:
* Work Item/Task Management Service
* Version Control Service
* Build Management Service
* Deployment Management Service
* Test Management Service
To adhere to this solution's design principles, the solution leverages the ALM's functionality, and extends it with a loosely coupled, single high-value Extension:
Listed below are any Systems the solution's systems integrate with:
| ID | Name | Purpose | Direction | Trigger/Schedule | Technology | Protocol | Volume/Bandwidth | Security | Notes |
|---|---|---|---|---|---|---|---|---|---|
| SYS-xxxx | IdP | Authentication | O | | Microsoft Live, Organisation AAD | SAML | | | |
* Direction: (I)n, (O)ut, (B)oth
* Trigger: (M)anual, (S)cheduled
As a PaaS, the ALM integrates internally with other services (SMTP, Reporting). These are not the concern of this solution.
Deprecated Systems
In the first phases the system will be used to manage and deploy only new cloud based projects.
For consistency, maintainability and cost reasons, existing cloud hosted projects may be updated in the future to use this common delivery service, making per-project build services redundant.
- SYSX-xxxx: Per-project build services.
Key Decisions Context
The project tracks solution specific Principles, Constraints, Assumptions and Decisions, Issues and High Level Specifications in external documents.
Below is a selection of the above which clarify this document.
- Design Principles:
- PRINC-xxxx: SaaS before PaaS before IaaS before OnPrem
- PRINC-xxxx: Pipelines and Environments belong to Projects
- PRINC-xxxx: IT are Trusted Advisors to Projects.
- Design Constraints:
- CONS-xxxx: The ALM Service must be capable of being used for the duration of the project from definition to decommissioning.
- Assumptions:
- ASS-xxxx: Funding will be arranged to provision, support and maintain an ongoing pool of common build agents.
- ASS-xxxx: Funding of Subscriptions required at various stages of a project's application lifecycle (eg: MSDN, Azure, VSTS) will be on a per project basis.
- Decisions:
- DES-xxxx: In order to meet Solution Principles and Digital Strategy objectives the solution will be delivered using a SaaS.
- DES-xxxx: Visual Studio Team Services is the selected ALM SaaS based service.
- DES-xxxx: Common Private Build Agents will be provisioned with languages and frameworks for use by all projects.
- Design Authority Decisions:
- DAD-xxxx: N/A
- High level Requirements:
- REQ-xxxx: An Extension will be made available in the Visual Studio MarketPlace that can be downloaded into individual Project Development Pipelines.
- Risks:
- RISK-xxxx: Uploading of inadequately tested Build Step Definition Extensions may adversely affect multiple Projects relying on the custom Build Step Definition.
Technology Context
A map of key technologies relevant to the solution is presented below.
Key technical concepts are:
- Given appropriate Personal Access Token (PAT) access, the SaaS based ALM Service can:
- create resources in Azure, such as PaaS Web Application Slots.
- deploy C# applications to an Azure PaaS target Slot.
- The CADS ALM Extension:
- is uploaded to the Visual Studio Marketplace – from which it can be downloaded into various project development pipelines – and is written in a combination of Javascript, CSS, HTML5 and Markdown text files.
- in turn invokes a C# based console application, which orchestrates calls to various other open and closed source console applications, written for a variety of execution environments (.NET, Python, Node.JS, etc.).
- The ALM
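As an illustration of the PAT mechanism mentioned above: the VSTS/Azure DevOps REST APIs accept a PAT presented as the password half of an HTTP Basic credential (the username may be left empty). The sketch below only constructs the header; it does not call any real endpoint, and the example token is obviously not real.

```python
import base64

def pat_auth_header(pat: str) -> dict:
    """Build the HTTP Basic authorization header used to present a
    Personal Access Token to the VSTS REST APIs: base64 of ':' + PAT,
    i.e. empty username with the PAT as the password."""
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Example usage with a placeholder token.
headers = pat_auth_header("example-pat")
```

In practice these headers would accompany requests that, for instance, queue builds or create release deployments on behalf of the Extension.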
Cloud Context
As a PaaS Service, Team Services is hosted entirely in Microsoft Azure datacenters and uses many of the core Azure services including Compute, Storage, Networking, SQL Database, Identity and Access Management Services, and Service Bus, taking advantage of the state of the art capabilities, protection, and industry certifications available from the Azure platform.
Licensing Context
A map of the licensing frameworks around key technologies of the solution is presented below.
The Solution uses a closed source SaaS ALM Service, rather than an Open Source service, due to its Magic Quadrant Leader position – according to Gartner – for which no Open Source comparables exist.
The ALM Extension relies mostly on several open source products for its development and execution.
In future stages the ALM Extension is expected to rely on some paid closed source products if comparable open source products are not found at that point in time.
Delivery Context
The solution is delivered using DevOps processes to manage the continuous delivery of value to stakeholders.
The primary reasons for choosing DevOps processes are its alignment with Agile and its emphasis on:
- Continuous Delivery of Value to Stakeholders,
- Ongoing Stakeholder Engagement and Feedback,
- Avoiding effort lock-in, in order to re-prioritize effort early and regularly as new information becomes available.
Section Purpose
The purpose of this section is to summarize the primary interactions between various Roles and the solution.
Users and Roles
The systems are designed to meet the functionality required by User Stakeholders, in terms of specific Roles which can be assigned to one or more Users.
Use Cases / Stories
Although this solution is delivered in an Agile delivery context, Use Case Diagrams were chosen over textual User Stories in order to best meet the high level communication requirements of this View.
Depending on the scale of the effort required by each Use Case, they are mapped to Features and/or User Stories in the ALM system.
Out of Scope Functionality
The following functionality is not in scope for the current Solution's design:
- XREQ-xxxx: N/A
Roles
Below is a list of the various identified Roles that require the system's functionality:
- ROLE-xxxx: System Role
- ROLE-xxxx: User Role (Group)
- ROLE-xxxx: Unauthenticated User Role (Group)
- ROLE-xxxx: Authenticated Public User Role
- ROLE-xxxx: Business System Administrator Role
- ROLE-xxxx: Business User Role
- ROLE-xxxx: Non Business Role
- ROLE-xxxx: Support User Service Role
- ROLE-xxxx: Authenticated User (Group)
- ROLE-xxxx: Authenticated Internal User Role (Group)
- ROLE-xxxx: ICT Assurance (Group)
- ROLE-xxxx: Chief Information Office
- ROLE-xxxx: Customer Services (Group)
- ROLE-xxxx: Senior Manager Customer Services
- ROLE-xxxx: Ministry of Education Service Desk (Group)
- ROLE-xxxx: Service Desk Manager
- ROLE-xxxx: Service Desk Team Manager
- ROLE-xxxx: Service Desk Team Lead
- ROLE-xxxx: Service Desk Staff
- ROLE-xxxx: Service Desk Operations Analyst
- ROLE-xxxx: Incident & Problem Manager
- ROLE-xxxx: Contingency Improvements Manager
- ROLE-xxxx: IT Group Admin
- ROLE-xxxx: Training Services (Group)
- ROLE-xxxx: Manager Training Services
- ROLE-xxxx: Training Advisor
- ROLE-xxxx: Web and Applications Services (Group)
- ROLE-xxxx: Senior Manager Web and Applications Services
- ROLE-xxxx: Applications Delivery: Sector Access and Interoperability (Group)
- ROLE-xxxx: Applications Delivery Manager (ADM): Sector Access and Interoperability
- ROLE-xxxx: Applications Delivery: Small Business Systems (Group)
- ROLE-xxxx: Applications Delivery Manager (ADM): Small Business Systems
- ROLE-xxxx: Applications Delivery: Funding Services (Group)
- ROLE-xxxx: Applications Delivery Manager (ADM): Funding Services
- ROLE-xxxx: Applications Delivery: Student Systems (Group)
- ROLE-xxxx: Applications Delivery Manager (ADM): Student Systems
- ROLE-xxxx: Applications Testing Services (Group)
- ROLE-xxxx: Manager Applications Testing Services
- ROLE-xxxx: Service Delivery (Group)
- ROLE-xxxx: Service Delivery Manager
- ROLE-xxxx: Web Services (Group)
- ROLE-xxxx: Web Services Manager
- ROLE-xxxx: Web Developer
- ROLE-xxxx: Web Advisor
- ROLE-xxxx: Web Content Advisor
- ROLE-xxxx: Operations and Infrastructure Services (Group)
- ROLE-xxxx: Infrastructure Services (Group)
- ROLE-xxxx: Applications Support Services (Group)
- ROLE-xxxx: Systems Administrator
- ROLE-xxxx: Service Desk Senior Analyst
- ROLE-xxxx: System Administrator
- ROLE-xxxx: Senior Database Administrator
- ROLE-xxxx: Application Support Service Role
- ROLE-xxxx: Application Development Service Role
- ROLE-xxxx: Infrastructure Service Role
- ROLE-xxxx: Change Management Service Role
Uses
The following outlines how the various Roles interact with the Service.
Note: as per the Use Case convention, roles are defined as Actors in the primary Use Cases described below.
Customer Support Service Role
Customer Support Services perform multiple operations, including:
* manage customer issues
* initiate 2nd Level Support requests
* provide feedback to production in order to:
- decrease the cost of customer support responding to performance/availability issues.
Business Support Service Role
Business Support Services perform multiple operations to help business users perform their daily functions. These operations include:
- Provision System User Accounts
- Deactivate System User Accounts
- Add or remove Roles to User Accounts
- Configure Application Settings
Application Support Service Role
Application Support Services perform multiple operations, including:
* monitor the deployment of applications
* monitor the subsequent operations of applications
* optionally provide 2nd Level Support
* perform and restore backups
* implement Disaster Recovery (DR) plans
* adhoc querying to bypass application reporting design limitations
* provide operational feedback to production (eg: performance characteristics) in order to:
- improve performance
- improve availability
- decrease the cost of customer support responding to performance/availability issues.
In order to facilitate these operations, the deployed system must be delivered with appropriate functionality including but not limited to:
* application level diagnostic logging
* application level monitoring
* application level alerting
* automatic backup scheduling
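As a minimal sketch of the application level diagnostic logging requirement, using only the Python standard library (the logger name, format, and the toy request handler are all illustrative, not part of any real system):

```python
import logging

# Minimal application-level diagnostic logging sketch (stdlib only).
# The logger name and record format are illustrative choices.
logger = logging.getLogger("cads.webapp")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def handle_request(path: str) -> int:
    """Toy request handler that emits diagnostic log records."""
    logger.info("request received: %s", path)
    if not path.startswith("/"):
        logger.error("malformed path rejected: %s", path)
        return 400
    return 200

status = handle_request("/health")
```

In a production deployment the same records would be shipped to the monitoring and alerting services listed above rather than just written to the console.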
Database Support Role
In order to facilitate these operations, the deployed system's datastore must be delivered with appropriate functionality including but not limited to:
* database level diagnostic logging
* database level monitoring
* database level alerting
* automatic backup scheduling
Application Development Service Role
The organisation provides in house development services for small business applications, and occasionally provides first instance support for vendor products.
In order to facilitate these operations, the deployed system must be provided with appropriate functionality including but not limited to:
- application level diagnostic logging
- application test suite to provide confidence that fixes do not break other functionality
- a configured continuous delivery service to run the test suite
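As a minimal illustration of the kind of test suite the continuous delivery service would run on every build (the function under test and the test names are hypothetical):

```python
# Toy function under test plus the acceptance checks a continuous
# delivery service would run automatically on every build.
def apply_role(roles: set, role: str) -> set:
    """Return a new role set with the extra role added (input untouched)."""
    return set(roles) | {role}

def test_apply_role_adds_role():
    assert "Viewer" in apply_role({"Creator"}, "Viewer")

def test_apply_role_leaves_input_unchanged():
    roles = {"Creator"}
    apply_role(roles, "Viewer")
    assert roles == {"Creator"}

# A build agent would discover and run these via a test runner
# (e.g. pytest); they are invoked directly here for illustration.
test_apply_role_adds_role()
test_apply_role_leaves_input_unchanged()
```

A suite like this is what gives developers confidence that fixes do not break other functionality, since the delivery service rejects any build in which an assertion fails.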
Infrastructure Service Role
Infrastructure Services perform multiple operations, including:
* monitor the provisioning of infrastructure network, devices, storage
* provisioning of certificate artefacts required for deployment
* optionally contribute to 2nd Level Support
In order to facilitate these operations, the system must be provided with appropriate functionality including but not limited to:
* infrastructure level diagnostic logging
* infrastructure level monitoring
* infrastructure level alerting
Accreditation Service Role
In order to facilitate these operations, the system must be provided with appropriate functionality including but not limited to:
* security:
- transport security must be implemented using organisation accredited technologies and protocols.
- application level monitoring and alerting must be implemented.
- infrastructure level monitoring and alerting must be implemented.
* auditability:
- auditing of all operations – including views – must be implemented.
* legal:
- the system must meet the constraints imposed by the classification of the data that the application manipulates.
- the system must meet current accessibility and usability laws by being organised in a specific manner, be developed for visually impaired internal and external users, and have the internal structure to be able to be translated to the user base's languages.
System Role
The System Role performs various operations at regular intervals.
These include the following maintenance tasks:
- Ensuring unused Anonymous Users are purged from the User System, while leaving the Session Logs and Auditing records intact.
- Processing queued asynchronous (potentially lengthy) operations.
- Unspooling temporary Operation Audit log files (eg: high speed tail-appended text log files) and recording them as structured operation audit records.
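The unspooling task above can be sketched as a simple parse-and-structure pass. The pipe-delimited line format here is an assumed example for illustration, not the real log format:

```python
# Sketch of the "unspooling" maintenance task: converting tail-appended
# text log lines into structured operation audit records.
# The 'timestamp|user|operation' format is an assumed example.
def unspool(lines):
    """Parse 'timestamp|user|operation' lines into audit record dicts,
    skipping blank or malformed lines."""
    records = []
    for line in lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue  # leave malformed lines for manual review
        timestamp, user, operation = parts
        records.append({"at": timestamp, "user": user, "op": operation})
    return records

records = unspool(["2017-01-01T10:00:00|u123|VIEW", "garbage"])
```

The design point is that the hot path only appends text (cheap and fast), while the System Role pays the structuring cost later, off the request path.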
Unauthenticated User Role
Public unauthenticated users of the system are members of the Unauthenticated Role.
Note: Unauthenticated users will be provided a Session in order to create an Operation Audit Log of what they View for auditing purposes (this is a reason for the requirement for loose coupling between the SessionLog and User table: Anonymous Users will be periodically purged from the User table).
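The purge described in the note can be sketched as follows. Because the SessionLog stores only a user identifier rather than an enforced foreign key, log rows survive the removal of the User rows they reference (field names are illustrative):

```python
# Sketch of the periodic Anonymous User purge: User rows are removed
# while SessionLog rows survive, thanks to the loose coupling between
# the SessionLog and User tables. Field names are illustrative.
def purge_anonymous(users, session_logs):
    """Return users minus anonymous ones; session logs are untouched."""
    kept = [u for u in users if not u["anonymous"]]
    return kept, session_logs

users = [{"id": 1, "anonymous": True}, {"id": 2, "anonymous": False}]
logs = [{"user_id": 1, "op": "VIEW"}]
kept_users, kept_logs = purge_anonymous(users, logs)
```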
Business Role
Viewer Role
Creator Role
Authoriser Role
Organisation Roles
Below is listed a selection of key Stakeholder Roles positively impacted by the use of:
* common cloud based services
* a robust Continuous Delivery pipeline that includes Continuous Testing
* a commitment to a DevOps approach to delivering value in an Agile manner.
Business Product Owner Role
Business Product Owner Roles are positively impacted by the combination of affordable and available managed Cloud based Services, and of DevOps practices and tools to manage their projects throughout their lifespan.
A common ALM Work Item Management Service provides a means for Trusted Advisors – including Customer Services, Infrastructure Services, Application Support Services and Accreditation Services – to provide feedback, without the need for expensive cross-group scheduling and resource meetings.
A common ALM Work Item Management Service provides a means for Project Owners to prioritize the effort required to address the feedback, in order to deliver more rapidly products that are more secure, easier to manage and support, more performant, and that ultimately provide more features for less cost.
A common Work Item Service appropriately accessible to all team members adheres to the common core value of Transparency and removes Communication barriers.
A common Build, Test, and Delivery Service accessible to the whole team – used in conjunction with well understood DevOps techniques such as Infrastructure as Code, Configuration as Code, Tests as Code, TDD, Agile delivery methodologies, etc. – minimizes and even removes traditionally manual cumbersome steps, improving delivery speed without sacrificing Quality.
Organisation Infrastructure Role
Infrastructure Service Roles are positively impacted in that they are able to provide their expertise as feedback to production roles while removing the reputation as a delivery bottleneck.
A common ALM Work Item Management Service available to all provides a means for the Infrastructure Service Roles to provide feedback as User Stories or Tasks visible to all project stakeholders. The User Stories are prioritized by the Project Owner Role. Infrastructure Services can monitor progress of the User Stories they published.
The Infrastructure Services Role benefits from DevOps processes being used for Cloud hosted projects in that they are no longer held responsible for the definition of the infrastructure required for a project; they advise, and the developers are responsible for implementing their suggestions.
Infrastructure Services Roles benefit from DevOps processes being used for Cloud hosted projects in that they will no longer be held responsible for the provisioning of infrastructure, releasing their time for other tasks, including monitoring and advisory services to stakeholders.
- Responsibilities:
- The responsibility for the design of the infrastructure definition will belong to the Development Team.
- The responsibility for implementing the recommendations of Infrastructure Services will belong to the Development Team.
- The responsibility for reviewing the implementation of the recommendations will belong to Accreditation Services.
- Advantages:
- Faster delivery to stakeholders.
- Better value for money.
- Improved infrastructure monitoring.
- Better accreditation processes.
- Considerations:
- Process changes and tools will be required.
Organisation Application Support Role
REQ-xxxx: Configuration as Code
Application Services will be positively impacted in that they will no longer be held responsible for the deployment of application packages to provisioned infrastructure.
- Process Responsibilities:
- The responsibility for the definition of the package topology will belong to the Development Team.
- The responsibility for the definition of the package configuration in all environments bar those with Production Data will belong to the Development Team.
- The responsibility for the definition of the package configuration in environments with Production Data will belong to the Application Services.
- The responsibility for implementing (or not) the recommendations from the Application Services will belong to the Production Team.
- The responsibility for accrediting the infrastructure topology will belong to Accreditation Services.
- Advantages:
- Improved delivery time to stakeholders.
- Better value for money.
Project Business Analyst Role
Project Business Analysts benefit from working in an Agile manner on DevOps empowered projects due to several factors.
The ALM Service provides Business Analysts a cloud based Work Item Management Service in which to document, categorize, and order Agile Work Items of various scale.
The Service manages the Work Items created by the Business Analysts – the User Stories – and the Acceptance Test Definitions later associated with them by Testing Services.
Organisation Testing Role
Testing Services will be positively impacted by the availability of the Continuous Delivery Service within the commonly available ALM Service in that – working in conjunction with the Organisation's Development Services – they will be able to seamlessly transition their investigative and testing skills to delivering value in a Continuous Delivery environment, without the cost of new tooling, training, and/or having to go to market to obtain resources skilled in test automation.
The process leverages each department's abilities, working together with already acquired skills and tools, to deliver tested value to stakeholders regularly.
Agile's emphasis on an iterative delivery process provides more focused testing with less wasted effort than Waterfall approaches, which often lead to one of the following scenarios:
* investing heavily up front, developing test scripts in parallel to development based on completed Requirements – at the risk of developing scripts that end up significantly diverging from what gets delivered, or
* waiting until the end of the development phase, in order to have a nearly complete product to test, significantly delaying delivery to end users.
In an Agile development process, User Stories are developed by Business Analysts, and recorded in the ALM's Work Item Management Service's Backlog for the whole team to see.
Project Testers are tasked with providing Acceptance Test Definitions for upcoming User Stories that do not yet have them.
Only when the Business Analyst defined User Story is accompanied by its Tester defined Acceptance Test can it be moved from the Project Backlog to an upcoming Sprint Backlog for a Developer to complete.
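The gating rule just described can be sketched as a single readiness check: a Story may leave the Project Backlog only once at least one Acceptance Test Definition is attached (the field name is illustrative, not the ALM's actual schema):

```python
# Sketch of the backlog gate: a User Story moves to a Sprint Backlog
# only once a Tester has attached at least one Acceptance Test
# Definition. The 'acceptance_tests' field name is illustrative.
def ready_for_sprint(story: dict) -> bool:
    return bool(story.get("acceptance_tests"))

story = {"title": "As a user...", "acceptance_tests": []}
assert not ready_for_sprint(story)          # BA-only story stays in backlog
story["acceptance_tests"].append("Given/When/Then definition")
assert ready_for_sprint(story)              # now eligible for a Sprint
```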
The process of developing Acceptance Test Definitions before development, but in small iterative stages – rather than in one large up-front batch (or alternatively, too close to the release date) – advantages all parties:
* Testers diminish their backlog as they only need to complete Test Definitions for Stories about to be worked on.
* Testers diminish their effort in that they only have to design the tests – not execute them – because the task of testing becomes the job of the Build Server.
* Developers are advantaged by having the Acceptance Tests attached to the User Stories – they better understand what they will be tested on.
* Business Analysts are able to focus on Stakeholder requirements, knowing that Testers will be reviewing the User Stories very carefully before they are deemed ready for effort scheduling.
The responsibility of developing valuable Acceptance Tests appropriate to the submitted User Stories still belongs to the Organisation's Testing Services, but the responsibility of delivering the automated tests matching the defined Acceptance Tests is delegated to the Organisation's Development Service – resources more appropriate to the task of developing automation.
Key behavioral changes required for this symbiotic relationship to deliver value to stakeholders are as follows:
- Testers are engaged early, defining tests before development begins rather than testing submitted work afterwards.
* As Agile processes change Design from heavy upfront design processes to ongoing, incremental design, costly and time consuming upfront Test Scripting is avoided in favour of ongoing Acceptance Test defining, recorded in the Work Item Service, attached to a User Story.
- It is the Organisation's Development Service's responsibility to filter for Acceptance Test definitions that have not been automated, in order to perform the conversion and submit the tests into the code base.
Alternate Function allotment
An alternate approach would be to add automation skills to the Organisation's Testing Service.
With the availability of a cross-group ALM Service, this configuration can be achieved, but it has disadvantages:
- High cost of duplicating hiring, training, infrastructure and tools
- Duplication – rather than collaboration – is not in alignment with the DevOps objective of collaboration between available skillsets.
Organisation Development Services
The Organisation's internal Development Services – currently performed by SBS – will be positively impacted in several ways.
The first is that they will be able to assist another part of the organisation, by providing an ongoing service to rapidly produce the automated testing code required by the Organisation's Testing Services to deliver an automated-testing-based service. In effect, this applies the Separation of Concerns (SoC) Principle: the Organisation's Testing Services can do what it does best – investigating systems – without the cost of tooling, training and/or hiring new skills, while the Organisation's Development Team can use its existing knowledge, tools, and skills, without having to learn how to investigate and test.
The result of this symbiotic relationship is:
- a conversion of the Organisation's Testing Services from a time-consuming blockage to an investigative and trusted advisory role, and
- an incremental expansion of the Organisation's Development Service's responsibilities.
When the Organisation's Development Service are commissioned to produce applications (eg: PaaS based RADs) in their own right, rather than collaborate with external vendor development services, the flow is as follows:
External Development Services
Although the Organisation has an internal Development Service, the Organisation will continue to commission external Development Services to develop larger specialized PaaS based services.
Due to the ubiquitous availability afforded by Cloud Services (both Azure and VSTS's ALM Service) External Vendor Development Services will be positively impacted in that they will be able to have the choice of working on premise or at their preferred place of work, while ensuring their work meets the organisation's expected level of quality, security and performance.
Of importance is that whether working on premise or off premise, closely or independently of the Organisation's Development and Testing Services, the use of the Organisation's designated Continuous Delivery and Accreditation Service will ensure their quality meets Organisation expectations.
The specific nature of the CADS Service is that it will test and reject submittals that do not meet expectations. This pre-check process will significantly reduce the risk of missed schedules that arises when Assurance Testing is performed only near the expected delivery date.
Organisation Accreditation Services
Accreditation Services will be positively impacted by the Continuous Accredited Delivery Service in that they will have a single document to review, automatically generated by the Continuous Accredited Delivery Service, which contains the results of an up-to-date automated assessment of security, performance, compliance, and functionality.
A risk summary report automatically generated for each product delivered by the CADS will allow Accreditation Services to build a summary report of the enterprise's risk – project by project, and as a whole – allowing Accreditation Services to recommend where to prioritize effort to minimize risk for the enterprise.
For Projects of ambiguous or especially high risk, Accreditation Services may decide to request a report from external services to supplement their understanding of the project's change to the organisation's overall risk profile.
Organisation Customer Services
By adhering to the Organisation's Principles of Appropriately Available Cloud based SaaS services, the ALM's Work Item Management Service is available and of benefit to all Stakeholders.
Customer Services, specifically Service Desk Roles, will be positively impacted in that their observations of customer usage, common issues and expectations will be able to be recorded, evaluated, prioritised, developed and delivered to end users at a more rapid pace, as per Agile development processes.
Service Desk Roles become efficient Trusted Advisors to Product Owners as to how to prioritize expenditure of effort.
Section Purpose
This section of the document lists high level functionality desired first by Technical Domain stakeholders, and then by Business Domain stakeholders.
Section Context
The functionality described in this document section is the basis of the following document sections:
- Information View
- Concurrency View
Section Summary
The solution is comprised of one or more custom Extensions uploaded into an online Extension Catalogue (the Visual Studio Marketplace) configured to minimize the effort required of individual project teams – whether internal or external to the organisation – to configure their Build Management Service to meet organisation accreditation expectations.
The solution is designed to take advantage of as much of the functionality already available within the organisation-specified ALM Service's Build Management Service as possible, while remaining highly uncoupled from the underlying infrastructure.
Application Lifecycle Management (ALM) Service
The Application Lifecycle Management (ALM) Service provides several key services required for continuous delivery of valuable and tested software to end users.
These services include:
- Work Item Management Service: to manage Agile Work Items (Epics, Features, Stories, Tasks, Bugs).
- Version Control Service: to manage the versioning and integration of change to code, configuration, and documentation produced.
- Build Management Service: to manage Build Configurations which define how to pull code from the Version Control Service and build artefacts.
- Release Management Service: to manage the Release Configurations which define when and where to publish built artefacts to.
- Test Management Service: to manage the manual testing required to determine what to automate.
The Work Item Management Service
The Work Item Management Service manages the Work Items (Tasks) of Projects developed by Teams of Users.
The service allows for Agile Work Items to be managed on a shared team Kanban Board as Epics, Features and User Stories + Acceptance Tests, and Bugs:
The functionality provided by the above ensures the following requirements are met:
- REQ-xxxx: Development projects must be managed, developed, tested and deployed, supported and enhanced using Agile tools and approaches.
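As an illustration of how the Work Item Management Service is programmable by tooling, the following is a hedged sketch of the REST request a tool could compose to create a User Story. The endpoint shape follows the VSTS Work Item Tracking REST API; the account and project names are hypothetical:

```javascript
// Hedged sketch: compose (but do not send) the request used to create a
// User Story via the Work Item Tracking REST API. The "fabrikam" account
// and "FooProject" names below are hypothetical placeholders.
function buildCreateStoryRequest(account, project, title) {
  return {
    method: 'PATCH',
    url: `https://${account}.visualstudio.com/${project}` +
         `/_apis/wit/workitems/$User%20Story?api-version=1.0`,
    headers: { 'Content-Type': 'application/json-patch+json' },
    // The body is a JSON Patch document setting the work item's fields.
    body: [
      { op: 'add', path: '/fields/System.Title', value: title }
    ]
  };
}

const req = buildCreateStoryRequest('fabrikam', 'FooProject',
  'As a Tester, I want Acceptance Tests attached to Stories');
```

The request would still need to be sent with appropriate authentication, as discussed under the Information View's remote-access constraints.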
The Version Control Service
The ALM Service's Version Control Service manages a distributed versioned code repository that is clonable to developer workstations.
The Version Control Service is used by projects to manage and version:
- Source Code
- Configuration Templates
- Deployment Templates
- Development Documentation
The Version Control Service is directly used by the following Roles:
- Project Development Roles, which depending on the size of the project will be one or more of the following:
- Internal Organisation Development Role
- External Vendor Development Role(s)
- External Review Role (Security and Maintainability Code Review)
In no circumstance is the Version Control Service to be used to manage sensitive data and artefacts:
- Copies of all or partial Production Data
- Certificates
- Security Tokens/Passwords/Keys
- Confidential Information
The functionality provided by the above ensures the following requirements are met:
- REQ-xxxx: Solution source artefacts must be managed using a distributed version control repository.
- REQ-xxxx: Development policies must be in place to limit the persisting of any form of production data to the ALM Service.
The Build Management Service
As part of the ALM Service's functionality, VSTS provides a Build Management Service.
It is comprised of a hosted Build Controller, which manages a Build Queue of queued Build Jobs, which are spooled to a Build Pool of Build Agents.
A Build Job executes a Build Definition, which contains a custom sequence of Build Steps selected from the free and open source Visual Studio Extensions available for this purpose from the Visual Studio Marketplace.
An example of a Build Definition comprised of a series of common Build Steps is provided below.
The ability to develop, upload and maintain custom Build Step Extensions to the Visual Studio Marketplace is a key aspect of this solution.
Custom Build Step Extension
This solution relies on the ongoing development of a custom Build Step, discussed later in this View.
The Release Management Service
As part of its ALM functionality, VSTS provides a Release Management Service.
The Release Management Service manages Release Definitions which queue the delivery of cached Build Artefacts built by a successful Build (see above) to a target Environment, upon receiving a Signal.
The Signal can be automatic or manual (eg: acceptance by an appropriate stakeholder).
A Deployment Job executes a Deployment Definition, which contains a custom sequence of Deployment Steps selected from the free and open source Visual Studio Extensions available for this purpose from the Visual Studio Marketplace.
An example of a Deployment Definition comprised of a series of common Deployment Steps is provided below.
Depending on the Deployment Definition's choice of Deployment Steps and their configuration, a Deployment Job deploys to one or more target environments:
As per Organisation Design Specifications, in order to fully take advantage of the lower maintenance processes and cost of PaaS as compared to IaaS, PaaS should be the preferred option for new development projects.
Custom ALM Build Step Extension
This solution relies on the development of a custom Build Step capable of invoking a series of static text and binary tests.
Organisation Users setting up their project will download the Build Step Extension from the Build Step Catalogue and let the Build Job execute it:
The solution's custom Build Step Extension invokes a series of free and commercial 3rd party tools to develop parts of the final accreditation report artefacts.
A sequence diagram outlining key steps is defined below:
Accreditation Report
The results of the various tests run by the Console application are consolidated into a common report available to Accreditation Services.
The report outlines to Accreditation Services the factual readiness of an application for deployment using production data*.
Requirements:
- REQ-xxxx: Accreditation Reports for all Builds should be persisted.
- REQ-xxxx: Accreditation Reports for Systems deployed with access to live data must be persisted.
- REQ-xxxx: Persisted Accreditation Reports must be accessible by Accreditation Services.
- In a first instance, this could be done by attaching the Reports to an Email, so that they are persisted in the Accreditation Service Role's Email server. A dedicated website could be envisioned later.
Section Purpose
This section describes the concurrency structure of the system, and maps functional elements to concurrency units, addressing how they are coordinated and controlled.
Section Context
Section Summary
The project currently has no specific functional concurrency requirements, beyond those related to performance throughput requirements outlined in the Functional View. These requirements are met by the multi-threaded capabilities of the Execution Environments listed in the Development View.
Section Purpose
This section of the document describes how the Solution stores, manipulates, manages, and distributes both static data structures and dynamic information flow.
System Data Stores
As a SaaS, Visual Studio Team Services (VSTS) persists the data it manages in a datastore backing the application:
Remote Access to the System Data Store
The System does not allow external systems to access its datastores by any means other than authenticated and authorised APIs.
Data Content
The solution manages the persistence of the following privacy significant forms of information:
- VSTS Accounts, which contain the Display Name and Email Address of organisation – and optionally vendor – Users.
Data Classification
Considering that the system displays the Display Names and email addresses of delivery users (Project Managers, Product Owners, Developers, Testers), as per the information below, the data the solution manages is classified as:
IN CONFIDENCE.
Note: refer to the Appendices for detailed information on how the Data Classification is defined.
Data Classification Impact
The requirements and impact of the above Data Classification is listed in the Appendices.
How the system complies with these requirements is described in the relevant sections of this document.
The Risk of Data
Data is at risk both in transit and at rest in any environment.
The removal of Production data from all environments reduces the organisation's exposure allowing it to concentrate on protecting only the production environments:
Requirements:
- REQ-xxxx: Original or obfuscated copies of production data, in whole or subset, MUST NOT be used or made available from any environment other than Production and Production Staging.
Data Per Environment
Under no circumstances will copies of production data – original or obfuscated, in whole or subset – be used in any environment.
System Data Responsibility
Each system is responsible for its own security and privacy protection. This solution's systems are responsible for protecting the data they cache from other systems, but not for those other systems' application, data storage or transport security and privacy protection.
Although not held responsible, it is in this solution's interest to assess the impact of any discovered issues with a remote system, and report it to the remote system's maintenance channel.
Data Sizing and Growth Requirements
The managed service provides unlimited storage of:
- Work Items
- Version Controlled Code
- A configurable number of previous builds and deployments (usually 10).
That said, the number of projects managed by the system is expected to grow moderately and more or less linearly:
Architecturally Significant Data Entities
Being a managed SaaS service, the system data store's design is not known to end users and is irrelevant to the proposed solution.
For conceptual reference purposes only, below are key conceptual data entities:
Data Protection
Under no circumstances is Production Data to be made available outside of Organisation controlled environments (e.g.: Vendors).
Data Migration
No database migration is required or possible.
The System's Projects, Users, Code, Build Definitions are provisioned on a per Project basis: each Project that requires the service will be provided a new Project space, using the provided Web UI or API calls.
Data Auditing
The Provided Service provides auditing of key operations, including:
- Work Item creation, updating, deletion
- Build Definition creation, updating
- Release Definition creation, updating
Data Encryption
Data Archiving
The service does not currently provide a means to Archive projects.
Data Exportation
Version controlled Code can be cloned.
Work Items can be exported to Excel and/or MS Project.
Data Deletion and Retention
Data is currently retained indefinitely.
Media Backups
Data Security
Section Purpose
This section describes the architecture and processes that supports the Development, Support, Operation, Maintenance, Enhancing and Decommissioning process over the lifetime of the application – not just a Big-Bang first delivery.
Section Context
The functionality described in this document section is related to the following views:
- Functionality View
- Information View
- Deployment View
Section Summary
The solution is a Visual Studio Marketplace Build Step Extension, written in node.js, which downloads a Nuget package containing a command line *.exe, which in turn ShellExec's a series of command line test tools, finally collating their output into a single coherent report.
Development Preamble
Before describing the development process, it is of interest that there is a certain recursive nature to developing Extensions for a Continuous Delivery Service: the tool is managed using the very tool that is being extended…
In effect, a VSTS Project is required to be registered within the ALM Service, in order to manage the project's Work Items, Repository, and Build and Deployment Definitions.
As the extension is extended and redeployed, the tests it implements will be reapplied to itself, ensuring it meets the same quality measures expected of others.
Development Prerequisites
In order for a development team to adhere to agile principles and deliver value to stakeholders, functional services, subscriptions, licenses and tools are required.
Prerequisites: ALM Service
The solution will be managed using a cloud hosted Application Lifecycle Management service.
- REQ-xxxx: Project Application Lifecycle Management (ALM) Services must be accessible from within and outside the organisation.
- REQ-xxxx: Project Application Lifecycle Management (ALM) Services should be cloud hosted.
Prerequisites: Version Control Context
The solution's artefacts will be versioned within a distributed code repository.
The primary repository will be the production team's Git Repository, managed by the ALM's Version Control Service:
Prerequisites: Continuous Delivery Service
The first principle behind the Agile Manifesto calls for “continuous delivery of valuable software”.
In order to perform “continuous delivery”, a Continuous Integration, Build, Testing, Accreditation and Delivery Service is required.
Prerequisites: Continuous Delivery vs Continuous Deployment
The difference between a Continuous Delivery Service and a Continuous Deployment Service is that delivery to target environments with a Continuous Delivery Service is a decision (by change control, accreditation services, etc.) – whereas in the second case, every commit to the code base that passes all automated tests is trusted to be deployed to production immediately.
This organisation – and this Project – uses a controlled audited Continuous Delivery based approach.
Prerequisites: Continuous Delivery: Code Branch Integration
The Continuous Delivery Service employed to deliver the solution will perform Continuous Integration activities and verify submitted code feature branches before integrating the code with the protected master branch.
If the Continuous Delivery Service rejects the code due to it failing tests – or a Pull Request reviewer (see elsewhere in this View) has manually rejected the submission – the developer has to fix it and try again before the Continuous Delivery Service will allow the submitted feature branch to be integrated with the protected master branch:
- REQ-xxxx: The Project's CodeBase must be protected by testing new code submissions prior to integration.
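The gate described above can be sketched as a simple decision: a feature branch is integrated only when automated tests pass and the Pull Request review has not rejected it. This is an illustrative sketch only, with assumed field names, not the ALM Service's actual policy model:

```javascript
// Illustrative sketch (field names are assumed) of the integration gate the
// Continuous Delivery Service applies before merging a feature branch into
// the protected master branch.
function canIntegrate(submission) {
  // Every automated test run against the feature branch must have passed...
  const testsPass = submission.testResults.every(r => r.passed);
  // ...and a reviewer must have approved, not rejected, the Pull Request.
  const reviewed = submission.pullRequest.approved &&
                   !submission.pullRequest.rejected;
  return testsPass && reviewed;
}
```

If either condition fails, the developer must fix the branch and resubmit, as described above.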
Prerequisites: Continuous Delivery: Integration Testing
Continuous Integration (see above) relies on the completeness of Tests.
The testing to be automated by the Continuous Delivery Service is as follows:
Note that in the above diagram the test categories marked with a pencil are those that are still often done manually. This is unacceptable, as it is detrimental to delivering value often to the client, for the following reason:
“After accelerating other aspects of the delivery pipeline, teams typically find that their testing process is preventing them from achieving the expected benefits of their SDLC acceleration initiative. Testing and the overall quality process remain problematic for several key reasons. […] Iteration length has changed from months to weeks or days with the rising popularity of Agile, DevOps, and Continuous Delivery. Traditional methods of testing, which rely heavily on manual testing and automated GUI tests that require frequent updating, cannot keep pace. At this point, organizations tend to recognize the need to extend their test automation efforts.” Src: Wikipedia: Continuous Testing
Note: The architecture of the test pipeline and the coordination of the tools required to test the above is described in another document (see: CADS SAD document).
Prerequisites: Continuous Delivery: Accreditation
The value of the Continuous Delivery Service is in delivering accreditable value rapidly to user role stakeholders, without being delayed by manual testing and accreditation.
The tools used in a Continuous Tested environment to test Security, Performance, Usability, Accreditability each produce reports.
The primary points from each report can be assembled into a single report, which becomes the basis of the Accreditation Report that Accreditation Services uses to Certify and Accredit a solution:
Prerequisites: Continuous Delivery: Organisation provided CADS Service
It is important to note that in the above testing diagram only two types of test are specific to a solution:
If available, it is preferable that the project uses an organisation provided common Continuously Accredited Delivery Service (CADS), in order to avoid repeating much of the delivery pipeline setup cost per project.
Note: Refer to the CADS SAD document.
Prerequisites: Subscriptions, Licenses, Tools
In order to work without hindrance or delay the following Accounts, Subscriptions, Licenses and Tools are needed in order to manage work items and produce valuable deliverables:
- Project Managers:
- TBD.
- Developers:
- Accounts:
- Each Developer must have an MSDN Account (associated to an SSO Account).
- Each Developer must have an Azure Services Account, associated to an SSO Account (Microsoft Account, or Azure AD Account)
- Subscriptions:
- Microsoft Developer Network License (MSDN), required in order to use Visual Studio, local instances of Sql Server, etc.
- Azure Subscription (required to access cloud services, as well as use Visual Studio Team Services)
- Resharper Ultimate, which provides code maintainability and quality verification tools
- Licenses:
- CodeIt.Right, in order to configure one part of the Continuous Delivery Service's testing tools.
- Tools:
- Visual Studio (latest version) – comes with a valid MSDN license
- Resharper – comes with the Ultimate license
- CodeIt.Right (optional – see above)
- Testers:
- Accounts:
- TBD.
- Subscriptions:
- TBD.
- Licenses:
- TBD.
- Tools:
- TBD.
- Continuous Delivery Service:
- Accounts:
- TBD.
- Subscriptions:
- TBD.
- Licenses:
- CodeIt.Right
- Tools:
- TBD.
ALM Service Extensions
The ALM Service is extensible in several ways:
- creating REST based service hooks to be notified of events within external applications, or
- creating Extensions (new build tasks, dashboard widgets, etc.) which directly integrate within the ALM Service.
This solution relies on the creation of an Extension.
Build Step Extension
An Extension is comprised of the following elements:
The Manifest file describes the extension to the Visual Studio Marketplace.
The static artefacts include images, markdown help files, and one or more scripts to be run on the Build Agent.
CADS.exe
One of the static script files within the Extension is an initialization script invoked by the Build Service Agent.
The script acts as a simple shim and in turn ensures a specific Nuget Package is either available, or downloads it, before unpacking the package and invoking the custom command line tool within the Package.
The command line tool scans the project's source control for a config file which it will use to guide it through subsequent steps.
The Config file is parsed to extract settings, which the tool uses to determine how best to invoke a set of test tools.
Config File
The Config File used by the CADS.exe file is used to guide the tests that are applied.
{
  "project" : {
    "department": "EIS"
  },
  "report" : {
    "title" : "Foo Project Accreditation Report"
  },
  "tools" : {
    // Run all Tools unless excluded (maybe the project is not yet ready to be rigorous):
    "excluded" : []
  }
}
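To illustrate, a hypothetical sketch (in JavaScript, with placeholder tool names rather than the real tool ids) of how the CLI could apply the config's "excluded" list:

```javascript
// Hypothetical sketch of how CADS.exe could apply the config's "excluded"
// list. The tool names are placeholders, not the real tool identifiers.
const allTools = ['resharper-cli', 'codeitright', 'security-scan'];

function toolsToRun(config) {
  // An absent or empty "tools.excluded" section means: run everything.
  const excluded = (config.tools && config.tools.excluded) || [];
  return allTools.filter(tool => !excluded.includes(tool));
}
```

This keeps the default behaviour rigorous (all tools run) while letting a project opt out of individual checks explicitly, and auditably, in version control.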
Test Tools Invocation
Using the Build Step Config File, the Command Line Executable invokes a series of Command line based test tools.
These tools are open source and closed source command line tools currently used by Security and Performance Specialists to analyse code.
The process by which they are invoked is typical .NET, similar to the following example:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.FileName = "Write500Lines.exe";
p.Start();
// To avoid deadlocks, always read the output stream first and then wait.
string output = p.StandardOutput.ReadToEnd();
p.WaitForExit();
Src: MSDN
Build Step Report Consolidation
The various tools invoked by the Command Line Tool each produce their own output.
Some tools produce XML, some tools produce text output. Most tools produce far more than is required in order to produce a valuable report for both developers and accreditors.
For this reason, each tool's output needs to be preprocessed using Regular Expressions, in order to find and extract the most actionable information within the output (eg: error messages and the top 5 warnings, dropping the rest) and convert it to a common format.
The preferred portable syntax used within the Visual Studio Extension marketplace is Markdown.
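The extraction step can be sketched as follows. The message formats (`Error:` / `Warning:` prefixes) and the tool name are assumptions for illustration; real tools each need their own expressions:

```javascript
// Hypothetical sketch: reduce a tool's raw text output to its errors and
// first 5 warnings, emitted as a common Markdown fragment. The "Error:" /
// "Warning:" line formats are assumed for illustration.
function toMarkdown(toolName, rawOutput) {
  const errors = rawOutput.match(/^Error:.*$/gm) || [];
  const warnings = (rawOutput.match(/^Warning:.*$/gm) || []).slice(0, 5);
  return [
    `## ${toolName}`,
    '',
    ...errors.map(e => `- **${e}**`),  // errors are always kept, emphasised
    ...warnings.map(w => `- ${w}`)     // only the top 5 warnings survive
  ].join('\n');
}
```

Each tool's fragment, in the same Markdown shape, can then be concatenated into the consolidated report.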
Report Generation
Markdown can in turn be converted, using Pandoc (again invoked using .NET Shell commands), to richer formats as needed within the organisation.
The process by which the various test results are consolidated is demonstrated below.
Report Availability
The generated report will be made available to stakeholders. In a first instance, the report will simply be configured to be sent via SMTP to an email address accessible by the Accreditation Services.
Report Persistence
The final Markdown report will be committed back into the Repository to a configured location, using Git.
This ensures an accreditation report is persisted with the code base, and remains available for review at any time in the future.
Extension Manifest
{
  "manifestVersion": 1,
  "id": "cads-01",
  "version": "0.1.0",
  "name": "Continuous Accredited Delivery Build Step",
  "publisher": "moe-nz",
  "targets": [
    {
      "id": "Microsoft.VisualStudio.Services"
    }
  ],
  "public": false,
  "scopes": [
    "vso.build",
    "vso.build_execute",
    "vso.work",
    "vso.code",
    "vso.code_write",
    "vso.code_manage"
  ]
}
Summary Purpose
This section describes the environments and processes required to support the Development, Support, Operation, Maintenance, Enhancing and Decommissioning process of the solution's components over the lifetime of the application – not just a Big-Bang first delivery.
Application LifeCycle Management Service Deployment
Being a Software As A Service (SaaS), the base ALM Service does not require deployment.
Application LifeCycle Management Service Extension Deployment
Although the base ALM Service does not require deployment, a custom Extension to the ALM Service does need to be deployed to an Extension Catalogue.
Once deployed, users – including developers internal to this Organisation and external vendor developers – can download the Build Step Extension, install and configure it as per their specific needs within their project's Build Definition.
Visual Studio MarketPlace
As per the Functional View, individual projects using the ALM Service's Build Management Service will define their own Build Definition, each composed of one or more Build Steps.
These Build Steps are selected from a Build Steps Catalogue (the Visual Studio MarketPlace).
In order for a project developer to complete a Build Definition using a custom Build Step Extension, it must first be uploaded to the Visual Studio MarketPlace:
Build Step Extension
The Build Step Extension acts as a Shim between Visual Studio Team Services' Build Management Service and a custom Command Line Interface (CLI) based application (CADS.exe), which is downloaded by the shim onto a Build Agent and executed:
Build Step Extension Deployment
Due to the Build Step Extension being relatively thin in functionality – acting only as a shim to a faster changing CLI EXE (discussed later) – the Build Step Extension is expected to be deployed once, and updated rarely.
When updating, “the updated version of your extension is automatically installed to accounts that have it already installed. New accounts where the extension is installed in the future also receive the latest version.”
As confirmed by Microsoft Support, updates to Build Step Extensions can be uploaded without affecting existing users by modifying the Extension's metadata file, giving it a different name per environment (eg: CADS-DEV, CADS-ST, CADS-PROD). For example:
{
"manifestVersion": 1,
"id": "CADS-ST",
"version": "0.1.0",
"name": "CADS Build Step (ST)",
"publisher": "ThisOrg",
"targets": [
{
"id": "Microsoft.VisualStudio.Services"
}
]
}
Private/Public Deployment
The Visual Studio MarketPlace displays a selection of public free and paid for Extensions. Any person or organisation can develop a custom Extension and upload it for public use by others. Extensions can also be marked as private and used by invitation only.
Inviting users to use private Extensions is only appropriate for development and early testing purposes. Production Build Step Extensions ready for use should be made public.
- REQ-xxxx: Development instances of Visual Studio Extensions should be marked private.
- REQ-xxxx: Production instances of Visual Studio Extensions must be marked public.
- Rationale: Public access promotes use and therefore feedback.
Nuget Package
As per current Microsoft standards, the CLI Executable (CADS.EXE) is packaged and made available within a Nuget package.
Nuget Package Deployment
This Nuget Package is required to be uploaded to the online Nuget Package Catalogue site (https://nuget.org) before the Build Step Extension can download it in order to invoke the CLI Executable (CADS.EXE) within it.
Whereas the Build Step Extension is intended to be rarely updated, the Nuget Package is intended to be updated regularly, in an Agile manner, as functionality is added.
The deployment to the Nuget Package Catalogue is – of course – automated by the project's Release Management Service.
Requirements:
- REQ-xxxx: A Nuget.org Account must be created in order to upload Nuget packages.
- REQ-xxxx: For Integrity and Auditability reasons the Nuget Catalogue Account should be the Developer's Name.
- REQ-xxxx: For Business Continuity reasons, the Nuget Catalogue Account credentials should be known by an Organisation employee, and not solely by Contractors.
- REQ-xxxx: The Project Metadata should reflect the Organisation as tool Steward.
Invoked Test Tools
The CLI EXE invokes a series of tools that must either already be on the Build Agent or be downloaded via Nuget packages.
The list of executables environments available on a Build Agent is always growing. A current list can be found at the following location:
At time of writing, the list included – amongst many other items – the following key execution environments and common tools:
- Visual Studio .NET SDK
- Azure
- ASP.NET
- Cordova 6.1.0
- Git 2.10.1
- Microsoft Office Developer Tools
- Node.js
- Python
- SQLite
- SQL Server
- TFS Build Extensions
- TypeScript
- WIX
- Web Deploy
Of special note were the following execution environments:
- .NET
- Python
- Node.js
- TypeScript
The first set of tests that will be configured will be:
- IT:AD:Resharper (using the free CLI)
Invoked Tool Deployment
Most, if not all, of the tools listed above are already available as packages that can be downloaded by script to the Build Agent.
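The "already on the Build Agent or downloaded" decision above can be sketched as follows. This is an illustrative helper only, not the actual CADS.EXE implementation, and it assumes a Node.js runtime (which the agent list above includes):

```typescript
import { execFileSync } from "child_process";

// Returns true when `tool` is on the agent's PATH and runs; probing with
// `--version` is a safe no-op for most of the tools listed above.
function toolAvailable(tool: string): boolean {
  try {
    execFileSync(tool, ["--version"], { stdio: "ignore" });
    return true;
  } catch (e) {
    return false;
  }
}

// Illustrative use: fall back to a Nuget download when the tool is absent.
function resolveTool(tool: string): string {
  return toolAvailable(tool) ? `agent:${tool}` : `nuget:${tool}`;
}
```

A real implementation would also pin tool versions, since agent images vary over time.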
## Section Purpose ##
This section describes how the solution’s Systems will be monitored, administered, supported and operated while running in a Production and/or Training environment.
## Section Summary ##
## Section Structure ##
Policies
Principles
Patterns
Technologies
Environments
Roles
Procedures
Training
Maintenance (Schedule/Recertification)
Tracing
Tracing is a non-permanent Diagnostics tool for Maintainer and Developer stakeholders.
TODO: None defined at this point in time.
Governance
Monitoring
Monitoring is implemented at the infrastructure, execution environment, and application level, as per requirements within [DOCR-0002: Enterprise System Specifications].
The following monitoring is in place and forms the basis of raising the alerts specified under Alerting:
| Name | Type | Level | Implemented | Accessible To | Description | Traceability |
| --- | --- | --- | --- | --- | --- | --- |
| MON-xxx | OS PerfCounters | MON-L-2 | | SysAdmin | | NFR-xxxx, MONL-xxxx |

Refer to: [DOCR-0007: Enterprise Solution Architecture Description (SAD) Reference]
Alerting
The data collected by the monitoring mechanisms defined above, along with specified rules, is used to create Alerts sent to the specified Stakeholders:

| Name | Type | Alerts | Description | Traceability |
| --- | --- | --- | --- | --- |
| ALERT-xxxx | Performance Monitor | [SysAdmin\|Business] | | MON-xxxx, NFR-xxxx |
Consider: Windows Performance Counters, to measure:
- OS Performance
- IIS Performance
- SQL Server Performance
- Application Performance
Auditing
Long term auditing – not to be confused with Diagnostic Tracing – is built into the Solution’s services as follows:
- None defined at this point in time.
«INSTRUCTIONS: Describe what database, what table, and what types of information are persisted. » «SUGGESTED PREFIX: AUDIT-xxxx»
Support Reports
The following lists Reports that are available to Support stakeholders to assist with supporting the application:
TODO: None defined at this point in time.
Note: These reports were already listed within the Deployment View.
- SUGGESTED PREFIX: REPORT-xxxx
Configuration Management
The following lists means of configuring the application:
TODO: None defined at this point in time.
- Host-specific .config files in the root directory of the package.
- A database table used to persist shared application settings – note that shared settings are updatable only via the Application UI.
- Etc.
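As an illustrative sketch only (the key name below is hypothetical), a host-specific .config file would carry entries of this shape:

```
<appSettings>
  <!-- Hypothetical setting; real keys are defined per host -->
  <add key="Cads.Diagnostics.TraceLevel" value="Warning" />
</appSettings>
```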
Operational Management
Common ongoing Management requirements are outlined within [DOCR-0002: Enterprise System Specifications].
Below are listed any additional operations specific to this solution during the lifespan of the application.
«INSTRUCTIONS: Ongoing operations tasks are outlined in [DOCR-0002: Enterprise System Specifications], but each system will have its own specific tasks.»
## Periodic Management ##
The following tasks must be reviewed periodically:
- MGMT-01:
- Ensure Infrastructure is periodically reviewed to be sufficient to handle expected loads over the agreed timespan until the next review.
- MGMT-02:
- Ensure used infrastructure is supported, and plans are prepared and followed to remain supported until deactivated.
- MGMT-03:
- Ensure licenses are current, and plans are prepared and followed to ensure continued service until the solution is deactivated.
- MGMT-04:
- Ensure Firewalls remain active.
- MGMT-05:
- Ensure Firewall, Network, Node, Execution Environment, and Application Monitoring remain active and supported.
- MGMT-06:
- Ensure Firewall, Network, Node, Execution Environment, and Application Monitoring Alerting remains active, and reporting to an appropriate System/Resource.
- MGMT-07:
- Ensure Diagnostic tracing is active at the appropriate level.
- MGMT-08:
- Ensure Licenses remain supported.
- MGMT-09:
- Ensure Service Subscriptions remain activated.
- MGMT-10:
- Ensure Certificates are reviewed and renewed periodically.
- MGMT-11:
- Ensure that Backup plans are prepared and tested periodically.
- MGMT-12:
- Ensure that DR plans are prepared and tested periodically.
- MGMT-13:
- Ensure that the Data Export plans used during Disengagement remain current, and that the feature is periodically tested.
Operational Support Process
Users of the system can expect to be supported as per the support channels documented within [DOCR-0007: Enterprise Solution Architecture Description (SAD) Reference]
Security Incident Management Plan
Risk Management
The following lists raised Risks, whether they have been Addressed, or the means by which they are being Managed:
Backup and Restore
The following lists the system's data stores, whether they are backed up, where, and at what interval:
- None defined at this point in time.
Disaster Recovery
In the eventuality that the System becomes unavailable, the following lists the elements involved in a Disaster Recovery Plan:
- None defined at this point in time.
Business Continuity Plan (BCP)
In the eventuality that a business significant part of the system is unavailable to user role stakeholders, the following lists contingency plans:
- None defined at this point in time.
Disengagement Plan
None defined at this point in time.
Which stakeholders to contact, any process we know of, etc.
Generally, this section tends to be vague to the point of uselessness, as nobody expects a system to be disengaged. But do try anyway.
- SUGGESTED PREFIX: END-xxxx
ALM Service Subscription Management
The ALM Service Subscription (VSTS Subscription) used for Development purposes will be managed by the Project's Development Service on behalf of the Organisation's Accreditation Service.
There is no additional cost for a VSTS-based Project beyond what is already required by Development Services (MSDN Subscriptions).
Package Management
Access Tokens will be generated and configured within the Build Management Service and Deployment Management Service in order for the Services to deploy packages to the Visual Studio MarketPlace and Nuget.org.
The Organisation or Vendor Development Service that is responsible for developing the Build Step Extension will manage these tokens.
There is no cost for hosting packages on Nuget.org or Visual Studio MarketPlace.
Build Pipeline Management
As is customary in all projects that have used a Build Management Service, the Project will configure and manage its own build pipeline's Build Definitions and Deployment Definitions.
Service Management
Microsoft will manage their own SaaS based Service, including its monitoring, security and performance.
Any alerts raised by Microsoft will be channeled to the ALM Service's project dashboard.
The Organisation's Application Support Service is proficient at managing virtual infrastructure within the Organisation's network, but is not required for this project, as Microsoft manages its own SaaS infrastructure.
Service Security Management
Microsoft developed the ALM Service and the Azure stack it depends on following Microsoft's Security Development Lifecycle (SDL), implementing Security in Depth, investing in the prevention of security holes, including threat modeling during service design, and following design and code best practices.
See the Accreditation View for further details.
Operation Management
The Operation of Azure, Visual Studio Team Services, Visual Studio MarketPlace or Nuget.org is Microsoft's responsibility.
The use of the VSTS Service to manage this project is the Project's responsibility during development, deployment, and – as per Agile DevOps patterns – its subsequent ongoing lifespan.
See the Accreditation View for further details.
Security Incident Management
Any Security Incident within Azure, Visual Studio Team Services, Visual Studio MarketPlace or Nuget.org is Microsoft's responsibility.
Note that the project does not manage confidential information in any way – not within the source code, Build Step Extension package or Nuget package. There is therefore currently no requirement for a Security Incident Management Plan.
See the Accreditation View for further details.
Service Availability Management
Service Availability of Azure, Visual Studio Team Services, Visual Studio Marketplace, or Nuget is Microsoft's responsibility.
See the Accreditation View for further details.
Hardware and Service Failure Management
Service Hardware and Service Failure Mitigation is Microsoft's responsibility.
See the Accreditation View for further details.
Disaster Recovery Management
Microsoft is responsible for providing DR for Azure, Visual Studio Team Services, Visual Studio MarketPlace, and Nuget.org.
See the Accreditation View for further details.
Infrastructure Management
The Organisation's Infrastructure Support Service is proficient at maintaining Virtual Infrastructure within the Organisation's network, but is not required as the Project requires no Organisation or cloud based Virtual Infrastructure in order to develop the Build Step Extension.
Customer Service Management
Customer support is channeled through the Visual Studio MarketPlace.
Feedback on Extensions is emailed to the Visual Studio MarketPlace Account identity's email address.
If the Extension's usage warrants more, the project can in the future investigate the use of UserVoice – an online SaaS specializing in collecting User Feedback – which is known to integrate well with VSTS.
Adoption and Change Management
Adoption and Change Management (ACM) is required to be developed to communicate to existing IT Resources that:
- existing projects will continue to be managed using current procedures, and will not be moved to DevOps
- new projects will be managed using DevOps methodologies.
For new projects this implies a series of changes including – but not limited to – the following:
- Projects manage their own Azure Cloud Subscriptions, with guidance offered by Infrastructure Architects.
- Projects will manage their own ALM Subscriptions.
- Projects are encouraged to save the Organisation unnecessary cost and prefer designing solutions to be hosted on PaaS over IaaS.
- No Production data, in whole or in part, in clear text, encrypted or obfuscated, will be used in any environment bar production.
- PaaS based architectures will naturally curtail the use of ETL, and applications are expected to provide APIs for integration purposes.
- It is a Project's responsibility to clarify that it is each developer's responsibility to act responsibly and with care when working with Cloud resources. Unexpected use of resources will be resolved as required.
- Projects will manage all phases using a cross-team ALM Service.
- Of specific note is the fact that Testing Services will not require Production services to use their Issue Tracking Software either during production, or for testing purposes.
Business Continuity Management
As the solution relies on a SaaS service hosted on resilient, geo-redundant infrastructure, the need for a BCP is moot.
That stated, for discussion purposes only, Visual Studio Team Services is the online hosted version of Microsoft's Team Foundation Services (TFS). Being identical products it would be possible to stand up an in-house or cloud-hosted Virtual Machine on which to run TFS and regularly export/import data between the two systems.
See the Accreditation View for further details.
Disengagement Management
3rd party tools exist38) to extract data from VSTS.
But the work can be done without 3rd party tools if required, as exporting the core data is relatively trivial:
- the actual work created – the code – is contained in a distributed Repository that is easily replicable to any other workstation and/or build server.
- the metadata used to manage the work created – the Work Items – can be extracted from the Work Item Management Service easily using Excel, or via the Work Item Management Service's APIs directly if needed.
- the exact method by which build and deployment workflows are extracted would require further investigation.
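As a sketch of the second point, Work Items can be pulled over the Service's REST API; the helper below only composes the request URL, and the account name, ids and api-version shown are illustrative:

```typescript
// Hypothetical helper: composes the VSTS REST URL for fetching Work Items by id.
function workItemsUrl(account: string, ids: number[]): string {
  return (
    `https://${account}.visualstudio.com/DefaultCollection/_apis/wit/workitems` +
    `?ids=${ids.join(",")}&api-version=1.0`
  );
}
```

The returned URL would then be fetched with a Personal Access Token supplied via Basic authentication.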
It is important to note that Visual Studio Team Services is the online hosted version of Microsoft's Team Foundation Services (TFS). Being identical it is possible to stand up an in-house or cloud-hosted Virtual Machine on which to run TFS, and port the data, metadata and workflows to it. That stated, the logic of doing this, with all the infrastructure, migration, accreditation, support and maintenance costs is highly questionable.
Accreditation Management
The development of the Build Step will be passed through a Build Definition which includes itself. The generated report will act both as a Test Report and an Accreditation Report.
Change Control Management
It would be an oxymoron to use an ITIL based Change Control process to control an Agile DevOps based solution to a problem associated with ITIL.
Section Purpose
The purpose of this View is to list information relevant to providing a Risk based assessment of the Security, Performance, Functionality, Supportability and Maintainability Qualities of the Project.
Certification
The Cloud-based ALM Service – Visual Studio Team Services – has achieved third-party evaluation of its data security procedures, including:
- ISO 27001:2013
- HIPAA (Health Insurance Portability and Accountability Act)
- BAA (Business Associate Agreement)
- EU Model Clauses
- SOC 1 Type 2
- SOC 2 Type 2
The SOC audit for Team Services covers controls for data security, availability, processing integrity, and confidentiality.
Certification Comparison with other ALM Tools
It is important to note that JIRA products, whether hosted in the cloud or on site, do not offer anywhere near this level of certification, and therefore protection from risk. Atlassian state:
"ISO27001 - We follow many of the principles of ISO27001/2 in our security practice but have no current plans to certify. You can read more about the structure of our Security Management Program. Cloud Security Alliance - We have completed our Cloud Control Matrix CAIQ Self Assessment for the CSA Security, Trust, & Assurance Registry. HIPAA / HITECH – For our Cloud products, we are not able to sign a Business Associate agreement and we recommend our Server products for companies that need to comply."
Data Classification
Within the scope of this project no Data is manipulated.
Therefore the Data Classification is UNCLASSIFIED.
Access Management
Access to the VSTS Project is limited to the Project's Team Members – neither the rest of the Organisation nor the public has access.
Access to Visual Studio Team Services is controlled by VSTS' Role Based Access Controls (RBAC) capabilities.
Service Security Management
Microsoft developed VSTS and the Azure stack it depends on following Microsoft's Security Development Lifecycle (SDL), implementing Security in Depth, investing in the prevention of security holes, including threat modeling during service design, and following design and code best practices.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Operation Management
The Service is managed in compliance with Microsoft's Operational Security Assurance (OSA), which includes constantly verifying security with standard tooling and testing, limiting access to operational and customer data, and gating the rollout of new features through a rigid approval process.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Security Incident Management
In the event of a breach, Microsoft uses security response plans to minimize data leakage, loss or corruption. Relevant progress is reported publicly.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Service Availability Management
Team Services relies on Azure's DDoS defense system to prevent attacks against the service. It uses standard detection and mitigation techniques such as SYN cookies, rate limiting and connection limits. The system is designed not only to withstand attacks from the outside but also from within Azure. For application-specific attacks that are able to penetrate the Azure defense systems, Team Services establishes application and account level quotas and throttling to prevent any overuse of key service resources during an attack or accidental misuse of resources.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Hardware and Service Failure Management
To protect data in the case of hardware or service failures, Microsoft Azure storage geo-replicates customer data between two locations within the same region that are hundreds of miles apart; for instance, between North and West Europe or between North and South United States.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Disaster Recovery Management
Visual Studio Team Services leverages many of the Microsoft Azure storage features to ensure data availability in the case of hardware failure, service disruption, or data center disasters.
Additionally, Microsoft has procedures to protect data from accidental or malicious deletion.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Business Continuity Plan (BCP)
The Azure platform on which Visual Studio Team Services runs has sufficient BCPs in place to support this Organisation's needs.
Src: https://www.visualstudio.com/en-us/articles/team-services-security-whitepaper
Disengagement Management
As described within the Organisation View, tools are available to either:
- port data from VSTS to an IaaS-based instance of Team Foundation Services (TFS), or
- extract the data in preparation for another suite of DevOps-relevant Services.
Document DataStore Location
Rozanski and Woods SAD Structure Concepts
Summary
The following is a summary of key concepts used to describe Systems using the Rozanski and Woods View/Viewpoint based structure.
Standard Architectural View Descriptions
The Rozanski and Woods View/Viewpoint document structure describes the complexity of the solution's systems in a series of curated views, prepared from the point of view – the viewpoint – of key stakeholders.
Descriptions of the standard architectural views are as follows:
- System Context View: the relationships, dependencies, and interactions between the system and its environment (the people, systems, and external entities with which it interacts). Includes the system’s runtime context and its scope and requirements.
- System Functional View: the system’s functional elements, their responsibilities, interfaces, and primary interactions; drives the shape of other system structures such as the information structure, concurrency structure, deployment structure, and so on.
- System Information View: the way that the architecture stores, manipulates, manages, and distributes information. This viewpoint develops a complete but high-level view of static data structure and information flow to answer the big questions around content, structure, ownership, latency, references, and data migration.
- System Concurrency View: the concurrency structure of the system and maps functional elements to concurrency units to clearly identify the parts of the system that can execute concurrently and how this is coordinated and controlled.
- System Development View: the architecture that supports the software development process. Development views communicate the aspects of the architecture of interest to those stakeholders involved in building, testing, maintaining, and enhancing the system.
- System Deployment View: the environment into which the system will be deployed, and the dependencies the system has on its runtime environment. Deployment views capture the system’s hardware environment, technical environment requirements, and the mapping of the software to hardware elements.
- System Operational View: how the system will be operated, administered, and supported when it is running in its production environment, by identifying system-wide strategy.
Stakeholders
Rozanski and Woods state that stakeholders should be Informed, Committed, Authorized, and Representative.
Rozanski and Woods classify stakeholder roles according to the following categories:
- Acquirers: Oversee the procurement of the system or product
- Assessors: Oversee the system’s conformance to standards and legal regulation
- Communicators: Explain the system to other stakeholders via its documentation and training materials
- Developers: Construct and deploy the system from specifications (or lead the teams that do this)
- Maintainers: Manage the evolution of the system once it is operational
- Production Engineers: Design, deploy and manage the hardware and software environments in which the system will be built, tested and run
- Suppliers: Build and/or supply the hardware, software, or infrastructure on which the system will run
- Support staff: Provide support to users for the product or system when it is running
- System administrators: Run the system once it has been deployed
- Testers: Test the system to ensure that it is suitable for use
- Users: Define the system’s functionality and ultimately make use of it.
Rozanski and Woods (RaW) Resources
Glossary
Summary
Common System Documentation Terms
- SAD: Solution Architecture Description. A document used to describe the complexity of a system model in curated views appropriate to the viewpoint of specific stakeholders.
- TDD: Technical Design Document. One or more continuation documents to technically expand on a SAD's Development View.
- UML: Unified Modeling Language: a general-purpose, (mostly visual) modeling language used to visualize system design in an unambiguous, standard way39).
- ArchiMate: a modeling language to describe, analyse and visualize enterprise architecture in an unambiguous way40).
- RaW: Rozanski and Woods, authors of the seminal “Software Systems Architecture”, within which was presented a SAD structure based on Views, Viewpoints and Perspectives. The Rozanski and Woods View/Viewpoint document structure describes the complexity of the solution's systems in a series of curated views, prepared from the point of view – the viewpoint – of key stakeholders.
- Stakeholder: A stakeholder in the architecture of a system is an individual, team, organization, or classes thereof, having an interest in the realization of the system.
- View: A view is a representation of one or more structural aspects of an architecture that illustrates how the architecture addresses one or more concerns held by one or more of its stakeholders.
- Viewpoint: A viewpoint is a collection of patterns, templates, and conventions for constructing one type of view. It defines the stakeholders whose concerns are reflected in the viewpoint and the guidelines, principles, and template models for constructing its views.
- Perspective: An architectural perspective is a collection of activities, tactics, and guidelines that are used to ensure that a system exhibits a particular set of related quality properties that require consideration across a number of the system’s architectural views.
Common Quality Terms
- CIA: Confidentiality, Integrity, Availability
- CIAP: Confidentiality, Integrity, Availability and Privacy
- AAA: Authenticated, Authorized, Accounted
- 8A: Accessible, Anytime, Anywhere, Anyhow, Anyone, Appropriate, Audited, Accounted. A principle of providing Transparency up to the point it interferes with Protection.
- MFA: Multi-Factor Authentication
- NDA: Non-Disclosure Agreement
- PIA: Privacy Impact Assessment
- DR: Disaster Recovery
- HA: High Availability
Common System Delivery Management Terms
- BAU: Business As Usual
- PM: Project Manager
- Agile development and delivery: refers to a group of development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
- Scrum: a lightweight process framework which is a subset of Agile – and the most widely-used variant.
- DevOps: DevOps is the union of people, Agile processes, and Services (ie Tools) to enable continuous delivery of value to end users, by removing barriers between Development, Operations and Quality Assurance, emphasizing communication, collaboration, and continuous automated integration, quality assurance and delivery.
- SAFe: Scaled Agile Framework. An Agile-based framework intended to expose large and cautious organisations to core elements of Agile in a 'safe' way.
- ITIL: Information Technology Infrastructure Library is a set of IT Service Management (ITSM) processes, procedures, tasks and checklists focused on aligning IT services with the needs of the organisation's strategy, business value and maintaining a minimum level of competency from a baselined plan. Its high ratio of opportunity cost to delivered value is behind the DevOps movement.
Common Development Management Terms
- ALM Service: a Service to manage the product lifecycle (governance, development, support, and maintenance) of computer programs. It encompasses requirements management, software architecture, computer programming, software testing, software maintenance, change management, continuous integration, project management, and release management. 41).
- Continuous Integration: the automation of the process of optionally testing and managing the peer review of submitted branches, prior to integrating the new additions into the Main Branch. The range of testing done for Continuous Integration purposes can range from testing just code unit tests all the way up to the type of testing referred to as Continuous Testing.
- Continuous Testing: the automation of a complete suite of quality and functionality tests applied to submitted branches by implementing Continuous Integration.
- Continuous Delivery: the use of a Build Service and Deployment Service to automate the publishing of built artefacts to a target environment (DT, BT, AT, IT, TRAINING or PROD). An optimal Continuous Delivery pipeline implements Continuous Testing, but it is still too common to have Continuous Delivery pipelines that rely on manual testing. To remove delivery delays, such pipelines should work towards replacing the manual testing with automated testing.
- Continuous Deployment: the use of a Build Service and Deployment Service to automate the publishing of built and fully tested (using Continuous Testing processes) artefacts every time code is submitted. A very high level of software delivery maturity is required to implement this process – hence why it is more often a target state than an achieved state.
- Version Control Service: a category of software tool to help a development team manage branching, integration and changes in general over time to source code and documents.
- TFVC: Team Foundation Version Control. A non-distributed Version Control system still in use for managing legacy projects. TFVC has largely been supplanted by the use of Git.
- Git: A decentralized, Distributed Version Control Service, which allows many software developers to work on a given project without requiring them to share a common network 42).
- Main branch: the primary branch of code in a Version Control Repository, to which submitted branches are merged, after – optionally – peer review and automated testing. See Continuous Integration. In older repository systems (eg: Subversion) the Main Branch was referred to as 'trunk'.
- Code Unit Testing: a form of Automated Testing which tests a single unit of code. See TDD.
- Test Driven Development (TDD): a development process where Code Unit Tests are written before the implementation code, based on Acceptance Test Definitions.
- Acceptance Test Definition: An Agile work item, commonly referred to simply as an Acceptance Test. An Acceptance Test Definition is a text based definition of user or system functional acceptance test for a User Story. A User Story can – and should – have more than one Acceptance Test Definition (and by extension, Code Unit Test) associated to it. The format of an Acceptance Test Definition is GWT.
- GWT: is an acronym for Given-When-Then, a term to describe the format in which Acceptance Test Definitions are written ('Given <some input> And <another input> When <user does something> Then <the following will be the result>').
- User Story: an Agile work item which is a text based summary of a User stakeholder's desired functionality, written by BAs in the language of stakeholders. The informality of the language used within a User Story adds value for stakeholder engagement, but User Stories are incomplete and valueless without accompanying Acceptance Test Definitions. The format is 'As a <role>, I want <functionality>, so that <benefit>'.
- Feature: an Agile work item comprised of several User Stories. Features are distinct elements of functionality that can't be delivered in a single Sprint iteration, but can be delivered in one Release.
- Release: although functionality is completed in each iteration, in some work environments, the product is held back before being released to users.
- Epic: an Agile work item representing a significantly larger body of work. An Epic encompasses many Features, and the User Stories within them.
- Work Item Management Service: a service to manage Agile work items (Epics, Features, User Stories, Acceptance Test Definitions, Bugs).
- Build Service: a service that extracts from a Version Control Service's Repository the latest version of the code and compiles it. The compiled artefact is then tested in various ways.
- Deployment Service: a service that deploys the result of a Build Service job – the compiled code – to a target environment (DT, ST, AT, IT, TRAINING/PROD). Further post-deployment testing may be commissioned.
- Domain Driven Design: a software development approach based on placing the project's focus on domains – both their model and their logic – and initiating a creative collaboration and dialogue between technical and domain experts in order to iteratively refine a conceptual model that addresses particular domain problems. Key development concepts of DDD are listed below43):
- Entities: An object that is not defined by its attributes, but rather by a thread of continuity and its identity. In other words, an object with an ID.
- Value Object: An object that contains attributes but has no conceptual identity. They should be treated as immutable.
- Aggregate: a collection of objects that are bound together by a root entity, otherwise known as an aggregate root. The aggregate root guarantees the consistency of changes being made within the aggregate by forbidding external objects from holding references to its members. Your car is an aggregate of several objects, one of which is the engine block with an ID (an Entity).
- Domain Event: a domain object that defines an event (something that happens). A domain event is an event that domain experts care about.
- Service: When an operation does not conceptually belong to any object. Following the natural contours of the problem, you can implement these operations in services.
- Repository: An object management service wrapped around specialized storage.
- Factory: a method for creating domain objects that delegates to a specialized Factory object, such that alternative implementations may be easily interchanged.
- CQRS: Command Query Responsibility Segregation is an architectural pattern for the separation of reads (Queries) – which do not mutate state – from writes (Commands) – which do44).
- AOP: Aspect-oriented programming makes it easy to factor out technical concerns (such as security, transaction management, logging) from a domain model, and as such makes it easier to design and implement domain models that focus purely on the business logic.
- DSL: domain-specific languages are constrained languages used to model a domain in order to communicate with less ambiguity with domain stakeholders and systems.
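As an illustration, the Entity, Value Object and Aggregate concepts above can be sketched in a few lines of Python (used here for brevity rather than the organisation's .NET stack; the `EngineSpec`, `Engine` and `Car` names are hypothetical, not part of any solution model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EngineSpec:
    """Value Object: defined only by its attributes, immutable, no identity."""
    cylinders: int
    displacement_cc: int

@dataclass
class Engine:
    """Entity: defined by its identity (engine_id), not by its attributes."""
    engine_id: str
    spec: EngineSpec

class Car:
    """Aggregate root: external code works through Car and never holds
    direct references to its members."""
    def __init__(self, vin: str, engine: Engine):
        self._vin = vin
        self._engine = engine

    def replace_engine(self, engine: Engine) -> None:
        # All changes to members are mediated by the root, which is how
        # the aggregate guarantees the consistency of changes within it.
        self._engine = engine

    @property
    def engine_id(self) -> str:
        return self._engine.engine_id

spec = EngineSpec(cylinders=4, displacement_cc=1998)
car = Car(vin="WDB1234", engine=Engine(engine_id="ENG-001", spec=spec))
car.replace_engine(Engine(engine_id="ENG-002", spec=spec))
```

Note that two `EngineSpec` instances with the same attributes compare equal (value semantics), whereas two `Engine` objects are distinguished by `engine_id` regardless of their attributes.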
Development Terms
* ORM: an Object Relational Mapping system provides a simple and abstract means to manage the storage and retrieval of entities from a datastore (eg: a relational database). An example of an industry-accepted .NET based ORM is Entity Framework45).
* Entity Framework: an open source, supported, industry-leading ORM system46).
* GoF: the Gang of Four is the term used to refer to the authors of the seminal “Design Patterns: Elements of Reusable Object-Oriented Software” software engineering pattern book. The Creational, Structural, and Behavioral patterns described in the book are simply known as GoF Patterns.
* Command Pattern: a GoF Pattern: encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations47).
* Memento Pattern: a GoF Pattern: without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later48).
* Chain of Responsibility: a GoF Pattern: avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it49).
* Builder Pattern: a GoF Pattern: separate the construction of a complex object from its representation so that the same construction process can create different representations50).
* Factory Method Pattern: a GoF Pattern: define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses51).
* Inversion of Control (Framework): a design principle in which portions of a computer program receive the flow of control from a generic framework52).
* Dependency Injection: a software design pattern that implements Inversion of Control for resolving dependencies. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client's state. Passing the service to the client, rather than allowing the client to build or find the service, is the fundamental requirement of the pattern53).
* Unity: a well known .NET Dependency Injection library.
* StructureMap: a well known .NET Dependency Injection library.
* SOLID: a set of 5 interconnected core principles of Object Oriented software development that improve the adaptability, maintainability and value of the delivered code.
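The Dependency Injection definition above can be illustrated with a minimal constructor-injection sketch (Python for brevity; the `SmtpService` and `WelcomeMailer` names are hypothetical – in .NET, libraries such as Unity or StructureMap automate exactly this wiring):

```python
class SmtpService:
    """A dependency (the service) that a client needs."""
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

class WelcomeMailer:
    """The client: the service is passed in (injected) rather than
    constructed or looked up by the client itself."""
    def __init__(self, mail_service: SmtpService):
        # The injected dependency becomes part of the client's state.
        self._mail = mail_service

    def welcome(self, user: str) -> str:
        return self._mail.send(user, "Welcome aboard")

# The "injection" happens here, at composition time; a DI container
# performs this step automatically based on registered types.
mailer = WelcomeMailer(SmtpService())
result = mailer.welcome("alice@example.org")
```

Because `WelcomeMailer` never constructs `SmtpService` itself, a test can inject a fake mail service without touching the client's code.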
Structural Terms
* Component: a modular, replaceable part of a system which defines behavior in terms of provided and required interfaces.
* Artefact: a physical development deliverable. Eg: files, scripts, compiled .exe files, db tables, email messages, etc.
* Node: a model element that represents a general computational resource of a system, including servers, workstations (both of these are specifically Devices), sensors, printing devices, etc. Nodes can be nested. Nodes can be connected by communication paths to describe network structures.
* Device: a Node which is a physical computational resource with processing capability upon which artefacts may be deployed for execution (eg: servers, workstations, etc).
* Execution Environment: a Node within a Device that represents a software container offering an environment within which deployed artefacts can be executed.
Data Management Terms
* OLTP: an Online Transaction Processing system is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is very fast query processing, maintaining data integrity in multi-access environments, and effectiveness measured in transactions per second. Typically, OLTP systems are used for order entry, financial transactions, customer relationship management (CRM) and retail sales. OLTP databases hold detailed, current data and schema. Used by Operational Systems.
* OLAP: an On-line Analytical Processing system is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. OLAP applications are widely used by Data Mining techniques. OLAP databases hold aggregated, historical data, stored in multi-dimensional schemas (usually a star schema). Used by Data Warehouses.
* Operational System: a term used in data warehousing to refer to a system used to process the day-to-day transactions of an organization. These systems are designed so that day-to-day transactions are processed efficiently while preserving integrity. Usually use OLTP54).
* Data Warehouse: a system used for consolidating the data from several OLTP datastores, in order to meet reporting and data analysis requirements. May use OLAP55).
* Data Mart: a simple form of data warehouse focused on a single functional area, drawing data from a limited number of sources (eg: sales, finance or marketing). Often built and controlled by a single department within an organization. Data marts can be Dependent, Independent or Hybrid.
Integration Terms
* AD: Active Directory
* ETL: Extract, Transform and Load56) – a popular concept since the '70s – is a process of extracting data from one or more data sources (eg: databases), transforming the data into the target data format, and loading it into the target system. The process was intended to be used between operational datastores and target data warehouses, but many shops have also incorrectly used it to move data directly between operational databases, bypassing the applications' programming interfaces. ETL between systems is fine – but via the application's API, not the database. Using the application's APIs has the benefit of providing authentication, authorisation, accounting, validation and triggered logic – while still providing Projections (ie, Transformations) using ODATA.
* API: an Application Programming Interface57) is a service endpoint, preferably externally facing and accessible by anyone, from anywhere, at any time, in an appropriate, audited and accounted manner (see 8A).
* ODATA: the Open Data Protocol58) is an industry-accepted set of extensions to HTTP GET based REST operations.
* REST: the Representational State Transfer59) protocol is an HTTP based protocol which uses a limited HTTP based operation vocabulary.
* SOAP: the Simple Object Access Protocol60) is an alternative, older web service protocol which allows arbitrary sets of operations, as opposed to REST, which allows only a restrained vocabulary of operations.
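As a small illustration of ODATA layered on REST, the sketch below composes an HTTP GET URL using the standard OData system query options `$filter`, `$select` and `$top` (the endpoint is hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint; $filter/$select/$top are standard OData
# system query options layered on top of a plain HTTP GET request.
base = "https://api.example.org/odata/Orders"
options = {
    "$filter": "Status eq 'Open'",   # server-side predicate
    "$select": "Id,Total",           # projection (a Transformation)
    "$top": "10",                    # paging
}
url = base + "?" + urlencode(options)
```

The resulting URL is an ordinary REST GET; the server interprets the query options, which is how ODATA provides projections and filtering without abandoning the restrained REST operation vocabulary.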
Common System Operations Terms
- BWF: Basic Workflow
- BCP: Business Continuity Planning
- DR: Disaster Recovery
- DRP: Disaster Recover Planning
- GIW: Group Information Warehouse
- ID&R: Infrastructure Design and Reuse
- OOTB: Out of the Box
- SLA: Service Level Agreement
- SLAM: Service Level Agreement Monitoring
- SLAP: Service Level Agreement Ping/Pulse
Common Ministry Terms
* ESAA: Education Sector Authentication & Authorisation
* EDUMIS: EDUcation Management Information System
* FIRST: Funding Information Regulatory System Technology
* FUSION: Oracle Fusion Cloud Service (Enterprise Resource Planning – Financials)
* Helios: Mythical Greek God (Name for the PMIS Replacement System)
* FMIS: Financial Management Information System
* PMIS: Property Management Information System
* SE-RAD: Special Education – Rapid Application Development
Agile Summary
Summary
Agile is a development approach that emphasizes:
- Continuous Delivery of Value to Stakeholders,
- Ongoing Stakeholder Engagement and Feedback,
- Avoiding effort lock-in in order to re-prioritize effort early and regularly as new information becomes available.
The Agile software development approach's proven benefit is the source of the DevOps approach. DevOps is an application of the learnings from the development group, applying them beyond it to all groups involved in the Software Development Lifecycle (SDLC).
Agile Stakeholder Engagement Benefits
A key benefit of the Agile approach – and therefore the DevOps approach – is stakeholder engagement.
The following two charts succinctly demonstrate the difference in stakeholder engagement, feedback and effort reprioritization.
Waterfall Delivery Stakeholder Engagement
Using older delivery patterns, key business and user stakeholder engagement is nearly absent except at three points: immediately after project launch; the point when the solution should have been finished but instead requires reprioritization and further funding; and the final, late go-live:
<gchart 300×150 #C0C0C0 line center> 1=100 2=30 3=10 4=5 5=5 6=5 7=5 8=5 9=5 10=10 11=30 12=100 13=90 14=30 15=10 16=30 18=100 </gchart>
Agile Delivery Stakeholder Engagement
Using Agile delivery patterns, key stakeholders are continuously engaged because deliveries arrive regularly and often, which provides them the ability to test and provide feedback that is quickly taken on board to re-prioritize work items as needed in order to deliver value:
<gchart 300×150 #C0C0C0 line center> 1 =100 1.5=80 2=100 2.5=80 3=100 3.5=80 4=100 4.5=80 5=100 5.5=80 6=100 6.5=80 7=100 7.5=80 8=100 8.5=80 9=100 </gchart>
Agile Work Items, Status, Kanban and Process Summary
Agile manages collaboration, development and testing via a specific set of Work Item types:
Epics are significantly larger bodies of work, encompassing many Features and the User Stories within them.
Features are distinct elements of functionality that can't be delivered in one Sprint Iteration, but can be delivered in one Release.
User Stories are loosely equivalent to User Requirements, written by Business Analysts (BAs) in the language of Stakeholders. The informality of the language used within a User Story adds value for Stakeholder engagement, but User Stories are incomplete and valueless without accompanying Acceptance Test Definitions (Acceptance Tests).
The accepted structure for the definition of User Stories is:
As a <role>, I want <goal/desire> So that <benefit>
The informality of the language used within a User Story can lead to specifications that on their own are considered weak and open to interpretation. For this reason User Stories are incomplete and valueless without accompanying Acceptance Tests.
User Story Acceptance Tests are carefully written by Testers to provide explicit criteria for User Stories to developers and testers, while addressing other stakeholders' Quality Specifications (security, performance, compliance, legal, supportability, maintainability requirements, etc.).
A User Story's associated Acceptance Tests are written following the well-known Given-When-Then format:
Given <condition> And <condition> Or <condition> When <trigger> Then <expected outcome>
The Given-When-Then structure is an industry-recommended Acceptance Test structure that developers can import verbatim into their testing frameworks when developing coded unit tests and behaviour driven tests (see XBehave.NET).
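As an illustration of how a Given-When-Then definition maps onto a coded test, the following is a minimal Python sketch using a hypothetical account-withdrawal story (in this Organisation's .NET context the same shape would be expressed with a framework such as XBehave.NET):

```python
# Acceptance Test (hypothetical User Story about withdrawing funds):
#   Given an account with a balance of 100
#   When the user withdraws 30
#   Then the remaining balance is 70

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal():
    # Given: an account with a balance of 100
    account = Account(balance=100)
    # When: the user withdraws 30
    account.withdraw(30)
    # Then: the remaining balance is 70
    assert account.balance == 70

test_withdrawal()
```

Each Given/When/Then clause becomes one step of the coded test, which is why Testers' definitions can be imported near-verbatim by Developers.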
User Stories are added as New Work Items to a Backlog of Work Items, and progressed through various States until complete, while displayed on a common team (electronic) Kanban Board for all to see and understand progress and/or potential cross-impact.
Rather than a physical board, consider procuring a large touch screen for the team to interact with a digital Kanban.
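The Work Item state progression above can be sketched as a small state machine (the state names below are hypothetical examples; real Kanban boards let teams customise their columns and transitions):

```python
# Hypothetical Work Item states and the transitions a board allows.
TRANSITIONS = {
    "New": {"Approved", "Removed"},
    "Approved": {"Committed", "Removed"},
    "Committed": {"Done"},
    "Done": set(),
    "Removed": set(),
}

def move(state: str, target: str) -> str:
    """Advance a work item, rejecting transitions the board forbids."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A User Story progressing across the board, column by column:
state = "New"
state = move(state, "Approved")
state = move(state, "Committed")
state = move(state, "Done")
```

Encoding the allowed transitions makes progress visible and auditable, which is the point of the shared electronic board.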
Agile Large Organisation Integration
The iterative delivery and continuous feedback loop of the Agile approach is often perceived to be anathema to the traditional methods of large organisations, which need formality for proposals, funding and reporting, among other requirements.
The Organisation recommends the use of the following to bridge between the two speeds and sets of requirements:
* Scaled Agile Framework (SAFe)61)
* Accelerate Delivery Framework62)
* DevOps
DevOps has gained the most mind share.
Agile Management, Behaviour, Tools
Agile delivery does not work with just a visual work item management approach. It requires two other key aspects:
- Behavioural change by adhering to Principles.
- Appropriate Management and Delivery Tooling to facilitate the behavioural changes needed to deliver value to customers.
Agile Delivery, Management, Development Principles
Three Manifestos have been created to define the Principles that Agile team members should abide by:
- The “Agile Manifesto”, focused mainly on stakeholder engagement and feedback.
- “The Project Managers Declaration of Interdependence”63) focused on successful management of Agile teams.
- “The Software Craftsman Manifesto”64) focused on delivering sustainable quality to stakeholders.
The 3 Manifestos are summarized below.
The Agile Manifesto
The Agile Manifesto is based on 12 Principles:
- Customer satisfaction by early and continuous delivery of valuable software (requires an ALM+Continuous Test+Delivery service)
- Welcome changing requirements, even in late development (agile focuses on quick responses to change and continuous development)
- Working software is delivered frequently (weeks rather than months)
- Close, daily cooperation between business people and developers (requirements can't be fully collected at the start - continuous customer/stakeholder involvement is essential)
- Projects are built around motivated individuals, who should be trusted (who you hire greatly affects the outcome)
- Face-to-face conversation is the best form of communication (co-location)
- Working software is the principal measure of progress (as opposed to stacks of doc-ware)
- Sustainable development, able to maintain a constant pace (hero development is not sustainable or scalable)
- Continuous attention to technical excellence and good design (continuous refactoring is valuable)
- Simplicity—the art of maximizing the amount of work not done—is essential
- Best architectures, requirements, and designs emerge from self-organizing teams (but teams must include experienced architects)
- Regularly, the team reflects on how to become more effective, and adjusts accordingly (use sprint post-mortems for self-feedback)
The Agile Manifesto – focused on Requirement gathering and delivery – is the basis of two other respected Agile Manifestos:
- “The Project Managers Declaration of Interdependence” focuses on management of Agile
- “The Software Craftsman Manifesto” focuses on delivering quality in an Agile environment.
Project Managers Declaration of Interdependence
The six principles65) considered essential to the project management of an Agile-enabled team were defined as:
- increase return on investment by making continuous flow of value our focus.
- deliver reliable results by engaging customers in frequent interactions and shared ownership.
- expect uncertainty and manage for it through iterations, anticipation and adaptation.
- unleash creativity and innovation by recognizing that individuals are the ultimate source of value and creating an environment where they can make a difference.
- boost performance through group accountability for results and shared responsibility for team effectiveness.
- improve effectiveness and reliability through situationally specific strategies, processes and practices.
Software Craftsmanship Manifesto
The Software Craftsmanship Manifesto66) has added 4 refinements to the Agile Manifesto principles, in recognition that higher craftsmanship leads to better maintainability, and therefore lower support costs over the lifespan of products that continue to be used:
- Not only working software, but also well-crafted software
- Not only responding to change, but also steadily adding value
- Not only individuals and interactions, but also a community of professionals
- Not only customer collaboration, but also productive partnerships
DevOps Summary
Summary
DevOps is the union of people, Agile processes, and tools to enable continuous delivery of value to end users, by removing barriers between Development, Operations (Infrastructure, Application and Customer Support) and Quality Assurance, emphasizing communication, collaboration, and continuous automated integration, quality assurance and delivery.
A primary goal of DevOps is to establish an environment where more reliable evolving applications can be released more frequently by maximizing the predictability, efficiency, security, and maintainability of operational processes. Very often, automation supports this objective.
Relationship to Agile
DevOps is an Enterprise reaction to the documented benefits of Agile delivery, extending them beyond just the development phase to the whole application lifecycle – into the organisation as a cultural change with Agile processes, backed by appropriate automation and communication tools.
Relationship to ITIL
Just as Agile developed as a refutation of the high cost of delivering value using a Waterfall based development process, DevOps rose as a refutation of the high cost of delivering value using ITIL – the “Waterfall” of Operations processes67).
Many Organisations have tried to update their SDLC, only to find little gain.
Analysis by others indicates that the agreed common cause for this failure to deliver on expectations is the lack of a continuous, ongoing ALM process that incorporates Continuous Testing.
Traditional Software Development Life Cycle (SDLC) management is commonly limited to the phases of software development including requirements, design, coding, testing, configuration, project management, and change management. DevOps ALM covers a broader scope, and continues after development until the application is no longer used, and may span many SDLCs.
In a 2004 survey designed by Noel Bruton (author of “How to Manage the IT Helpdesk” and “Managing the IT Services Process”), 77% of survey respondents either agreed or strongly agreed that “ITIL does not have all the answers”.
Criticisms of ITIL68) include the following: because of its focus on service management, ITIL does not feed back effectively into the design process. Nor does ITIL directly address the business applications which run on the IT infrastructure; nor does it facilitate a more collaborative working relationship between development and operations teams.
Relationship to SAFe
Several different attempts have been made to move away from ITIL and other cumbersome frameworks. Beyond DevOps, the most well-known is the Scaled Agile Framework (SAFe).
Although criticized by world-class Agile specialists69)70) for being too cautious, it is important to note that both critics and supporters of SAFe agree it yields widespread benefits: although SAFe may be a less effective implementation of Agile, it is a safe starting point for slow-to-change, large organizations to implement – and enjoy some of the benefits of – Agile.
Although SAFe gained initial attention, the market is currently strongly backing moving straight to DevOps.
Interest and Adoption
A 2015 survey by CA Technologies71) shows that 88% of more than 1,400 IT or line-of-business executives have already adopted or plan to adopt DevOps within the next five years. This is up from about 66% in a similar survey taken in 2014.
Based on several factors – including its proven ability to lower costs and deliver better value while not sacrificing quality – Organisations continue to follow the upward trend of Agile awareness, actively moving away from ITIL processes towards DevOps processes:
Observations
- 49% of organisations complain that still-largely-manual testing phases are a bottleneck to speeding up development cycle times72).
- 88% of enterprises already have or have plans to adopt DevOps within the next 4 years75).
- 63% of over 4000 respondents to the 2014 Puppet Labs and IT Revolution Press76) survey are already implementing DevOps practices.
Those who had moved to DevOps reported:
* 46% increased software/service deployment frequency77)
* 36% improved application quality and performance78)
* 34% reduced application time-to-market79)
* Up to 40% increase in productivity80)
* Up to 77% faster mean-time-to-recover (MTTR)81)
* Up to 300% increase in the number of weekly deployments82)
* Up to 200% increase in the number of deployed environments83)
* Up to 15 times reduction in the manual effort required for release84)
* Up to 9X increase in release volume without adding resources85)
* Up to 85% reduction in transaction response time86)
* Up to 5X improvement in testing efficiency, with testing times reduced from days to minutes87)
* 76% reduction in resolution time, and 18 outages impacting user experience prevented88)
* Gartner says that by 2016, DevOps will evolve from a niche to a mainstream strategy employed by 25% of Global 2000 organizations89).
Drivers
Stakeholder drivers include:
* Time to Market: ranked as a very important part of their corporate strategy by 61% of organisations.
* Corporate Image: the #1 executive concern when it comes to quality, demanding protection from negative press.
* Customer Experience: fitness for purpose, availability, ease of use and performance were determined to be a key objective.
Other drivers of the current fast rate of adoption are:
* Agile processes: many projects have been delivered using Agile processes, so more people are aware of their concrete benefits.
* Cloud infrastructure: inexpensive, easy to manage virtual infrastructure is widely available.
* Infrastructure as Code: cloud services have made the process of remotely defining infrastructure by script and automation widely available and understood90).
* Automation: both automation of cloud service infrastructure provisioning and automation in other areas – eg: data centers – is gaining wide recognition.
* Continuous tested delivery: continuous delivery pipelines have gained awareness and acceptance.
* Best practices: a critical mass of publicly available best practices is available to remove adoption risk.
Cultural Change
The cultural changes have been summarized as being around:
* Amplify Feedback Loops: emphasize communication and feedback in order for all involved to understand the desires of all other stakeholders.
* Think of the Whole System: understand the feedback from the whole pipeline, starting from the business, as opposed to the performance of a specific or single department or individual.
* Empower a Culture of Continual Experimentation and Learning: promote improvement investigation in order to master doing it safely.
The above are important cultural changes. But there are other changes as well.
A key cultural change under DevOps is changing the mindset of organisation groups from blocking verifiers to trusted advisors and enablers.
Cloud services are increasingly available, and their management has been simplified “for the masses”.
Developers are now expected to take advantage of these services and their simple management tools in order to define and manage a project's basic environment provisioning and deployment requirements, using Infrastructure as Code, Testing as Code and Deployment as Code patterns.
These coded requirements are then automated rather than executed laboriously by hand.
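The Infrastructure as Code pattern above can be sketched as follows: the desired environment is declared as data, and an idempotent `apply` function converges the actual state towards it (a toy Python illustration of the principle, not any specific tool):

```python
# The desired environment, declared as data and kept in version control.
# Resource names and settings here are purely hypothetical.
desired = {
    "web-vm": {"size": "small"},
    "db-vm": {"size": "large"},
}

def apply(desired: dict, actual: dict) -> dict:
    """Create/update resources so actual matches desired, and remove
    anything not declared. Running it twice changes nothing further
    (idempotency), which is what makes automation safe to repeat."""
    for name, cfg in desired.items():
        actual[name] = dict(cfg)
    for name in list(actual):
        if name not in desired:
            del actual[name]
    return actual

env = apply(desired, {})
env = apply(desired, env)  # second run: no drift is introduced
```

Because the definition is code, it can be reviewed, versioned and tested exactly like the application it provisions.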
It is important to understand that DevOps does not mean Developers have free rein to do as they please and that other roles no longer have input. This is certainly not the case. The role of Developers is to develop in response to User Stories that capture the requirements of Stakeholders – including Infrastructure Support Services, Application Support Services and Customer Support Services. All of these stakeholders are empowered to add User Stories to a Project's Agile work item Backlog that must be prioritized and addressed. These User Stories in turn define how Developers must update the Infrastructure as Code and Configuration as Code definitions to meet the expectations of the various Stakeholders.
A key cultural change is that the checks and balances are updated from a manual process to an automated one. Instead of using positions of authority to verify and potentially block deployment via change control processes, stakeholders become empowered to actively engage by submitting Acceptance Tests, which the automated Build Service enforces on all stakeholders' behalf.
Parallel Processes
The perception that DevOps's emphasis on automation will replace ITIL is unfounded within this Organisation. Existing legacy apps that were not developed from the start to be managed by DevOps processes cannot be successfully and economically managed using DevOps processes. Existing roles will continue to be needed for years to come for these applications.
DevOps processes must instead be used in parallel, by the same Resources, but reserved for new projects.
Communication Services
At the heart of DevOps is the adoption of Agile methodologies to break down barriers between people and groups using common communication and work item management tools.
In Scrum Agile, the primary tools are a Work Item Management Service and an electronic Kanban board appropriately accessible by all stakeholders.
Mature Organisations choose certified SaaS based ALM Services that include Work Item Management Services.
Automation Services
In today's world of rapid development cycles, developers are expected to ship code very frequently so that customer needs are met earlier.
On the other hand, Operations are still expected to ensure no customer is adversely affected by this cycle. Change is their enemy. Where Devs meet Ops, there can often be significant tension.
To alleviate these tensions the DevOps movement has focused on automating as many build/store/test/deploy tasks as possible.
Mature Organisations choose certified SaaS based ALM Services that include tools and, where possible, automation of the following services:
- Coding: Code development and review, version control tools, tested code integration
- Building: Automated Build Services and tools
- Testing: Automated testing of qualities and functionality using Testing as Code tools
- Packaging: Artifact packaging and pre-deployment staging
- Releasing: Assurance, release approvals, release automation
- Provisioning: Infrastructure provisioning and management using Infrastructure as Code tools
- Configuration: Infrastructure configuration and management using Configuration as Code tools
- Monitoring: Continuous application monitoring
Automated Testing is a DevOps Requirement
The benefits of rapid, iterative Agile deployments cannot be delivered while testing still relies on manual processes.
Simply put, an Organisation cannot embrace and reap the value of DevOps if it does not commit to ensuring Acceptance Tests are defined by Testers, converted into Testing as Code by Developers, to be enforced by the automated Build Service.
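The Build Service gate described above can be sketched as follows (a toy Python illustration: the gate runs every coded Acceptance Test and refuses integration unless all pass; the tests shown are placeholders):

```python
def gate(acceptance_tests) -> bool:
    """The Build Service's quality gate: run every coded Acceptance
    Test and only permit integration if all of them pass."""
    return all(test() for test in acceptance_tests)

# Placeholder Acceptance Tests, coded by Developers from the Testers'
# definitions; real ones would exercise the application under test.
tests = [
    lambda: 1 + 1 == 2,
    lambda: "a".upper() == "A",
]

can_integrate = gate(tests)
```

A single failing test is enough to block integration, which is how the automated gate enforces every stakeholder's Acceptance Tests without a manual sign-off step.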
Testing as Code
Finally, organisations struggle with the question of how to upskill manual testers to become automated testers. The answer is not to – and instead to adhere to the architectural principle of Separation of Concerns.
An important reason it has proven beneficial to keep the automation of tests separate from test definition is that Testers tend to try to automate what they already know – manual testing – when manual testing should be seen as only ever having been required because there was no means to automate testing. The focus should be on automated testing, not automated manual testing.
This clean break allows Testers to stay focused on what they know best – scripting acceptance test definitions – and Developers on what they know best – automation of any kind.
Implementation RoadMap
The Theory of Constraints identifies that the single constraint to DevOps adoption is the inherent aversion to change from departments within the organisation.
Hence guidance on how to implement DevOps in traditional (eg: ITIL based) Organisations has been given by several reputable sources, including Microsoft. For example Gene Kim’s “Three Ways Principles” essentially establishes different ways of incremental DevOps adoption, to minimize risk and cost whilst building the necessary in-house skillset and momentum needed to have widespread successful implementation.
An implementation process can be developed across the whole organisation, per project, or a combination of both.
The benefit of doing it per Project is that it allows each project to perform a complete migration to DevOps, from top to bottom, taking on board the responsibility of solving the globally identified traditional bottlenecks (eg: converting Manual Testing to Automated Testing for their project only), without disrupting other projects still running traditional processes.
The recommendation for this organisation is to continue to do the transition the latter way.
CAMS/CALMS
John Willis and Damon Edwards (and later Jez Humble) coined the acronym “CALMS” to describe key aspects of DevOps91):
- Culture: “Culture eats strategy for breakfast.”(src: Peter Drucker).
- Automation: automating repetitive, time-consuming, error-prone tasks yields big dividends.
- Lean: apply value-stream mapping, and plan to remove inefficiencies.
- Measurement: you can't improve what you don't measure.
- Sharing: friction-free information improves organizational performance.
DevOps Isn't NoOps
It’s a misconception that DevOps is Developers coming to wipe out Operations and do it themselves. The first and most obvious reason this is a misconception is that systems are written for the environment and processes they were intended to be run on. Organisations have legacy applications that were intended to be deployed and tested manually – they simply cannot be cost-effectively ported to an automated tested deployment process.
The second reason is DevOps – and its antecedents in Agile operations – are being initiated out of Operations teams more often than not92). This is because Operations have realized that practices need to be automated to keep pace with what is being expected from business stakeholders. The result has not been automating personnel out of a job, but instead – as lower level concerns become more automated – technically skilled staff start solving higher value problems.
References
The following sources provided facts for the above:
* “DevOps with Quality” by Capgemini/Sogeti
* https://en.wikipedia.org/wiki/Continuous_testing
* History of DevOps
* What is DevOps
Continuous Delivery Summary
Summary
@ccaum: Continuous Delivery doesn't mean every change is deployed to production ASAP. It means every change is proven to be deployable at any time.
Continuous Delivery is about ensuring code is always in a deployable state (built, tested, packaged) in order to get changes of all types – new features, configuration changes, bug fixes and experiments – into the hands of users on demand, safely and quickly, in a sustainable way.
Continuous Delivery recognizes that Coded Unit Tests and Static Tests cannot catch all functional logic errors. Unfortunately, process maturity ends up dictating how much of the functional testing is automated (as opposed to IT:AD:Continuous Deployment, which depends on all functional testing being automated).
Many projects are still somewhere on the continuum between barely more than Continuous Integration (with packaging added to the mix, but all functional testing still being manual) and the upper more mature practices ensuring that all functional testing is automated.
When implemented maturely, Continuous Delivery can completely eliminate the code freeze, integration, testing and hardening phases that traditionally follow “dev complete”.
Either way, in a Continuous Delivery based project, deployment to PROD remains a deliberate, human-approved operation – unlike Continuous Deployment.
Continuous Delivery compared to Continuous Deployment
The fundamental difference between the two is that with Continuous Delivery, the software product is made available to the customer, but the decision to upgrade/install it is manual. In the case of desktop apps, users have to download/install the update; in the case of a web service, someone has to authorize its deployment to live.
Continuous Deployment is when the upgrade is automatically deployed.
In other words, with IT:AD:Continuous Delivery, a product can be delivered to production at the touch of a button, once approved – whereas with Continuous Deployment it is automatically deployed to production.
The second fundamental difference between the two is that whereas Continuous Delivery can use some Continuous Testing, Continuous Deployment relies on using Continuous Testing to test the totality of the solution's functionality.
Acceptance Test Driven Development Summary
Summary
Continuous Delivery cannot be accomplished without a testing approach appropriate to the automation services provided by a full ALM Service.
The following are well-tested patterns to deliver the required tests.
Acceptance Test Driven Development
As per the Guidelines above, the solution will be developed using a Test-Driven Development (TDD) approach – specifically, Acceptance Test Driven Development (ATDD).
ATDD is a software development approach that relies on first turning the Acceptance Tests associated with Agile User Stories into automated tests. The software is then improved until it passes those automated tests, before the build service allows the code to be integrated into the core code.
The base concept of the ATDD approach is that no new code may be added that has not been proven to meet the acceptance tests that encapsulate the requirements/User Stories.
The benefits of using ATDD include:
- Developers pass 100% of the Acceptance Tests defined by Testers.
  - Note that in addition to Tester-defined Acceptance Tests, developers may also write and pass as many additional Unit Tests as needed.
- 100% feature coverage.
- Limits the addition of code that is not proven to meet a requirement.
- A full suite of automated tests directly linkable to source User Stories/Requirements ensures that new code does not break previous functionality – and when a conflict does arise, the source of the tension can be traced and understood.
Acceptance Test Naming
A consistent test title is advantageous to developers and provides good traceability.
The format to be used is {Type}_{ID}_{SubId}_{Test_Name}.
The practice of including the Work Item ID in the automated test's name adds value to developers on larger projects that are worked on for extended periods of time. An example of the value to developers is demonstrated below.
A new developer is tasked to write code for Story 2048. Upon completing the new code, the developer runs the previously written test, and meets the requirements of Story 2048. The developer then runs the whole suite of tests, and discovers that the new code breaks earlier tests (eg: Story 139). Having the IDs of both conflicting Stories, the developer can present the two Stories back to the BA to sort out the difference, while moving on to the next Story. Without the ID, the developer would be tempted to comment out the previous tests in order to deliver on the current commitment – potentially negating previous investigative work and causing bugs to slip through.
The following demonstrates the use of the above convention to title a TDD-driven test, indicating the relationship between the Test and the Story with an ALM identifier of 139. It is the 3rd test developed for that Story.
[Scenario]
[Example(1, 2, 3)]
[Example(2, 3, 5)]
public void S_139_3_Addition(int x, int y, int expectedAnswer, Calculator calculator, int answer)
{
...
}
Acceptance Test Driven Development: Test Naming (Cont)
Organisation specification requirements (often identified with IDs similar to REQ-xxxx) are referenced by testers when they design the Acceptance Tests. The following proposal needs to be tested as to whether it is valuable for traceability reasons:
- the Acceptance Test name could reference Requirements (eg: REQ-xxxx) being met.
- the developer could embed the Requirement's ID (eg: REQ-xxxx) in the Test Name as well: {Type}_{ID}_{SubId}_{REQID}_{Test_Name} (eg: S_123_2_REQ_1234_Addition).
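To make the convention concrete, the following is a hypothetical TypeScript helper – the function name and signature are illustrative assumptions, not organisation tooling – that builds test titles in both the base and the extended (REQ-bearing) formats:

```typescript
// Hypothetical helper illustrating the naming convention
// {Type}_{ID}_{SubId}_{Test_Name} and its proposed extension
// {Type}_{ID}_{SubId}_{REQID}_{Test_Name}. Illustrative only.
function formatTestName(
  type: string,   // e.g. "S" for Story
  id: number,     // ALM Work Item ID
  subId: number,  // ordinal of the test within the Story
  name: string,   // descriptive test name
  reqId?: number  // optional organisation requirement ID (REQ-xxxx)
): string {
  const parts = [type, String(id), String(subId)];
  if (reqId !== undefined) {
    parts.push(`REQ_${reqId}`); // embed the requirement ID when supplied
  }
  parts.push(name);
  return parts.join("_");
}

// formatTestName("S", 139, 3, "Addition")       -> "S_139_3_Addition"
// formatTestName("S", 123, 2, "Addition", 1234) -> "S_123_2_REQ_1234_Addition"
```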
Acceptance Test Driven Development: Test Format
As stated elsewhere, the format of Acceptance Tests is important. When the Given-When-Then structure is used, the text can be used as the basis of BDD-structured coded tests.
Tests must be developed following the Behaviour Driven Development Given-When-Then structure, which is equivalent to the older Arrange-Act-Assert structure.
The following is a demonstration of using XBehave.NET (a BDD-based extension to xUnit.net) with the Given-When-Then structure:
[Scenario]
[Example(1, 2, 3)]
[Example(2, 3, 5)]
public void S_2049_1_Addition(int x, int y, int expectedAnswer, Calculator calculator, int answer)
{
"Given the number {0}" // or in C# 6 or later, $"Given the number {x}"
.f(() => { });
"And the number {1}"
.f(() => { });
"And a calculator"
.f(() => calculator = new Calculator());
"When I add the numbers together"
.f(() => answer = calculator.Add(x, y));
"Then the answer is {2}"
.f(() => Assert.Equal(expectedAnswer, answer));
}
The result is that failing tests can be aligned 1-to-1 with the Acceptance Test definitions, and the source User Stories can be quickly found using the Story ID (eg: S_2049).
Requirements:
* REQ-xxxx: Coded Tests SHOULD be laid out according to the Given-When-Then structure.
Acceptance Test Driven Development: API Testing
The above TDD-formatted tests can be extended with additional test tool libraries to develop dynamic API testing – but there are reasons not to.
APIs should not be tested from the point of view of the Server – but from the point of view of the Client.
In which case, a Test Runner such as Karma may be more appropriate.
Acceptance Test Driven Development: UX Testing
The above TDD formatted tests can be extended with additional test tool libraries to develop dynamic UX testing – but there are reasons not to.
Clients should be independent apps written in TypeScript, developed separately from the server-side development. In that case, testing tools specific to TypeScript/JavaScript development, such as Karma, should be used.
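As a sketch of what such client-side testing could look like, the following plain TypeScript test follows the same Given-When-Then structure and S_{ID}_{SubId}_{Name} naming convention as the server-side example. The Calculator class and the throw-based assertion are illustrative stand-ins for a real Karma/Jasmine setup:

```typescript
// Illustrative client-side equivalent of the server-side xBehave test.
// In a real project this would run under Karma with a framework such as
// Jasmine; here plain TypeScript is used to keep the sketch self-contained.
class Calculator {
  add(x: number, y: number): number {
    return x + y;
  }
}

function S_2049_1_Addition(x: number, y: number, expectedAnswer: number): void {
  // Given the numbers x and y, and a calculator
  const calculator = new Calculator();
  // When I add the numbers together
  const answer = calculator.add(x, y);
  // Then the answer is the expected one
  if (answer !== expectedAnswer) {
    throw new Error(`S_2049_1_Addition failed: expected ${expectedAnswer}, got ${answer}`);
  }
}

// Mirrors the [Example(...)] rows of the server-side test:
S_2049_1_Addition(1, 2, 3);
S_2049_1_Addition(2, 3, 5);
```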
Data Classification
Summary
Organisations are not put at risk by Environments, but by the Data used within the Environments.
Ensuring that Production Classified data is removed from environments reduces the organisation's risk.
Data Classification
Installations that manage production data are classified by the type of data they manage.
The highest Data Classification given to the information managed by a solution defines both the non-functional requirements and the functional requirements that must be met at various stages of the Application Lifecycle, including the definition, development, operation and disposal phases.
Data Classification Rating
Data is either Unclassified, or classified as either Policy and Privacy Information or National Security Information93)94):
The rating specified depends on several factors.
Unclassified
Classified
Classified as Policy and Privacy Information
The security classifications for material that should be protected because of public interest or personal privacy are:
Classified as National Security Information
Data Classification Impact
The architecturally significant impacts of the specified Data Classification are listed below and complied with in the relevant sections of this document:
Requirements:
- Electronic Data Transmission:
- REQ-xxxx: Electronically transmitted IN-CONFIDENCE Information MUST be marked as IN-CONFIDENCE.
- REQ-xxxx: Electronically transmitted RESTRICTED/SENSITIVE/+ Information MUST be marked RESTRICTED or SENSITIVE.
- REQ-xxxx: Electronically transmitted IN-CONFIDENCE/RESTRICTED/SENSITIVE/+ information MUST NOT be transmitted across external or public networks (including the Internet) without being encrypted.
- REQ-xxxx: Electronically transmitted IN-CONFIDENCE/+ information MAY be Username/Password protected.
- REQ-xxxx: All electronically transmitted IN-CONFIDENCE/RESTRICTED/SENSITIVE/+ information (including data) MUST clearly identify the originating Govt agency and data.
- REQ-xxxx: An appropriate statement SHOULD accompany all IN-CONFIDENCE transmitted data.
- REQ-xxxx: An appropriate statement MUST accompany all RESTRICTED/SENSITIVE/+ transmitted data.
- REQ-xxxx: Electronically transmitted RESTRICTED/SENSITIVE information transmitted across public networks (this includes the Internet) within NZ or across any networks overseas must be encrypted using a system approved by GCSB.
- Electronic Data storage:
- REQ-xxxx: Electronically stored IN-CONFIDENCE/RESTRICTED/SENSITIVE/+ Electronic files MUST be protected against illicit internal use or intrusion by external parties through two or more of the following mechanisms:
- User challenge and authentication
- Logging use at level of individual
- Firewalls and intrusion detection systems and procedures
- Server authentication
- OS-specific/ application-specific security measures
- Encryption (required for RESTRICTED/SENSITIVE or above)
- Electronic Disposal:
- REQ-xxxx: IN-CONFIDENCE/RESTRICTED/SENSITIVE/+ information MAY be destroyed by using the delete function.
- REQ-xxxx: IN-CONFIDENCE Electronic media SHOULD be disposed of in a way that makes compromise highly unlikely.
- REQ-xxxx: RESTRICTED/SENSITIVE/+ Electronic media SHOULD be disposed of in a way that makes reconstruction highly unlikely.
- REQ-xxxx: If IN-CONFIDENCE/RESTRICTED/SENSITIVE/+ media is to be disposed of or sold, it MUST be purged using a GCSB-approved secure delete facility or physically destroyed.
- Paper Storage:
- REQ-xxxx: IN-CONFIDENCE documents can be secured using the normal building security and door-swipe card systems that aim simply to keep the public out of the administration areas.
- REQ-xxxx: RESTRICTED and SENSITIVE documents should be stored in compliance with Archives NZ Storage Standard NAS 9901 Storage of Public Records or Archives.
- Paper Waste Disposal:
- REQ-xxxx: Paper waste disposal MUST comply with the provisions of the Archives Act 1957.
- REQ-xxxx: IN-CONFIDENCE documents are to be disposed of in a way that makes compromise highly unlikely, such as depositing the documents in bins that are taken away for secure destruction.
- REQ-xxxx: RESTRICTED and SENSITIVE documents are to be disposed of or destroyed in a way that makes reconstruction highly unlikely, such as mechanical shredding.
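For illustration only, two of the transmission rules above can be encoded as a runtime guard. The enum and function below are hypothetical sketches, not part of any organisation library or the GCSB requirements themselves; they assume that classified information must not cross public networks unencrypted, and that transmitted information carries its classification marking:

```typescript
// Hypothetical sketch encoding the marking and encryption rules above.
// Illustrative only; names and shapes are assumptions for this example.
enum Classification {
  Unclassified,
  InConfidence,
  Restricted,
  Sensitive,
}

// Returns the marking a transmitted document must carry, or throws if the
// transmission would violate the assumed encryption requirement
// (classified information must not cross public networks unencrypted).
function transmissionMarking(level: Classification, encrypted: boolean): string {
  if (level >= Classification.InConfidence && !encrypted) {
    throw new Error("Classified information must be encrypted in transit");
  }
  switch (level) {
    case Classification.InConfidence: return "IN-CONFIDENCE";
    case Classification.Restricted:   return "RESTRICTED";
    case Classification.Sensitive:    return "SENSITIVE";
    default:                          return "UNCLASSIFIED";
  }
}
```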
User Base Growth
Summary
Determining the storage requirements of a system is based on several factors:
- the national nature of this Organisation's reach,
- the nation's population size today and its expected growth95) per year,
- the expected lifespan of the system,
- planning for the higher requirements of the scenarios listed below,
- providing an average of 1.5MB per user per year (based on a combination of negligible data record storage requirements and the storage requirements of uploaded documents of average types).
During the solution's lifespan, the storage requirements of an LOB application appropriate to this solution are expected to be less than 5GB at the start, growing elastically if and as needed to 60GB.
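As an illustration of the factors above, such a projection can be sketched as a simple compound-growth calculation. The function below is a hypothetical helper, not organisation tooling; the 1.5MB-per-user-per-year figure comes from the list above, while the starting user count and growth rate are parameters to be drawn from the scenarios that follow:

```typescript
// Hypothetical sketch of a storage projection: each year the current user
// base adds ~1.5 MB per user, and the user base then grows by the given
// annual rate. Illustrative only.
function projectedStorageGB(
  initialUsers: number,
  annualGrowthRate: number, // e.g. 0.02 for 2% growth per year
  years: number,
  mbPerUserPerYear: number = 1.5
): number {
  let users = initialUsers;
  let totalMB = 0;
  for (let year = 0; year < years; year++) {
    totalMB += users * mbPerUserPerYear; // storage accumulated this year
    users *= 1 + annualGrowthRate;       // user base grows for next year
  }
  return totalMB / 1024; // MB -> GB
}

// e.g. a user base of 40,000 growing at 2% over a 5-year lifespan:
// projectedStorageGB(40000, 0.02, 5)
```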
Projected Usage based on doubling Every Year
Based on number of Users doubling every year:
<gchart 300×150 #C0C0C0 line center> Year 0=100000 Year 1=200000 Year 2=400000 Year 3=800000 Year 4=1600000 Year 5=3200000 </gchart>
Projected Usage based on National Population
Based on Population:
<gchart 300×150 #C0C0C0 line center> Year 0=4600000 Year 1=5060000 Year 2=5566000 Year 3=6122600 Year 4=6734860 Year 5=7408346 </gchart>
Projected Usage based on National Schools
Based on number of Schools in the country, increasing by 2%:
<gchart 300×150 #C0C0C0 line center> Year 0=2441 Year 1=2490 Year 2=2540 Year 3=2590 Year 4=2642 Year 5=2695 </gchart>
Projected Usage based on National Teachers
Based on number of Teachers in the country, increasing by 2%:
<gchart 300×150 #C0C0C0 line center> Year 0=50950 Year 1=51969 Year 2=53008 Year 3=54061 Year 4=55150 Year 5=56253 </gchart>
Projected Usage based on National Students
Based on number of Students in the country, increasing by 2%:
<gchart 300×150 #C0C0C0 line center> Year 0=762683 Year 1=777937 Year 2=793495 Year 3=809365 Year 4=825553 Year 5=842064 </gchart>
Web Development Constraints
Summary
It is common for web sites to be commissioned without basic rules of thumb to help guide whether design decisions are optimal or not. One should question designs that require 12 cores to handle 200 concurrent users.
Below are listed some statistics to back design decisions made during the development of systems.
Network Constraints
Responsiveness
Responsiveness is dependent on latency, which is in turn dependent on the network the client is using to access the service.
In the case of NZ inhabitants using an organisation service hosted in Australia, the following information is relevant:
“Ping times [24ms] to Australia [on Verizon] are on a par with domestic times. Reannz (Research and Education Advanced Network New Zealand Ltd) reports domestic latency between the two furthest points of presence on its network, North Shore and Invermay is 22ms. While traffic from New Zealand’s South Island has to travel to Auckland before making the trans-Tasman hop, for New Zealand companies in Auckland, Eastern Australia has domestic-like latency.”96)
Assuming that an uncached page requires an average of 9 additional requests for associated css, images and scripts, and that the leading browser can parallelize 6 connections97) at a time, the additional latency to Australia – on Verizon – could be as low as 48ms.
If the above analysis is more or less correct, 2×15.87ms is faster than 2×48ms for a complete View request. But not by much.
If the page was optimized to keep the number of requests required below 6, the additional latency would be 24ms-15.87ms (8.13ms).
The actual page itself takes time too. With throughput from NZ to NZ being 26.21 Mbps, and Australia to NZ being 13.11 Mbps, a complete view takes 76ms (or 152ms, respectively) to be transferred from server to client.
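The transfer-time arithmetic above can be sketched as follows. The page size used (~2 Mbit, roughly 250 KB of page plus associated assets) is an assumption inferred from the quoted 76ms figure, not a measured value:

```typescript
// Sketch of the transfer-time calculation: time = size / throughput.
// The ~2 Mbit page size is an assumption inferred from the 76 ms figure.
function transferMs(pageBits: number, throughputBitsPerSec: number): number {
  return (pageBits / throughputBitsPerSec) * 1000;
}

const pageBits = 2_000_000; // assumed ~250 KB page plus assets

const domesticMs = transferMs(pageBits, 26.21e6);    // NZ -> NZ, ~76 ms
const transTasmanMs = transferMs(pageBits, 13.11e6); // AU -> NZ, ~153 ms
```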
In light of the above data, the latency from the network distance to Australia is negligible, and most performance improvements will come from paying attention to how the application is actually put together, implementing basic design recommendations.
IIS Constraints
A current standard web server (eg: IIS on Windows Server 2012, 4 Core CPU) can handle 80,000 Requests per Second (RPS) for a static text page98).
When developing using .NET Core, throughput increases to 1.1 million RPS.
A static html page that orchestrates approximately 9 additional uncached requests for related static css, images and js files means only 1/10th that number of complete pages can be sent per second (ie 110,000 pages).
The above implies that a .NET Core based app on a single web server is capable of servicing uncached requests for a static html page from the whole population of New Zealand in just over 40 seconds.
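The "just over 40 seconds" figure can be reproduced with a back-of-envelope calculation. The sketch below assumes the full national population of ~4.6 million (Year 0 of the population projection above), 1.1 million RPS, and 10 requests per uncached page, all taken from the surrounding text:

```typescript
// Back-of-envelope sketch: pages/sec = RPS / requests-per-page, then
// time = population / pages-per-sec. Assumptions: ~4.6M people, 1.1M RPS,
// ~10 requests per uncached page.
function secondsToServePopulation(
  population: number,
  requestsPerSecond: number,
  requestsPerPage: number
): number {
  const pagesPerSecond = requestsPerSecond / requestsPerPage;
  return population / pagesPerSecond;
}

const seconds = secondsToServePopulation(4_600_000, 1_100_000, 10); // ~41.8 s
```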
At this point in time, it is hard to define the true cost of dynamic page websites. If you use server-based UI development methods (ASP.NET, ASP.NET MVC, etc.), practically every page has the same cost as the first single page above – in other words, 10 requests per page. But if you are developing using SPA development practices, subsequent responses do not include images, css, etc., and therefore the number of responses required for further operations drops closer to 1 response per operation.
Either way, if the application server does not make cross-device calls, and is not performing non-trivial, time-consuming calculations, the cost of the dynamic assembly of the response stream can be assumed to be absorbed in the above.
NIC, DB and SAN Constraints
But a dynamic web page is only as fast as its slowest component – which can be the NIC, Database Service, or SAN.
Generally speaking, a 100Mb/s NIC is only able to handle about 3,000 batch requests per second, and a 1Gb/s card could get up to 6,000 requests (a 1Gb/s card should be, but is not, 10 times as fast).
For SQL Server, 3,000 Batch Requests/sec is typically considered high.
And then there is the SAN, whose performance cannot easily be summarized.
Discounting the SAN for now, the above indicates that for dynamic pages, due to the database bottleneck, the following response rates can be achieved:
- 1000 dynamic pages/sec, with 3 db hits per page, and judicious caching
- 3000 dynamic pages/sec, with 1 db hit per page, and judicious caching
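These figures are simply the database's batch-request budget divided by the number of hits each page makes; a trivial sketch:

```typescript
// Database-bound page throughput: the database's sustainable batch-request
// rate divided by the number of database hits each dynamic page performs.
function dbBoundPagesPerSec(
  batchRequestsPerSec: number,
  dbHitsPerPage: number
): number {
  return batchRequestsPerSec / dbHitsPerPage;
}

const threeHits = dbBoundPagesPerSec(3000, 3); // 1000 dynamic pages/sec
const oneHit = dbBoundPagesPerSec(3000, 1);    // 3000 dynamic pages/sec
```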
Considered Designs and Decisions
Not applicable at this point in time.
FAQs
Summary
Will DevOps lead to job losses?
One designs software for the target infrastructure and processes. It's more or less impossible to take an app written for an IaaS environment – designed to be deployed by hand following installation notes, with only manual test definitions and copies of live data – and cost-effectively and securely port the application to a DevOps automation pipeline.
Due to the sheer number of existing apps in the organisation that therefore cannot be ported, existing skill sets will need to be retained for at least the duration of these apps – decades in many cases.
Job security in this organisation is simply not at risk due to DevOps. Job security in any field is far more at risk in the coming decade due to other emerging forces such as AI100). This process is already happening today101).
Will DevOps replace our use of ITIL?
It's not a question of replacement – it's a bimodal scenario with both processes working side by side for years (if not decades) to come.
ITIL will continue to be used to manage legacy projects that are already deployed or currently in flight.
DevOps will only be used for new projects that can take advantage of an ALM Service and cloud hosted services.
Can we use ITIL Change Control for Managing DevOps Quality?
The simple answer is no – it's not needed. DevOps is a world-wide response to the lack of business value that ITIL delivered to stakeholders; it flips ITIL on its head while still ensuring Quality is met, through an automated equivalent.
Whereas ITIL focused on performing checks after the deployment – when it's far too late, and therefore time consuming and costly to remediate – the DevOps approach relies on the Automated Build Service ensuring the tests are designed, implemented, and met before accepting newly submitted code.
The thought process switches from writing a product and trying to test it, to developing tests as targets and ensuring they are met.
Can we host existing apps in Azure?
Most existing apps will not be able to be ported to Azure. In many cases they were designed to rely on the security and services available within the Organisation's network. Another factor is that the Build Service should be the only tool deploying infrastructure installations – whereas most existing solutions were designed to be deployed by hand.
Rehosting existing applications – designed to take advantage of the security of the Organisation's network – in the cloud would probably lead to several security issues, which would in turn lead to time-consuming processes to audit the deployments, effectively removing the benefits of using DevOps.
The more appropriate solution is to have a bimodal approach – maintaining legacy apps using ITIL on organisation infrastructure, and new applications, managed using DevOps, in the cloud.
Why not JIRA?
The essence of the problem with the use of JIRA for DevOps is that it simply is not an ALM Service tool.
Whereas an ALM Service handles Task (Work Item) Management, Build Management, Deployment Management and Automated Test Management, JIRA does just one of those tasks: Work Item Management.
And although it manages work items well, it is worth recognizing that JIRA was originally designed as a Bug Tracking tool, before it became an Issue Tracking tool102). As the name implies, an Issue Tracking Service is designed to track Issues (ie, bugs) in already created software – not manage the Agile Work Items needed to create software. And whereas it is common practice for JIRA's Issues to be repurposed as User Stories, JIRA provides no means of developing distinct associated Acceptance Test Items – which is an absolutely critical feature required for effective DevOps. Without the formality of Acceptance Test work items to commission the automation of functional tests, Developers have to interpret what is required, Project Managers have less insight into how much effort is required to complete User Stories, and no automation is developed.
It is precisely because JIRA does not provide all the features of a complete enterprise grade ALM Service and does not address well the need for Acceptance Tests that VSTS is deemed a better fit for this Organisation's DevOps strategy.
General Questions
None at this point in time.
Current Processes
Summary
The following diagrams document some of the low value processes currently employed within the organisation.
These processes are the basis of some of the Assessments listed in the Context View, and this solution aims to replace them with more efficient ones.
Infrastructure Provisioning
A well-known issue with development for the organisation – whether done in house or by external development services – is the amount of time it takes to discuss, commission and obtain the infrastructure required for a project.
Infrastructure Services must be judicious with providing limited resources to projects, but projects continue to request considerable resources due to a combination of:
- requiring upfront reservation of all the servers required for a project, for a multitude of environments (DEV, ST, UAT, TRAIN, PROD),
- the organisation having no means of saving resources when not needed, and elastically meeting demand as required,
- the applications being resource hungry, due to the quality of the development services employed.
The result is that the process for obtaining the infrastructure for a project can be a lengthy one that adds unnecessary time and cost to even the smallest project:
Current Deployment Process
The deployment process currently used delivers poor value for end users.
The following sequence diagram demonstrates this succinctly.
Handover to Testing
The following process became apparent when the Development team used an ALM Service that differed from the Issue Tracking Service used post-delivery: