Introduction
As we continue our exploration of the Scaled Agile Framework (SAFe) Requirements Model, we now turn our focus to the Program Level. In this third post of the series, we will discuss how SAFe organizes Agile teams at scale, taking a closer look at the roles and processes that enable effective coordination across multiple teams. We will cover topics such as feature and component teams, product management, the Agile Release Train, and release planning.
By understanding the Program Level, you’ll gain valuable insights into managing large-scale Agile initiatives and ensuring that your organization is well-equipped to deliver value consistently and effectively.
Don’t forget to check out our previous posts on the “big picture” overview and the Team Level, as well as our upcoming post on the Portfolio Level, which will tackle the strategic aspects of the SAFe framework.
The SAFe Program Level
At the Program level, we encounter an organizational, process, and requirements model designed to unite numerous agile teams for a larger enterprise purpose—delivering a comprehensive product, system, or application suite to customers.
At the Team level, teams are empowered, self-organizing, and self-managing. Operating from a local backlog overseen by the team’s product owners, they maintain control over their specific destiny, defining, building, and testing their features or components. In harmony with the Agile Manifesto principles, this is the most effective approach to motivating a team to achieve the best possible results.
However, as we transition to the Program level, the problem shifts, and the enterprise faces new challenges to execute agility at this larger scale effectively. The objectives at this level encompass the following:
- Sustaining the Vision and Roadmap: Constantly defining and communicating the program’s Vision while maintaining a Roadmap so that teams work towards a shared goal.
- Release management: Coordinating the efforts of multiple teams to develop release increments based on the enterprise’s chosen development cadence.
- Quality management: Ensuring that the teams’ combined results (the system) are consistently integrated and that performance, security, and reliability requirements, as well as any imposed external standards, are met.
- Deployment: Since teams might not have the ability, scope, or authority to deploy systems to end-users, this crucial activity must be managed at the Program level.
- Resource management: Adapting resources to address constraints and bottlenecks in the program’s capacity to deliver value promptly.
- Eliminating impediments: Program leaders and managers are accountable for resolving obstacles that arise from teams—critical issues beyond the team’s control.
We must introduce additional resources and processes to achieve this broader objective at this level. We’ll explore these practices in this post.
Organizing Teams at Scale
One of the first questions emerging at this level may appear basic: how to organize agile teams to optimize value delivery of requirements. For smaller enterprises, this is typically a non-issue; teams will naturally organize around the limited products or applications that reflect the mission. Ideally, the silos that often separate development, product management, and testing in larger enterprises do not exist. Establishing an agile team in this context mainly involves determining individual roles and providing standard training.
However, at scale, as with most other agile aspects, the problem differs, and the challenge lies in understanding who works on what and where. Do we organize around features, components, product lines, services, or something else? While there’s no simple answer to this question, it must be explored since numerous agile practices—such as the number of backlogs and their management, how the Vision and features are conveyed to groups of teams, and how teams coordinate their activities to produce a larger solution—depend on that decision.
Feature and Component Teams
This section will explore the differences and similarities between feature and component approaches when organizing teams.
Component Teams
With a component-based approach, the development of a new feature is carried out by the relevant component teams. Component teams capitalize on their technical expertise and interests, concentrating on building robust components, ensuring reliability and extensibility, utilizing common technologies and usage models, and promoting reuse. We referred to these define/build/test teams as “component” teams, which might be an unfortunate label.
In this scenario, a new feature necessitates new backlog items for each team contributing to the feature. Component teams minimize multiplexing across features by implementing them sequentially rather than concurrently. Some benefits are apparent: each team can consolidate the needs of multiple features into the architecture for its component and focus on developing the most effective, long-lasting component or service for its layer. A new feature doesn’t disrupt the component; instead, the component evolves as a set of services that implement current and, ideally, future features.
This approach reflects an architecture-centric bias, which matters when constructing very large software systems: if the architecture isn’t reasonably sound, the enterprise is unlikely to achieve its reliability, performance, and long-term feature velocity goals. Moreover, there are other reasons why component-based organizations can be efficient in the agile enterprise:
- The enterprise may already be organized based on past successes, with specialists knowledgeable in large-scale databases, web services, embedded operating systems, and similar areas working together. Individuals’ skills, interests, locations, friendships, cultures, and lifestyles are not interchangeable.
- These teams might already be co-located, streamlining communication and reducing batch handoffs of requirements, design, and test data.
- Technologies and programming languages may vary across components, making it difficult for feature teams to practice pairing, collective ownership, continuous integration, test automation, and the like.
- Lastly, at scale, a single user-facing feature can be an extremely large element, potentially affecting hundreds of practitioners. For instance, a feature like “share my new phone video to YouTube” could impact dozens of agile teams. Organizing strictly by feature becomes ambiguous when many teams are needed to implement one.
Feature Teams
On the other hand, the central premise of this post, that agile teams excel at focusing on value delivery, generates an opposing perspective on this topic. The almost universally favored approach for organizing agile teams is to arrange them around features.
The benefits of a feature team approach are evident:
- Teams develop expertise in the system’s actual domain and usage mode, generally accelerating the value delivery of any given feature.
- There is less overhead since teams don’t have to exchange backlog items to ensure a feature is implemented, and there are significantly fewer interdependencies between teams.
- Planning and execution become leaner.
- The team’s core competence shifts to the feature (or set of features) rather than a single aspect of the technology stack.
- The team’s backlog is simplified, focusing on just one or two features simultaneously.
This approach undoubtedly promotes the rapid delivery of high-value-added features!
Sometimes the Line Is Blurry
Even considering this advice, we must acknowledge that features and components are both abstractions, and the distinction isn’t always clear-cut. One person’s feature might be another’s component. And at times, a single feature may be best implemented as a stand-alone, service-oriented component.
For instance, TradeStation Securities develops an online trading system where “charting” is a crucial capability for traders. A few co-located agile teams collaborate on the charting function. This appears to be an excellent example of a feature team, as charting is undoubtedly a significant system feature.
When new online trading capabilities, such as “trading foreign exchange currencies (Forex),” are developed, new chart functionality must be added. However, significant components like streaming data, account management, and interfaces with Forex exchanges drive this new chart functionality. Is the new feature value stream described as “trading Forex through the specialty chart function?” If so, that would create an apparent vertical feature stream, and the teams might reorganize by taking some members from each component team and forming a new vertical feature team for Forex trading. Or is the feature “trading of Forex” plus “charting Forex,” in which case the charting team is already organized appropriately? Is the charting capability a feature set or a component? Both? Does the label matter?
Even when the distinction is clear, is a feature team always the optimal choice? Keith Black, VP of TradeStation Technologies, observes:
Online trading demands a deep understanding of various levels of technical expertise and industry knowledge. Forming feature teams that included members from every component area would be unreasonable.
As a result, during our transition to agile, we organized around component teams, and as we matured, we started to assemble feature teams where it made sense.
While feature teams excel at driving an initiative to completion, in some cases they may not be the most practical choice. For example, suppose you have twenty feature teams that all rely on a common component, such as a time-sensitive online transaction processing engine. In that case, it might not be wise to have twenty different teams making changes to this critical component. Instead, you could have those changes managed by a single team that coordinates the needs of the twenty teams and ensures they don’t compromise areas they don’t understand while implementing their specific features.
Lean Toward Feature Teams
Considering the advantages and disadvantages of each approach, the answer isn’t always evident. However, with agile’s emphasis on immediate value delivery, there is a natural inclination toward feature teams. Mike Cottmeyer points out:
I usually start with the feature team approach and only move toward components if necessary…but the decision is situation-specific. To make this decision, you’ll need to examine the diversity of your technology, the quality of your system’s design, the tools you have to manage your codebase, your team’s size and competence, how and where your teams are distributed, and the efficiency of your infrastructure automation. You need to determine at what scale your feature teams WILL break down because, at some point, they WILL break down. Is scaling to this level something we must address now, or can it wait?
The Best Answer Is Likely a Mix
In larger enterprises with numerous teams and countless features, one should consider the factors mentioned earlier and choose the best strategy for your specific context. In most cases, the answer will likely involve a mix of feature and component teams.
Indeed, a mix is likely appropriate even in a modest-sized agile organization. Ryan Martens, founder and CTO of Rally Software, shared a five-team agile org chart and its “feature paths” with us:
While we don’t think of it in these terms, three of these teams (ALM1, ALM2, and PPM at the top) would be readily identifiable as feature teams. One (I&O at the bottom) is a component team. I’m not sure what you’d call the one (Platform and Integration) in the middle, as it sometimes originates its own features and sometimes merely acts as a supportive component for other features.
Given that a mix is most likely appropriate, two main factors influence the mix: the practical limitation of the degree of specialization required and the economics of potential reuse.
The System Team
As mentioned earlier, agile teams are the primary force for software creation and testing. Each team should have the skills and resources to plan, design, code, and test their domain’s component or feature.
Nonetheless, at the Program level, individual teams might not have all the necessary capabilities to integrate, test, and deploy a complete solution. As a result, an additional team that supports the feature/component teams is often observed. This team goes by various names, such as system integration, QA and deployment, release team, or simply system team. Regardless of the name, this team shares the same goal, works at the same release train pace, and typically has a set of specific, system-level responsibilities, as outlined below.
System-Level Testing
Ideally, each team would be able to test all features at the system level. While many feature teams have such capabilities, it’s frequently impractical for a single feature or component team to test a feature within its entire system context. Therefore, the system team develops the skills and capabilities necessary for more extensive end-to-end testing of larger features and use cases that deliver ultimate value.
System Quality Assurance
Likewise, many teams lack the specialty skills and resources required to test specific nonfunctional and other quality requirements for the system. The system team may be the only viable means to test against various customer-supported platforms and application environments, known as the “matrix of death.”
System-Level Continuous Integration
The larger the system, the less likely it is that individual teams can produce a daily build of the complete system with their existing build and configuration management infrastructure.
Building Development Infrastructure
Transitioning to agile methods typically demands significant investment in an environment supporting configuration management, automated builds and deployment, and automated build verification tests (for faster feedback). Forming a system team ensures the commitment, visibility, and accountability of the resources needed to complete this work, as the program relies on its success.
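To make this concrete, here is a minimal sketch of the kind of automated build verification script a system team might wire into such an infrastructure. The check names and pytest command lines are illustrative assumptions for the example, not part of SAFe or any particular toolchain.

```python
import subprocess
import sys

# Illustrative build verification checks a system team might run after every build.
# The command lines are assumptions for this sketch, not prescribed by SAFe.
BUILD_VERIFICATION_CHECKS = [
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration smoke tests", ["pytest", "tests/smoke", "-q"]),
]

def run_build_verification() -> bool:
    """Run each check in order; any failure marks the build as broken."""
    all_passed = True
    for name, command in BUILD_VERIFICATION_CHECKS:
        try:
            passed = subprocess.run(command).returncode == 0
        except FileNotFoundError:
            passed = False  # tool not installed on this build agent
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    sys.exit(0 if run_build_verification() else 1)
```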
The Release Management Team
Besides the agile and system teams, there is usually another vital organizational unit. While there isn’t a standard convention for its name, it often assumes the role of a release management team or steering committee.
This team emerges because, despite being empowered, the agile teams may not have the necessary visibility, quality assurance, or release governance authority to determine when and how the solution should be delivered to end users. Members of this team could include critical stakeholders at the Program level of the enterprise, such as the following:
- Line-of-business owners and product managers who concentrate on the release’s content and market impact
- Senior representatives from sales and marketing
- Senior line managers responsible for the teams, who are typically accountable for developing the solution for the market
- Internal IT and production deployment resources
- Senior and system-level QA personnel accountable for the final evaluation of the solution’s system-level quality, performance, and suitability for use
- System architects, CTOs, and others who supervise architectural integrity
This team convenes weekly in numerous agile enterprises to address the following questions.
- Do the teams still have a clear understanding of their mission?
- Do we comprehend what they are building?
- What is the current release’s status?
- What obstacles must we tackle to facilitate progress?
- Are we on track to meet the release schedule, and if not, how do we adjust the scope to ensure that we can meet the release dates?
This forum offers weekly senior management insight into the release status. This team is also able to make any scope, timing, or resource adjustments needed to support the release. In this way, the release management team serves as the final authority on all release governance issues and constitutes an integral part of the agile enterprise.
Product Management
Previously, we introduced the product owner as the individual responsible for determining which stories the team implements and the sequence in which they are implemented to deliver value to the end user.
At the Program level, we find another set of stakeholders with the same responsibility but for the solution as a whole. These stakeholders may have different titles, such as product manager, program manager, solution manager, business analyst, area or line product owner, etc. Still, the responsibility is clear: They are ultimately responsible for the end-to-end solution. This encompasses not only the content of the release but also the additional requirements for the “whole-product surrounds” like distribution, documentation, support, messaging, release governance, and so on.
Vision
With the organizational questions behind us, we can move on to describing the requirements artifacts and activities specific to the Program level. The first of these is the Vision. Generally, the Vision addresses the more significant questions, including the following.
- What is this program’s strategic intent?
- What problem will the application, product, or system resolve?
- What features and benefits will it offer?
- Who will it cater to?
- What performance, reliability, etc., will it deliver?
- What platforms, standards, applications, etc., will it support?
Since product and software requirements specification documents and the like are unlikely to exist, communicating the Vision for the program must take a different form. Agile teams take a variety of approaches to communicating the Vision, including the following:
- Vision document
- Draft press release
- Preliminary data sheet
- Backlog and Vision briefing
Features
No matter the form, the primary content of the Vision is a set of features that describe what new things the system will do for its users and the benefits the user will derive.
In describing the features of a product or system, we take a more abstract, higher-level view of the system of interest. In so doing, we can fall back on a more traditional description of system behavior: the feature. Features can be described as follows:
Features are services provided by the system that fulfill stakeholder needs.
Features live at a level above software requirements and bridge the gap from the problem domain (understanding the needs of the users and stakeholders in the target market) to the solution domain (specific requirements intended to address those user needs).
We have also posited that a system of arbitrary complexity can be described with a list of 25 to 50 features (just like a program backlog). This simple rule of thumb keeps our high-level descriptions precisely that (high level) and simplifies our attempts to describe complex systems in a short form while still communicating the full scope and intent of the proposed solution.
And as we just described, features also allow us to organize agile teams in a way that optimizes value delivery.
New Features Shape the Program Backlog
Features, then, hold a prominent position in our agile requirements model. They represent a “type of backlog item,” and the collection of proposed features forms the program backlog.
Features are brought to life by stories. During release planning, features are broken down into stories, which the teams utilize to implement the feature’s functionality.
Features are usually expressed as bullet points or, at most, a couple of sentences. For instance, you might describe a few features of an online email service like this:
- Enable “Stars” for marking important conversations or messages, acting as a visual reminder to follow up on a message or conversation later.
- Introduce “Labels” as a “folder-like” metaphor for organizing conversations.
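As a rough illustration of how features and stories relate in the program backlog, the sketch below models a feature as a program-level backlog item realized by team-level stories. The class names and the Stars/Labels entries are illustrative only; SAFe does not prescribe any particular tooling or data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    """A team-level backlog item split out of a feature during release planning."""
    description: str
    accepted: bool = False  # set when the story passes its acceptance tests

@dataclass
class Feature:
    """A program backlog item: a service the system provides to fulfill a stakeholder need."""
    name: str
    benefit: str
    stories: List[Story] = field(default_factory=list)

    def is_done(self) -> bool:
        # A feature is only as done as the stories that implement it.
        return bool(self.stories) and all(story.accepted for story in self.stories)

# The program backlog holds features; teams pull the child stories into their own backlogs.
program_backlog = [
    Feature("Stars", "Visual reminder to follow up on a message or conversation later"),
    Feature("Labels", "Folder-like metaphor for organizing conversations"),
]
program_backlog[0].stories.append(
    Story("As a user, I can star a conversation from my inbox")
)
```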
Testing Features
Previously, we introduced the agile mantra “all code is tested code” and mentioned that a story could not be considered complete until it has passed one or more acceptance tests.
At the Program level, the question arises as to whether features also need (or warrant) acceptance tests. The answer is typically “yes.” Although story-level testing should ensure that methods and classes are reliable (unit testing) and that stories serve their intended purpose (functional testing), a feature may involve multiple teams and numerous stories. Therefore, testing feature functionality is as crucial as testing story implementation.
Moreover, many system-level “what if” considerations (think alternative use-case scenarios) must be tested to guarantee overall system reliability. Some of these can only be tested at the full system level. So indeed, features, like stories, require acceptance tests as well.
In this way, we see that every feature demands one or more acceptance tests, and a feature cannot be considered complete until those tests pass.
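A feature-level acceptance test might look something like the sketch below: it exercises behavior that cuts across the stories implementing the “Stars” feature. The MailService stub is a stand-in for the integrated system; in practice such a test would drive the real system through its UI or API.

```python
class MailService:
    """In-memory stand-in for the system under test; a real test would hit the integrated system."""

    def __init__(self) -> None:
        self.starred: set[str] = set()

    def star(self, conversation_id: str) -> None:
        self.starred.add(conversation_id)

    def unstar(self, conversation_id: str) -> None:
        self.starred.discard(conversation_id)

def test_stars_feature_acceptance() -> None:
    """Feature-level check spanning the star and unstar stories."""
    service = MailService()
    service.star("conv-42")
    assert "conv-42" in service.starred       # starred conversations are remembered
    service.unstar("conv-42")
    assert "conv-42" not in service.starred   # unstarring removes the reminder

if __name__ == "__main__":
    test_stars_feature_acceptance()
    print("feature acceptance test passed")
```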
Nonfunctional Requirements
Up to this point in our discussion of requirements, we have used feature and user story formats to describe the system’s functional requirements—those system behaviors where a combination of inputs produces a meaningful output (result) for the user. However, we have yet to explain how to capture and express the system’s nonfunctional requirements (NFRs).
Traditionally, these were often described as system qualities, such as reliability, scalability, and so on, and they are essential aspects of system behavior. Indeed, they are as significant as the system’s total functionality. If a system is unreliable (crashes), unmarketable (fails to meet a specific regulatory standard), or unscalable (doesn’t support the required number of users), then, agile or not, we will fail just as miserably as if we had neglected a critical functional requirement.
Nonfunctional Requirements as Backlog Constraints
From a requirements modeling perspective, we could simply include the NFRs in the program backlog, but their behavior tends to differ. New features usually enter the backlog, get implemented and tested, and then are removed (though ongoing functional tests ensure the features continue to work well in the future). NFRs restrict new development, reducing the level of design freedom that teams might otherwise possess. Here’s an example:
For partner compatibility, implement SAML-based single sign-on (NFR) for all products in the suite.
In other cases, when new features are implemented, existing NFRs must be reconsidered, and previously sufficient system tests may need expansion. Here’s an example:
The new touch UI (new feature) must still adhere to our accessibility standards (NFR).
Thus, in the requirements model, we represent NFRs as backlog constraints.
We first observe that a nonfunctional requirement may constrain some backlog items but not others. We also notice that some nonfunctional requirements may not apply to any backlog item at all; they stand alone and pertain to the entire system.
Regardless of how we view them, nonfunctional requirements must be documented and shared with the relevant teams. Some NFRs apply to the whole system, and others are specific to a team’s feature or component domain.
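One lightweight way to record this, sketched below with illustrative names, is to keep the NFRs alongside the program backlog and tag each one with the features it constrains, treating an untagged NFR as system-wide. The two entries mirror the examples above (single sign-on for the whole suite, accessibility for the new touch UI).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NonfunctionalRequirement:
    """A constraint on new development; applies system-wide unless scoped to specific features."""
    description: str
    applies_to: Optional[List[str]] = None  # feature names, or None for the whole system

nfrs = [
    NonfunctionalRequirement("Support SAML-based single sign-on across all products in the suite"),
    NonfunctionalRequirement("Adhere to accessibility standards", applies_to=["New touch UI"]),
]

def constraints_for(feature_name: str) -> List[NonfunctionalRequirement]:
    """The NFRs a team must honor when implementing a given feature."""
    return [nfr for nfr in nfrs if nfr.applies_to is None or feature_name in nfr.applies_to]

# Example: the touch UI team sees both the system-wide SSO constraint and the accessibility NFR.
print([nfr.description for nfr in constraints_for("New touch UI")])
```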
Testing Nonfunctional Requirements
These requirements—usability, reliability, performance, supportability, and so on—are often called the “ilities” or qualities of a system. It should be clear that these requirements also need testing.
Most nonfunctional requirements necessitate one or more tests (in the requirements model, the association is zero or more, 0…*). Instead of labeling these tests as another form of acceptance tests and further overusing that term, we call them system qualities tests. This name implies that these tests must be conducted periodically to verify that the system still exhibits the qualities expressed by the nonfunctional requirements.
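For example, a system qualities test for a performance NFR might assert against a response-time budget, as in the sketch below. The budget value and the stubbed measurement are assumptions for illustration; a real test would time an end-to-end call against the integrated system and run on a nightly or per-release-increment schedule.

```python
import time

RESPONSE_TIME_BUDGET_SECONDS = 0.5  # illustrative NFR threshold, not a SAFe-prescribed value

def measure_response_time() -> float:
    """Stand-in for timing a real end-to-end request against the integrated system."""
    start = time.perf_counter()
    time.sleep(0.05)  # placeholder for the actual call
    return time.perf_counter() - start

def test_system_meets_response_time_budget() -> None:
    """System qualities test: re-run periodically to confirm the NFR still holds."""
    assert measure_response_time() <= RESPONSE_TIME_BUDGET_SECONDS

if __name__ == "__main__":
    test_system_meets_response_time_budget()
    print("system qualities test passed")
```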
The Agile Release Train
Having discussed the organization of program teams, the Vision, features, and nonfunctional requirements that define the program’s strategic intent, we can examine how the Vision is implemented over time.
Releases and Potentially Shippable Increments
As previously explained, system functionality development is carried out by multiple teams in a synchronized Agile Release Train (ART), a standard rhythm of timeboxed iterations and milestones that are date- and quality-fixed but scope-variable. The ART generates releases or potentially shippable increments (PSIs) at regular, usually fixed, 60- to 120-day intervals.
The PSI is to the enterprise what iterations are to the team, in other words, the fundamental iterative and incremental cadence and delivery mechanism for the program (an “ubersprint”). For numerous programs, release increments can be deployed to customers at this chosen rhythm; for others, the milestone represents the accomplishment of a valuable and assessable system-level increment. Depending on the business context, these increments can then be delivered to the customer.
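The cadence arithmetic is simple. As a sketch (the start date, iteration length, and iterations-per-PSI values below are assumptions for illustration), a train running five two-week iterations per increment produces PSI boundaries roughly every 70 days, within the 60- to 120-day range mentioned above.

```python
from datetime import date, timedelta

def psi_milestones(start: date, iteration_days: int = 14,
                   iterations_per_psi: int = 5, psi_count: int = 4) -> list[date]:
    """Project the PSI boundary dates for a release train on a fixed cadence."""
    psi_length = timedelta(days=iteration_days * iterations_per_psi)
    return [start + psi_length * n for n in range(1, psi_count + 1)]

if __name__ == "__main__":
    for n, milestone in enumerate(psi_milestones(date(2024, 1, 8)), start=1):
        print(f"PSI {n} boundary: {milestone.isoformat()}")
```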
Release Planning
Release planning is the periodic program activity that aligns teams to a shared mission. During release planning, teams translate the Vision into the features and stories necessary to achieve the objectives.
However, as we approach release planning, with its costs and overhead, we are reminded of certain Agile Manifesto principles.
- The most efficient form of communication is face-to-face.
- The best requirements, architecture, and designs emerge from self-organizing teams.
- The team regularly reflects on becoming more effective and then tunes and adjusts its behavior accordingly.
These principles and the need to ensure that teams share a common mission prompt enterprises to participate in periodic, face-to-face release planning events. These events gather stakeholders to address the following goals.
- Develop and share a unified Vision.
- Communicate market expectations, features, and relative priorities for the next release.
- Plan and commit to the content of the next release.
- Reallocate resources to match current program priorities.
- Refine the product Roadmap.
- Reflect on and apply lessons learned from previous releases.
The event’s frequency depends on the company’s need for market responsiveness and the iteration and release cadence it has chosen. In most enterprises, it occurs every 60 to 120 days, with a 90-day cadence being typical.
Roadmap
When we discussed the Vision, it was portrayed as independent of time; in other words, it outlines the product or system’s objectives without being tied to specific timelines. This approach is suitable when the goal is to communicate the essence of “what we are about to create.” Overloading the discussion with timelines and the “when” could potentially hinder the conversation about the “what.”
However, to establish priorities and plan for implementation, we need a perspective incorporating time. This is the purpose of the Roadmap. The Roadmap is neither complex nor mechanically challenging to maintain.
The Roadmap comprises a series of planned release dates, each with a theme and a prioritized set of features. Although the Roadmap is mechanically simple to represent, determining its content is another matter entirely.
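Represented as data, such a Roadmap might look like the sketch below. The dates, themes, and feature names are illustrative assumptions, with each feature list ordered by priority.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class PlannedRelease:
    """One Roadmap entry: a target date, a theme, and a prioritized set of features."""
    target_date: date
    theme: str
    features: List[str]  # highest priority first

roadmap = [
    PlannedRelease(date(2024, 3, 29), "Conversation organization", ["Stars", "Labels"]),
    PlannedRelease(date(2024, 6, 28), "Mobile experience", ["New touch UI", "Offline reading"]),
]

for release in roadmap:
    print(f"{release.target_date}: {release.theme} -> {', '.join(release.features)}")
```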
Summary
In this post, we introduced the new requirements artifacts, roles, and processes necessary for applying agile development in programs involving multiple teams. We described how to organize teams to optimize value delivery. We introduced several new requirements artifacts (the Vision, features, nonfunctional requirements, and the Roadmap) and explained how teams use these artifacts to communicate the larger purpose of the product, system, or application they are developing. We also described how teams combine a series of iterations to build PSIs, or incremental releases, on an Agile Release Train, progressively delivering value to users and customers.