Mar 12, 2017
I doubt I'd find anyone willing to say that management and governance processes should get in the way of teams delivering - not even I would claim that in my most contrarian moments. Decisions should absolutely be made in the right place - with the teams who have done the work on whatever it is they're delivering - because making them anywhere else leads to something that fails to meet the expressed needs of users. I've seen this point expressed articulately in a number of places, most recently by Dafydd Vaughan in his piece 'Create the space to let teams deliver'. Dafydd makes some excellent points about making the governance fit the crime, and about ensuring teams are multi-disciplinary so that things can actually get done with at least less-imperfect knowledge from the people working on them. His discussion of tools, however, is more contentious from my point of view.
I'm that rare beast from the point of view of modern 'digital' staff: I'm responsible for a lot of the technology we deliver internally, whether that's continuous integration tooling, configuration management tooling or collaboration tooling for developers. The nature of my users raises a couple of interesting issues. My users are:
- Technical - and technical users will always think they know best what tools will meet their needs. I'm projecting here, more than anything, because I know I'd be the same in their position.
- Sat within delivery groups focussed on the provision of a certain type of service (eg, middleware, web-facing frontends): it would be unreasonable to expect them to have the time or inclination to care about enterprise-wide concerns.
This combination of technical knowledge and what could uncharitably be termed 'myopia' leads to a more complex environment for tools to be deployed into than Dafydd's piece might suggest.
Laissez-faire tool selection in the enterprise context
Where teams are small and self-contained, a laissez-faire 'get what works' approach will probably be fine: concerns about integration between teams and worries about data siloisation probably aren't atop the list of anyone in the delivery organisation. In the context of an organisation of 60,000+ people, managing the data of tens of millions of customers and hundreds of projects across multiple service-focussed delivery groups with myriad interdependencies, those things start to matter a lot more. If Delivery Group A, for example, has chosen Jira to manage its work, and Delivery Group B has chosen Tuleap, then Project A+B turning up and requiring them to work together will involve either 1) one team giving up its preferred tool for the project (a bad thing if 'getting out of the way' in the tools space is the right approach, since someone is now 'getting in the way' by suggesting they use another tool) or 2) the duplication of data, and all sorts of difficulty knowing which of the two tools holds the truth.
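To make that second failure mode concrete, here is a minimal sketch of what reconciling issue exports from two trackers looks like. All identifiers and field names are hypothetical and purely illustrative - the real Jira and Tuleap APIs use different schemas entirely - but the shape of the problem is the same: the moment the same work is tracked in two places, someone has to compute which records exist in only one tool and which records disagree.

```python
# Hypothetical issue exports from two trackers (schemas illustrative only).
jira_issues = {
    "AB-101": {"summary": "Fix login timeout", "status": "In Progress"},
    "AB-102": {"summary": "Upgrade TLS config", "status": "Done"},
}
tuleap_issues = {
    "AB-101": {"summary": "Fix login timeout", "status": "Done"},  # status disagrees
    "AB-103": {"summary": "Add audit logging", "status": "Open"},
}

def reconcile(a, b):
    """Return issue keys present in only one tracker, plus keys whose
    records disagree between the two - the 'what is the truth?' set."""
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    conflicts = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return only_a, only_b, conflicts

only_jira, only_tuleap, conflicts = reconcile(jira_issues, tuleap_issues)
print(only_jira, only_tuleap, conflicts)  # ['AB-102'] ['AB-103'] ['AB-101']
```

Every joint project between the two groups ends up needing this kind of reconciliation job - or, better, a single canonical tool so the question never arises.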
Where similar functionality will be required across the organisation over an indefinite time horizon, there has to be a central function responsible for making corporate technology choices, to prevent the sort of siloisation one would otherwise expect.
The need for (light-touch) control
While this may not be the case everywhere, when designing services for HMRC I have to be cognisant of the Public Sector Network (PSN) Code of Connection. Particularly pertinent, in the context of giving people carte blanche to select tools from the widest possible marketplace, is section '1d. Protective monitoring and intrusion detection':
"If you are consuming Software as a Service (SaaS), you should consider how you will be able to monitor for any potential abuse of business process or privilege."
Given this constraint, and the risk that a service consumed on a device connected to the PSN could pose to the wider network, the idea of providing unfettered access to any given tool simply cannot fly. Tools such as Slack, for instance, provide very little information - no matter how much you pay - about how they would handle a data breach or any other significant incident.
Some enterprise-level management of risk around a tool needs to be undertaken to ensure it is appropriate for the context in which it is intended to be used. No matter how multi-disciplinary a team is, it will need assistance from people outside it to provide the current enterprise-wide view on a given technology - and those at the coalface will, in turn, be needed to inform that strategic view.
There are a number of points to be made here about people and process (such as telling people not to post sensitive information to SaaS), but technical controls are king. Dafydd's point about making sure firewalls don't prevent access to collaboration tools ignores the complex policy and legal landscapes that exist, such as the obligations placed on HMRC employees by the Commissioners for Revenue and Customs Act 2005 around not publishing taxpayer information: an error on the tool supplier's side, coupled with a lapse in judgement by a member of staff, could lead to a jail term for that person, despite any lack of intent to violate their obligation.
I don't think the legislation is at fault here: this chain of events is a feature rather than a bug when it comes to protecting citizen data. It is something to be mitigated through sensible tool choice and management, rather than the blunt instrument of law.
If a service is key to you, why would you trust someone else to run it?
No matter how widely used, no matter how resilient the design, no matter how prescriptive your Service Level Agreements with the provider are, using a SaaS tool brings with it risks around availability and the perennial question of "what do I do if this goes down?" Of course, this is the case with any digital service, but at least if the service is managed in-house, I have a building and a desk number where I can visit whoever is responsible for the downtime. A large multinational corporation, on the other hand, does not care about you, no matter how big you are - even in UK Government - and it bemuses me how often smart people delude themselves into thinking otherwise.
Tools for your own delivery people should be delivered by people in-house, not only for the above reasons, but also for the inculcation of a spirit of camaraderie and mutual understanding between tool providers and core delivery functions.
A corporate managed service for delivery needs
I am absolutely not saying that delivery teams should be denied the tools they need, nor that they shouldn't play a role in defining the tooling they use: both of these things are key for anything to be delivered. But it isn't necessarily as easy as 'not getting in the way': 'not getting in the way' could have some terrible unintended side-effects.
A central technology function should absolutely 'get in the way' and make sure that the organisation provides its delivery functions with what they need, with that offering being provided as a platform intended to be constantly improved upon based on input from its users. Whim and fancy cannot rule at enterprise scale: user needs in the tooling space need to be taken together and a service provided that allows people to do their jobs, while maintaining the integrity of the organisation's data assets and security policy.