Sometimes, my work as an analyst offers opportunities to pause, to reflect. At the risk of making this blog all about me, I have completed the report on Value Stream Management (VSM) that I mentioned in a previous blog, and I can now turn my full attention to DevSecOps. More will come after that, each report peppered with real-world experiences from people like you, good reader (and I welcome hearing about yours). But in the interim, between reports, I've had time to reflect on DevOps, and how I can help further the cause of delivering software-based innovation at scale.
There's a massive irony here. My first job was as a programmer; I later ran tools and infrastructure for development groups; I went on to advise some pretty big organizations on how to develop software, and how to manage data centers, servers, storage, networking, security and all that. I've written books about it, for heaven's sake — so how come, when I write about it all, I can sometimes feel out of my depth?
It's an important question to answer, because my own experience of imposter syndrome mirrors that of many of the enterprises I speak to. Some have set up DevOps-type groups as independent bubbles within the organization, leaving those outside feeling very much removed from the cutting edge. This phenomenon is not new: back in the Nineties I was working at the forefront of what we might call "the agile boom," a time in which older, ponderous approaches to software production, with two-year lead times and no guarantees of success, were being reconsidered in the light of the internet.
The idea was and remains simple: take too long to deliver something, and the world will have moved on. As a Dynamic Systems Development Method (DSDM) consultant, my job was to help the cool kids do things fast but do things right. Over time I learned that one factor stood out above all others, one that could make or break an agile development practice: complexity. It was in this period that I learned the power of the Pareto principle, or, in layperson's terms, "let's separate the things we absolutely need from the nice-to-haves that can come later."
Complexity kills innovation. There, I've said it. Back in the days of Waterfall methodologies, processes would be bogged down in over-specified requirements (so-called analysis paralysis) and exhausting test regimes. No wonder software development gurus looked to return to the source (sic) and adopt the JFDI approach that remains prevalent today.
Trouble is, complexity never went away: it just moved along the pipeline. In a recent online panel, I likened developers to the Sorcerer's Apprentice: it's one thing to be able to conjure up brooms at will, but how are you going to manage them all? It's as good an analogy as any for how simple it is to create a software-based artifact, and for the issues this creates. VSM hasn't come into existence on a whim: it's emerged in response to the challenges caused by its absence. Same with DevSecOps, for that matter.
Below the surface lies a simple truth: short-termist approaches miss out on fundamental elements such as planning, setting strategy and so on. And the spin-off result of doing lots of things very fast is a lot of complexity that then needs to be managed. Even our most darling of cloud-native mega-businesses are now struggling with the complexity of what they have created. Good for them for ignoring it while they established their brands, but you can only put good old-fashioned configuration management off for so long.
Do I think that software should be delivered more slowly, or favor a return to old-fashioned methodologies? Absolutely not. But it does explain why we're seeing what we might call "a wave of governance" start to envelop the world of software development, as short-term perspectives are reconsidered in favor of getting things right the first time. There are buzz-phrases for this, of course, such as "shift-left," which means considering quality and security earlier in the process.
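To make that "shift-left" idea slightly more concrete, here is a minimal sketch of the kind of guardrail it implies, written in Python purely for illustration: a hypothetical pre-merge check that compares a requirements.txt-style dependency manifest against a made-up denylist and fails the build before the change ever reaches review. The file name, the denylist and the policy itself are all assumptions for the sake of the example, not anyone's real tooling.

```python
#!/usr/bin/env python3
"""Illustrative 'shift-left' guardrail: fail fast on disallowed dependencies.

A sketch only. The denylist, manifest name and policy are hypothetical;
a real pipeline would more likely call a dedicated SCA or policy-as-code tool.
"""
import sys
from pathlib import Path

# Hypothetical policy: package versions the organization has disallowed.
DENYLIST = {
    ("leftpad", "1.0.0"),
    ("oldcrypto", "2.3.1"),
}

def parse_requirements(path: Path):
    """Yield (name, version) pairs from simple 'name==version' lines."""
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        yield name.strip().lower(), version.strip()

def main() -> int:
    manifest = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("requirements.txt")
    if not manifest.exists():
        print(f"no manifest found at {manifest}, skipping check")
        return 0
    violations = [dep for dep in parse_requirements(manifest) if dep in DENYLIST]
    for name, version in violations:
        print(f"POLICY VIOLATION: {name}=={version} is on the denylist")
    # A non-zero exit fails the pipeline step *before* review or deployment.
    return 1 if violations else 0

if __name__ == "__main__":
    raise SystemExit(main())
```

Wired into a pre-commit hook or an early CI stage, a check like this surfaces a governance decision minutes after the code is written rather than weeks later, which is the whole point of shifting left.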
The challenge of complexity also offers a way forward for enterprise organizations feeling out of their depth: it's a problem, for sure, but it is one they know how to address. Step aside, imposter syndrome: it's time to bring some of those older wisdoms, such as configuration management, requirements management and risk management, to bear. While enterprises can't suddenly become carefree startups, they can recognize that such enterprise-y practices are actually a good thing, one that can be woven into new ways of delivering software.
This won't be easy, but it is necessary, and it will be supported by tools vendors as they, too, mature. Over the coming years, I expect to see simplification and consolidation across the tools and platform space, enabling more policy-driven approaches, better guardrails and improved automation, so that developers can get on and do the thing with minimal encumbrance, even as managers and the business as a whole feel the coordination benefit.
The bottom line is that for DevOps to scale, governance principles need to be baked in. I hesitate before suggesting that the core CALMS notions need to add a G for governance; perhaps the G is silent, but it is no less important.
Industry News
Progress announced powerful new capabilities and enhancements in the latest release of Progress® Sitefinity®.
Red Hat announced the general availability of Red Hat Enterprise Linux 9.5, the latest version of the enterprise Linux platform.
Securiti announced a new solution: Security for AI Copilots in SaaS apps.
Spectro Cloud completed a $75 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives with participation from existing Spectro Cloud investors.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced significant momentum around cloud native training and certifications with the addition of three new project-centric certifications and a series of new Platform Engineering-specific certifications.
Red Hat announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud.
Salesforce announced agentic lifecycle management tools to automate Agentforce testing, prototype agents in secure Sandbox environments, and transparently manage usage at scale.
OpenText™ unveiled Cloud Editions (CE) 24.4, presenting a suite of transformative advancements in Business Cloud, AI, and Technology to empower the future of AI-driven knowledge work.
Red Hat announced new capabilities and enhancements for Red Hat Developer Hub, Red Hat’s enterprise-grade developer portal based on the Backstage project.
Pegasystems announced the availability of new AI-driven legacy discovery capabilities in Pega GenAI Blueprint™ to accelerate the daunting task of modernizing legacy systems that hold organizations back.
Tricentis launched enhanced cloud capabilities for its flagship solution, Tricentis Tosca, bringing enterprise-ready end-to-end test automation to the cloud.
Rafay Systems announced new platform advancements that help enterprises and GPU cloud providers deliver developer-friendly consumption workflows for GPU infrastructure.
Apiiro introduced Code-to-Runtime, a new capability using Apiiro’s deep code analysis (DCA) technology to map software architecture and trace all types of software components including APIs, open source software (OSS), and containers to code owners, enriching the map with business impact.
Zesty announced the launch of Kompass, its automated Kubernetes optimization platform.
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.