Containers are not a new concept. In fact, despite some erroneous claims, they have been around since the mid-90s, when some of the first container technologies were created and deployed by Marimba. The early application containers were essentially the C version of Java, and who better to create them than the original Java team from Sun Microsystems (Kim Polese, Arthur van Hoff)? Containers have been deployed in production since the mid-90s by Marimba (acquired by BMC) and other trailblazers like Thinstall (acquired by VMware) and Softricity (acquired by Microsoft). It was my great fortune to be part of the leadership driving the products and vision around containers, business service management, and virtualization at two of these three companies, from inception to launch and implementation.
The misconception is that containers have not been deployed live in production. The reality is that they have been since the mid-90s, primarily in commercial solutions like gaming, online trading, tax accounting, and even education at some of the top names in the industry, reaching literally millions of endpoints. The point is that the hardest part of deploying containers in production was solved even before the creation of the more popular enterprise Docker containers and the DevSecOps movement.
So why are so many large enterprises struggling to roll them into production? It is not a lack of technology but a lack of technical acumen, roles, and leadership in understanding the people (skills) and processes needed to deploy them in today's highly regulated industries.
The purpose of this blog series is to debunk some of the current myths created by marketing hype, a lack of understanding of containers, and a lack of understanding of how businesses function across DevSecOps, so that teams can overcome the common challenges that are causing failure.
What are the top mistakes/myths? How do you overcome them?
1. Site Reliability Engineering — One Size Does Not Fit All
2. Leading by Example — Transformation Leadership Requires DevOps Chops
3. Just Because You Can Doesn't Mean You Should (Legal, Regulatory, Security or Business)
The series of articles and the iSpeak Cloud presentation on November 10 at 6:30 AM PST will cover these three areas in more depth.
Site Reliability Engineering — One Size Does Not Fit All
Site Reliability Engineering (SRE) has become the latest buzzword in a long line of overused terminology in the technology sector. Similar to Cloud, XXXOps (Everything as Ops), and Modernization, this role and its requirements are often misunderstood.
The concept and setup of Site Reliability Engineering, like containers, is not new. Although it is a necessary skill set, it is not the ONLY skill set required for DevSecOps to be successful.
There is a lot of hype around the role. Yes, SRE is a critical role for enabling DevOps, but it is not the only necessary role within an organization, nor should it be; CIOs who take that approach will often be disappointed with the outcome.
Many consulting firms are advertising their ability to Build/Operate/Transfer a Site Reliability overlay onto your organization. These resources come with a virtual Swiss Army knife of skills, from automation and orchestration to the underlying DevSecOps pipeline tools that are in style these days. Although they can assist, they are not the holy grail that will prevent your transformation initiatives from failing.
Site Reliability Engineering is Fundamental
Automation is essential to realizing a fully functional CI/CD or DevSecOps platform in production. SREs' skills in automation and in developing and maintaining the pipeline will be critical to the overall success and support of the platform. Understanding everything from building the pipeline to enhancing network operations, automating ticketing for onboarding, and handling integration, tuning, and timing will be essential for a scalable production deployment. Customers are demanding faster releases, more reliability, and a variety of last-mile tuning in today's COVID world. There is no shortage of work for talented Site Reliability Engineers.
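To make that pipeline work concrete, here is a minimal sketch in Python of the kind of step an SRE team might automate: build a container image, gate it on a vulnerability scan, and open an onboarding/change ticket. The image name, the ticketing endpoint, and the choice of Docker and Trivy as tooling are my own assumptions for illustration, not anything prescribed in this series.

```python
"""Minimal sketch of an SRE-style pipeline step: build, scan, and ticket.

Assumptions (illustrative only): Docker and Trivy are installed locally,
and TICKET_API is a placeholder URL for an internal ticketing service.
"""
import json
import subprocess
import sys
import urllib.request

IMAGE = "registry.example.com/payments-api:1.0.0"    # hypothetical image
TICKET_API = "https://itsm.example.com/api/tickets"  # hypothetical endpoint


def run(cmd):
    """Run a shell command and fail the pipeline step if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def open_ticket(summary: str) -> None:
    """Open an onboarding/change ticket via a (hypothetical) REST API."""
    payload = json.dumps({"summary": summary, "type": "change"}).encode()
    req = urllib.request.Request(
        TICKET_API, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print("ticket created:", resp.status)


def main() -> int:
    try:
        run(["docker", "build", "-t", IMAGE, "."])          # build the image
        run(["trivy", "image", "--exit-code", "1", IMAGE])  # fail on findings
    except subprocess.CalledProcessError as exc:
        print("pipeline step failed:", exc, file=sys.stderr)
        return 1
    open_ticket(f"Promote {IMAGE} to production")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In a real deployment these steps would live in the CI/CD tooling itself; the point is that someone has to engineer, tune, and maintain them.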
The operative term here is "engineers." Lately, everywhere from LinkedIn to Google, the industry has seen a surge of repurposed resumes, blogs, and misinformation in this area, often from the same people who propagate buzzword bingo. Beware of the imposters. Their resumes will contain every search-engine keyword, but a hiring manager worth their chops will quickly recognize that the person lacks critical concepts or skills.
These imposters are the typical arsonists who use the art of deflection to start internal political fires, blaming others to make up for their lack of understanding, skill sets, or expertise. In my book, iSpeak Cloud: Embracing Digital Transformation, I recommended (and still do) axing these arsonists before they kill the success of the company's transformation.
So what are the additional roles/skills needed, beyond SRE, to successfully deploy containers in production? They are Container Engineering, CMDB Dependency Mapping, Product Management (not just Project Management), and Release Management. The assumption is that traditional roles within IT remain a functional part of the equation.
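As a rough illustration of what CMDB Dependency Mapping buys you, here is a small sketch with hypothetical service names and dependency edges of my own invention, showing how a container-to-service map lets a release team answer "what is impacted if this image changes?"

```python
"""Illustrative sketch of CMDB-style dependency mapping for containers.

The service names and dependency edges below are hypothetical examples.
"""
from collections import deque

# Each key is a containerized service; the value lists services it depends on.
DEPENDS_ON = {
    "web-frontend": ["orders-api", "auth-service"],
    "orders-api": ["postgres", "payments-api"],
    "payments-api": ["postgres"],
    "auth-service": ["ldap-proxy"],
}


def impacted_by(changed: str) -> set:
    """Return every service that directly or transitively depends on `changed`."""
    impacted, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for service, deps in DEPENDS_ON.items():
            if current in deps and service not in impacted:
                impacted.add(service)
                queue.append(service)
    return impacted


if __name__ == "__main__":
    # Example: a new postgres container image impacts every consumer above it.
    print(sorted(impacted_by("postgres")))
    # -> ['orders-api', 'payments-api', 'web-frontend']
```

In practice this data would live in the CMDB itself and be populated by discovery tooling rather than a hard-coded dictionary; the role exists to keep that map accurate enough to trust during a release.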
Some of these roles may already exist. If they do, great! Before you check them off your list, make sure those teams/roles are functional. Ask whether they need an overhaul to work with containers or to level up their knowledge base.
Go to: Debunking Myths About Containers in Production Impacting Transformation - Part 2