As companies grapple with the rapid integration of AI into web applications, questions of risk mitigation and security are top of mind. AI-infused coding and secure defaults offer the potential for improved security, but organizations are still challenged to take practical steps beyond writing intent into policies and procedures. Further, consumer-facing models used outside of work present their own challenges, and they too must be managed as part of the growing attack surface.
Even as we stand before a new wave of change and technology, the established security fundamentals should stay the same. For organizations, this means effective policies and guidelines: a paved path to trusted AI models, contractual language that keeps your data from training public models, and an understanding of how to use open-source projects. For consumers, it's essential to know your privacy rights based on your geographical location, along with which privacy models you opt into online, as usage patterns are unique to each individual. As our understanding of these technologies expands, tailored rules and guidelines will follow suit, safely harnessing AI benefits like faster iteration for organizations and enhanced user experiences for consumers.
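To make the "paved path" idea concrete, here is a minimal sketch of a policy gate that only permits calls to vetted models. The registry entries, model names, and PolicyViolation type are hypothetical, shown to illustrate the pattern rather than any particular product's API.

```python
# Minimal sketch of a "paved path" policy gate for AI model usage.
# Registry contents and the PolicyViolation type are illustrative
# assumptions, not part of any specific product or standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedModel:
    name: str
    vendor: str
    trains_on_customer_data: bool  # confirmed via contractual language

APPROVED_MODELS = {
    "internal-llm-v2": ApprovedModel("internal-llm-v2", "in-house", False),
    "vendor-chat-1": ApprovedModel("vendor-chat-1", "ExampleVendor", False),
}

class PolicyViolation(Exception):
    pass

def require_approved_model(model_name: str) -> ApprovedModel:
    """Fail closed: only models on the paved path may be called."""
    model = APPROVED_MODELS.get(model_name)
    if model is None:
        raise PolicyViolation(f"{model_name} is not an approved model")
    if model.trains_on_customer_data:
        raise PolicyViolation(f"{model_name} may train on our data")
    return model
```

Failing closed matters here: a model absent from the registry is rejected by default, so the guideline holds even as new vendors appear.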
Bridging Ethics, Privacy Policy, and Regulatory Gaps
The ethical and secure use of data within web applications is an ongoing issue that companies and government bodies alike are confronting, as evidenced by President Biden's first-ever executive order on AI technology last fall. The foundation of AI data security rests largely on individual company privacy policies, which dictate how data, including data used in AI applications, is managed and safeguarded. These policies, written in adherence to data privacy laws and regulations, cover user consent and the security measures that protect consumer data, such as encryption and access controls. Companies are held accountable for the data handling practices outlined in their privacy policies. That accountability is crucial in the context of AI data use and security because it reinforces a framework for ethical practice in emerging technology that may not yet face direct regulatory requirements. This way, consumers can rest assured that their data is protected regardless of which application, AI or not, they may be interacting with.
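For a sense of what a safeguard like encryption at rest looks like in code, here is a minimal sketch using the widely available cryptography package's Fernet recipe. The key handling is deliberately simplified as an assumption; a real deployment would source keys from a key management service.

```python
# Minimal sketch of encrypting consumer data at rest, the kind of
# safeguard a privacy policy typically commits to. Requires the
# `cryptography` package (pip install cryptography).

from cryptography.fernet import Fernet

# Simplification: in production the key would come from a key
# management service, never generated inline or stored with the data.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_user_record(plaintext: str) -> bytes:
    """Encrypt a user record before it is written to storage."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def read_user_record(ciphertext: bytes) -> str:
    """Decrypt a record for a caller that passed access-control checks."""
    return fernet.decrypt(ciphertext).decode("utf-8")

token = store_user_record("jane@example.com opted in to analytics")
assert read_user_record(token).startswith("jane@")
```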
While it can be difficult for organizations to know where to begin when it comes to ensuring the integrity of data use, a good place to start is to determine the purpose of the AI or large language model (LLM) to be trained, and then whether the model will be used internally or externally. Internal models often involve sensitive company data, including proprietary information, which underscores the need for robust security to safeguard against threats.
External models, on the other hand, require a focus on user privacy and transparency to build and maintain consumer trust. These consumer-facing models carry different ethical considerations, such as bias within the models and broader societal impacts as citizens interact with the technology in public. By differentiating between the two, organizations can better navigate the data protection regulations and ethical factors of each context and ensure the responsible, effective use of AI for themselves and their consumers. An external model may, for instance, simply enable faster indexing and better customer service in a chatbot; because it never needs sensitive PII, it can add significant business value with limited risk.
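One practical way to enforce that internal/external boundary is to scrub prompts before they leave the company for an external model. The sketch below is an illustrative assumption: it uses simple regular expressions where a production system would rely on a dedicated PII-detection or DLP service.

```python
# Illustrative sketch: scrub obvious PII before a prompt is sent to an
# external, consumer-facing model. The patterns are deliberately simple
# assumptions; real deployments would use a dedicated PII/DLP service.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_for_external_model(prompt: str) -> str:
    """Redact PII so the external model adds value without sensitive data."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(scrub_for_external_model("Contact jane@example.com about SSN 123-45-6789"))
# -> Contact [REDACTED-EMAIL] about SSN [REDACTED-SSN]
```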
Despite ongoing discussions on how this technology will be regulated, a holistic approach that combines privacy policy frameworks with technical facets and differentiations is currently one of the most effective ways to ensure the confidentiality, protection, and integrity of data within AI applications.
A New Era of AI and Cybersecurity
As technology rapidly evolves, so does the cyber threat landscape, with bad actors exploiting AI's capabilities to cause millions of dollars in damage. While the full extent of AI's impact on cybersecurity is yet to be determined, new guidance shows how adversaries can deliberately confuse or even "poison" AI systems to make them malfunction, and there is no foolproof defense their developers can employ. Traditional cybersecurity measures often rely on predefined rules and signatures, making them inherently less adaptive to emerging threats and new technologies. AI-driven machine learning algorithms continuously learn from new data as it becomes available, allowing them to adapt and evolve as quickly as cyber threats grow more sophisticated. Additionally, AI can process enormous amounts of data quickly, enabling it to detect patterns that traditional measures would miss. This gives organizations unique insight into dynamic attack patterns and allows them to respond proactively to potential threats in real time.
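As a rough illustration of that pattern-detection idea, the sketch below trains scikit-learn's IsolationForest on baseline request-log features and flags an outlying burst. The features, synthetic data, and contamination rate are assumptions for demonstration, not a tuned detector.

```python
# Minimal sketch of ML-based anomaly detection over web request logs,
# using scikit-learn's IsolationForest. Feature choices, synthetic
# traffic, and the contamination rate are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, distinct_paths_hit, avg_payload_bytes]
normal_traffic = np.random.default_rng(0).normal(
    loc=[60, 8, 900], scale=[10, 2, 150], size=(500, 3)
)

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A burst that looks like scripted probing: high rate, many paths hit.
suspicious = np.array([[600, 140, 300]])
print(model.predict(suspicious))  # -1 flags the sample as anomalous
```

Unlike a static signature, the fitted model can be retrained as fresh traffic arrives, which is the adaptivity the paragraph above describes.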
Open-source projects can be a powerful tool in unlocking AI's value for threat intelligence and detection. Both consumers and organizations have long benefitted from open-source projects thanks to the collaboration and transparency the community offers, and that same emphasis on teamwork and visibility has produced well-kept documentation of the security challenges raised by emerging tech like AI. Open-source learning models enable real-time analysis of attack patterns and shared data: imagine a world where Web Application Firewalls could exchange information on Remote Code Execution (RCE), Distributed Denial of Service (DDoS), or even zero-day attack patterns, and everyone could benefit by blocking and shutting down traffic before damage is caused. We're on the verge of an evolution in practical, opt-in intelligence in which teams submit data for fast processing and indexing at a rate previously possible only through shared threat feeds or channels as rudimentary as email threads. By offering a more agile and responsive defense against the broadening landscape of cyber threats, AI holds a cybersecurity advantage that we're on the precipice of unlocking. We've already seen great community-driven efforts from the Open Web Application Security Project (OWASP), including its Top 10 security considerations for deploying LLMs, which will continue to iterate as we uncover more about the breadth of AI's capabilities.
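To picture what that opt-in sharing might look like, here is a hypothetical sketch of a common indicator record and an in-memory feed that WAF operators could submit to and consume from. The schema and deduplication logic are assumptions in the spirit of shared threat feeds, not an existing standard.

```python
# Sketch of the opt-in sharing idea: a common indicator record that
# WAF operators could submit and consume. Schema and dedup logic are
# hypothetical, loosely in the spirit of shared threat feeds.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ThreatIndicator:
    pattern: str      # e.g. an RCE payload signature or source CIDR
    category: str     # "rce", "ddos", "zero-day", ...
    first_seen: str
    reported_by: str

class CommunityFeed:
    """In-memory stand-in for a shared, opt-in threat feed."""

    def __init__(self) -> None:
        self._indicators: dict[tuple[str, str], ThreatIndicator] = {}

    def submit(self, ind: ThreatIndicator) -> None:
        # Deduplicate on (pattern, category) so every participant
        # benefits from the first report without repeated noise.
        self._indicators.setdefault((ind.pattern, ind.category), ind)

    def block_rules(self, category: str) -> list[str]:
        """Patterns a subscribing WAF could turn into block rules."""
        return [i.pattern for (_, cat), i in self._indicators.items()
                if cat == category]

feed = CommunityFeed()
feed.submit(ThreatIndicator(
    pattern=r"\$\{jndi:ldap://",  # Log4Shell-style RCE probe
    category="rce",
    first_seen=datetime.now(timezone.utc).isoformat(),
    reported_by="waf-node-17",
))
print(feed.block_rules("rce"))
```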
Securing the Future
The rapid integration of AI into web applications has propelled organizations into a complex landscape of opportunities and challenges, especially when it comes to security for themselves and their consumers. If an organization has decided to pursue AI in its web application, a recommended first step is for the security team to maintain a close partnership with engineering and observe how code is shipping. It's crucial for these teams to align on the Software Development Lifecycle (SDLC), ensuring a clear understanding of the touchpoints at every stage; that alignment determines where the team should facilitate meaningful reviews so security practices are properly implemented in AI applications. Recognizing the dual nature of AI models, internal and external, likewise helps organizations tailor security measures, whether protecting sensitive proprietary data or writing privacy policies that prioritize user safety in consumer-facing models. AI introduces a paradigm shift in the dynamic cyber threat landscape: it amplifies the attacks threat actors can mount against web applications while also offering organizations adaptive, real-time threat detection. Open-source projects bring transparency and collaboration to consumers and organizations working with AI, but it's paramount to balance innovation against risk tolerance.
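One lightweight way to keep those SDLC touchpoints enforceable rather than tribal knowledge is to encode them as data a pipeline can read. The stage names and checks below are assumptions, sketched only to show the shape of such a mapping.

```python
# Hypothetical sketch: SDLC security touchpoints for AI features kept
# as data, so reviews are enforced rather than remembered. Stage names
# and checks are illustrative assumptions.

SDLC_TOUCHPOINTS = {
    "design":      ["threat model the AI feature",
                    "classify data sent to the model"],
    "implement":   ["verify prompts and responses are logged without PII"],
    "pre-merge":   ["security review of model-calling code paths"],
    "pre-release": ["red-team the external chatbot",
                    "confirm privacy policy coverage"],
}

def required_checks(stage: str) -> list[str]:
    """Checks the security team facilitates at a given SDLC stage."""
    return SDLC_TOUCHPOINTS.get(stage, [])

for stage in SDLC_TOUCHPOINTS:
    print(stage, "->", required_checks(stage))
```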
The combination of established security fundamentals, evolving AI capabilities, and collaborative open-source initiatives provides a roadmap for organizations to begin safely integrating AI into their web applications. Careful navigation of these intersections will open the door to a future where innovation and security coexist, unlocking the full potential of AI for organizations and ensuring a secure digital landscape for consumers.
Industry News
Spectro Cloud completed a $75 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives, with participation from existing Spectro Cloud investors.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, has announced significant momentum around cloud native training and certifications with the addition of three new project-centric certifications and a series of new Platform Engineering-specific certifications.
Red Hat announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud.
Salesforce announced agentic lifecycle management tools to automate Agentforce testing, prototype agents in secure Sandbox environments, and transparently manage usage at scale.
OpenText™ unveiled Cloud Editions (CE) 24.4, presenting a suite of transformative advancements in Business Cloud, AI, and Technology to empower the future of AI-driven knowledge work.
Red Hat announced new capabilities and enhancements for Red Hat Developer Hub, Red Hat’s enterprise-grade developer portal based on the Backstage project.
Pegasystems announced the availability of new AI-driven legacy discovery capabilities in Pega GenAI Blueprint™ to accelerate the daunting task of modernizing legacy systems that hold organizations back.
Tricentis launched enhanced cloud capabilities for its flagship solution, Tricentis Tosca, bringing enterprise-ready end-to-end test automation to the cloud.
Rafay Systems announced new platform advancements that help enterprises and GPU cloud providers deliver developer-friendly consumption workflows for GPU infrastructure.
Apiiro introduced Code-to-Runtime, a new capability using Apiiro’s deep code analysis (DCA) technology to map software architecture and trace all types of software components including APIs, open source software (OSS), and containers to code owners while enriching it with business impact.
Zesty announced the launch of Kompass, its automated Kubernetes optimization platform.
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.
Elastic announced its AI ecosystem to help enterprise developers accelerate building and deploying their Retrieval Augmented Generation (RAG) applications.
Red Hat introduced new capabilities and enhancements for Red Hat OpenShift, a hybrid cloud application platform powered by Kubernetes, as well as the technology preview of Red Hat OpenShift Lightspeed.
Traefik Labs announced API Sandbox as a Service to streamline and accelerate mock API development, and Traefik Proxy v3.2.