Security in Bytes: Decoding the Future of AI-Infused Web Applications
April 09, 2024

Ty Sbano
Vercel

As companies grapple with the rapid integration of AI into web applications, questions of risk mitigation and security are top of mind. AI-infused coding and secure defaults offer the potential for improved security, but organizations still struggle with practical steps beyond writing intent into policies and procedures. Further, consumer-facing models used outside of work present their own unique challenges, and they must be managed as part of the growing attack surface.

Standing before a new wave of change and technology, established security fundamentals should stay the same. For organizations, this includes effective policy and guidelines: a paved path to trusted AI models, proper contractual language that prevents your data from training public models, and an understanding of how to utilize open-source projects. For consumers, it's essential to recognize your privacy rights based on your geographical location, along with the various privacy models you opt in to online, as usage patterns are unique to each individual. As our understanding of these technologies expands, tailored rules and guidelines will follow suit, allowing us to safely harness AI benefits like faster iteration for organizations and enhanced user experiences for consumers.

Bridging Ethics, Privacy Policy, and Regulatory Gaps

The ethical and secure use of data within web applications is an ongoing issue that companies and government bodies alike are confronting, as evidenced by President Biden's first-ever executive order on AI technology last fall. The foundation of AI data security rests largely on individual company privacy policies, which dictate how data — including data in AI applications — is managed and safeguarded. These policies, written in adherence to data privacy laws and regulations, encompass user consent and the security measures that protect consumer data, such as encryption and access controls. Companies are held accountable for the data handling practices outlined in their privacy policies, which is crucial in the context of AI data use and security: it reinforces a framework for ethical practices within emerging technology that may not yet have direct regulatory requirements. This way, consumers can rest assured that their data is protected regardless of which application — AI or not — they may be interacting with.

While it can be difficult for organizations to know where to begin when it comes to ensuring the integrity of data use, a good place to start is to determine the purpose of the AI or large language model (LLM) to be trained, and then to differentiate if the model will be used internally or externally. Internal models often involve sensitive company data, including proprietary information, which underscores the need for robust security to safeguard against threats.

On the other hand, external models require a focus on user privacy and transparency to build and maintain consumer trust. These consumer-facing models may carry different ethical considerations, such as bias within models and broader societal impacts, as citizens interact with these technologies in a public-facing way. By differentiating between these models, organizations can better navigate the data protection regulations and ethical factors associated with each context, ensuring the responsible and effective use of AI for themselves and their consumers. Some external models, for instance, may simply enable faster indexing or customer service within a chatbot that requires no sensitive PII, adding significant business value with limited risk.

Despite ongoing discussions on how this technology will be regulated, a holistic approach that combines privacy policy frameworks with technical facets and differentiations is currently one of the most effective ways to ensure the confidentiality, protection, and integrity of data within AI applications.

A New Era of AI and Cybersecurity

As technology rapidly evolves, so does the cyber threat landscape, with bad actors exploiting AI's capabilities to cause millions of dollars in damage. While the full extent of AI's impact on the cybersecurity landscape is yet to be determined, new guidance shows how adversaries can deliberately confuse or even "poison" AI systems to make them malfunction, and there is no foolproof defense their developers can employ. Traditional cybersecurity measures often rely on predefined rules and signatures, making them inherently less adaptive to emerging threats and new technologies. AI-driven machine learning algorithms continuously learn from new data as it becomes available, allowing them to adapt and evolve as quickly as cyber threats grow more sophisticated. Additionally, AI can process enormous amounts of data quickly, enabling it to detect patterns that traditional measures may miss. This offers organizations unique insight into dynamic attack patterns and allows them to respond proactively to potential threats in real time.
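To make the contrast with signature-based detection concrete, here is a minimal sketch of baseline-driven anomaly detection. It uses a simple rolling statistical baseline as a stand-in for a learned traffic model (the function name, window size, and threshold are illustrative assumptions, not a reference to any specific product):

```python
from statistics import mean, stdev

def anomaly_scores(request_counts, window=10, threshold=3.0):
    """Flag time buckets whose request count deviates sharply from the
    recent baseline — a toy stand-in for a continuously learned model."""
    flagged = []
    for i in range(window, len(request_counts)):
        baseline = request_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on perfectly flat traffic
        z = (request_counts[i] - mu) / sigma
        if abs(z) > threshold:
            flagged.append((i, round(z, 2)))
    return flagged

# Steady traffic with a sudden burst at minute 15 (e.g., a DDoS ramp-up)
traffic = [100, 102, 98, 101, 99, 103, 100, 97, 102, 100,
           101, 99, 100, 102, 98, 900]
print(anomaly_scores(traffic))
```

A signature-based rule would need to know the attack's shape in advance; a baseline like this (or, in practice, a far richer learned model) flags whatever deviates from observed behavior, which is what makes the approach adaptive.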

Open-source projects can be a powerful tool in unlocking AI's value for threat intelligence and detection. Both consumers and organizations have long benefitted from open-source projects thanks to the collaboration and transparency the community offers, and that same emphasis on teamwork and visibility has led to well-kept documentation of the security challenges brought about by emerging tech like AI. Open-source learning models provide real-time analysis of attack patterns and of how data is shared. Imagine, for example, a world where you could share information across Web Application Firewalls for Remote Code Execution (RCE), Distributed Denial of Service (DDoS), or even zero-day attack patterns, and everyone could benefit by blocking and shutting down traffic before damage is caused. We're on the verge of an evolution toward greater practical opt-in intelligence, in which teams can submit data for faster processing and indexing at a scale previously known only through shared threat feeds or channels as rudimentary as email threads. By offering a more agile and responsive defense against the broadening landscape of cyber threats, AI has the potential to provide a cybersecurity advantage we're on the precipice of unlocking. We've already begun to see great community-driven efforts from the Open Web Application Security Project (OWASP), including its top ten security considerations for deploying LLMs, which will continue to iterate as we uncover more about the breadth of AI's capabilities.
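The opt-in sharing idea above can be sketched in a few lines. This is a hypothetical toy exchange (the class name, signature format, and fields are assumptions for illustration, not an existing protocol): one participant submits an observed attack pattern, and every subscriber can block on it immediately.

```python
import re
import time

class SharedThreatFeed:
    """Toy opt-in threat exchange: participants submit attack
    signatures and every subscriber can block matching requests."""
    def __init__(self):
        self._signatures = {}  # regex pattern -> submission metadata

    def submit(self, pattern, reporter, category):
        """Share a pattern observed in the wild with all subscribers."""
        self._signatures[pattern] = {
            "reporter": reporter,
            "category": category,
            "seen_at": time.time(),
        }

    def is_blocked(self, request_path):
        """Check an incoming request against every shared signature."""
        return any(re.search(p, request_path) for p in self._signatures)

feed = SharedThreatFeed()
# One team observes an RCE probe and shares the pattern...
feed.submit(r";\s*cat\s+/etc/passwd", reporter="team-a", category="RCE")
# ...and every other participant blocks it before seeing the attack.
print(feed.is_blocked("/search?q=x; cat /etc/passwd"))  # True
print(feed.is_blocked("/search?q=kittens"))             # False
```

A real system would add authentication, signature vetting, and expiry, but the core value is the same: one observation, community-wide protection.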

Securing the Future

The rapid integration of AI into web applications has propelled organizations into a complex landscape of opportunities and challenges, especially when it comes to security for themselves and their consumers. If an organization has decided to pursue AI in its web application, a recommended first step is for the security team to maintain a close partnership with engineering teams to observe how code is shipping. It's crucial for these teams to align on the Software Development Lifecycle (SDLC), ensuring a clear understanding of the effective touchpoints at every stage. This alignment will guide your practices, helping determine where the team should facilitate meaningful reviews so that security practices are properly implemented in AI applications. Recognizing the dual nature of AI models — internal and external — also guides organizations in tailoring security measures, whether protecting sensitive proprietary company data or crafting privacy policies that prioritize user safety in consumer-facing models. AI introduces a paradigm shift in the dynamic cyber threat landscape: it amplifies the attacks threat actors can mount against web applications while also offering organizations adaptive, real-time threat detection. Open-source projects bring transparency and collaboration to consumers and organizations working with AI, but it's paramount to balance innovation and risk tolerance.

The combination of established security fundamentals, evolving AI capabilities, and collaborative open-source initiatives provides a roadmap for organizations to begin safely integrating AI into their web applications. The careful navigation of these intersections will open the door to a future where innovation and security coexist, unlocking the full potential of AI for organizations and ensuring a secure digital landscape for consumers.

Ty Sbano is Chief Information Security Officer at Vercel
