Security in Bytes: Decoding the Future of AI-Infused Web Applications
April 09, 2024

Ty Sbano
Vercel

As companies grapple with the rapid integration of AI into web applications, questions of risk mitigation and security are top of mind. AI-infused coding and secure defaults offer the potential for improved security, but organizations still struggle to take practical steps beyond writing intent into policies and procedures. Further, consumer-facing models used outside of work present their own challenges and must be managed as part of a growing attack surface.

Even standing before a new wave of change and technology, the established security fundamentals should stay the same. For organizations, this means effective policies and guidelines: a paved path for trusted AI models, contractual language that keeps your data from training public models, and an understanding of how to use open-source projects. For consumers, it's essential to know your privacy rights based on your geographical location and which privacy terms you opt into online, since usage patterns are unique to each individual. As our understanding of these technologies expands, tailored rules and guidelines will follow, allowing organizations to safely harness AI benefits like faster iteration while consumers enjoy enhanced user experiences.

Bridging Ethics, Privacy Policy, and Regulatory Gaps

The ethical and secure use of data within web applications is an ongoing issue that companies and government bodies alike are confronting, as evident in President Biden's first-ever executive order on AI technology last fall. The foundation of AI data security rests largely on individual companies' privacy policies, which dictate how data, including data used in AI applications, is managed and safeguarded. These policies, written in adherence to data privacy laws and regulations, cover user consent as well as security measures for protecting consumer data, such as encryption and access controls. Companies are held accountable for the data handling practices outlined in their privacy policies. That accountability is crucial in the context of AI data use and security because it reinforces a framework for ethical practices in emerging technology that may not yet have direct regulatory requirements. This way, consumers can rest assured that their data is protected regardless of which application, AI or not, they may be interacting with.

While it can be difficult for organizations to know where to begin when it comes to ensuring the integrity of data use, a good place to start is to determine the purpose of the AI or large language model (LLM) to be trained, and then whether the model will be used internally or externally. Internal models often involve sensitive company data, including proprietary information, which underscores the need for robust security controls to safeguard against threats.
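To make that intake step concrete, the decision points above can be captured in a lightweight record that security and engineering review before any data is used; the field names and gate below are a minimal, hypothetical sketch rather than an established standard.

```typescript
// Hypothetical model-intake record: capture a model's purpose and whether it
// is internal- or external-facing before any data is used to train it.
type ModelAudience = "internal" | "external";

interface ModelIntakeRecord {
  name: string;                 // human-readable model or project name
  purpose: string;              // the business problem the model addresses
  audience: ModelAudience;      // internal (company data) vs. external (consumer-facing)
  dataClasses: string[];        // e.g., "public", "proprietary", "PII"
  trainsOnUserData: boolean;    // does user or customer data feed back into training?
  securityReviewed: boolean;    // has the security team signed off?
}

// Example: an internal assistant trained on proprietary engineering documents.
const internalAssistant: ModelIntakeRecord = {
  name: "eng-docs-assistant",
  purpose: "Answer engineers' questions from internal design documents",
  audience: "internal",
  dataClasses: ["proprietary"],
  trainsOnUserData: false,
  securityReviewed: true,
};

// Simple gate: models touching proprietary data or PII always get a security review.
function requiresSecurityReview(record: ModelIntakeRecord): boolean {
  return (
    record.dataClasses.includes("proprietary") ||
    record.dataClasses.includes("PII") ||
    record.trainsOnUserData
  );
}

console.log(requiresSecurityReview(internalAssistant)); // true
```

Even a simple gate like this forces the internal-versus-external question and the data classification to be answered before training begins.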

External models, on the other hand, require a focus on user privacy and transparency to build and maintain consumer trust. These consumer-facing models carry different ethical considerations, such as bias within the models and broader societal impacts as the public interacts with these technologies. By differentiating between the two, organizations can better navigate the data protection regulations and ethical factors associated with each context, ensuring the responsible and effective use of AI for themselves and their consumers. An external model may be as simple as a customer service chatbot that speeds up indexing and self-service without requiring sensitive PII, adding significant business value with limited risk.
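For a consumer-facing chatbot of that kind, one practical safeguard is to strip obvious PII from user messages before they ever reach an external model. The sketch below is illustrative only: the regexes catch common formats and are no substitute for a dedicated PII-detection service.

```typescript
// Minimal sketch: redact obvious PII from a chat message before sending it to
// an external model. These patterns only catch the most common formats.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],            // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],                     // US SSN format
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],                   // likely card numbers
  [/\b\+?\d{1,2}[ .-]?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b/g, "[PHONE]"], // phone numbers
];

function redactPII(message: string): string {
  return PII_PATTERNS.reduce(
    (text, [pattern, replacement]) => text.replace(pattern, replacement),
    message
  );
}

// Example: the external model only ever sees the redacted text.
const userMessage =
  "My card 4111 1111 1111 1111 was charged twice, email me at jane@example.com";
console.log(redactPII(userMessage));
// "My card [CARD] was charged twice, email me at [EMAIL]"
```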

Despite ongoing discussions on how this technology will be regulated, a holistic approach that combines privacy policy frameworks with technical facets and differentiations is currently one of the most effective ways to ensure the confidentiality, protection, and integrity of data within AI applications.

A New Era of AI and Cybersecurity

As technology rapidly evolves, so does the cyber threat landscape, with bad actors exploiting AI's capabilities to cause millions of dollars in damage. While the full extent of AI's impact on cybersecurity is yet to be determined, new guidance shows how adversaries can deliberately confuse or even "poison" AI systems to make them malfunction, and there is no foolproof defense their developers can employ. Traditional cybersecurity measures often rely on predefined rules and signatures, which inherently makes them less adaptive to emerging threats and new technologies. AI-driven machine learning algorithms continuously learn from new data as it becomes available, allowing them to adapt and evolve as quickly as cyber threats grow more sophisticated. Additionally, AI can process enormous amounts of data quickly, enabling it to detect patterns that traditional measures might miss. This gives organizations unique insight into dynamic attack patterns and allows them to respond proactively to potential threats in real time.
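As a toy illustration of that difference, even a simple learned baseline, far simpler than the ML systems described above, can flag traffic that a fixed rule would never match; the thresholds and data below are invented for the example.

```typescript
// Toy sketch: learn a per-endpoint baseline of request rates and flag outliers.
// Real AI-driven detection is far more sophisticated; this only illustrates
// "learn from observed data" versus a fixed, predefined rule or signature.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// history: past requests-per-minute observations; current: the latest reading.
function isAnomalous(history: number[], current: number, zThreshold = 3): boolean {
  const m = mean(history);
  const sd = stdDev(history) || 1; // avoid division by zero on flat baselines
  return Math.abs(current - m) / sd > zThreshold;
}

// Example: a normally quiet endpoint suddenly sees a burst of traffic.
const observed = [12, 9, 11, 14, 10, 13, 12, 11];
console.log(isAnomalous(observed, 480)); // true: likely scraping or a DDoS probe
```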

Open-source projects can be a powerful tool in unlocking AI's value when it comes to threat intelligence and detection. Both consumers and organizations have long benefitted from open-source projects because of the collaboration and transparency the community offers. That emphasis on teamwork and visibility has also produced well-kept documentation of the security challenges brought about by emerging tech like AI. Open-source learning models provide real-time analysis of attack patterns and of how data is shared: imagine, for example, a world where information could be shared across Web Application Firewalls for Remote Code Execution (RCE), Distributed Denial of Service (DDoS), or even zero-day attack patterns, and everyone could benefit by blocking and shutting down malicious traffic before damage is caused. We're on the verge of an evolution toward greater practical, opt-in intelligence, in which teams can submit data for faster processing and indexing at a rate previously possible only through shared threat feeds or something as rudimentary as email threads. By offering a more agile and responsive defense against the broadening landscape of cyber threats, AI has the potential to provide a cybersecurity advantage that we're on the precipice of unlocking. We've already begun to see great community-driven efforts from the Open Web Application Security Project (OWASP), including its list of ten security considerations for deploying LLMs, which will continue to iterate as we uncover more about the breadth of AI's capabilities.
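A minimal sketch of what that opt-in sharing could look like follows, assuming a hypothetical community feed; the endpoint URL, payload shape, and token handling are all invented for illustration.

```typescript
// Hypothetical opt-in threat-intel sharing: publish an observed attack pattern
// so other participants' WAFs can block it sooner. The endpoint, schema, and
// token handling are illustrative assumptions, not a real community API.
interface ThreatIndicator {
  type: "rce" | "ddos" | "zero-day";
  pattern: string;     // e.g., a request signature or payload fragment
  sourceIp?: string;   // optional, if policy allows sharing it
  observedAt: string;  // ISO-8601 timestamp
}

async function shareIndicator(indicator: ThreatIndicator): Promise<void> {
  // Placeholder endpoint; a real feed would define its own schema and auth.
  const response = await fetch("https://threat-feed.example.com/v1/indicators", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.THREAT_FEED_TOKEN ?? ""}`,
    },
    body: JSON.stringify(indicator),
  });
  if (!response.ok) {
    throw new Error(`Feed rejected indicator: ${response.status}`);
  }
}

// Example: share an RCE probe pattern observed at the WAF.
shareIndicator({
  type: "rce",
  pattern: "POST /api/upload with ';wget http://' in the filename parameter",
  observedAt: new Date().toISOString(),
}).catch(console.error);
```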

Securing the Future

The rapid integration of AI into web applications has propelled organizations into a complex landscape of opportunities and challenges, especially when it comes to security for themselves and their consumers. If an organization has decided to pursue AI in its web application, a recommended first step is for the security team to maintain a close partnership with the engineering teams and observe how code is shipping. It's crucial for these teams to align on the Software Development Lifecycle (SDLC) and develop a clear understanding of the effective touchpoints at every stage. This alignment will guide your practices, helping determine where the team should facilitate meaningful reviews to ensure security practices are properly implemented in AI applications. Recognizing the dual nature of AI models, internal and external, also guides organizations in tailoring security measures, whether protecting sensitive proprietary data or crafting privacy policies that prioritize user safety in consumer-facing models. AI introduces a paradigm shift in the dynamic cyber threat landscape: it amplifies the attacks threat actors can mount against web applications while also offering organizations adaptive, real-time threat detection. Open-source projects bring transparency and collaboration to consumers and organizations working with AI, but it's paramount to balance innovation with risk tolerance.

The combination of established security fundamentals, evolving AI capabilities, and collaborative open-source initiatives provides a roadmap for organizations to begin safely integrating AI into their web applications. Careful navigation of these intersections will open the door to a future where innovation and security coexist, unlocking the full potential of AI for organizations and ensuring a secure digital landscape for consumers.

Ty Sbano is Chief Information Security Officer at Vercel