Security in Bytes: Decoding the Future of AI-Infused Web Applications
April 09, 2024

Ty Sbano
Vercel

As companies grapple with the rapid integration of AI into web applications, questions of risk mitigation and security are top of mind. AI-infused coding and secure defaults offer the potential for improved security, but organizations still struggle to take practical steps beyond writing intent into policies and procedures. Further, consumer-facing models used outside of work pose their own challenges, and they must be managed as part of the growing attack surface.

Standing before a new wave of change and technology, the established security fundamentals should stay the same. For organizations, this means effective policy and guidelines: a paved path to trusted AI models, contractual language that keeps your data from training public models, and an understanding of how to use open-source projects. For consumers, it's essential to recognize your privacy rights based on your geographic location and to know which privacy terms you opt in to online, as usage patterns are unique to each individual. As our understanding of these technologies expands, tailored rules and guidelines will follow, allowing us to safely harness AI's benefits: faster iteration for organizations and better experiences for consumers.

Bridging Ethics, Privacy Policy, and Regulatory Gaps

The ethical and secure use of data within web applications is an ongoing issue that companies and government bodies alike are confronting, as evidenced by President Biden's first-ever executive order on AI technology last fall. The foundation of AI data security rests largely on individual company privacy policies, which dictate how data, including data used in AI applications, is managed and safeguarded. These policies, written in adherence to data privacy laws and regulations, cover user consent and the security measures that protect consumer data, such as encryption and access controls. Companies are held accountable for the data handling practices outlined in their privacy policies, which is crucial in the context of AI because it reinforces a framework for ethical practice in emerging technology that may not yet face direct regulatory requirements. This way, consumers can rest assured that their data is protected regardless of which application, AI or not, they interact with.

While it can be difficult for organizations to know where to begin when it comes to ensuring the integrity of data use, a good place to start is to determine the purpose of the AI or large language model (LLM) to be trained, and then whether the model will be used internally or externally. Internal models often involve sensitive company data, including proprietary information, which underscores the need for robust security safeguards against threats.

On the other hand, external models require a focus on user privacy and transparency to build and maintain consumer trust. These consumer-facing models may carry different ethical considerations, such as bias within the model and broader societal impacts as citizens interact with the technology in public. By differentiating between the two, organizations can better navigate the data protection regulations and ethical factors associated with each context and ensure the responsible, effective use of AI for themselves and their consumers. An external model may do nothing more than speed up indexing or power a customer-service chatbot that never touches sensitive PII, adding real business value with limited risk, as the sketch below illustrates.
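To make the distinction concrete, here is a minimal sketch, assuming a hypothetical in-house model registry; none of the names or types below come from a real product or framework. The idea is that an external, consumer-facing model starts with the narrowest data approval and must earn anything more through explicit review.

```typescript
// A minimal sketch, assuming a hypothetical in-house model registry.
// None of these names come from a real product or framework.

type DeploymentScope = "internal" | "external";
type DataClass = "public" | "internal" | "confidential" | "pii";

interface ModelRegistryEntry {
  name: string;
  scope: DeploymentScope;
  // Data classes the model is approved to touch at training or inference time.
  approvedDataClasses: DataClass[];
}

// An external, consumer-facing model starts with the narrowest approval.
const supportChatbot: ModelRegistryEntry = {
  name: "support-chatbot",
  scope: "external",
  approvedDataClasses: ["public"],
};

// An internal model may handle proprietary data, which raises the bar for
// access controls rather than lowering it.
const revenueForecaster: ModelRegistryEntry = {
  name: "revenue-forecaster",
  scope: "internal",
  approvedDataClasses: ["public", "internal", "confidential"],
};

// A guard that policy tooling could run before wiring a dataset to a model.
function canTrainOn(model: ModelRegistryEntry, dataClass: DataClass): boolean {
  return model.approvedDataClasses.includes(dataClass);
}

console.log(canTrainOn(supportChatbot, "pii"));             // false: blocked by policy
console.log(canTrainOn(revenueForecaster, "confidential")); // true
```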

Despite ongoing discussions on how this technology will be regulated, a holistic approach that combines privacy policy frameworks with technical facets and differentiations is currently one of the most effective ways to ensure the confidentiality, protection, and integrity of data within AI applications.

A New Era of AI and Cybersecurity

As technology rapidly evolves, so does the cyber threat landscape, with bad actors exploiting AI's capabilities to cause millions of dollars in damage. While the full extent of AI's impact on cybersecurity is yet to be determined, new guidance shows how adversaries can deliberately confuse or even "poison" AI systems to make them malfunction, and there is no foolproof defense developers can employ. Traditional cybersecurity measures often rely on predefined rules and signatures, which makes them inherently less adaptive to emerging threats and new technologies. AI-driven machine learning algorithms continuously learn from new data as it becomes available, allowing them to adapt as quickly as cyber threats grow more sophisticated. AI can also process enormous amounts of data quickly, enabling it to detect patterns that traditional measures miss. This gives organizations unique insight into dynamic attack patterns and lets them respond proactively to potential threats in real time. A toy sketch of the contrast follows.
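The following toy example contrasts a static signature threshold with a baseline that adapts as new observations arrive, here using Welford's online algorithm for a running mean and variance. It is a teaching sketch under simplifying assumptions, not a production detector; real systems use far richer features than request volume.

```typescript
// A toy contrast between a static signature and an adaptive baseline.
// This is a teaching sketch, not a production detector.

// Static rule: a fixed requests-per-minute threshold that never changes.
const STATIC_THRESHOLD = 1000;

// Adaptive baseline: Welford's online algorithm for running mean/variance,
// updated continuously as new observations arrive.
class AdaptiveBaseline {
  private n = 0;
  private mean = 0;
  private m2 = 0;

  observe(x: number): void {
    this.n += 1;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (x - this.mean);
  }

  // Flag values more than k standard deviations above the learned mean.
  isAnomalous(x: number, k = 3): boolean {
    if (this.n < 30) return false; // not enough history to judge yet
    const std = Math.sqrt(this.m2 / (this.n - 1));
    return x > this.mean + k * std;
  }
}

// Learn what "normal" looks like: roughly 120-140 requests per minute.
const baseline = new AdaptiveBaseline();
for (let i = 0; i < 60; i++) {
  baseline.observe(120 + Math.random() * 20);
}

// A burst of 900 rpm sails under the static threshold but is flagged by
// the baseline, which has learned that ~130 rpm is typical.
console.log(900 > STATIC_THRESHOLD);    // false: the static rule is silent
console.log(baseline.isAnomalous(900)); // true: far above the learned mean
```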

Open-source projects can be a powerful tool in unlocking AI's value for threat intelligence and detection. Both consumers and organizations have long benefitted from open source because of the collaboration and transparency the community offers, and that same emphasis on teamwork and visibility has produced well-kept documentation of the security challenges raised by emerging tech like AI. Open-source learning models can provide real-time analysis of attack patterns and of how data is shared. Imagine, for example, a world where you could share Remote Code Execution (RCE), Distributed Denial of Service (DDoS), or even zero-day attack patterns across Web Application Firewalls, and everyone could benefit by blocking the malicious traffic before damage is caused.

We're on the verge of an evolution toward practical, opt-in intelligence in which teams can submit data for fast processing and indexing, at a rate previously possible only through shared threat feeds or something as rudimentary as email threads. By offering a more agile and responsive defense against the broadening landscape of cyber threats, AI holds a cybersecurity advantage we're on the precipice of unlocking. We've already seen great community-driven efforts from the Open Web Application Security Project (OWASP), including its top-ten list of security risks to weigh when deploying LLMs, which will continue to iterate as we uncover more of the breadth of AI's capabilities. A hedged sketch of the opt-in sharing model appears below.
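In this sketch, one party publishes a sanitized indicator and subscribers fold it into local WAF block rules. The feed endpoint and schema are hypothetical, invented for illustration; real exchanges use standards such as STIX/TAXII, which cover this space in far greater depth.

```typescript
// A hedged sketch of the opt-in sharing model described above. The feed
// endpoint and schema are hypothetical; real exchanges use standards such
// as STIX/TAXII. Requires a fetch-capable runtime (Node 18+ or a browser).

interface ThreatIndicator {
  kind: "rce" | "ddos" | "zero-day";
  // A sanitized pattern such as a path or payload signature; never raw user data.
  pattern: string;
  firstSeen: string;  // ISO 8601 timestamp
  reportedBy: string; // anonymized organization identifier
}

// Publisher side: share a pattern your WAF just blocked with the community feed.
async function publishIndicator(indicator: ThreatIndicator): Promise<void> {
  await fetch("https://feed.example.org/v1/indicators", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(indicator),
  });
}

// Subscriber side: fold recent community indicators into local block rules.
interface WafRule {
  action: "block";
  match: string;
}

async function syncRules(): Promise<WafRule[]> {
  const res = await fetch("https://feed.example.org/v1/indicators?window=1h");
  const indicators: ThreatIndicator[] = await res.json();
  return indicators.map((i): WafRule => ({ action: "block", match: i.pattern }));
}
```

The point of the shape, not the plumbing: every participant's WAF gets smarter from any one participant's incident, which is exactly the leverage shared threat feeds have always promised but rarely delivered in real time.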

Securing the Future

The rapid integration of AI into web applications has propelled organizations into a complex landscape of opportunities and challenges, especially around security for themselves and their consumers. If an organization has decided to pursue AI in its web application, a recommended first step is for the security team to maintain a close partnership with engineering to observe how the code is shipping. It's crucial for these teams to align on the Software Development Lifecycle (SDLC), with a clear understanding of the effective touchpoints at every stage; that alignment determines where the team should facilitate meaningful reviews so security practices are properly implemented in AI applications (a sketch of possible touchpoints follows). Recognizing the dual nature of AI models, internal and external, also guides organizations in tailoring security measures, whether protecting sensitive proprietary data or upholding privacy policies that prioritize user safety in consumer-facing models. AI introduces a paradigm shift in the dynamic cyber threat landscape: it amplifies the attacks threat actors can mount against web applications while also offering organizations adaptive, real-time threat detection. Open-source projects bring transparency and collaboration to consumers and organizations working with AI, but it's paramount to balance innovation against risk tolerance.
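One way to make those touchpoints tangible is an agreed mapping from SDLC stage to AI-specific security checks. The stages and checks below are assumptions for this sketch, not a prescribed standard; each security and engineering partnership should negotiate its own list.

```typescript
// An illustrative mapping from SDLC stage to AI-specific security checks.
// The stages and checks are assumptions for this sketch, not a standard.

type SdlcStage = "design" | "develop" | "test" | "release" | "operate";

const aiSecurityTouchpoints: Record<SdlcStage, string[]> = {
  design: [
    "threat model the AI feature",
    "classify the data the model will touch",
  ],
  develop: [
    "review prompts and retrieval sources",
    "pin model and provider versions",
  ],
  test: [
    "red-team for prompt injection",
    "check outputs for data leakage",
  ],
  release: [
    "confirm the privacy policy covers the use case",
    "enable audit logging",
  ],
  operate: [
    "monitor for anomalous usage",
    "rotate credentials and API keys",
  ],
};

for (const [stage, checks] of Object.entries(aiSecurityTouchpoints)) {
  console.log(`${stage}: ${checks.join("; ")}`);
}
```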

The combination of established security fundamentals, evolving AI capabilities, and collaborative open-source initiatives provides a roadmap for organizations to begin safely integrating AI into their web applications. Careful navigation of these intersections will open the door to a future where innovation and security coexist, unlocking the full potential of AI for organizations and ensuring a secure digital landscape for consumers.

Ty Sbano is Chief Information Security Officer at Vercel