Perforce Software announced the launch of AI Validation, a new capability within its Perfecto continuous testing platform for web and mobile applications.
Fastly announced the general availability of Fastly AI Accelerator.
Fastly AI Accelerator is a semantic caching solution built to address the critical performance and cost challenges developers face with Large Language Model (LLM) generative AI applications, delivering an average of 9x faster response times. Initially released in beta with support for OpenAI ChatGPT, Fastly AI Accelerator is now also available with Microsoft Azure AI Foundry.
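To illustrate the general idea behind semantic caching (not Fastly's implementation), the sketch below embeds each prompt and reuses a stored answer when a new prompt is semantically close to one seen before; the in-memory store, similarity threshold, and helper names are illustrative assumptions.

```python
# Minimal semantic-caching sketch: reuse a stored LLM response when a new
# prompt is close enough in embedding space. Illustrative only.
import math

cache = []  # list of (embedding, response) pairs; a real cache would be bounded/persistent


def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def cached_completion(prompt, embed, generate, threshold=0.95):
    """embed() and generate() are caller-supplied functions (assumed interfaces):
    embed(text) -> list[float], generate(text) -> str."""
    query_vec = embed(prompt)
    for vec, response in cache:
        if cosine(query_vec, vec) >= threshold:
            return response            # semantically similar prompt seen before: serve from cache
    response = generate(prompt)        # otherwise fall through to the LLM provider
    cache.append((query_vec, response))
    return response
```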
"AI is helping developers create so many new experiences, but too often at the expense of performance for end-users. Too often, today’s AI platforms make users wait,” said Kip Compton, Chief Product Officer at Fastly. “With Fastly AI Accelerator we’re already averaging 9x faster response times and we’re just getting started.1 We want everyone to join us in the quest to make AI faster and more efficient.”
Fastly AI Accelerator can be a game-changer for developers looking to optimize their LLM generative AI applications. To access its intelligent semantic caching capabilities, developers simply update their application to use a new API endpoint, which typically requires changing only a single line of code. With this implementation, instead of going back to the AI provider for each individual call, Fastly AI Accelerator leverages the Fastly Edge Cloud Platform to serve a cached response for repeated queries. This approach enhances performance, lowers costs, and ultimately delivers a better experience for developers.
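As a rough sketch of what such a single-line change could look like, the snippet below points the OpenAI Python SDK at an alternate base URL; the endpoint URL and model name are placeholders, not Fastly's actual values.

```python
# Hedged sketch: routing an existing OpenAI client through a caching proxy endpoint.
# The base_url below is a placeholder, not Fastly's actual endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-accelerator.example-fastly-service.example/v1",  # placeholder URL
    api_key="YOUR_PROVIDER_API_KEY",
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "What is semantic caching?"}],
)
print(resp.choices[0].message.content)
```

The rest of the application code is unchanged; repeated or semantically similar requests can then be answered from the cache instead of a fresh provider call.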
"Fastly AI Accelerator is a significant step towards addressing the performance bottleneck accompanying the generative AI boom,” said Dave McCarthy, Research Vice President, Cloud and Edge Services at IDC. “This move solidifies Fastly's position as a key player in the fast-evolving edge cloud landscape. The unique approach of using semantic caching to reduce API calls and costs unlocks the true potential of LLM generative AI apps without compromising on speed or efficiency, allowing Fastly to enhance the user experience and empower developers."
Existing Fastly customers can add AI Accelerator directly from their Fastly accounts.
Industry News
Mirantis announced the launch of Rockoon, an open-source project that simplifies OpenStack management on Kubernetes.
Endor Labs announced a new feature, AI Model Discovery, enabling organizations to discover the AI models already in use across their applications, and to set and enforce security policies over which models are permitted.
Qt Group is launching Qt AI Assistant, an experimental tool for streamlining cross-platform user interface (UI) development.
Sonatype announced its integration with Buy with AWS, a new feature now available through AWS Marketplace.
Endor Labs, Aikido Security, Arnica, Amplify, Kodem, Legit, Mobb and Orca Security have launched Opengrep to ensure static code analysis remains truly open, accessible and innovative for everyone.
Progress announced the launch of Progress Data Cloud, a managed Data Platform as a Service designed to simplify enterprise data and artificial intelligence (AI) operations in the cloud.
Sonar announced the release of its latest Long-Term Active (LTA) version, SonarQube Server 2025 Release 1 (2025.1).
Idera announced the launch of Sembi, a multi-brand entity created to unify its premier software quality and security solutions under a single umbrella.
Postman announced the Postman AI Agent Builder, a suite empowering developers to quickly design, test, and deploy intelligent agents by combining LLMs, APIs, and workflows into a unified solution.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of CubeFS.
BrowserStack and Bitrise announced a strategic partnership to revolutionize mobile app quality assurance.
Mendix, a Siemens business, announced the general availability of Mendix 10.18.
Red Hat announced the general availability of Red Hat OpenShift Virtualization Engine, a new edition of Red Hat OpenShift that provides a dedicated way for organizations to access the proven virtualization functionality already available within Red Hat OpenShift.