Parasoft announces the opening of its new office in Northeast Ohio.
Cloudflare announced that developers can now deploy AI applications on Cloudflare’s global network in one simple click directly from Hugging Face, an open and collaborative platform for AI builders.
With Workers AI now generally available, Cloudflare is the first serverless inference partner integrated on the Hugging Face Hub for deploying models, enabling developers to quickly, easily, and affordably deploy AI globally, without managing infrastructure or paying for unused compute capacity.
Despite significant strides in AI innovation, there is still a disconnect between its potential and the value it delivers to businesses. Organizations and their developers need to experiment and iterate quickly and affordably, without setting up, managing, or maintaining GPUs or infrastructure. Businesses need a straightforward platform that unlocks speed, security, performance, observability, and compliance to bring innovative, production-ready applications to their customers faster.
“The recent generative AI boom has companies across industries investing massive amounts of time and money into AI. Some of it will work, but the real challenge of AI is that the demo is easy, but putting it into production is incredibly hard,” said Matthew Prince, CEO and co-founder, Cloudflare. “We can solve this by abstracting away the cost and complexity of building AI-powered apps. Workers AI is one of the most affordable and accessible solutions to run inference. And with Hugging Face and Cloudflare both deeply aligned in our efforts to democratize AI in a simple, affordable way, we’re giving developers the freedom and agility to choose a model and scale their AI apps from zero to global in an instant.”
Workers AI is generally available with GPUs now deployed in more than 150 cities globally
Workers AI provides end-to-end infrastructure needed to scale and deploy AI models efficiently and affordably for the next era of AI applications. Cloudflare now has GPUs deployed across more than 150 cities globally, most recently launching in Cape Town, Durban, Johannesburg, and Lagos for the first locations in Africa, as well as Amman, Buenos Aires, Mexico City, Mumbai, New Delhi, and Seoul, to provide low-latency inference around the world. Workers AI is also expanding to support fine-tuned model weights, enabling organizations to build and deploy more specialized, domain-specific applications.
In addition to Workers AI, Cloudflare’s AI Gateway offers a control plane for AI applications, allowing developers to dynamically evaluate and route requests to different models and providers, and eventually to use that data to create fine-tunes and run fine-tuning jobs directly on the Workers AI platform.
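As a rough sketch of the routing model described above: AI Gateway sits in front of a provider by URL substitution, so an existing client keeps its request format and only its base URL changes. The `ACCOUNT_ID` and `GATEWAY_ID` values below are placeholders, and the exact gateway behavior (logging, caching, routing policies) depends on how the gateway is configured.

```typescript
// Sketch: routing an OpenAI-style request through Cloudflare AI Gateway by
// swapping the provider's base URL for the gateway URL. ACCOUNT_ID and
// GATEWAY_ID are placeholders for a real Cloudflare account and gateway name.
const ACCOUNT_ID = "ACCOUNT_ID"; // placeholder
const GATEWAY_ID = "GATEWAY_ID"; // placeholder

// Build the gateway endpoint for a given provider and API path.
export function gatewayUrl(provider: string, path: string): string {
  return `https://gateway.ai.cloudflare.com/v1/${ACCOUNT_ID}/${GATEWAY_ID}/${provider}/${path}`;
}

// Usage: point an existing client at this URL instead of the provider's own
// endpoint; the gateway can then observe and route the traffic.
console.log(gatewayUrl("openai", "chat/completions"));
```

The design point is that the control plane is transparent to the application: switching providers or models becomes a URL (or configuration) change rather than a code rewrite.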
With Workers AI, developers can now deploy AI models in one click directly from Hugging Face, the fastest way to access a variety of models and run inference requests on Cloudflare’s global network of GPUs. Developers can choose a popular open-source model and simply click “Deploy to Cloudflare Workers AI” to deploy it instantly. Fourteen curated Hugging Face models are now optimized for Cloudflare’s global serverless inference platform, spanning three task categories: text generation, embeddings, and sentence similarity.
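Once a model is deployed, a Worker can run inference against it through the platform's `AI` binding. The following is a minimal sketch assuming an `AI` binding has been configured for the Worker; the model ID shown is illustrative, and in practice it would be the ID displayed on the model's Hugging Face page after deployment. The `buildPrompt` helper is a hypothetical name introduced here for clarity.

```typescript
// Sketch: running text-generation inference on Workers AI from inside a
// Worker, assuming an `AI` binding is configured for the deployment.
export interface Env {
  AI: { run(model: string, input: unknown): Promise<unknown> };
}

// buildPrompt is a hypothetical helper: it shapes user text into the
// chat-style input payload used for text-generation tasks.
export function buildPrompt(userText: string) {
  return { messages: [{ role: "user", content: userText }] };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const result = await env.AI.run(
      "@hf/thebloke/mistral-7b-instruct-v0.1-awq", // illustrative model ID
      buildPrompt("Summarize serverless inference in one sentence."),
    );
    return Response.json(result);
  },
};
```

Because the binding abstracts away which GPU city actually serves the request, the same code runs unchanged across Cloudflare's network; there is no endpoint or region to select.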
Industry News
Postman released v11, a significant update that speeds up development by reducing collaboration friction on APIs.
Sysdig announced the launch of the company’s Runtime Insights Partner Ecosystem, recognizing the leading security solutions that combine with Sysdig to help customers prioritize and respond to critical security risks.
Nokod Security announced the general availability of the Nokod Security Platform.
Drata has acquired oak9, a cloud native security platform, and released a new capability in beta to seamlessly bring continuous compliance into the software development lifecycle.
Amazon Web Services (AWS) announced the general availability of Amazon Q, a generative artificial intelligence (AI)-powered assistant for accelerating software development and leveraging companies’ internal data.
Red Hat announced the general availability of Red Hat Enterprise Linux 9.4, the latest version of the enterprise Linux platform.
ActiveState unveiled Get Current, Stay Current (GCSC), a continuous code refactoring service that handles breaking changes so enterprises can keep pace with open source.
Lineaje released Open-Source Manager (OSM), a solution to bring transparency to open-source software components in applications and proactively manage and mitigate associated risks.
Synopsys announced the availability of Polaris Assist, an AI-powered application security assistant on the Synopsys Polaris Software Integrity Platform®.
Backslash Security announced the findings of its GPT-4 developer simulation exercise, designed and conducted by the Backslash Research Team, to identify security issues associated with LLM-generated code. The Backslash platform offers several core capabilities that address growing security concerns around AI-generated code, including open source code reachability analysis and phantom package visibility capabilities.
Azul announced that Azul Intelligence Cloud, Azul’s cloud analytics solution that provides actionable intelligence from production Java runtime data to dramatically boost developer productivity, now supports Oracle JDK and any OpenJDK-based JVM (Java Virtual Machine) from any vendor or distribution.
F5 announced new security offerings: F5 Distributed Cloud Services Web Application Scanning, BIG-IP Next Web Application Firewall (WAF), and NGINX App Protect for open source deployments.
Code Intelligence announced a new feature to CI Sense, a scalable fuzzing platform for continuous testing.
WSO2 is adding new capabilities for WSO2 API Manager, WSO2 API Platform for Kubernetes (WSO2 APK), and WSO2 Micro Integrator.