Check Point® Software Technologies Ltd. announced that its Infinity Platform has been named the top-ranked AI-powered cyber security platform in the 2025 Miercom Assessment.
ClearML announced the launch of its expansive end-to-end AI Platform, designed to streamline AI adoption and the entire development lifecycle.
This unified, open-source platform supports every phase of AI development, from lab to production, allowing organizations to leverage any model, dataset, or architecture at scale. ClearML’s platform integrates seamlessly with existing tools, frameworks, and infrastructure, offering flexibility and control for AI builders and DevOps teams building, training, and deploying models at any scale on any AI infrastructure.
With this release, ClearML becomes the most flexible, wholly agnostic, end-to-end AI platform in the marketplace today in that it is:
- Silicon-agnostic: supporting NVIDIA, AMD, Intel, ARM, and other GPUs
- Cloud-agnostic: supporting Azure, AWS, GCP, Genesis Cloud, and others, as well as multi-cloud
- Vendor-agnostic: supporting the most popular AI and machine learning frameworks, libraries, and tools, such as PyTorch, Keras, Jupyter Notebooks, and others
- Completely modular: Customers can use the full platform alone or integrate it with their existing AI/ML frameworks and tools such as Grafana, Slurm, MLflow, Sagemaker, and others to address GenAI, LLMOps, and MLOps use cases and to maximize existing investments.
“ClearML’s end-to-end AI platform is crucial for organizations looking to streamline their AI operations, reduce costs, and enhance innovation – while safeguarding their competitive edge and future-proofing their AI investments by using our completely cloud-, vendor-, and silicon-agnostic platform,” said Moses Guttmann, Co-founder and CEO of ClearML. “By providing a comprehensive, flexible, and secure solution, ClearML empowers teams to build, train, and deploy AI applications more efficiently, ultimately driving better business outcomes and faster time to production at scale.”
The ClearML end-to-end AI Platform encompasses newly expanded capabilities, integrates previously stand-alone products, and includes:
- A GenAI App Engine, designed to make it easy for AI teams to build and deploy GenAI applications, maximizing the potential and the value of their LLMs.
- An Open Source AI Development Center, which offers collaborative experiment management, powerful orchestration, easy-to-build data stores, and one-click model deployment. Users can develop their ML code and automation with ease, ensuring their work is reproducible and scalable.
- An AI Infrastructure Control Plane, helping customers manage, orchestrate, and schedule GPU compute resources effortlessly, whether on-premises, in the cloud, or in hybrid environments. These new capabilities, which were also introduced today in a separate announcement, maximize GPU utilization and provide fractional GPUs, as well as multi-tenancy and extensive billing and chargeback capabilities that offer precise cost control, empowering customers to optimize their compute resources efficiently.
ClearML’s AI Platform enables customers to use any type of machine learning, deep learning, or large language model (LLM) with any dataset, in any architecture, at scale. AI builders can seamlessly develop their ML code and automation, ensuring their work is reproducible and scalable. This addresses several critical challenges organizations face in developing, deploying, and managing AI solutions in the most complex and demanding environments. Here’s why it matters:
- Unified End-to-end Workflow: ClearML provides a seamless workflow that integrates all stages of AI development, from data ingestion and model training to deployment and monitoring. This unified approach eliminates the need for multiple disjointed tools, simplifying the AI adoption and development process.
- Superior Efficiency and ROI: ClearML’s new AI infrastructure orchestration and management capabilities help customers execute 10X more AI and HPC workloads on their existing infrastructure.
- Interoperability: The platform is designed to work with any machine learning framework, dataset, or infrastructure, whether on-premises, in the cloud, or in a hybrid environment. This flexibility ensures that organizations can use their preferred tools and avoid vendor lock-in.
- Orchestration and Automation: ClearML automates many aspects of AI development, such as data preprocessing, model training, and pipeline management. It supports multi-instance GPUs, job scheduling, prioritization, and quotas to ensure full utilization of compute resources. ClearML empowers team members to schedule resources on their own through a simple, unified interface, enabling them to self-serve with more automation and greater reproducibility.
- Scalable Solutions: The platform supports scalable compute resources, enabling organizations to handle large datasets and complex models efficiently. This scalability is crucial for keeping up with the growing demands of AI applications.
- Optimized Resource Utilization: By providing detailed insights and controls over compute resource allocation, ClearML helps organizations maximize their GPU and cloud resource utilization. This optimization leads to significant cost savings and prevents resource wastage.
- Budget and Policy Control: ClearML offers tools for managing cloud compute budgets, including autoscalers and spillover features. These tools help organizations predict and control their monthly cloud expenses through advanced user management, quota and over-quota handling, prioritization, and granular control of compute-resource allocation policies.
- Enterprise-Grade Security: The platform includes robust security features such as role-based access control, SSO authentication, and LDAP integration. These features ensure that data, models, and compute resources are securely managed and accessible only to authorized users.
- Real-Time Collaboration: The platform facilitates real-time collaboration among team members, allowing them to share data, models, and insights effectively. This collaborative environment fosters innovation and accelerates the development process.
Industry News
Orca Security announced the Orca Bitbucket App, a seamless cloud-native integration for scanning Bitbucket repositories.
The Live API for Gemini models is now in Preview, enabling developers to start building and testing more robust, scalable applications with significantly higher rate limits.
Backslash Security announced significant adoption of the Backslash App Graph, the industry’s first dynamic digital twin for application code.
SmartBear launched API Hub for Test, a new capability within the company’s API Hub, powered by Swagger.
Akamai Technologies introduced App & API Protector Hybrid.
Veracode has been granted a United States patent for its generative artificial intelligence security tool, Veracode Fix.
Zesty announced that its automated Kubernetes optimization platform, Kompass, now includes full pod scaling capabilities, with the addition of Vertical Pod Autoscaler (VPA) alongside the existing Horizontal Pod Autoscaler (HPA).
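For context, vertical pod autoscaling in upstream Kubernetes is configured through the VerticalPodAutoscaler custom resource. The sketch below shows a minimal manifest for the open-source VPA CRD, not Zesty’s Kompass configuration; the Deployment name `my-app` is hypothetical.

```yaml
# Minimal upstream Kubernetes VerticalPodAutoscaler manifest (illustrative).
# Assumes the VPA components (recommender, updater, admission controller)
# are installed in the cluster; "my-app" is a hypothetical Deployment.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"  # VPA evicts and recreates pods with revised CPU/memory requests
```

Whereas the HPA changes the number of pod replicas, the VPA adjusts each pod’s CPU and memory requests, which is why the two are often deployed together for full pod scaling.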
Check Point® Software Technologies Ltd. has emerged as a leading player in Attack Surface Management (ASM) with its acquisition of Cyberint, as highlighted in the recent GigaOm Radar report.
GitHub announced the general availability of security campaigns with Copilot Autofix to help security and developer teams rapidly reduce security debt across their entire codebase.
DX and Spotify announced a partnership to help engineering organizations achieve higher returns on investment and business impact from their Spotify Portal for Backstage implementation.
Appfire announced its launch of the Appfire Cloud Advantage Alliance.
Salt Security announced API integrations with the CrowdStrike Falcon® platform to enhance and accelerate API discovery, posture governance, and threat protection.
Lucid Software has acquired airfocus, an AI-powered product management and roadmapping platform designed to help teams prioritize and build the right products faster.
StackGen has partnered with Google Cloud Platform (GCP) to bring its platform to the Google Cloud Marketplace.