During this season, we find ourselves reflecting on the past year and the customers who’ve helped shape our success. In this spirit, the team at ELITE PARADIGM wishes you and yours a happy holiday season!
Applications power business. When they run well, customers get great experiences and IT and development teams remain focused on top initiatives. By combining Instana and Turbonomic from IBM, you get the high level of observability and application resource management capabilities you need to achieve these goals. Download this solution brief for an overview of the benefits of Instana and Turbonomic together.
Applications power businesses. When they run well, your customers have a great experience, and your development and infrastructure teams remain focused on their top initiatives. In today’s world, applications are becoming more distributed and dynamic as enterprises embrace new development methodologies and microservices. Simultaneously, applications are increasingly being deployed across complex hybrid and multicloud environments.
It has never been more challenging to assure applications deliver exceptional customer experiences that drive positive business results and beat the competition. Application architecture and design must be well executed, and the underlying infrastructure must be resourced to support the real-time demands of the application. The combination of Instana and Turbonomic provides higher levels of observability and trusted actions to continuously optimize and assure application performance.
HPE GreenLake edge-to-cloud platform rolls out industry’s first cloud-native unified analytics and data lakehouse cloud services optimized for hybrid environments
IN THIS ARTICLE
First cloud-native solution to bring Kubernetes-based Apache Spark analytics and the simplicity of unified data lakehouses using Delta Lake on-premises
Only data fabric to combine S3-native object store, files, streams and databases in one scalable data platform
Cloud-native unified analytics platform enables customers to modernize legacy data lakes and warehouses without complex data migration, application rewrites or lock-in
37 solution partners support HPE Ezmeral with 15 joining the HPE Ezmeral Partner Program in the past 60 days
With HPE Ezmeral software as the foundation, analytics and data science teams benefit from frictionless access to data from edge to cloud and a unified platform for accelerated Apache Spark and SQL
In the Age of Insight, data has become the heart of every digital transformation initiative in every industry, and data analytics has become critical to building successful enterprises. Simply put, data drives competitive advantage. For most organizations, however, significant challenges remain in successfully executing data-first modernization initiatives. Until now, organizations have been stuck with legacy analytics platforms that were either built for a pre-cloud era and lack cloud-native capabilities, or require complex migrations to public clouds, risking vendor lock-in and high costs and forcing the adoption of new processes. This situation has left the big data and analytics software market[1], which IDC forecasts will reach $110 billion by 2023, ripe for disruption.
Today, I am excited to announce two disruptive HPE GreenLake cloud services that will enable customers to overcome these trade-offs. There are four big value propositions we optimized for:
1. Seamless experience for a variety of analytics, SQL, and data science users
2. Top-notch performance
3. Choice and open ecosystem by leveraging pure open source in a hybrid environment
4. An intense focus on reducing TCO by up to 35% for many of the workloads we are targeting
Built from the ground up to be open and cloud-native, our new HPE GreenLake for analytics cloud services will help enterprises unify, modernize, and analyze all of their data, from edge to cloud, in any and every place it’s stored. Now analytics and data science teams can leverage the industry’s first cloud-native solution on-premises, scale up Apache Spark lakehouses, and speed up AI and ML workflows. Today’s news is part of a significant set of new cloud services for the HPE GreenLake edge-to-cloud platform, announced today in a virtual launch event from HPE. The new HPE GreenLake for analytics cloud services include the following:
HPE Ezmeral Unified Analytics
HPE now offers an alternative for customers previously limited to solutions in a hyperscale environment by delivering modern analytics on-premises, enabling up to 35%[2] greater cost efficiency than the public cloud for the data-intensive, long-running jobs typical of mission-critical environments. Available on the HPE GreenLake edge-to-cloud platform, HPE Ezmeral Unified Analytics is the industry’s first unified, modern, hybrid analytics and data lakehouse platform.
We believe it is the first solution to architecturally optimize for and leverage three key advancements simultaneously, something no one else in the industry has done:
1. Optimizes for a Kubernetes-based Spark environment for on-premises deployment, providing the cloud-native elasticity and agility customers want
2. Handles the diversity of data types, from files, tables, streams, and objects, in one consistent platform to avoid silos and make data engineering easier
3. Embraces the edge by enabling a data platform environment that can span from edge to hybrid cloud
Instead of requiring all of your data to live in a public cloud, HPE Ezmeral Unified Analytics is optimized for on-premises and hybrid deployments, and uses open source software to ensure as-needed data portability. We designed our solution with the flexibility and scale to accommodate enterprises’ large data sets, or lakehouses, so customers have the elasticity they need for advanced analytics, everywhere.
Just a few key advantages of HPE Ezmeral Unified Analytics include:
Dramatic performance acceleration: Together, the NVIDIA RAPIDS Accelerator for Apache Spark and HPE Ezmeral can accelerate Spark data prep, model training, and visualization by up to 29x[3], allowing data scientists and engineers to build, develop, and deploy analytics solutions into production at scale, faster (a configuration sketch follows this list).
Next-generation architecture: We have built on Kubernetes and added an orchestration plane that makes it easy to get the scale-out elasticity customers want. Our multi-tenant Kubernetes environment supports a compute-storage separation cloud model, providing the combined performance and elasticity required for advanced analytics, while enabling users to create unified real-time and batch analytics lakehouses with Delta Lake integration (a Delta Lake sketch also follows this list).
Optimized for data analytics: Enterprises can create a unified data repository for use by data scientists, developers, and analysts, including usage and sharing controls, creating the foundation for a silo-free digital transformation that scales with the business as it grows and reaches new data sources. Support for NVIDIA Multi-Instance GPU technology enables enterprises to support a variety of workload requirements and maximize efficiency with up to seven instances per GPU.
Enhanced collaboration: Integrated workflows from analytics to ML/AI span hybrid clouds and edge locations, including native open-source integrations with Airflow, MLflow, and Kubeflow technologies to help data science, data engineering, and data analytics teams collaborate and deploy models faster.
Choice and no vendor lock-in: On-premises Apache Spark workloads offer the freedom to choose the deployment environments, tools, and partners needed to innovate faster.
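To make the first two advantages above more concrete, here are two short illustrative sketches. They are not taken from HPE documentation: the file paths, jar location, data locations, and resource settings are placeholders, and only the publicly documented open-source configuration keys for the RAPIDS Accelerator and Delta Lake are assumed.

```python
# Sketch 1: enabling the NVIDIA RAPIDS Accelerator in a PySpark session.
# Jar path, executor sizing, and data paths are placeholders, not HPE defaults.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")
    # Load the RAPIDS Accelerator plugin so eligible SQL operators run on GPUs.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Hypothetical jar location; in practice this ships with the cluster image.
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")
    # Request GPU resources using Spark's standard GPU scheduling settings.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# A typical data-prep step: the same DataFrame code runs unchanged,
# with supported operators transparently offloaded to the GPU.
events = spark.read.parquet("s3a://example-bucket/events/")   # placeholder path
daily = events.groupBy("customer_id", "event_date").count()
daily.write.mode("overwrite").parquet("s3a://example-bucket/daily_counts/")
```

The second sketch shows the pattern behind a unified real-time and batch lakehouse: one Delta Lake table written in batch and read incrementally by a streaming job, so a single copy of the data serves both workloads.

```python
# Sketch 2: a Delta Lake table serving both batch writes and a streaming reader.
# Assumes the open-source delta-spark package is on the classpath; paths are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Batch write: land curated records as a Delta table with ACID guarantees.
orders = spark.read.json("/data/raw/orders/")                  # placeholder source
orders.write.format("delta").mode("append").save("/data/lakehouse/orders")

# Streaming read of the same table: new data is picked up incrementally.
stream = (
    spark.readStream.format("delta").load("/data/lakehouse/orders")
    .groupBy("status").count()
)
query = (
    stream.writeStream.outputMode("complete")
    .format("console").start()
)
# query.awaitTermination()  # uncomment to keep the stream running
```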
“Today’s news provides the market with more choice in deploying their modern analytics initiatives with a hybrid-native solution, enabling faster access to data, edge to cloud,” said Carl Olofson, Research Vice President, IDC. “HPE Ezmeral is advancing the data analytics market with continued innovations that fill a gap in the market for an on-premises unified analytics platform, helping enterprises unlock insights to outperform the competition.”
HPE Ezmeral Data Fabric Object Store
Our second disruptive new solution is the HPE Ezmeral Data Fabric Object Store: the industry’s first data fabric to combine S3-native object store, files, streams and databases in one scalable data platform that spans edge to cloud. Available in bare metal and Kubernetes-native deployments, HPE Ezmeral Data Fabric Object Store provides a global view of an enterprise’s dispersed data assets and unified access to all data within a cloud-native model, securely accessible to the most demanding data engineering, data analytics, and data science applications. Designed with a native S3 API and optimized for advanced analytics, HPE Ezmeral Data Fabric Object Store enables customers to orchestrate both apps and data in a single control plane, while delivering the best price for outstanding performance.
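Because the platform exposes a native S3 API, standard S3 tooling should work against it. The short sketch below is illustrative only: the endpoint URL, credentials, and bucket name are hypothetical, and it simply demonstrates generic S3-compatible access rather than HPE-specific configuration.

```python
# Illustration of S3-compatible access to an on-premises object store.
# The endpoint, credentials, and bucket are placeholders, not real HPE values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal:9000",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Create a bucket and upload a file exactly as you would against any S3 endpoint.
s3.create_bucket(Bucket="analytics-landing")
s3.upload_file("daily_metrics.parquet", "analytics-landing",
               "metrics/daily_metrics.parquet")

# List objects so downstream Spark or SQL jobs can discover new data.
resp = s3.list_objects_v2(Bucket="analytics-landing", Prefix="metrics/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```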
We are proud of the innovation that has resulted in what we believe is an industry first: A consistent data platform which is able to handle a diversity of data types, is optimized for analytics, and is able to span from edge to cloud.
Several key features include:
Optimized performance for analytics: Designed as a scalable object store, HPE Ezmeral Data Fabric Object Store is the industry’s only solution that supports file, stream, database, and now object data types within a common persistent store, optimized for best performance across edge-to-cloud analytics workloads.
Globally synchronized edge-to-cloud data: Clusters and data are orchestrated together to support dispersed edge operations, and a single Global Namespace provides simplified access to edge-to-cloud topologies from any application or interface. While data can be mirrored, snapshotted, and replicated, advanced security and policies ensure the right people and applications have access to the right data, when they need it.
Continuous scaling: Enterprises can grow as needed by adding nodes and configuring policies for data persistence while the data store handles the rest.
Performance and cost balance: Adapting to small or large objects, auto-tiering policies automatically move data from high-performance storage to low-cost storage.
Expanding the HPE Ezmeral Partner Ecosystem
We first introduced the HPE Ezmeral Partner Program in March 2021, enabling the rapid creation of streamlined, customized analytics engines and environments based on full-stack solutions validated by trusted ISV partners. With 76% of enterprises expecting to be using on-premises, third-party-managed private cloud infrastructure for data and analytics workloads within the next year[4], we’re excited to announce six new ISV partners today: NVIDIA NGC, Pepperdata, Confluent, Weka, Ahana and gopaddle.
“NVIDIA’s contributions to Apache Spark enable enterprises to process data orders of magnitude faster while significantly lowering infrastructure costs,” said Manuvir Das, head of Enterprise Computing, NVIDIA. “Integrating the NVIDIA RAPIDS Accelerator for Apache Spark and NVIDIA Triton Inference Server into the HPE Ezmeral Unified Analytics Platform streamlines the development and deployment of high-performance analytics, helping customers gain immediate results at lower costs.”
“Today, companies are using Spark to build their high-performance data applications, accelerating tens to thousands of terabytes of data transitioning from data lakes to AI data modeling,” said Joel Stewart, Vice President Customer Success, Pepperdata. “Pepperdata on HPE Ezmeral Runtime Enterprise can help reduce operating costs and provide deep insights into their Spark applications to improve performance and reliability.”
Since the HPE Ezmeral Partner Program launched, we’ve added 37 solution partners[5] to support our customers’ core use cases and workloads, including big data and AI/ML use cases. The Partner Program is also adding support today for open-source projects such as Apache Spark, offering enterprises the ability to transition workloads to a modern, cloud-native architecture.
HPE GreenLake edge-to-cloud platform and HPE Ezmeral are transforming enterprises – and HPE
As an important component of HPE GreenLake cloud services, the HPE Ezmeral software portfolio helps enterprises such as GM Financial and Bidtellect advance modern data analytics initiatives. Since it was first introduced in June 2020, HPE Ezmeral has secured dozens of new customers, with significant competitive wins over both traditional big data players and public cloud vendors.
Since vast volumes of applications and data will remain on-premises and at the edge as enterprises continue their digital transformations, our elastic, unified analytics solutions will help customers extract maximum value from their data, wherever it lives and moves, from edge to cloud. We look forward to working with you to make the most of your data as the Age of Insight continues to reshape enterprises around the world.
Availability and Additional Resources
HPE Ezmeral Unified Analytics and HPE Ezmeral Data Fabric Object Store will be available as HPE GreenLake cloud services beginning November 2021 and Q1 2022, respectively.
Learn more about today’s news from the experts. Join these deep dive sessions as I chat with:
Keith White, SVP & GM, HPE GreenLake Cloud Services Commercial Business on how enterprises are accelerating transformation for greater business outcomes.
HPE and the HPE logo are trademarks or registered trademarks of HPE and/or its affiliates in the U.S. and other countries. Third-party trademarks mentioned are the property of their respective owners.
[1] IDC, Worldwide Big Data and Analytics Software Forecast, 2021–2025, July 2021
[2] Based on internal HPE competitive analysis, September 2021
[3] Technical Paper: HPE Ezmeral for Apache Spark with NVIDIA GPU, published September 2021
[4] 451 Research, Voice of the Enterprise: Data & Analytics, Data Platforms 2021
[5] Internal HPE documentation; list of partners maintained by the group
Today’s digital journey is long and complex, creating equal parts opportunity and risk for organizations. The pandemic has fueled more complexity in an already complicated world, and the digital landscape has been no exception. Networks have further expanded into the cloud, and organizations have reinvented themselves even while reacting and responding to new circumstances – and new cyberthreats. One question is top of mind: Where do we go from here? It’s clear that cybersecurity is no longer simply a defense. In a world that’s moving from cloud-ready to cloud-centric, cybersecurity has become a critical component in the foundation of the enterprise.
The physical world and the digital world have never been more interconnected and interdependent. You’ve no doubt seen the evidence – employees moving out of their offices, sensitive data and workloads leaving the friendly confines of the data center, legacy and SaaS applications needing to peacefully coexist, and every “thing” connecting to the Internet of Things. Network security is evolving to meet these challenges, and it’s critical to have the right cybersecurity strategy and partner.
Limitations of Legacy Approaches in a Cloud-Centric World
Legacy approaches to securing the network and cloud applications are broken due to several critical limitations:
Disjointed, complex SaaS security: Current cloud access security broker (CASB) solutions are complex to deploy and maintain, exist separately from the rest of the security infrastructure, and result in a high total cost of ownership (TCO). In addition, they offer subpar security as threats morph and more data and applications reside in a “distributed cloud” that is spread over thousands of SaaS applications, multiple cloud providers and on-premises locations.
Reactive security: Legacy network security solutions still rely on a signature-based approach that requires security analysts to hunt down zero-day attacks in retrospect, rather than placing machine learning (ML) inline for real-time prevention. Meanwhile, attackers are using automation and the computing power of the cloud to constantly morph threats. Over the last decade, the number of new malware variants has increased from thousands per day to millions per day. In addition, hundreds of thousands of new malicious URLs are created daily, and security based on URL databases must evolve.
Lack of holistic identity-based security: The identity of users is no longer confined to on-premises directories. 87% of organizations use or plan to move to a cloud-based directory service to store user identities. Organizations need to configure, maintain and synchronize their network security ecosystem with the multiple identity providers used by an enterprise, which can be time-consuming and resource-intensive. Network security tools don’t apply identity-based security controls consistently, which creates a significant barrier to adopting Zero Trust measures to protect organizations against data breaches. As more people are working from anywhere, they require fast and always-on access to data and applications in the distributed cloud, regardless of location.
Trading performance for security: Users are accessing more data-rich applications hosted in the cloud. Performance of network security devices degrades severely when legacy security services and decryption are enabled. That’s why, too often in the past, organizations have been forced to choose between performance, to deliver a good user experience, and security, to keep data and users safe.
Where Network Security Will Go From Here
Today’s distributed cloud operates at hyperscale – storing vast amounts of data and applications, and using near-infinite nodes to store that data. Traffic, especially web traffic, flowing between users and this distributed cloud is growing tremendously. The latest numbers from Google show that up to 98% of this traffic is being encrypted. In order to offer agility and flexibility, organizations moving toward this distributed cloud model aspire to become “cloud like,” providing on-demand access to resources and applications at hyperscale.
To meet the new challenges, security teams need cloud-centric network security solutions that:
a. See and control all applications, including thousands of SaaS applications that employees access daily – and the many new ones that keep becoming available at an incredible cloud velocity – using a risk-based approach for prioritization that takes into account data protection and compliance.
b. Stop known and unknown threats, including zero-day web attacks, in near real time.
c. Enable access for the right users, irrespective of where user identity data is stored – on-premises, in the cloud or a hybrid of both.
d. Offer comprehensive security, including decryption, without compromising performance, allowing security to keep pace with growing numbers of users, devices and applications.
e. Have integrated, inline and simple security controls that are straightforward to set up and operate.
Palo Alto Networks has a 15-year history of delivering best-in-class security. We’re here to help secure the next steps on the digital journey, wherever they take us. Whether you’re a seasoned traveler or just starting out, we can help our customers find a new approach to network security – one that better matches today’s cloud-centric networks. What’s next for us will be revealed soon. Follow us on LinkedIn to be the first to know about our upcoming events.
You don’t need to be born in the cloud to have a cloud operating model.
While organizations embrace a cloud strategy for many reasons, one that stands out is a desire to get IT operations working more efficiently. Companies want to streamline value chains. They want to do more work and do it faster, with less friction.
Cloud-based operating models, done the right way, enable just that. In fact, for many organizations we work with, it is the adoption of the new ways of working that public cloud demands that creates the greatest benefits to business agility, and not merely the technology platform itself.
With this in mind, one of the top initiatives our customers prioritize is the development of cloud operating practices across their IT organizations and their entire IT ecosystem. Ironically, however, the operations domain is also one of the areas in which organizations struggle the most to generate momentum.
Breaking with the past
The biggest challenge organizations face is the need to shed a troublesome piece of baggage: their legacy operating models. Too many companies try to adopt cloud platforms without changing the way they work. They assume they can follow the same procedures they always have and make simple tweaks.
What they end up with is a collection of bad habits, tribal knowledge, and sets of processes, procedures, and tools that don’t respond well to the demands of a modern ecosystem. And, in turn, they fail to leverage operational learnings across their operations, missing opportunities to deliver agility improvements across the entire organization.
Organizations that are born in the cloud sometimes avoid developing these bad habits. They tailor their operating models to the way business is done in the cloud. They set up their operations to work with other organizational domains, incorporating DevOps, enabling innovation, sustaining applications, and driving strategy and governance.
But companies that have lengthy histories and may not have the luxury of spinning up new models from scratch have work to do to get their operations functioning at such a level. They face steep learning curves, including getting comfortable with new tooling, shared-responsibility models and practices, and job functions.
They have to learn how complex cloud transformations are so they can equip their teams with the resources needed to carry out the necessary work. Rather than relegate the work of getting to a cloud operating model to a side-of-desk task, they need to dedicate a group to adopting the new cloud-everywhere operating model.
Where things stand
Based on our engagements with customers, we’ve evaluated enterprise progress in the eight domains making up the HPE Edge to Cloud Adoption Framework.
Figure 1: The Operations domain in the HPE Edge to Cloud Adoption Framework
Operations is a domain that most customers prioritize. It’s a tangible and significant part of most IT organizations, and one where progress is being made overall. Operations is one of the domains where we see the greatest overall progress in capability across our engagements, with an average maturity of 2.1 on a scale of 1 to 5, where a score of 3 indicates a cloud-ready organization.
However, there are some subdomains within operations where organizations are experiencing greater difficulty in achieving traction (see Figure 2):
Service operations
Platform operations
Pipeline operations
Figure 2: Organizational maturity in the Operations domain
Let’s take a look at what’s behind these challenges and what can be done to overcome them.
Service operations: Slow, ineffective incident response
Many of the organizations we engage with have struggled to advance their incident response capability to the point where it can work at the velocity of the cloud. Though incident response is considered a security function, the operational component is an important measure of a company’s overall success in the cloud.
Companies with immature service operations often fall short in their ability to be proactive and prevent events from happening in the first place. And if a security event takes place, they will often exhibit slow, ineffective incident responses. The time it takes to detect, address, and resolve an issue often places them outside of their service-level agreement (SLA) targets due to a lack of automation and orchestration.
To be proactive, companies need to set up systems that leverage metadata and characteristics from historical events. They start to learn from those events and build out proactive incident responses. Techniques like comprehensive logging, event curation and correlation, and forensics can help determine root causes and prevent issues.
The flip side is that when an event has taken place, organizations take a long time to respond. Without automation and orchestration, you can’t scale to the volume of elements that need to be remediated during an incident. Automating the response gives you a relatively short window for remediation.
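As a simplified sketch of what that automation can look like (the alert feed, the isolation call, and the ticketing hook below are hypothetical placeholders, not any specific product’s API), the pattern is to enumerate alerts, auto-contain the well-understood high-severity cases, and track everything else for human follow-up.

```python
# Simplified sketch of an automated remediation loop.
# quarantine() and open_ticket() stand in for real EDR and ITSM integrations;
# the point is that enumeration, decision, and response are scripted, not manual.
from dataclasses import dataclass
from typing import List

@dataclass
class Alert:
    host: str
    rule: str
    severity: str          # "low" | "medium" | "high"

# Rules considered safe to auto-remediate without a human in the loop.
AUTO_REMEDIATE = {"malware-beacon", "crypto-miner", "known-bad-hash"}

def quarantine(host: str) -> None:
    # Placeholder for an API call that isolates the host from the network.
    print(f"[action] isolating {host}")

def open_ticket(alert: Alert) -> None:
    # Placeholder for a ticketing integration for cases that need investigation.
    print(f"[ticket] {alert.severity}: {alert.rule} on {alert.host}")

def handle(alerts: List[Alert]) -> None:
    for alert in alerts:
        if alert.severity == "high" and alert.rule in AUTO_REMEDIATE:
            quarantine(alert.host)        # immediate, scripted containment
        open_ticket(alert)                # every alert still gets tracked

handle([
    Alert("web-03", "crypto-miner", "high"),
    Alert("db-01", "unusual-login-time", "medium"),
])
```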
Platform operations: Lack of clearly defined infrastructure patterns to deliver consistent services
Organizations with a low level of cloud operational maturity lack a set of finite and clearly defined standards for developing applications and corresponding infrastructure. Without these patterns, they struggle to master the art of automating the provisioning of infrastructure and supporting the corresponding services that make applications tick.
This ties back to the legacy hangover issue. Organizations that are used to creating applications outside of cloud-native environments by default model their application architectures to support customized apps that may or may not generate high degrees of value. These architectures can’t scale to satisfy the emerging demands of the business.
As a result, operators find themselves having to provide white-glove service and custom work on an ongoing basis for customized applications. If they started from scratch, they’d be ready to scale, focusing on the application archetypes that have a certain amount of critical mass. What starts to happen is that organizations learn how to establish enterprise standards and policy and to leverage cloud-native tools within that newfound operating model. It’s easier to bring others on board because they have that consistency and they generate results.
The new operating model creates its own gravity and starts to attract the attention of application and business owners. Operations teams can provide a valuable service as long as organizations play by a set of standards and rules that support scale. Otherwise, organizations are simply translating old bad habits into the new cloud model.
Pipeline operations: Inconsistent management of container images
Containers have changed the way organizations develop and deploy applications. Their lightweight structures and portable natures make them ideal packaging tools for organizations looking to add new features quickly and cost effectively. Still, a container operation is only as sound as the images that make up the containers themselves.
Organizations that are early in their cloud transformations tend to struggle with image integrity. They haven’t set up systems for hardening container images in a timely and repeatable fashion. They haven’t set up an automated testing and compliance certification process. They haven’t created secure container registries that identify images for use in the continuous integration and continuous delivery (CI/CD) pipeline, retire out-of-state images, and manage artifacts in various states of transit.
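One hedged illustration of the registry gating described above: a CI/CD step that refuses to deploy any image whose digest is not on a current allow-list, and that retires entries whose approval has expired. The allow-list format and the way it would plug into a pipeline are invented for this example, not taken from a specific product.

```python
# Sketch of a registry gate in a CI/CD pipeline: only images whose digests appear
# on an allow-list of hardened, scanned images may be deployed.
# The allow-list schema and the deploy hook are illustrative placeholders.
import json
import sys
from datetime import datetime, timezone

def load_allowlist(path: str) -> dict:
    """Allow-list maps image digest -> expiry timestamp (ISO 8601 with UTC offset)."""
    with open(path) as f:
        return json.load(f)

def image_approved(digest: str, allowlist: dict) -> bool:
    expiry = allowlist.get(digest)
    if expiry is None:
        return False                                   # never hardened or scanned
    # Expiry strings are expected to carry a timezone offset, e.g. "2022-01-01T00:00:00+00:00".
    return datetime.fromisoformat(expiry) > datetime.now(timezone.utc)

if __name__ == "__main__":
    digest = sys.argv[1]                               # e.g. "sha256:abc123..."
    if not image_approved(digest, load_allowlist("approved_images.json")):
        print(f"image {digest} is not on the current allow-list; blocking deploy")
        sys.exit(1)
    print("image approved; continuing pipeline")
```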
Pockets of skill
Some organizations see their cloud operations mature in a scattershot manner. In other words, it’s common to see cloud operations showing up as pockets of individual skills concentrated among certain people and certain groups. This can be helpful. Companies that promote a learning agenda can create first movers that act as a force multiplier. They can become change agents, with an eye toward centralization and supporting federation when the enterprise is ready.
But it isn’t always a good thing. If IT operations don’t prioritize the optimization of a comprehensive cloud operating model, some application owners will head to the cloud by themselves. The way these “experts” support their own applications won’t support all of the other application archetypes and the larger ecosystem out there. Costs spike, and the C-suite gets frustrated with the lack of progress. This is a big reason why cloud initiatives fail.
A deliberate, well-planned approach toward operational maturity is the most effective way to succeed in the cloud. Across Hewlett Packard Enterprise’s base of cloud transformation engagements, leaders display this kind of discipline in these four areas:
Service operations: The good news is that organizations have grown more sophisticated in their ability to log a wide variety of activities. The more advanced organizations are doing a better job of curating and identifying the right pieces of metadata to turn into insights. They have meta-models in place but still need to put intelligence around them to put the data to work.
Architecture and governance: Formal governing bodies have been established, but many organizations still are working to streamline decision-making processes so they can reduce lead time.
Availability management: Leaders are developing their ability to define business availability requirements in SLAs. They have well-thought-out plans to improve metrics such as RTO (recovery time objective) and RPO (recovery point objective), especially for tier-one and tier-two applications. Where organizations still need help is in meeting those requirements. Many establish partnerships with vendors that can take on management functions to help achieve these objectives.
Platform operations: Leaders have created a mechanism for metadata-based visibility across their ecosystems. This is important when they integrate with CI/CD pipelines; they know which components to pull from their repositories and where to put them. Infrastructure patterns and standards are important here too. A major bank we worked with was looking to increase adoption of its own internal private cloud. By using well-defined infrastructure patterns and corresponding storage standards, it was able to reduce infrastructure provisioning time, which in turn improved the reputation of and buy-in for its initiative.
Getting your house in order
Moving to the cloud presents significant opportunities for companies to transform their operations—to make them more efficient and more focused on delivering business value. But to mature to the point where they’re accomplishing meaningful transformation, organizations need to get their own houses in order.
They need to commit to a new operating model, assess their strengths and weaknesses, and forge a plan to set their operations up for success. This operating model will yield benefits not only for the portion of operations that move to public cloud but across the entire enterprise, edge to cloud.
Extended detection and response (XDR) is a security solution that delivers end-to-end visibility, detection, investigation and response across multiple security layers. Core components of an XDR architecture include federation of security signals, higher-level behavioral and cross-correlated analytics, and closed-loop and highly automated responses. This creates a truly unified experience supported by a solutions architecture that equals more than the sum of its parts. Security teams are able to get more value from an XDR that meets the following criteria:
Supports open standards
Delivers advanced analytics
Provides a simpler, unified analyst experience
Streamlines operations
Enhances response through automation
An XDR solution can achieve improved visibility across an organization’s networks, endpoints, SIEM and more via open-system integration. Open source standards can help move the industry away from expensive and wasteful ‘rip and replace’ programs. Instead, an open approach helps organizations get more out of their existing investments.
An XDR solution can also offer more automation and AI enrichments at all levels of detection, analytics, investigation and response. Automation throughout the threat life cycle can dramatically reduce mean time to detect (MTTD) and mean time to recovery (MTTR). Not only does reducing these metrics have a direct relationship to mitigating the cost of a data breach, it also frees up time for analysts to do more human-led activities like investigation. An XDR solution also bolsters investigation with a unified view of threat activity, a single search and investigation experience and consistent enrichment with threat intelligence and domain expertise.
Is XDR Just Another Acronym or a Fundamental Market Shift?
For many decades now, emerging threats have put organizations at risk. As the IT landscape evolved and threat actors found new ways to attack, security teams needed to find new ways to detect and respond to threats.
Today, this evolving theme of complexity continues. And the list of point solutions being deployed to overcome these burgeoning threats goes on and on — from SIEM, to cloud workload protection, to endpoint detection and response (EDR), to network detection and response (NDR) and more. While these investments each do their part to solve immediate and dire issues, in combination they’ve created a bigger challenge: how to use and get value from them together.
This is why we call them point tools; they were made to address specific challenges. Now that security teams face a myriad of challenges, it’s never been more critical to have them work in concert. Without doing so, limited security operations resources will continue to be spread thin, total cost of ownership will continue to increase and the process of pinpointing and responding to threats will continue to be time-consuming and inefficient.
Extended detection and response (XDR) is the beginning of a shift towards uniting multiple siloed solutions and reducing the complexity that impedes fast detection and response. As stated in the blog Gartner Top 9 Security and Risk Trends for 2020: “The primary goals of an XDR solution are to increase detection accuracy and improve security operations efficiency and productivity.” Gartner identified XDR as the number one security and risk trend at the end of 2020, suggesting now is the moment when all this complexity — too many tools, too many alerts, too little time — is coming to a head, with XDR as a response.
What Are The Different Approaches to XDR?
Industry analysts have outlined two different approaches to extended detection and response: native and hybrid. Native XDR is a suite that integrates with a vendor’s other solutions to collect telemetry and execute response actions. Hybrid or open XDR is a platform that integrates with various vendors and third parties to collect telemetry and execute response actions.
Vendors have been taking different approaches to what is under the hood of XDR, so to speak. For instance, does XDR = EDR plus additional capabilities? Or is it EDR plus NDR, or some other combination? It might be too soon to tell where the market will land as the technology is nascent, but the delineation between native and hybrid XDR is one thing the industry seems to agree on.
How Does XDR ‘Extend’ SIEM?
For some readers, SIEM may have immediately come to mind while perusing the qualities of XDR. There are some key differences between the two. In XDR, correlation and alerting tend to be fully automated, employing use cases that are provided and tuned by the vendor. Incident response also tends to focus on highly repeatable actions that can be automated, such as sending a suspicious file to a sandbox for detonation, enriching an alert with threat intelligence or blocking an email sender tied to phishing emails. This approach differs from SOAR, which can be broadly customized with custom playbooks and used to unite people in addition to technology.
XDR in many ways can extend the detection and response capabilities that are today enabled by SIEM. In fact, SIEM can play an integral role in supporting an XDR architecture by gathering, organizing and assessing massive amounts of data for SOC analysts. In this capacity, XDR builds on the data and events flowing through your SIEM solution. By bringing together the capabilities of multiple point solutions, XDR can take SIEM analytics one step further and amplify the outcome. As an example, when you receive analytics from a SIEM, endpoints and networks separately, it can be like having three different witnesses to an attack. XDR helps you immediately bring all three witnesses together and create one complete story, helping an analyst see more clearly across multiple sources.
XDR is not just a place where you consolidate security signals but a place where you can run more advanced, correlated analytics. As The Forrester Wave for Security Analytics Platforms, Q4 2020 asserts, security analytics and endpoint detection and response have been on a “collision course” for some time. Bringing together these capabilities with XDR can provide “highly enriched telemetry, speedy investigations, and automated response actions.” Behavioral analytics or machine learning analytics can also enrich content, increase accuracy and lead to automated response actions.
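As an illustrative sketch of that “three witnesses” idea (the alert schema and the sources below are invented for the example, not a vendor’s data model), cross-source correlation can be as simple as grouping alerts that share an entity within a time window, so an analyst sees one incident instead of three disconnected signals.

```python
# Illustrative sketch of cross-source correlation: endpoint, network, and SIEM alerts
# that share an entity within a time window are folded into one incident.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"source": "endpoint", "entity": "host-17", "time": datetime(2021, 11, 2, 9, 1),
     "detail": "suspicious PowerShell child process"},
    {"source": "network",  "entity": "host-17", "time": datetime(2021, 11, 2, 9, 3),
     "detail": "beaconing to a rare external domain"},
    {"source": "siem",     "entity": "host-17", "time": datetime(2021, 11, 2, 9, 5),
     "detail": "impossible-travel login for the host's primary user"},
    {"source": "endpoint", "entity": "host-42", "time": datetime(2021, 11, 2, 14, 0),
     "detail": "unsigned driver load"},
]

WINDOW = timedelta(minutes=30)

def correlate(alerts):
    """Group alerts by entity, then merge those that fall within one time window."""
    by_entity = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_entity[a["entity"]].append(a)

    incidents = []
    for entity, items in by_entity.items():
        current = [items[0]]
        for a in items[1:]:
            if a["time"] - current[-1]["time"] <= WINDOW:
                current.append(a)
            else:
                incidents.append((entity, current))
                current = [a]
        incidents.append((entity, current))
    return incidents

for entity, items in correlate(alerts):
    sources = sorted({a["source"] for a in items})
    print(f"incident on {entity}: {len(items)} alerts from {sources}")
```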
How Does XDR Compare to MDR?
Even though XDR vendors are striving to untangle the complexity problem, it will take time to make inroads. Compounding this challenge is the skills shortage. The dire need for talent to run security analysis and investigations leads many organizations to utilize a partner for managed detection and response (MDR) services.
MDR is an approach to managing sophisticated detection and response tools — whether via endpoint, network or SIEM technology. Some MDR providers include proactive threat hunting to reveal undetected threats faster. Research from EMA conducted in 2020 found that 94% of respondents not already using an MDR service were currently evaluating or had plans to evaluate MDR services over the next 18 months.
MDR services can provide critical skills and advanced threat intelligence, plus rapid response with 24/7 SOC coverage. As Jon Oltsik, senior principal analyst at ESG, stated, “XDR success still seems to be based on human expertise,” making MDR an invaluable companion to XDR for customers who could use a helping hand.
How Does XDR Support Customers With Zero Trust Aspirations?
If you set up a game of security buzzword bingo, there’s no doubt you’d come across both zero trust and XDR. There’s industry chatter around these powerful security frameworks with good reason — one concept can help enforce the other.
Zero trust is a framework that starts with an assumption of compromise, then continuously validates the conditions for connection between users, data and resources to determine authorization and need. XDR provides an essential function to zero trust by continuously monitoring for incidents and responding in the most targeted way possible to avoid disruption to the business.
How so? XDR enables analysts to determine if their organization is under attack and figure out as quickly as possible what’s happening, what will happen next and how to prevent that from unfolding. Instead of placing blind trust in a system and saying the controls are enough, with XDR you constantly monitor the places where things could go wrong.
In this way, XDR is ensuring that zero trust security controls are working. The ‘never trust, always verify’ zero trust methodology is supported by verification. When it comes to detecting and responding to threats, as well as improving protection policies based on insights, a zero trust framework and an XDR solution can work hand in hand. And it’s exactly why identity tools, such as identity and access management (IAM), will play a critical role tying into XDR solution architectures to ensure the appropriate user-centric context is being employed for threat detection and response.
What Should Customers Look for in an XDR Solution?
Your XDR should be an open, extensible solution that enables your organization to get more from its existing investments. Look for integrations with third parties that will save your organization from a costly and unrealistic rip-and-replace approach. Cloud-native solutions are also critical for extending cloud visibility.
XDR goes far beyond being an improved EDR solution; it should instead be your end game for threat detection and response activities — as part of a unified platform. Reaching that level of maturity is a goal that takes time, with the basics in place and a clear strategy as prerequisites for how to get started with XDR. With powerful automation, artificial intelligence and expert-built detection and prescribed response actions available through a unified user experience, security teams can counter attacks across silos — mitigating risk and resolving threats fast.
Ultimately, XDR makes it easier for the people managing and responding to threats on a daily basis to do the work. Open standards mean we can better serve customers and the community, preventing time and dollars lost to ripping and replacing technology. Advanced analytics, constantly updated threat intelligence and a streamlined user workflow empower analysts to be more efficient and spend their valuable time on investigations, not gathering the data.
People and culture are the keys to the SOC. By uniting threat detection data and tools and strengthening ability and context for fast incident response, XDR enables the collaboration and openness that helps teams thrive.