Happy Holidays and Happy New Year from Elite Paradigm LLC

During this season, we find ourselves reflecting on the past year and the customers who’ve helped shape our success. In this spirit, the team at ELITE PARADIGM wishes you and yours a happy holiday season!

Get connected with Elite Paradigm LLC for your 2024 IT and cybersecurity needs.

Enable Unified Analytics

HPE GreenLake edge-to-cloud platform rolls out industry’s first cloud-native unified analytics and data lakehouse cloud services optimized for hybrid environments

IN THIS ARTICLE

  • First cloud-native solution to bring Kubernetes-based Apache Spark analytics and the simplicity of unified data lakehouses using Delta Lake on-premises 
  • Only data fabric to combine S3-native object store, files, streams and databases in one scalable data platform
  • Cloud-native unified analytics platform enables customers to modernize legacy data lakes and warehouses without complex data migration, application rewrites or lock-in
  • 37 solution partners support HPE Ezmeral, with 15 joining the HPE Ezmeral Partner Program in the past 60 days

Built on HPE Ezmeral software, the new services give analytics and data science teams frictionless access to data from edge to cloud and a unified platform for accelerated Apache Spark and SQL.

In the Age of Insight, data has become the heart of every digital transformation initiative in every industry, and data analytics has become critical to building successful enterprises. Simply put, data drives competitive advantage. However, most organizations still face significant challenges in successfully executing data-first modernization initiatives. Until now, organizations have been stuck with legacy analytics platforms that were either built for a pre-cloud era and lack cloud-native capabilities, or require complex migrations to public clouds, risking vendor lock-in and high costs and forcing the adoption of new processes. This situation has left the big data and analytics software market [1], which IDC forecasts will reach $110 billion by 2023, ripe for disruption.

Today, I am excited to announce two disruptive HPE GreenLake cloud services that will enable customers to overcome these trade-offs. We optimized for four big value propositions:

  1. A seamless experience for a variety of analytics, SQL, and data science users
  2. Top-notch performance
  3. Choice and an open ecosystem by leveraging pure open source in a hybrid environment
  4. An intense focus on reducing TCO by up to 35% for many of the workloads we are targeting

Built from the ground up to be open and cloud-native, our new HPE GreenLake for analytics cloud services will help enterprises unify, modernize, and analyze all of their data, from edge-to-cloud, in any and every place it’s stored. Now analytics and data science teams can leverage the industry’s first cloud-native solution on-premises, scale up Apache Spark lakehouses, and speed up AI and ML workflows. Today’s news is part of a significant set of new cloud services for the HPE GreenLake edge-to-cloud platform, announced today in a virtual launch event from HPE. The new HPE GreenLake for analytics cloud services include the following:

HPE Ezmeral Unified Analytics

HPE now offers an alternative for customers previously limited to solutions in a hyperscale environment by delivering modern analytics on-premises, enabling up to 35% [2] greater cost efficiency than the public cloud for the data-intensive, long-running jobs typical of mission-critical environments. Available on the HPE GreenLake edge-to-cloud platform, HPE Ezmeral Unified Analytics is the industry’s first unified, modern, hybrid analytics and data lakehouse platform.

  • HPE Ezmeral Unified Analytics is the industry’s first unified, modern, hybrid analytics and data lakehouse platform

We believe it is the first solution architected to optimize for and leverage three key advancements simultaneously, something no one else in the industry has done:

  1. Optimizing for a Kubernetes-based Spark environment for on-premises deployment, providing the cloud-native elasticity and agility customers want (a minimal configuration sketch follows this list)
  2. Handling the diversity of data types, from files, tables, streams, and objects, in one consistent platform to avoid silos and make data engineering easier
  3. Embracing the edge by enabling a data platform environment that can span from edge to hybrid cloud
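For context, here is a minimal sketch of what a Kubernetes-backed Spark session can look like in practice. It is illustrative only: the API server URL, container image, namespace, and sizing below are placeholders rather than HPE Ezmeral specifics.

```python
from pyspark.sql import SparkSession

# Minimal sketch: a Spark session whose executors run as Kubernetes pods.
# The master URL, container image, and namespace are placeholders for whatever
# an on-premises Kubernetes cluster actually exposes.
spark = (
    SparkSession.builder
    .appName("lakehouse-etl-sketch")
    .master("k8s://https://k8s-api.example.internal:6443")            # placeholder API server
    .config("spark.kubernetes.container.image", "example/spark:3.3")  # placeholder image
    .config("spark.kubernetes.namespace", "analytics")                # placeholder namespace
    .config("spark.executor.instances", "4")                          # scale per job as needed
    .getOrCreate()
)

# A trivial job to confirm executors come up on the cluster.
print(spark.range(1_000_000).selectExpr("sum(id) AS total").collect())
spark.stop()
```

The point of the sketch is the elasticity: executor pods are requested per job and released when the session stops, which is the cloud-native behavior being brought on-premises.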

Instead of requiring all of your data to live in a public cloud, HPE Ezmeral Unified Analytics is optimized for on-premises and hybrid deployments, and uses open source software to ensure as-needed data portability. We designed our solution with the flexibility and scale to accommodate enterprises’ large data sets, or lakehouses, so customers have the elasticity they need for advanced analytics, everywhere.

Just a few key advantages of HPE Ezmeral Unified Analytics include:

  • Dramatic performance acceleration: Together, the NVIDIA RAPIDS Accelerator for Apache Spark and HPE Ezmeral can accelerate Spark data prep, model training, and visualization by up to 29x [3], allowing data scientists and engineers to build, develop, and deploy analytics solutions into production at scale, faster (a configuration sketch follows this list).
  • Next-generation architecture: We have built on Kubernetes and added value through an orchestration plane to make it easy to get the scale-out elasticity customers want. Our multi-tenant Kubernetes environment supports a compute-storage separation cloud model, providing the combined performance and elasticity required for advanced analytics, while enabling users to create unified real-time and batch analytics lakehouses with Delta Lake integration.
  • Optimized for data analytics: Enterprises can create a unified data repository for use by data scientists, developers, and analysts, including usage and sharing controls, creating the foundation for a silo-free digital transformation that scales with the business as it grows and reaches new data sources. Support for NVIDIA Multi-Instance GPU technology enables enterprises to support a variety of workload requirements and maximize efficiency with up to seven instances per GPU.
  • Enhanced collaboration: Integrated workflows from analytics to ML/AI span hybrid clouds and edge locations, including native open-source integrations with Airflow, MLflow, and Kubeflow to help data science, data engineering, and data analytics teams collaborate and deploy models faster.
  • Choice and no vendor lock-in: On-premises Apache Spark workloads offer the freedom to choose the deployment environments, tools, and partners needed to innovate faster.
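To make the accelerated-Spark-plus-Delta-Lake combination concrete, here is a hedged configuration sketch. The plugin and extension class names are the standard open-source ones for the NVIDIA RAPIDS Accelerator and Delta Lake; it assumes the corresponding jars are on the Spark classpath, and the data paths are placeholders rather than anything specific to HPE Ezmeral.

```python
from pyspark.sql import SparkSession

# Sketch: enable the open-source RAPIDS Accelerator for Apache Spark (GPU SQL plugin)
# and Delta Lake in one session. Paths and sizing are illustrative placeholders.
spark = (
    SparkSession.builder
    .appName("gpu-lakehouse-sketch")
    # NVIDIA RAPIDS Accelerator: offloads supported SQL operators to GPUs.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Delta Lake: ACID tables over file or object storage for the lakehouse layer.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Batch and streaming jobs can then share the same Delta table (path is a placeholder).
events = spark.read.json("/data/raw/events")
events.write.format("delta").mode("append").save("/data/lakehouse/events")
```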

“Today’s news provides the market with more choice in deploying their modern analytics initiatives with a hybrid-native solution, enabling faster access to data, edge to cloud,” said Carl Olofson, Research Vice President, IDC. “HPE Ezmeral is advancing the data analytics market with continued innovations that fill a gap in the market for an on-premises unified analytics platform, helping enterprises unlock insights to outperform the competition.”

HPE Ezmeral Data Fabric Object Store

Our second disruptive new solution is the HPE Ezmeral Data Fabric Object Store: the industry’s first data fabric to combine S3-native object store, files, streams, and databases in one scalable data platform that spans edge to cloud. Available on bare metal and in Kubernetes-native deployments, HPE Ezmeral Data Fabric Object Store provides a global view of an enterprise’s dispersed data assets and unified access to all data within a cloud-native model, securely accessible to the most demanding data engineering, data analytics, and data science applications. Designed with a native S3 API and optimized for advanced analytics, HPE Ezmeral Data Fabric Object Store enables customers to orchestrate both apps and data in a single control plane, while delivering the best price for outstanding performance.
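Because the store exposes a native S3 API, existing S3 tooling should be able to point at it directly. The sketch below uses boto3 with a placeholder endpoint, credentials, and bucket, purely to illustrate the idea of aiming standard S3 clients at an on-premises object store rather than a public cloud region.

```python
import boto3

# Sketch: any S3-compatible client can target an S3-native, on-premises object store
# by overriding the endpoint. Endpoint, credentials, and bucket names are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal:9000",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.put_object(
    Bucket="analytics-landing",
    Key="raw/2021/10/events.json",
    Body=b'{"event": "demo"}',
)

for obj in s3.list_objects_v2(Bucket="analytics-landing", Prefix="raw/")["Contents"]:
    print(obj["Key"], obj["Size"])
```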

We are proud of the innovation that has resulted in what we believe is an industry first: a consistent data platform that can handle a diversity of data types, is optimized for analytics, and can span from edge to cloud.

Several key features include:

  • Optimized performance for analytics: Designed for scalable object storage, HPE Ezmeral Object Store is the industry’s only solution that supports file, stream, database, and now object data types within a common persistent store, optimized for best performance across edge-to-cloud analytics workloads.
  • Globally synchronized edge-to-cloud data: Clusters and data are orchestrated together to support dispersed edge operations, and a single Global Namespace provides simplified access to edge-to-cloud topologies from any application or interface. While data can be mirrored, snapshotted, and replicated, advanced security and policies ensure the right people and applications have access to the right data when they need it.
  • Continuous scaling: Enterprises can grow as needed by adding nodes and configuring policies for data persistence while the data store handles the rest. 
  • Performance and cost balance: Adapting to small or large objects, auto-tiering policies automatically move data from high-performance storage to low-cost storage.  

Expanding the HPE Ezmeral Partner Ecosystem

We first introduced the HPE Ezmeral Partner Program in March 2021, enabling the rapid creation of streamlined, customized analytics engines and environments based on full stack solutions validated by trusted ISV partners. With 76% of enterprises expecting to be using on-premises, third-party-managed private cloud infrastructure for data and analytics workloads within the next year [4], we’re excited to announce six new ISV partners today, including NVIDIA NGC, Pepperdata, Confluent, Weka, Ahana, and gopaddle.

“NVIDIA’s contributions to Apache Spark enable enterprises to process data orders of magnitude faster while significantly lowering infrastructure costs,” said Manuvir Das, head of Enterprise Computing, NVIDIA. “Integrating the NVIDIA RAPIDS Accelerator for Apache Spark and NVIDIA Triton Inference Server into the HPE Ezmeral Unified Analytics Platform streamlines the development and deployment of high-performance analytics, helping customers gain immediate results at lower costs.” 

“Today, companies are using Spark to build their high-performance data applications, accelerating tens to thousands of terabytes of data transitioning from data lakes to AI data modeling,” said Joel Stewart, Vice President Customer Success, Pepperdata. “Pepperdata on HPE Ezmeral Runtime Enterprise can help reduce operating costs and provide deep insights into their Spark applications to improve performance and reliability.”

Since the HPE Ezmeral Partner Program launched, we’ve added 37 solution partners [5] to support our customers’ core use cases and workloads, including big data and AI/ML use cases. The Partner Program is also adding support today for open-source projects such as Apache Spark, offering enterprises the ability to transition workloads to a modern, cloud-native architecture.

  • HPE Ezmeral has dozens of new customers, with competitive wins over both traditional big data players and public cloud vendors

HPE GreenLake edge-to-cloud platform and HPE Ezmeral are transforming enterprises – and HPE

As an important component of HPE GreenLake cloud services, the HPE Ezmeral software portfolio helps enterprises such as GM Financial and Bidtellect advance modern data analytics initiatives. Since it was first introduced in June 2020, HPE Ezmeral has secured dozens of new customers, with significant competitive wins over both traditional big data players and public cloud vendors.

Since vast volumes of applications and data will remain on-premises and at the edge as enterprises continue their digital transformations, our elastic, unified analytics solutions will help customers extract maximum value from their data, wherever it lives and moves, from edge to cloud. We look forward to working with you to make the most of your data as the Age of Insight continues to reshape enterprises around the world.

Availability and Additional Resources

HPE Ezmeral Unified Analytics and HPE Ezmeral Data Fabric Object Store will be available as HPE GreenLake cloud services beginning November 2021 and Q1 2022, respectively.

Learn more about today’s news from the experts by joining our deep dive sessions.

HPE and the HPE logo are trademarks or registered trademarks of HPE and/or its affiliates in the U.S. and other countries.  Third-party trademarks mentioned are the property of their respective owners. 

[1] IDC, Worldwide Big Data and Analytics Software Forecast, 2021–2025, July 2021

[2] Based on internal HPE competitive analysis, September 2021

[3] Technical Paper: HPE Ezmeral for Apache Spark with NVIDIA GPU, published September 2021

[4] 451 Research, Voice of the Enterprise: Data & Analytics, Data Platforms 2021

[5] Internal HPE documentation on list of partners maintained by the group

Creating the right cloud experience

We’ve seen a rapid acceleration of digital transformation over the last 12 months, and the pandemic has acted as a catalyst to accelerate many transformation projects. The most common one was the rapid shift to support a remote or hybrid workforce to deliver services long-term. But we have also seen organisations thrive as they adapted their business model to enable them to operate and meet their customer needs throughout the various stages of restrictions.

In the early stages of these transformations, immediate decisions had to be made and services quickly provided so the organisation could operate; however, these decisions need to be re-evaluated as the economic environment changes.

How do we do it?

‘Cloud’ has been many organisations’ default position for a few years now, and while it’s definitely part of that digital transformation journey, organisations need to understand their true requirements or it can lead to inefficiencies, unnecessary expense and can also degrade user experiences. There is no ‘one-size-fits-all’ approach, and it can be difficult to know where to start.

Before we go much further, it’s helpful to create a common baseline for context. We may all be familiar with the term ‘cloud’, but what do we actually mean when we refer to it? If we look at the different types of cloud and what they entail, it helps gain a better understanding of what might suit each organisation’s needs:

  • Public cloud

Computing services offered by a third party by which scalable and elastic IT-enabled capabilities are delivered as-a-Service using internet technologies. Use cases include off-site data storage, online office applications and software development.

  • Private cloud

Infrastructure, software and services combined to deliver the cloud experience within a private network. In this scenario, the organisation has more control and increased security. Located in a data centre operated by, or on behalf of, your organisation.

  • Hybrid cloud

Combines private cloud with one or more public cloud services to work together. Offers agility and flexibility, supporting a remote workforce.

  • Multi cloud

Uses multiple cloud services from different providers to ensure the best service for each workload. Can be cost effective, easily scalable and agile.

We must also consider that all of these options reside within a data centre, local or remote, yet in the majority of cases we interact with our customers, employees or citizens out in the community. That is why organisations today need to consider the endpoint as part of the overall mix of service delivery.

The Edge – what is it?

As the expectations and requirements of customers evolve, we need to be able to facilitate fast decision-making, so we can exceed these expectations. Recent improvements in connectivity outside the data centre now mean that we can distribute compute and data to deliver more effective outcomes.

Instead of operating from large, centralised data centres, Edge computing operates from a more distributed model, with small centres of data springing up closer to the end user to support immediate low latency insights and decision-making, therefore enhancing the customer experience. Edge will definitely play a role in many organisations’ services mix. The question for every organisation will be how and where?

It is for these reasons that our view of Hybrid Cloud needs to adapt to factor in the importance of the Edge requirement. At HPE we believe that Hybrid cloud is a way to architect and operate IT that takes advantage of cloud capabilities, cost, performance and agility available on-premises, on public and private cloud, and at edge locations. Hybrid cloud is not simply using public cloud and private cloud for your workloads. It is cloud everywhere.

With this expanded thinking, and keeping in mind that 70% of workloads are currently not suitable for public cloud, it is crucial to understand your organisation’s requirements before taking your next step. Areas to consider include:

  • Who are your customers? (Customers, citizens, workforce.)
  • Where are your users? Your organisation doesn’t stop at your cloud edge – how do you interact with them?
  • What are the workloads and applications used? It’s important that they sit where they are best placed. For example, if a workload requires fast scalability for peaks in demand, the public cloud providers excel there; whereas services or workloads that need to be closer to the user or the process are better served by private cloud, which delivers low latency from your private data centre.
  • What skills do you have today, and what skills will you need in the future?
  • What organisational or cultural changes will this transformation require, to be successful?

Getting the right mix to support your organisation ensures you have the best foundation: an agile, scalable environment that is cost-effective, efficient and secure, offering a breadth of services to support the skills of your staff and deliver on the requirements of your customers.

Taking action

We have seen organisations jump to cloud without proper consideration and understanding of their requirements, or of the upstream and downstream impacts of these changes on the organisation and their customers. It is therefore very important to involve a partner who can help you understand what the correct mix looks like for your circumstances, and who can help drive a successful cloud strategy by providing the insights needed to achieve the most efficient, cost-effective and beneficial cloud environment, specific to your requirements. HPE has a vast breadth of knowledge and experience in advising organisations on creating the right mix of cloud, appropriate for their business objectives.

We understand that cloud is a continuous journey, rather than a destination. Contact davin@hpe.com for more information, or if you’re interested in getting assistance with your cloud journey.

In our next blog we will look deeper into why Edge is so important to organisations today.

Evolve your operating model to succeed at digital transformation

You don’t need to be born in the cloud to have a cloud operating model.

While organizations embrace a cloud strategy for many reasons, one that stands out is a desire to get IT operations working more efficiently. Companies want to streamline value chains. They want to do more work and do it faster, with less friction.

Cloud-based operating models, done the right way, enable just that. In fact, for many organizations we work with, it is the adoption of the new ways of working that public cloud demands that creates the greatest benefits to business agility, and not merely the technology platform itself.

With this in mind, one of the top initiatives our customers prioritize is the development of cloud operating practices across their IT organizations and their entire IT ecosystem. Ironically, however, the operations domain is also one of the areas in which organizations struggle the most to generate momentum.

Breaking with the past

The biggest challenge organizations face is the need to shed a troublesome piece of baggage: their legacy operating models. Too many companies try to adopt cloud platforms without changing the way they work. They assume they can follow the same procedures they always have and make simple tweaks.

What they end up with is a collection of bad habits, tribal knowledge, and sets of processes, procedures, and tools that don’t respond well to the demands of a modern ecosystem. And, in turn, they fail to leverage operational learnings across their operations, missing opportunities to deliver agility improvements across the entire organization.

Organizations that are born in the cloud sometimes avoid developing these bad habits. They tailor their operating models to the way business is done in the cloud. They align their operations with other organizational domains, incorporating DevOps, enabling innovation, sustaining applications, and driving strategy and governance.

But companies that have lengthy histories and may not have the luxury of spinning up new models from scratch have work to do to get their operations functioning at such a level. They face steep learning curves, including getting comfortable with new tooling, shared-responsibility models and practices, and job functions.

Please read: New tools boost automation, self-optimization of IT operational services

They have to learn how complex cloud transformations are so they can equip their teams with the resources needed to carry out the necessary work. Rather than relegate the work of getting to a cloud operating model to a side-of-desk task, they need to dedicate a group to adopting the new cloud-everywhere operating model.

If IT operations don’t prioritize the optimization of a comprehensive cloud operating model, some application owners will head to the cloud by themselves.

Where things stand

Based on our engagements with customers, we’ve evaluated enterprise progress in the eight domains making up the HPE Edge to Cloud Adoption Framework.

Figure 1: The Operations domain in the HPE Edge to Cloud Adoption Framework

Operations is a domain that most customers prioritize. It’s a tangible and significant part of most IT organizations, and one where progress is being made overall. Operations is one of the domains where we see the greatest overall progress in capability across our engagements, with an average maturity of 2.1 on a scale of 1 to 5, where a score of 3 indicates a cloud-ready organization.

However, there are some subdomains within operations where organizations are experiencing greater difficulty in achieving traction (see Figure 2):

  • Service operations
  • Platform operations
  • Pipeline operations

Figure 2: Organizational maturity in the Operations domain

Let’s take a look at what’s behind these challenges and what can be done to overcome them.

Service operations: Slow, ineffective incident response

Many of the organizations we engage with have struggled to progress incident response capability to the point where it can work at the velocity of cloud. Though incident response is considered a security function, the operational component is an important measure of a company’s overall success in the cloud.

Companies with immature service operations often fall short in their ability to be proactive and prevent events from happening in the first place. And if a security event takes place, they will often exhibit slow, ineffective incident responses. The time it takes to detect, address, and resolve an issue often places them outside of their service-level agreement (SLA) targets due to a lack of automation and orchestration.

Please read: The fundamentals of security incident response—during a pandemic and beyond

To be proactive, companies need to set up systems that leverage metadata and characteristics from historical events. They start to learn from those events and build out proactive incident responses. Techniques like comprehensive logging, event curation and correlation, and forensics can determine root causes and prevent issues.

The flip side is that when an event has taken place, organizations take a long time to respond. Without automation and orchestration, you can’t scale to the volume of elements that need to be remediated during an incident. Automating the response lets you remediate within a relatively short window.
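As an illustration of what automation and orchestration can mean at the service-operations level, here is a simplified sketch: it enriches a new alert against indicators seen in historical events and triggers an isolation action when a known-bad pattern recurs, recording whether the response landed inside the SLA window. The alert schema, the indicator list, and the isolate_host helper are hypothetical, not a specific product API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    host: str
    indicator: str          # e.g. a file hash or a domain
    observed_at: datetime

# Hypothetical historical store: indicators previously confirmed as malicious.
KNOWN_BAD_INDICATORS = {"44d88612fea8a8f36de82e1278abb02f"}  # EICAR test-file MD5

SLA_RESPONSE_WINDOW = timedelta(minutes=15)  # illustrative SLA target

def isolate_host(host: str) -> None:
    # Placeholder for an EDR/orchestration API call; real tooling varies.
    print(f"[action] isolating {host}")

def handle_alert(alert: Alert) -> None:
    """Automated first response: enrich against history, act if it matches."""
    if alert.indicator in KNOWN_BAD_INDICATORS:
        isolate_host(alert.host)
        elapsed = datetime.utcnow() - alert.observed_at
        print(f"[metric] responded in {elapsed}, within SLA: {elapsed <= SLA_RESPONSE_WINDOW}")
    else:
        print(f"[triage] {alert.indicator} not previously seen; queue for analyst review")

handle_alert(Alert("web-01", "44d88612fea8a8f36de82e1278abb02f", datetime.utcnow()))
```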

Platform operations: Lack of clearly defined infrastructure patterns to deliver consistent services

Organizations with a low level of cloud operational maturity lack a set of finite and clearly defined standards for developing applications and corresponding infrastructure. Without these patterns, they struggle to master the art of automating the provisioning of infrastructure and supporting the corresponding services that make applications tick.
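To make “finite and clearly defined standards” concrete, the sketch below models a small catalog of approved infrastructure patterns and validates a provisioning request against it. The pattern names and sizing values are invented for illustration; in practice the catalog would live in whatever infrastructure-as-code tooling the organization already uses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfraPattern:
    name: str
    cpu_cores: int
    memory_gb: int
    storage_class: str
    sla_tier: str

# Hypothetical catalog: the finite set of patterns operators agree to automate and support.
PATTERN_CATALOG = {
    "stateless-web":   InfraPattern("stateless-web", 2, 4, "standard", "tier-2"),
    "container-batch": InfraPattern("container-batch", 8, 32, "standard", "tier-3"),
    "low-latency-db":  InfraPattern("low-latency-db", 16, 64, "nvme", "tier-1"),
}

def validate_request(pattern_name: str) -> InfraPattern:
    """Reject anything that is not one of the approved, automatable patterns."""
    try:
        return PATTERN_CATALOG[pattern_name]
    except KeyError:
        raise ValueError(
            f"'{pattern_name}' is not an approved pattern; "
            f"choose one of {sorted(PATTERN_CATALOG)}"
        )

print(validate_request("container-batch"))
```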

This ties back to the legacy hangover issue. Organizations that are used to creating applications outside of cloud-native environments by default model their application architectures to support customized apps that may or may not generate high degrees of value. These architectures can’t scale to satisfy the emerging demands of the business.

As a result, operators find themselves providing white-glove service and custom work on an ongoing basis for customized applications. If they could start from scratch, they would be ready to scale, focusing on the application archetypes that have a certain amount of critical mass. What happens then is that organizations learn how to establish enterprise standards and policy and to leverage cloud-native tools and that newfound operating model. It becomes easier to bring others on board because of that consistency and the results it generates.

The new operating model creates its own gravity and starts to attract the attention of application and business owners. Operations teams can provide a valuable service as long as application owners play by a set of standards and rules that support scale. Otherwise, organizations simply translate their old bad habits into the new cloud model.

Pipeline operations: Inconsistent management of container images

Containers have changed the way organizations develop and deploy applications. Their lightweight structures and portable natures make them ideal packaging tools for organizations looking to add new features quickly and cost effectively. Still, a container operation is only as sound as the images that make up the containers themselves.

Organizations that are early in their cloud transformations tend to struggle with image integrity. They haven’t set up systems for hardening container images in a timely and repeatable fashion. They haven’t set up an automated testing and compliance certification process. They haven’t created secure container registries that identify images for use in the continuous integration and continuous delivery (CI/CD) pipeline, retire out-of-state images, and manage artifacts in various states of transit.
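A hedged sketch of the kind of automated gate this implies: before an image is promoted into the CI/CD pipeline, check that it comes from an approved registry, is signed, was scanned recently, and has no critical findings. The image metadata fields and thresholds are hypothetical, standing in for whatever a real registry or scanner API returns.

```python
from datetime import datetime, timedelta

# Hypothetical image metadata, as it might come back from a registry or scanner API.
image = {
    "name": "registry.example.internal/payments/api:1.4.2",
    "last_scanned": datetime(2021, 10, 1),
    "critical_vulnerabilities": 0,
    "signed": True,
}

APPROVED_REGISTRY = "registry.example.internal/"
MAX_SCAN_AGE = timedelta(days=7)

def promotion_allowed(img: dict, now: datetime) -> bool:
    """Gate an image before it enters the CI/CD pipeline."""
    checks = [
        img["name"].startswith(APPROVED_REGISTRY),   # trusted registry only
        img["signed"],                               # provenance verified
        img["critical_vulnerabilities"] == 0,        # no known critical findings
        now - img["last_scanned"] <= MAX_SCAN_AGE,   # scan is recent enough
    ]
    return all(checks)

print(promotion_allowed(image, datetime(2021, 10, 5)))  # True for these inputs
```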

Pockets of skill

Some organizations see their cloud operations mature in a scattershot manner. In other words, it’s common to see cloud operations showing up as pockets of individual skills concentrated among certain people and certain groups. This can be helpful. Companies that promote a learning agenda can create first movers that act as a force multiplier. They can become change agents, with an eye toward centralization and supporting federation when the enterprise is ready.

Please read: How to achieve extreme quality with DevOps

But it isn’t always a good thing. If IT operations don’t prioritize the optimization of a comprehensive cloud operating model, some application owners will head to the cloud by themselves. The way these “experts” support their own applications won’t support all of the other application archetypes and the larger ecosystem out there. Costs spike, and the C-suite gets frustrated with the lack of progress. This is a big reason why cloud initiatives fail.

HPE GreenLake’s edge-to-cloud platform empowers organizations to harness the power of all their data, regardless of location.

Learn more

Where leaders succeed

A deliberate, well-planned approach toward operational maturity is the most effective way to succeed in the cloud. Across Hewlett Packard Enterprise’s base of cloud transformation engagements, leaders display this kind of discipline in these four areas:

  • Service operations: The good news is that organizations have grown more sophisticated in their ability to log a wide variety of activities. The more advanced organizations are doing a better job curating and identifying the right pieces of metadata to turn into insights. They have meta-models in place, but they need to put intelligence around them to put the data to work for them.
  • Architecture and governance: Formal governing bodies have been established, but many organizations still are working to streamline decision-making processes so they can reduce lead time.
  • Availability management: Leaders are developing their ability to define business availability requirements in SLAs. They have well-thought-out plans to improve metrics such as RTO (recovery time objective) and RPO (recovery point objective), especially for tier-one and tier-two applications. Where organizations still need help is in meeting those requirements. Many establish partnerships with vendors that can take on management functions to help achieve these objectives.
  • Platform operations: Leaders have created a mechanism for metadata-based visibility across their ecosystems. This is important when they integrate with CI/CD pipelines; they know which components to pull from their repositories and where to put them. Infrastructure patterns and standards are important here too. A major bank we worked with was looking to increase adoption of its own internal private cloud. By using well-defined infrastructure patterns and corresponding storage standards, it was able to reduce infrastructure provisioning time, which in turn improved the reputation of and buy-in for its initiative.

Getting your house in order

Moving to the cloud presents significant opportunities for companies to transform their operations—to make them more efficient and more focused on delivering business value. But to mature to the point where they’re accomplishing meaningful transformation, organizations need to get their own houses in order.

They need to commit to a new operating model, assess their strengths and weaknesses, and forge a plan to set their operations up for success. This operating model will yield benefits not only for the portion of operations that move to public cloud but across the entire enterprise, edge to cloud.

What Is Extended Detection and Response (XDR)?

Extended detection and response (XDR) is a security solution that delivers end-to-end visibility, detection, investigation and response across multiple security layers. Core components of an XDR architecture include federation of security signals, higher-level behavioral and cross-correlated analytics, and closed-loop and highly automated responses. This creates a truly unified experience supported by a solutions architecture that equals more than the sum of its parts. Security teams are able to get more value from an XDR that meets the following criteria:

  • Supports open standards
  • Delivers advanced analytics
  • Provides a simpler, unified analyst experience
  • Streamlines operations
  • Enhances response through automation

An XDR solution can achieve improved visibility across an organization’s networks, endpoints, SIEM and more via open-system integration. Open source standards can help move the industry away from expensive and wasteful ‘rip and replace’ programs. Instead, an open approach helps organizations get more out of their existing investments.

An XDR solution can also offer more automation and AI enrichments at all levels of detection, analytics, investigation and response. Automation throughout the threat life cycle can dramatically reduce mean time to detect (MTTD) and mean time to recovery (MTTR). Not only does reducing these metrics have a direct relationship to mitigating the cost of a data breach, it also frees up time for analysts to do more human-led activities like investigation. An XDR solution also bolsters investigation with a unified view of threat activity, a single search and investigation experience and consistent enrichment with threat intelligence and domain expertise.
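Since MTTD and MTTR are, at their core, averages over incident timestamps, a short sketch shows how they might be computed from incident records; the record fields are hypothetical, but this arithmetic is what automation throughout the threat life cycle is meant to shrink.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records with the three timestamps needed for MTTD/MTTR.
incidents = [
    {"occurred": datetime(2021, 6, 1, 9, 0),   "detected": datetime(2021, 6, 1, 11, 30),
     "resolved": datetime(2021, 6, 2, 8, 0)},
    {"occurred": datetime(2021, 6, 10, 14, 0), "detected": datetime(2021, 6, 10, 14, 20),
     "resolved": datetime(2021, 6, 10, 19, 0)},
]

# Mean time to detect: average of (detected - occurred).
# Mean time to recovery: average of (resolved - detected). Expressed in hours.
mttd_hours = mean((i["detected"] - i["occurred"]).total_seconds() for i in incidents) / 3600
mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() for i in incidents) / 3600

print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")
```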

Is XDR Just Another Acronym or a Fundamental Market Shift?

For many decades now, emerging threats have put organizations at risk. As the IT landscape evolved and threat actors found new ways to attack, security teams needed to find new ways to detect and respond to threats.

Today, this evolving theme of complexity continues. And the list of point solutions being deployed to overcome these burgeoning threats goes on and on — from SIEM, to cloud workload protection, to endpoint detection and response (EDR), to network detection and response (NDR) and more. While these investments each do their part to solve immediate and dire issues, in combination they’ve created a bigger challenge: how to use and get value from them together.

This is why we call them point tools; they were made to address specific challenges. Now that security teams face a myriad of challenges, it’s never been more critical to have them work in concert. Without doing so, limited security operations resources will continue to be spread thin, total cost of ownership will continue to increase and the process of pinpointing and responding to threats will continue to be time-consuming and inefficient.

Extended detection and response (XDR) is the beginning of a shift towards uniting multiple siloed solutions and reducing the complexity that impedes fast detection and response. As stated in the blog Gartner Top 9 Security and Risk Trends for 2020: “The primary goals of an XDR solution are to increase detection accuracy and improve security operations efficiency and productivity.” Gartner identified XDR as the number one security and risk trend at the end of 2020, suggesting now is the moment when all this complexity — too many tools, too many alerts, too little time — is coming to a head, with XDR as a response.

What Are The Different Approaches to XDR?

Industry analysts have outlined two different approaches to extended detection and response: native and hybrid. Native XDR is a suite that integrates with a vendor’s other solutions to collect telemetry and execute response actions. Hybrid or open XDR is a platform that integrates with various vendors and third parties to collect telemetry and execute response actions.

Vendors have been taking different approaches to what is under the hood of XDR, so to speak. For instance, does XDR = EDR plus additional capabilities? Or is it EDR plus NDR, or some other combination? It might be too soon to tell where the market will land as the technology is nascent, but the delineation between native and hybrid XDR is one thing the industry seems to agree on.

How Does XDR ‘Extend’ SIEM?

For some readers, SIEM may have immediately come to mind as you perused the qualities of XDR. There are some key differences between the two. In XDR, correlation and alerting tend to be fully automated, employing use cases that are provided and tuned by the vendor. Incident response also tends to focus on highly repeatable actions that can be automated, such as sending a suspicious file to a sandbox for detonation, enriching an alert with threat intelligence or blocking an email sender tied to phishing emails. This approach differs from SOAR, which can be broadly customized with custom playbooks and used to unite people in addition to technology.

XDR in many ways can extend the detection and response capabilities that are today enabled by SIEM. In fact, SIEM can play an integral role to support an XDR architecture in gathering, organizing and assessing massive amounts of data for SOC analysts. In this capacity, XDR builds on the data and events flowing through your SIEM solution. By bringing together the capabilities of multiple point solutions, XDR can take SIEM analytics one step further and amplify the outcome. As an example, when you receive analytics from a SIEM, endpoints and networks separately, it can be like having three different witnesses to an attack. XDR helps you immediately bring all three witnesses together and create one complete story — helping an analyst see more clearly across multiple sources.
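To illustrate the “three witnesses” idea, here is a small sketch that groups alerts from SIEM, endpoint, and network sources by the entity they refer to, so an analyst sees one combined story per host. The alert format and the severity rule are invented for illustration, not any particular XDR product’s logic.

```python
from collections import defaultdict

# Hypothetical alerts from three different "witnesses" to the same activity.
alerts = [
    {"source": "siem",     "entity": "host-17", "signal": "impossible-travel sign-in"},
    {"source": "endpoint", "entity": "host-17", "signal": "credential dumping tool executed"},
    {"source": "network",  "entity": "host-17", "signal": "beaconing to rare external domain"},
    {"source": "endpoint", "entity": "host-42", "signal": "blocked macro document"},
]

# Correlate by entity: the unified view an XDR-style layer would present to the analyst.
by_entity = defaultdict(list)
for alert in alerts:
    by_entity[alert["entity"]].append((alert["source"], alert["signal"]))

for entity, story in by_entity.items():
    # Independent corroborating sources on one entity raise the combined severity.
    severity = "high" if len({source for source, _ in story}) >= 3 else "review"
    print(entity, severity, story)
```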

XDR is not just a place where you consolidate security signals but a place where you can run more advanced, correlated analytics. As The Forrester Wave for Security Analytics Platforms, Q4 2020 asserts, security analytics and endpoint detection and response have been on a “collision course” for some time. Bringing together these capabilities with XDR can provide “highly enriched telemetry, speedy investigations, and automated response actions.” Behavioral analytics or machine learning analytics can also enrich content, increase accuracy and lead to automated response actions.

How Does XDR Compare to MDR?

Even though XDR vendors are striving to untangle the complexity problem, it will take time to make inroads. Compounding this challenge is the skills shortage. The dire need for talent to run security analysis and investigations leads many organizations to utilize a partner for managed detection and response (MDR) services.

MDR is an approach to managing sophisticated detection and response tools — whether via endpoint, network or SIEM technology. Some MDR providers include proactive threat hunting to reveal undetected threats faster. Research from EMA conducted in 2020 found that 94% of respondents not already using an MDR service were currently evaluating or had plans to evaluate MDR services over the next 18 months.

MDR services can provide critical skills and advanced threat intelligence, plus rapid response with 24/7 SOC coverage. As Jon Oltsik, senior principal analyst at ESG, stated, “XDR success still seems to be based on human expertise,” making MDR an invaluable companion to XDR for customers who could use a helping hand.

How Does XDR Support Customers With Zero Trust Aspirations?

If you set up a game of security buzzword bingo, there’s no doubt you’d come across both zero trust and XDR. There’s industry chatter around these powerful security frameworks with good reason — one concept can help enforce the other.

Zero trust is a framework that starts with an assumption of compromise, then continuously validates the conditions for connection between users, data and resources to determine authorization and need. XDR provides an essential function to zero trust by continuously monitoring for incidents and responding in the most targeted way possible to avoid disruption to the business.

How so? XDR enables analysts to determine if their organization is under attack and figure out as quickly as possible what’s happening, what will happen next and how to prevent that from unfolding. Instead of placing blind trust in a system and saying the controls are enough, with XDR you constantly monitor the places where things could go wrong.

In this way, XDR is ensuring that zero trust security controls are working. The ‘never trust, always verify’ zero trust methodology is supported by verification. When it comes to detecting and responding to threats, as well as improving protection policies based on insights, a zero trust framework and an XDR solution can work hand in hand. And it’s exactly why identity tools, such as identity and access management (IAM), will play a critical role tying into XDR solution architectures to ensure the appropriate user-centric context is being employed for threat detection and response.

What Should Customers Look for in an XDR Solution?

Your XDR should be an open, extensible solution that enables your organization to get more from its existing investments. Look for integrations with third parties that will save your organization from a costly and unrealistic rip-and-replace approach. Cloud-native solutions are also critical for extending cloud visibility.

XDR goes far beyond being an improved EDR solution; it should instead be your end game for threat detection and response activities — as part of a unified platform. Reaching that level of maturity is a goal that takes time, with the basics in place and a clear strategy as prerequisites for how to get started with XDR. With powerful automation, artificial intelligence and expert-built detection and prescribed response actions available through a unified user experience, security teams can counter attacks across silos — mitigating risk and resolving threats fast.

Ultimately, XDR makes it easier for the people managing and responding to threats on a daily basis to do the work. Open standards mean we can better serve customers and the community, preventing time and dollars lost to ripping and replacing technology. Advanced analytics, constantly updated threat intelligence and a streamlined user workflow empower analysts to be more efficient and spend their valuable time on investigations — not gathering the data.

People and culture are the keys to the SOC. By uniting threat detection data and tools and strengthening ability and context for fast incident response, XDR enables the collaboration and openness that helps teams thrive.

Learn more about realizing a vision for XDR in ‘Beyond Endpoints: A Case for Open XDR’, presented at the RSA Conference 2021.

Ransomware as a service: Understanding the cybercrime gig economy and how to protect yourself

April 2023 update – Microsoft Threat Intelligence has shifted to a new threat actor naming taxonomy aligned around the theme of weather. To learn more about this evolution, how the new taxonomy represents the origin, unique traits, and impact of threat actors, and a complete mapping of threat actor names, read this blog: Microsoft shifts to a new threat actor naming taxonomy.

September 2022 update – New information about recent Qakbot campaigns leading to ransomware deployment.

July 2022 update – New information about DEV-0206-associated activity wherein existing Raspberry Robin infections are used to deploy FakeUpdates, which then leads to follow-on actions resembling DEV-0243.

June 2022 update – More details in the Threat actors and campaigns section, including recently observed activities from DEV-0193 (Trickbot LLC), DEV-0504, DEV-0237, DEV-0401, and a new section on Qakbot campaigns that lead to ransomware deployments.

Microsoft processes 24 trillion signals every 24 hours, and we have blocked billions of attacks in the last year alone. Microsoft Security tracks more than 35 unique ransomware families and 250 unique threat actors across observed nation-state, ransomware, and criminal activities.

That depth of signal intelligence gathered from various domains—identity, email, data, and cloud—provides us with insight into the gig economy that attackers have created with tools designed to lower the barrier for entry for other attackers, who in turn continue to pay dividends and fund operations through the sale and associated “cut” from their tool’s success.

The cybercriminal economy is a continuously evolving connected ecosystem of many players with different techniques, goals, and skillsets. In the same way our traditional economy has shifted toward gig workers for efficiency, criminals are learning that there’s less work and less risk involved by renting or selling their tools for a portion of the profits than performing the attacks themselves. This industrialization of the cybercrime economy has made it easier for attackers to use ready-made penetration testing and other tools to perform their attacks.

Within this category of threats, Microsoft has been tracking the trend in the ransomware as a service (RaaS) gig economy, called human-operated ransomware, which remains one of the most impactful threats to organizations. We coined the industry term “human-operated ransomware” to clarify that these threats are driven by humans who make decisions at every stage of their attacks based on what they find in their target’s network.

Unlike the broad targeting and opportunistic approach of earlier ransomware infections, attackers behind these human-operated campaigns vary their attack patterns depending on their discoveries—for example, a security product that isn’t configured to prevent tampering or a service that’s running as a highly privileged account like a domain admin. Attackers can use those weaknesses to elevate their privileges to steal even more valuable data, leading to a bigger payout for them—with no guarantee they’ll leave their target environment once they’ve been paid. Attackers are also often more determined to stay on a network once they gain access and sometimes repeatedly monetize that access with additional attacks using different malware or ransomware payloads if they aren’t successfully evicted.

Ransomware attacks have become even more impactful in recent years as more ransomware as a service ecosystems have adopted the double extortion monetization strategy. All ransomware is a form of extortion, but now, attackers are not only encrypting data on compromised devices but also exfiltrating it and then posting or threatening to post it publicly to pressure the targets into paying the ransom. Most ransomware attackers opportunistically deploy ransomware to whatever network they get access to, and some even purchase access to networks from other cybercriminals. Some attackers prioritize organizations with higher revenues, while others prefer specific industries for the shock value or type of data they can exfiltrate.

All human-operated ransomware campaigns—all human-operated attacks in general, for that matter—share common dependencies on security weaknesses that allow them to succeed. Attackers most commonly take advantage of an organization’s poor credential hygiene and legacy configurations or misconfigurations to find easy entry and privilege escalation points in an environment. 

In this blog, we detail several of the ransomware ecosystems using the RaaS model, the importance of cross-domain visibility in finding and evicting these actors, and best practices organizations can use to protect themselves from this increasingly popular style of attack. We also offer security best practices on credential hygiene and cloud hardening, addressing security blind spots, hardening internet-facing assets to understand your perimeter, and more. Here’s a quick table of contents:

  1. How RaaS redefines our understanding of ransomware incidents
    • The RaaS affiliate model explained
    • Access for sale and mercurial targeting
  2. “Human-operated” means human decisions
    • Exfiltration and double extortion
    • Persistent and sneaky access methods
  3. Threat actors and campaigns deep dive: Threat intelligence-driven response to human-operated ransomware attacks
  4. Defending against ransomware: Moving beyond protection by detection

How RaaS redefines our understanding of ransomware incidents

With ransomware being the preferred method for many cybercriminals to monetize attacks, human-operated ransomware remains one of the most impactful threats to organizations today, and it only continues to evolve. This evolution is driven by the “human-operated” aspect of these attacks—attackers make informed and calculated decisions, resulting in varied attack patterns tailored specifically to their targets and iterated upon until the attackers are successful or evicted.

In the past, we’ve observed a tight relationship between the initial entry vector, tools, and ransomware payload choices in each campaign of one strain of ransomware. The RaaS affiliate model, which has allowed more criminals, regardless of technical expertise, to deploy ransomware built or managed by someone else, is weakening this link. As ransomware deployment becomes a gig economy, it has become more difficult to link the tradecraft used in a specific attack to the ransomware payload developers.

Reporting a ransomware incident by assigning it with the payload name gives the impression that a monolithic entity is behind all attacks using the same ransomware payload and that all incidents that use the ransomware share common techniques and infrastructure. However, focusing solely on the ransomware stage obscures many stages of the attack that come before, including actions like data exfiltration and additional persistence mechanisms, as well as the numerous detection and protection opportunities for network defenders.

We know, for example, that the underlying techniques used in human-operated ransomware campaigns haven’t changed very much over the years—attacks still prey on the same security misconfigurations to succeed. Securing a large corporate network takes disciplined and sustained focus, but there’s a high ROI in implementing critical controls that prevent these attacks from having a wider impact, even if it’s only possible on the most critical assets and segments of the network. 

Without the ability to steal access to highly privileged accounts, attackers can’t move laterally, spread ransomware widely, access data to exfiltrate, or use tools like Group Policy to impact security settings. Disrupting common attack patterns by applying security controls also reduces alert fatigue in security SOCs by stopping the attackers before they get in. This can also prevent unexpected consequences of short-lived breaches, such as exfiltration of network topologies and configuration data that happens in the first few minutes of execution of some trojans.

In the following sections, we explain the RaaS affiliate model and disambiguate between the attacker tools and the various threat actors at play during a security incident. Gaining this clarity helps surface trends and common attack patterns that inform defensive strategies focused on preventing attacks rather than detecting ransomware payloads. Threat intelligence and insights from this research also enrich our solutions like Microsoft 365 Defender, whose comprehensive security capabilities help protect customers by detecting RaaS-related attack attempts.

The RaaS affiliate model explained

The cybercriminal economy—a connected ecosystem of many players with different techniques, goals, and skillsets—is evolving. The industrialization of attacks has progressed from attackers using off-the-shelf tools, such as Cobalt Strike, to attackers being able to purchase access to networks and the payloads they deploy to them. This means that the impact of a successful ransomware and extortion attack remains the same regardless of the attacker’s skills.

RaaS is an arrangement between an operator and an affiliate. The RaaS operator develops and maintains the tools to power the ransomware operations, including the builders that produce the ransomware payloads and payment portals for communicating with victims. The RaaS program may also include a leak site to share snippets of data exfiltrated from victims, allowing attackers to show that the exfiltration is real and try to extort payment. Many RaaS programs further incorporate a suite of extortion support offerings, including leak site hosting and integration into ransom notes, as well as decryption negotiation, payment pressure, and cryptocurrency transaction services.

RaaS thus gives a unified appearance of the payload or campaign being a single ransomware family or set of attackers. However, what happens is that the RaaS operator sells access to the ransom payload and decryptor to an affiliate, who performs the intrusion and privilege escalation and who is responsible for the deployment of the actual ransomware payload. The parties then split the profit. In addition, RaaS developers and operators might also use the payload for profit, sell it, and run their campaigns with other ransomware payloads—further muddying the waters when it comes to tracking the criminals behind these actions.

Diagram showing the relationship between players in the ransomware-as-a-service affiliate model. Access brokers compromise networks and persist on systems. The RaaS operator develops and maintains tools. The RaaS affiliate performs the attack.
Figure 1. How the RaaS affiliate model enables ransomware attacks

Access for sale and mercurial targeting

A component of the cybercriminal economy is selling access to systems to other attackers for various purposes, including ransomware. Access brokers can, for instance, infect systems with malware or a botnet and then sell them as a “load”. A load is designed to install other malware or backdoors onto the infected systems for other criminals. Other access brokers scan the internet for vulnerable systems, like exposed Remote Desktop Protocol (RDP) systems with weak passwords or unpatched systems, and then compromise them en masse to “bank” for later profit. Some advertisements for the sale of initial access specifically cite that a system isn’t managed by an antivirus or endpoint detection and response (EDR) product and has a highly privileged credential such as Domain Administrator associated with it to fetch higher prices.

Most ransomware attackers opportunistically deploy ransomware to whatever network they get access to. Some attackers prioritize organizations with higher revenues, while some target specific industries for the shock value or type of data they can exfiltrate (for example, attackers targeting hospitals or exfiltrating data from technology companies). In many cases, the targeting doesn’t manifest itself as a specific attack on the target’s network; instead, it takes the form of purchasing access from an access broker or using an existing malware infection to pivot to ransomware activities.

In some ransomware attacks, the affiliates who bought a load or access may not even know or care how the system was compromised in the first place and are just using it as a “jump server” to perform other actions in a network. Access brokers often list the network details for the access they are selling, but affiliates aren’t usually interested in the network itself but rather the monetization potential. As a result, some attacks that seem targeted to a specific industry might simply be a case of affiliates purchasing access based on the number of systems they could deploy ransomware to and the perceived potential for profit.

“Human-operated” means human decisions

Microsoft coined the term “human-operated ransomware” to clearly define a class of attacks driven by expert human intelligence at every step of the attack chain, culminating in intentional business disruption and extortion. Human-operated ransomware attacks share commonalities in the security misconfigurations they take advantage of and the manual techniques used for lateral movement and persistence. However, the human-operated nature of these actions means that variations in attacks—including objectives and pre-ransom activity—evolve depending on the environment and the unique opportunities identified by the attackers.

These attacks involve many reconnaissance activities that enable human operators to profile the organization and know what next steps to take based on specific knowledge of the target. Many of the initial access campaigns that provide access to RaaS affiliates perform automated reconnaissance and exfiltration of information collected in the first few minutes of an attack.

After the attack shifts to a hands-on-keyboard phase, the reconnaissance and activities based on this knowledge can vary, depending on the tools that come with the RaaS and the operator’s skill. Frequently attackers query for the currently running security tools, privileged users, and security settings such as those defined in Group Policy before continuing their attack. The data discovered via this reconnaissance phase informs the attacker’s next steps.

If there’s minimal security hardening to complicate the attack and a highly privileged account can be gained immediately, attackers move directly to deploying ransomware by editing a Group Policy. The attackers take note of security products in the environment and attempt to tamper with and disable these, sometimes using scripts or tools provided with the RaaS purchase that try to disable multiple security products at once, other times using specific commands or techniques performed by the attacker.

This human decision-making early in the reconnaissance and intrusion stages means that even if a target’s security solutions detect specific techniques of an attack, the attackers may not get fully evicted from the network and can use other collected knowledge to attempt to continue the attack in ways that bypass security controls. In many instances, attackers test their attacks “in production” from an undetected location in their target’s environment, deploying tools or payloads like commodity malware. If these tools or payloads are detected and blocked by an antivirus product, the attackers simply grab a different tool, modify their payload, or tamper with the security products they encounter. Such detections could give SOCs a false sense of security that their existing solutions are working. However, these could merely serve as a smokescreen to allow the attackers to further tailor an attack chain that has a higher probability of success. Thus, when the attack reaches the active attack stage of deleting backups or shadow copies, the attack would be minutes away from ransomware deployment. The adversary would likely have already performed harmful actions like the exfiltration of data. This knowledge is key for SOCs responding to ransomware: prioritizing investigation of alerts or detections of tools like Cobalt Strike and performing swift remediation actions and incident response (IR) procedures are critical for containing a human adversary before the ransomware deployment stage.

Exfiltration and double extortion

Ransomware attackers often profit simply by disabling access to critical systems and causing system downtime. Although that simple technique often motivates victims to pay, it is not the only way attackers can monetize their access to compromised networks. Exfiltration of data and “double extortion,” which refers to attackers threatening to leak data if a ransom hasn’t been paid, have also become common tactics among many RaaS affiliate programs—many of them offering a unified leak site for their affiliates. Attackers take advantage of common weaknesses to exfiltrate data and demand ransom without deploying a payload.

This trend means that focusing on protecting against ransomware payloads via security products or encryption, or considering backups as the main defense against ransomware instead of comprehensive hardening, leaves a network vulnerable to all the stages of a human-operated ransomware attack that occur before ransomware deployment. This exfiltration can take the form of using tools like Rclone to sync to an external site, setting up email transport rules, or uploading files to cloud services. With double extortion, attackers don’t need to deploy ransomware and cause downtime to extort money. Some attackers have moved beyond the need to deploy ransomware payloads and are shifting straight to extortion models or performing the destructive objectives of their attacks by directly deleting cloud resources. One such extortion attacker is DEV-0537 (also known as LAPSUS$), which is profiled below.

Persistent and sneaky access methods

Paying the ransom may not reduce the risk to an affected network and potentially only serves to fund cybercriminals. Giving in to the attackers’ demands doesn’t guarantee that attackers ever “pack their bags” and leave a network. Attackers are more determined to stay on a network once they gain access and sometimes repeatedly monetize attacks using different malware or ransomware payloads if they aren’t successfully evicted.

The handoff between different attackers as transitions in the cybercriminal economy occur means that multiple attackers may retain persistence in a compromised environment using an entirely different set of tools from those used in a ransomware attack. For example, initial access gained by a banking trojan leads to a Cobalt Strike deployment, but the RaaS affiliate that purchased the access may choose to use a less detectable remote access tool such as TeamViewer to maintain persistence on the network to operate their broader series of campaigns. Using legitimate tools and settings to persist versus malware implants such as Cobalt Strike is a popular technique among ransomware attackers to avoid detection and remain resident in a network for longer.

Some of the common enterprise tools and techniques for persistence that Microsoft has observed being used include:

  • AnyDesk
  • Atera Remote Management
  • ngrok.io
  • Remote Manipulator System
  • Splashtop
  • TeamViewer

Another popular technique attackers perform once they attain privileged access is the creation of new backdoor user accounts, whether local or in Active Directory. These newly created accounts can then be added to remote access tools such as a virtual private network (VPN) or Remote Desktop, granting remote access through accounts that appear legitimate on the network. Ransomware attackers have also been observed editing the settings on systems to enable Remote Desktop, reduce the protocol’s security, and add new users to the Remote Desktop Users group.
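Where this kind of persistence is a concern, defenders can periodically reconcile the members of sensitive local groups against an approved baseline. Below is a minimal Python sketch (an illustration, not part of the original report) that parses the output of the built-in Windows `net localgroup` command; the output layout is locale-dependent and the approved-accounts list is a hypothetical placeholder to adapt for your environment.

```python
import subprocess

# Baseline of accounts expected in the group. The entry below is a hypothetical
# placeholder; replace it with the accounts approved in your environment.
APPROVED = {"CONTOSO\\helpdesk-rdp"}

def local_group_members(group: str) -> set[str]:
    """Return members of a Windows local group by parsing `net localgroup` output.

    Assumes the English-locale layout: a dashed separator line followed by one
    member per line, ending with a status line. Adjust for other locales.
    """
    out = subprocess.run(["net", "localgroup", group],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    sep = next((i for i, line in enumerate(lines)
                if line.strip() and set(line.strip()) == {"-"}), None)
    if sep is None:
        return set()
    members = set()
    for line in lines[sep + 1:]:
        entry = line.strip()
        if entry and not entry.lower().startswith("the command"):
            members.add(entry)
    return members

if __name__ == "__main__":
    for member in sorted(local_group_members("Remote Desktop Users") - APPROVED):
        print("Unexpected member of Remote Desktop Users:", member)
```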

The time from initial access to hands-on-keyboard deployment can vary wildly depending on the groups and their workloads or motivations. Some activity groups can access thousands of potential targets and work through these as their staffing allows, prioritizing based on potential ransom payment over several months. While some activity groups may have access to large and highly resourced companies, they prefer to attack smaller companies for less overall ransom because they can execute the attack within hours or days. In addition, the return on investment is higher from companies that can’t respond to a major incident. Ransoms of tens of millions of dollars receive much attention but take much longer to develop. Many groups prefer to ransom five to 10 smaller targets in a month because the success rate at receiving payment is higher in these targets. Smaller organizations that can’t afford an IR team are often more likely to pay tens of thousands of dollars in ransom than an organization worth millions of dollars, because the latter has a developed IR capability and is likely to follow legal advice against paying. In some instances, a ransomware-associated threat actor may have an implant on a network and never convert it to ransom activity. In other cases, initial access to full ransom (including handoff from an access broker to a RaaS affiliate) takes less than an hour.

Figure 2. Human-operated ransomware targeting and rate of success, based on a sampling of Microsoft data over six months between 2021 and 2022. Of 2,500 potential target organizations, 60 encountered activity associated with known ransomware attackers, 20 were successfully compromised, and 1 saw a successful ransomware event.

The human-driven nature of these attacks and the scale of possible victims under control of ransomware-associated threat actors underscores the need to take targeted proactive security measures to harden networks and prevent these attacks in their early stages.

Threat actors and campaigns deep dive: Threat intelligence-driven response to human-operated ransomware attacks

For organizations to successfully respond to evict an active attacker, it’s important to understand the active stage of an ongoing attack. In the early attack stages, such as deploying a banking trojan, common remediation efforts like isolating a system and resetting exposed credentials may be sufficient. As the attack progresses and the attacker performs reconnaissance activities and exfiltration, it’s important to implement an incident response process that scopes the incident to address the impact specifically. Using a threat intelligence-driven methodology for understanding attacks can assist in determining incidents that need additional scoping.

In the next sections, we provide a deep dive into the following prominent ransomware threat actors and their campaigns to increase community understanding of these attacks and enable organizations to better protect themselves:

  • DEV-0193 cluster (Trickbot LLC)
  • ELBRUS (FIN7)
  • DEV-0504
  • DEV-0237
  • DEV-0450 and DEV-0464
  • DEV-0206 and DEV-0243
  • DEV-0401
  • DEV-0537 (LAPSUS$)

Microsoft threat intelligence directly informs our products as part of our commitment to track adversaries and protect customers. Microsoft 365 Defender customers should prioritize alerts titled “Ransomware-linked emerging threat activity group detected”. We also add the note “Ongoing hands-on-keyboard attack” to alerts that indicate a human attacker is in the network. When these alerts are raised, it’s highly recommended to initiate an incident response process to scope the attack, isolate systems, and regain control of credentials attackers may be in control of.

A note on threat actor naming: as part of Microsoft’s ongoing commitment to track both nation-state and cybercriminal threat actors, we refer to unidentified threat actors as a “development group”. We use a naming structure with a prefix of “DEV” to indicate an emerging threat group or unique activity during investigation. When a nation-state group moves out of the DEV stage, we use chemical elements (for example, PHOSPHORUS and NOBELIUM) to name them. On the other hand, we use volcano names (such as ELBRUS) for ransomware or cybercriminal activity groups that have moved out of the DEV stage. In the cybercriminal economy, relationships between groups change very rapidly. Attackers are known to hire talent from other cybercriminal groups or use “contractors,” who provide gig economy-style work on a limited time basis and may not rejoin the group. This shifting nature means that many of the groups Microsoft tracks are labeled as DEV, even if we have a concrete understanding of the nature of the activity group.

DEV-0193 cluster (Trickbot LLC): The most prolific ransomware group today

A vast amount of the current cybercriminal economy connects to a nexus of activity that Microsoft tracks as DEV-0193, also referred to as Trickbot LLC. DEV-0193 is responsible for developing, distributing, and managing many different payloads, including Trickbot, Bazaloader, and AnchorDNS. In addition, DEV-0193 managed the Ryuk RaaS program before the latter’s shutdown in June 2021, as well as Ryuk’s successor, Conti, and Diavol. Microsoft has been tracking the activities of DEV-0193 since October 2020 and has observed their expansion from developing and distributing the Trickbot malware to becoming the most prolific ransomware-associated cybercriminal activity group active today.

DEV-0193’s actions and use of the cybercriminal gig economy means they often add new members and projects and utilize contractors to perform various parts of their intrusions. As other malware operations have shut down for various reasons, including legal actions, DEV-0193 has hired developers from these groups. Most notable are the acquisitions of developers from Emotet, Qakbot, and IcedID, bringing them under the DEV-0193 umbrella.

A subgroup of DEV-0193, which Microsoft tracks as DEV-0365, provides infrastructure as a service for cybercriminals. Most notably, DEV-0365 provides Cobalt Strike Beacon as a service. These DEV-0365 Beacons have replaced unique C2 infrastructure in many active malware campaigns. DEV-0193 infrastructure has also been implicated in attacks deploying novel techniques, including exploitation of CVE-2021-40444. 

The leaked chat files from a group publicly labeled as the “Conti Group” in February 2022 confirm the wide scale of DEV-0193 activity tracked by Microsoft. Based on our telemetry from 2021 and 2022, Conti has become one of the most deployed RaaS ecosystems, with multiple affiliates concurrently deploying their payload—even as other RaaS ecosystems (DarkSide/BlackMatter and REvil) ceased operations. However, payload-based attribution meant that much of the activity that led to Conti ransomware deployment was attributed to the “Conti Group,” even though many affiliates had wildly different tradecraft, skills, and reporting structures. Some Conti affiliates performed small-scale intrusions using the tools offered by the RaaS, while others performed weeks-long operations involving data exfiltration and extortion using their own techniques and tools. One of the most prolific and successful Conti affiliates—and the one responsible for developing the “Conti Manual” leaked in August 2021—is tracked as DEV-0230. This activity group also developed and deployed the FiveHands and HelloKitty ransomware payloads and often gained access to an organization via DEV-0193’s BazaLoader infrastructure.

Microsoft hasn’t observed a Conti deployment in our data since April 19, 2022, suggesting that the Conti program has shut down or gone on hiatus, potentially in response to the visibility of DEV-0230’s deployment of Conti in high-profile incidents or the FBI’s announcement of a reward for information related to Conti. As can be expected when a RaaS program shuts down, the gig economy nature of the ransomware ecosystem means that affiliates can easily shift between payloads. Conti affiliates who had previously deployed Conti have moved on to other RaaS payloads. For example, DEV-0506 was deploying BlackBasta part-time before the Conti shutdown and is now deploying it regularly. Similarly, DEV-0230 shifted to deploying QuantumLocker around April 23, 2022.

ELBRUS: (Un)arrested development

ELBRUS, also known as FIN7, has been known to be in operation since 2012 and has run multiple campaigns targeting a broad set of industries for financial gain. ELBRUS has deployed point-of-sale (PoS) and ATM malware to collect payment card information from in-store checkout terminals. They have also targeted corporate personnel who have access to sensitive financial data, including individuals involved in SEC filings.

In 2018, this activity group made headlines when three of its members were arrested. In May 2020, another arrest was made for an individual with alleged involvement with ELBRUS. However, despite law enforcement actions against suspected individual members, Microsoft has observed sustained campaigns from the ELBRUS group itself during these periods.

ELBRUS is responsible for developing and distributing multiple custom malware families used for persistence, including JSSLoader and Griffon. ELBRUS has also created fake security companies called “Combi Security” and “Bastion Security” to facilitate the recruitment of employees to their operations under the pretense of working as penetration testers.

In 2020, ELBRUS transitioned from using PoS malware to deploying ransomware as part of a financially motivated extortion scheme, specifically deploying the MAZE and REvil RaaS families. ELBRUS then developed their own RaaS ecosystem named DarkSide. They deployed DarkSide payloads as part of their operations and recruited and managed affiliates that deployed the DarkSide ransomware. The tendency to report on ransomware incidents based on payload and attribute them to a monolithic gang often obfuscates the true relationship between the attackers, which was very much the case with the DarkSide RaaS. Case in point, one of the most infamous DarkSide deployments wasn’t performed by ELBRUS but by a ransomware-as-a-service affiliate Microsoft tracks as DEV-0289.

ELBRUS retired the DarkSide ransomware ecosystem in May 2021 and released its successor, BlackMatter, in July 2021. Replicating their patterns from DarkSide, ELBRUS deployed BlackMatter themselves and ran a RaaS program for affiliates. The activity group then retired the BlackMatter ransomware ecosystem in November 2021.

While they aren’t currently publicly observed to be running a RaaS program, ELBRUS is very active in compromising organizations via phishing campaigns that lead to their JSSLoader and Griffon malware. Since 2019, ELBRUS has partnered with DEV-0324 to distribute their malware implants. DEV-0324 acts as a distributor in the cybercriminal economy, providing a service to distribute the payloads of other attackers through phishing and exploit kit vectors. In April 2022, ELBRUS also began abusing CVE-2021-31207 in Exchange to compromise organizations, an interesting pivot to using a less popular authenticated vulnerability in the ProxyShell cluster of vulnerabilities. This abuse has allowed them to target organizations that patched only the unauthenticated vulnerability in their Exchange Server and turn compromised low-privileged user credentials into highly privileged access as SYSTEM on an Exchange Server.

DEV-0504: Shifting payloads reflecting the rise and fall of RaaS programs

An excellent example of how clustering activity based on ransomware payload alone can obscure the threat actors behind an attack is DEV-0504. DEV-0504 has deployed at least six RaaS payloads since 2020, with many of their attacks becoming high-profile incidents attributed to the “REvil gang” or “BlackCat ransomware group”. This attribution masks the actions of the set of attackers under the DEV-0504 umbrella, including other REvil and BlackCat affiliates. It has resulted in a confusing picture of the scale of the ransomware problem and overinflated perceptions of the impact that a single RaaS program shutdown can have on the threat environment.

Figure 3. Ransomware payloads distributed by DEV-0504 between 2020 and June 2022

DEV-0504 shifts payloads when a RaaS program shuts down, for example the deprecation of REvil and BlackMatter, or possibly when a program with a better profit margin appears. These market dynamics aren’t unique to DEV-0504 and are reflected in most RaaS affiliates. They can also manifest in even more extreme behavior where RaaS affiliates switch to older “fully owned” ransomware payloads like Phobos, which they can buy when a RaaS isn’t available, or they don’t want to pay the fees associated with RaaS programs.

DEV-0504 appears to rely on access brokers to enter a network, using Cobalt Strike Beacons they have possibly purchased access to. Once inside a network, they rely heavily on PsExec to move laterally and stage their payloads. Their techniques require them to have compromised elevated credentials, and they frequently disable antivirus products that aren’t protected with tamper protection.

DEV-0504 was responsible for deploying BlackCat ransomware in companies in the energy sector in January 2022. Around the same time, DEV-0504 also deployed BlackCat in attacks against companies in the fashion, tobacco, IT, and manufacturing industries, among others. BlackCat remains DEV-0504’s primary payload as of June 2022.

DEV-0237: Prolific collaborator

Like DEV-0504, DEV-0237 is a prolific RaaS affiliate that alternates between different payloads in their operations based on what is available. DEV-0237 heavily used Ryuk and Conti payloads from Trickbot LLC/DEV-0193, then Hive payloads more recently. Many publicly documented Ryuk and Conti incidents and tradecraft can be traced back to DEV-0237.

After the activity group switched to Hive as a payload, a large uptick in Hive incidents was observed. Their switch to the BlackCat RaaS in March 2022 is suspected to be due to public discourse around Hive decryption methodologies; that is, DEV-0237 may have switched to BlackCat because they didn’t want Hive’s decryptors to interrupt their business. Overlap in payloads has occurred as DEV-0237 experiments with new RaaS programs on lower-value targets. They have been observed to experiment with some payloads only to abandon them later.

Figure 4. Ransomware payloads distributed by DEV-0237 between 2020 and June 2022

Beyond RaaS payloads, DEV-0237 uses the cybercriminal gig economy to also gain initial access to networks. DEV-0237’s proliferation and success rate come in part from their willingness to leverage the network intrusion work and malware implants of other groups versus performing their own initial compromise and malware development.

Figure 5. Examples of DEV-0237’s relationships with other cybercriminal activity groups (DEV-0447, DEV-0387, and DEV-0193)

Like all RaaS operators, DEV-0237 relies on compromised, highly privileged account credentials and security weaknesses once inside a network. DEV-0237 often leverages Cobalt Strike Beacon dropped by the malware they have purchased, as well as tools like SharpHound to conduct reconnaissance. The group often utilizes BITSadmin /transfer to stage their payloads. An often-documented trademark of Ryuk and Conti deployments is naming the ransomware payload xxx.exe, a tradition that DEV-0237 continues to use no matter what RaaS they are deploying, as most recently observed with BlackCat. In late March of 2022, DEV-0237 was observed to be using a new version of Hive again.

In May 2022, DEV-0237 started to routinely deploy Nokoyawa, a payload that we observed the group previously experimenting with when they weren’t using Hive. While the group used other payloads such as BlackCat in the same timeframe, Nokoyawa became a more regular part of their toolkits. By June 2022, DEV-0237 was still primarily deploying Hive and sometimes Nokoyawa but was seen experimenting with other ransomware payloads, including Agenda and Mindware.

DEV-0237 is also one of several actors observed introducing other tools into their attacks to replace Cobalt Strike. Cobalt Strike’s ubiquity and visible impact has led to improved detections and heightened awareness in security organizations, leading to observed decreased use by actors. DEV-0237 now uses the SystemBC RAT and the penetration testing framework Sliver in their attacks, replacing Cobalt Strike.

DEV-0450 and DEV-0464: Distributing Qakbot for ransomware deployment

The evolution of prevalent trojans from being commodity malware to serving as footholds for ransomware is well documented via the impact of Emotet, Trickbot, and BazaLoader. Another widely distributed malware, Qakbot, also leads to handoffs to RaaS affiliates. Qakbot is delivered via email, often downloaded by malicious macros in an Office document. Qakbot’s initial actions include profiling the system and the network, and exfiltrating emails (.eml files) for later use as templates in its malware distribution campaigns.

Qakbot is prevalent across a wide range of networks, building upon successful infections to continue spreading and expanding. Microsoft tracks DEV-0450 and DEV-0464 as Qakbot distributors whose infections lead to observed ransomware attacks. DEV-0450 distributes the “presidents”-themed Qakbot, using American presidents’ names in their malware campaigns. Meanwhile, DEV-0464 distributes the “TR” Qakbot and other malware such as SquirrelWaffle. DEV-0464 also rapidly adopted the Microsoft Support Diagnostic Tool (MSDT) vulnerability (CVE-2022-30190) in their campaigns. The abuse of malicious macros and MSDT can be blocked by preventing Office from creating child processes, which we detail in the hardening guidance below.

Historically, Qakbot infections typically lead to hands-on-keyboard activity and ransomware deployments by DEV-0216, DEV-0506, and DEV-0826. DEV-0506 previously deployed Conti but switched to deploying Black Basta around April 8, 2022. This group uses DEV-0365’s Cobalt Strike Beacon infrastructure instead of maintaining their own. In late September 2022, Microsoft observed DEV-0506 adding Brute Ratel as a tool to facilitate their hands-on-keyboard access as well as Cobalt Strike Beacons.

Another RaaS affiliate that acquired access from Qakbot infections was DEV-0216, which maintains their own Cobalt Strike Beacon infrastructure and has operated as an affiliate for Egregor, Maze, Lockbit, REvil, and Conti in numerous high-impact incidents. Microsoft no longer sees DEV-0216 ransomware incidents initiating from DEV-0464 and DEV-0450 infections, indicating they may no longer be acquiring access via Qakbot.

DEV-0206 and DEV-0243: An “evil” partnership

Malvertising, which refers to taking out a search engine ad to lead to a malware payload, has been used in many campaigns, but the access broker that Microsoft tracks as DEV-0206 uses this as their primary technique to gain access to and profile networks. Targets are lured by an ad purporting to be a browser update or software package into downloading a ZIP file and double-clicking it. The ZIP package contains a JavaScript file (.js), which in most environments runs when double-clicked. Organizations that have changed the settings such that script files open with a text editor by default instead of a script handler are largely immune from this threat, even if a user double-clicks the script.
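As a quick way to check that hardening, the following Python sketch (an illustration under stated assumptions, not part of the original guidance) reads the machine-wide file association for .js from the Windows registry and warns if it points at the Windows Script Host. Per-user associations can override this machine-wide default, so treat it only as a starting point.

```python
import winreg  # standard library module; Windows only

def default_open_command(extension: str) -> str:
    """Return the machine-wide 'open' command for a file extension, or '' if none is registered."""
    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, extension) as key:
            progid, _ = winreg.QueryValueEx(key, "")
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, rf"{progid}\shell\open\command") as key:
            command, _ = winreg.QueryValueEx(key, "")
        return command
    except OSError:
        return ""

if __name__ == "__main__":
    command = default_open_command(".js").lower()
    # Per-user associations can override this machine-wide value, so treat the result
    # as a starting point rather than a definitive answer.
    if "wscript" in command or "cscript" in command:
        print("WARNING: .js files run with the Windows Script Host:", command)
    elif command:
        print(".js files open with:", command)
    else:
        print("No machine-wide handler registered for .js; check per-user associations.")
```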

Once successfully executed, the JavaScript framework, also referred to as SocGholish, acts as a loader for other malware campaigns that use access purchased from DEV-0206, most commonly Cobalt Strike payloads. These payloads have, in numerous instances, led to custom Cobalt Strike loaders attributed to DEV-0243. DEV-0243 falls under activities tracked by the cyber intelligence industry as “EvilCorp.” The custom Cobalt Strike loaders are similar to those seen in the publicly documented Blister malware’s inner payloads. In DEV-0243’s initial partnerships with DEV-0206, the group deployed a custom ransomware payload known as WastedLocker, and then expanded to additional DEV-0243 ransomware payloads developed in-house, such as PhoenixLocker and Macaw.

Around November 2021, DEV-0243 started to deploy the LockBit 2.0 RaaS payload in their intrusions. The use of a RaaS payload by the “EvilCorp” activity group is likely an attempt by DEV-0243 to avoid attribution to their group, which could discourage payment due to their sanctioned status.

Figure 6. The handover from DEV-0206, which gains access to target organizations and deploys a JavaScript implant, to DEV-0243, which begins hands-on-keyboard actions

On July 26, 2022, Microsoft researchers discovered the FakeUpdates malware being delivered via existing Raspberry Robin infections. Raspberry Robin is a USB-based worm first publicly discussed by Red Canary. The DEV-0206-associated FakeUpdates activity on affected systems has since led to follow-on actions resembling DEV-0243 pre-ransomware behavior.

DEV-0401: China-based lone wolf turned LockBit 2.0 affiliate

Differing from the other RaaS developers, affiliates, and access brokers profiled here, DEV-0401 appears to be an activity group involved in all stages of their attack lifecycle, from initial access to ransomware development. Despite this, they seem to take some inspiration from successful RaaS operations with the frequent rebranding of their ransomware payloads. Unique among human-operated ransomware threat actors tracked by Microsoft, DEV-0401 is confirmed to be a China-based activity group.

DEV-0401 differs from many of the attackers who rely on purchasing access to existing malware implants or exposed RDP to enter a network. Instead, the group heavily utilizes unpatched vulnerabilities to access networks, including vulnerabilities in Exchange, ManageEngine ADSelfService Plus, Confluence, and Log4j 2. Due to the nature of the vulnerabilities they prefer, DEV-0401 gains elevated credentials at the initial access stage of their attack.

Once inside a network, DEV-0401 relies on standard techniques such as using Cobalt Strike and WMI for lateral movement, but they have some unique preferences for implementing these behaviors. Their Cobalt Strike Beacons are frequently launched via DLL search order hijacking. While they use the common Impacket tool for WMI lateral movement, they use a customized version of the wmiexec.py module of the tool that creates renamed output files, most likely to evade static detections. Ransomware deployment is ultimately performed from a batch file in a share and Group Policy, usually written to the NETLOGON share on a Domain Controller, which requires the attackers to have obtained highly privileged credentials like Domain Administrator to perform this action.

Figure 7. Ransomware payloads distributed by DEV-0401 between 2021 and April 2022

Because DEV-0401 maintains and frequently rebrands their own ransomware payloads, they can appear as different groups in payload-driven reporting and evade detections and actions against them. Their payloads are sometimes rebuilt from existing for-purchase ransomware tools like Rook, which shares code similarity with the Babuk ransomware family. In February of 2022, DEV-0401 was observed deploying the Pandora ransomware family, primarily via unpatched VMware Horizon systems vulnerable to the Log4j 2 CVE-2021-44228 vulnerability.

Like many RaaS operators, DEV-0401 maintained a leak site to post exfiltrated data and motivate victims to pay. However, their frequent rebranding sometimes left these systems unready for their victims, with the leak site leading to default web server landing pages when victims attempted to pay. In a notable shift—possibly related to victim payment issues—DEV-0401 started deploying LockBit 2.0 ransomware payloads in April 2022. Around June 6, 2022, the group began replacing Cobalt Strike with the Sliver framework in their attacks.

DEV-0537: From extortion to destruction

An example of a threat actor who has moved to a pure extortion and destruction model without deploying ransomware payloads is an activity group that Microsoft tracks as DEV-0537, also known as LAPSUS$. Microsoft has detailed DEV-0537 actions taken in early 2022 in this blog. DEV-0537 started targeting organizations mainly in Latin America but expanded to global targeting, including government entities, technology, telecom, retailers, and healthcare. Unlike more opportunistic attackers, DEV-0537 targets specific companies with intent. Their initial access techniques include exploiting unpatched vulnerabilities in internet-facing systems, searching public code repositories for credentials, and taking advantage of weak passwords. In addition, there is evidence that DEV-0537 leverages credentials stolen by the Redline password stealer, a piece of malware available for purchase in the cybercriminal economy. The group also buys credentials from underground forums that were gathered by other password-stealing malware.

Once initial access to a network is gained, DEV-0537 takes advantage of security misconfigurations to elevate privileges and move laterally to meet their objectives of data exfiltration and extortion. While DEV-0537 doesn’t possess any unique technical capabilities, the group is especially cloud-aware. They target cloud administrator accounts to set up forwarding rules for email exfiltration and tamper with administrative settings on cloud environments. As part of their goals to force payment of ransom, DEV-0537 attempts to delete all server infrastructure and data to cause business disruption. To further facilitate the achievement of their goals, they remove legitimate admins and delete cloud resources and server infrastructure, resulting in destructive attacks. 

DEV-0537 also takes advantage of cloud admin privileges to monitor email, chats, and VOIP communications to track incident response efforts to their intrusions. DEV-0537 has been observed on multiple occasions to join incident response calls, not just observing the response to inform their attack but unmuting to demand ransom and sharing their screens while they delete their victim’s data and resources.

Defending against ransomware: Moving beyond protection by detection

A durable security strategy against determined human adversaries must include the goal of mitigating classes of attacks and detecting them. Ransomware attacks generate multiple, disparate security product alerts, but these can easily get lost or not be responded to in time. Alert fatigue is real, and SOCs can make their lives easier by looking at trends in their alerts or grouping alerts into incidents so they can see the bigger picture. SOCs can then mitigate alerts using hardening capabilities like attack surface reduction rules. Hardening against common threats can reduce alert volume and stop many attackers before they get access to networks.

Attackers tweak their techniques and have tools to evade and disable security products. They are also well-versed in system administration and try to blend in as much as possible. However, while attacks have continued steadily and with increased impact, the attack techniques attackers use haven’t changed much over the years. Therefore, a renewed focus on prevention is needed to stem the tide.

Ransomware attackers are motivated by easy profits, so adding to their cost via security hardening is key in disrupting the cybercriminal economy.

Building credential hygiene

More than malware, attackers need credentials to succeed in their attacks. In almost all attacks where ransomware deployment was successful, the attackers had access to a domain admin-level account or local administrator passwords that were consistent throughout the environment. Deployment then can be done through Group Policy or tools like PsExec (or clones like PAExec, CSExec, and WinExeSvc). Without the credentials to provide administrative access in a network, spreading ransomware to multiple systems is a bigger challenge for attackers. Compromised credentials are so important to these attacks that when cybercriminals sell ill-gotten access to a network, in many instances, the price includes a guaranteed administrator account to start with.

Credential theft is a common attack pattern. Many administrators know tools like Mimikatz and LaZagne, and their capabilities to steal passwords from interactive logons in the LSASS process. Detections exist for these tools accessing the LSASS process in most security products. However, the risk of credential exposure isn’t just limited to a domain administrator logging in interactively to a workstation. Because attackers have accessed and explored many networks during their attacks, they have a deep knowledge of common network configurations and use it to their advantage. One common misconfiguration they exploit is running services and scheduled tasks as highly privileged service accounts.

Too often, a legacy configuration ensures that a mission-critical application works by giving the utmost permissions possible. Many organizations struggle to fix this issue even if they know about it, because they fear they might break applications. This configuration is especially dangerous as it leaves highly privileged credentials exposed in the LSA Secrets portion of the registry, which users with administrative access can access. In organizations where local administrator rights haven’t been removed from end users, attackers can be one hop away from domain admin just from an initial attack like a banking trojan. Building credential hygiene means developing a logical segmentation of the network, based on privileges, that can be implemented alongside network segmentation to limit lateral movement.
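One practical starting point is simply inventorying which services run under named accounts rather than built-in machine accounts. The sketch below is a minimal, read-only example that assumes a Windows host with PowerShell and CIM available; it is illustrative rather than a complete audit of service privileges.

```python
import csv
import io
import subprocess

# Built-in machine and virtual accounts that don't expose reusable domain credentials.
BUILTIN_PREFIXES = ("localsystem", "nt authority\\", "nt service\\")

def service_logon_accounts():
    """Yield (service name, logon account) pairs using PowerShell and CIM; Windows only."""
    command = ("Get-CimInstance Win32_Service | "
               "Select-Object Name,StartName | ConvertTo-Csv -NoTypeInformation")
    out = subprocess.run(["powershell", "-NoProfile", "-Command", command],
                         capture_output=True, text=True, check=True).stdout
    for row in csv.DictReader(io.StringIO(out)):
        yield row["Name"], (row["StartName"] or "")

if __name__ == "__main__":
    for name, account in service_logon_accounts():
        normalized = account.strip().lower()
        # Anything that isn't a built-in account (for example, a domain service account)
        # deserves a review of what privileges it actually needs.
        if normalized and not normalized.startswith(BUILTIN_PREFIXES):
            print(f"Review service '{name}': runs as '{account}'")
```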

Here are some steps organizations can take to build credential hygiene:

  • Aim to run services as Local System when administrative privileges are needed, as this allows applications to have high privileges locally but can’t be used to move laterally. Run services as Network Service when accessing other resources.
  • Use tools like LUA Buglight to determine the privileges that applications really need.
  • Look for events with EventID 4624 where the logon type is 2, 4, 5, or 10 and the account is highly privileged, like a domain admin. This helps admins understand which credentials are vulnerable to theft via LSASS or LSA Secrets. Ideally, any highly privileged account like a Domain Admin shouldn’t be exposed on member servers or workstations (a minimal parsing sketch follows this list).
  • Monitor for EventID 4625 (Logon Failed events) in Windows Event Forwarding when removing accounts from privileged groups. Adding them to the local administrator group on a limited set of machines to keep an application running still reduces the scope of an attack compared to running them as Domain Admin.
  • Randomize Local Administrator passwords with a tool like Local Administrator Password Solution (LAPS) to prevent lateral movement using local accounts with shared passwords.
  • Use a cloud-based identity security solution that leverages on-premises Active Directory signals to get visibility into identity configurations and to identify and detect threats or compromised identities.
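As referenced in the EventID 4624 item above, the following is a minimal sketch of how such a review could be scripted against an exported event log. The CSV file name, column names, and the privileged-account list are all placeholders for whatever your SIEM or Windows Event Forwarding pipeline actually produces.

```python
import csv

# Placeholder list of accounts you consider highly privileged (for example, domain admins).
PRIVILEGED_ACCOUNTS = {"domadmin"}          # hypothetical account name
RISKY_LOGON_TYPES = {"2", "4", "5", "10"}   # interactive, batch, service, remote interactive

def exposed_privileged_logons(path: str):
    """Yield 4624 events where a privileged account logged on with a risky logon type.

    Assumes a CSV export with EventID, LogonType, TargetUserName, and Computer columns;
    adjust the column names to match your own export format.
    """
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            if (row.get("EventID") == "4624"
                    and row.get("LogonType") in RISKY_LOGON_TYPES
                    and row.get("TargetUserName", "").lower() in PRIVILEGED_ACCOUNTS):
                yield row

if __name__ == "__main__":
    for event in exposed_privileged_logons("security_events.csv"):
        print(f"{event.get('TargetUserName')} exposed on {event.get('Computer', 'unknown host')} "
              f"(logon type {event.get('LogonType')})")
```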

Auditing credential exposure

Auditing credential exposure is critical in preventing ransomware attacks and cybercrime in general. BloodHound is a tool that was originally designed to provide network defenders with insight into the number of administrators in their environment. It can also be a powerful tool for reducing privileges tied to administrative accounts and understanding your credential exposure. IT security teams and SOCs can work together with the authorized use of this tool to enable the reduction of exposed credentials. Any team deploying BloodHound should monitor it carefully for malicious use and can also use this detection guidance to watch for such activity.

Microsoft has observed ransomware attackers also using BloodHound in attacks. When used maliciously, BloodHound allows attackers to see the path of least resistance from the systems they have access to, to highly privileged accounts like domain admin accounts and global administrator accounts in Azure.

Prioritizing deployment of Active Directory updates

Security patches for Active Directory should be applied as soon as possible after they are released. Microsoft has observed ransomware attackers adopting authentication vulnerabilities such as ZeroLogon and PetitPotam within one hour of them being made public, especially when those vulnerabilities are included in toolkits like Mimikatz. When unpatched, these vulnerabilities could allow attackers to rapidly escalate from an entry vector like email to Domain Admin-level privileges.

Cloud hardening

As attackers move towards cloud resources, it’s important to secure cloud resources and identities as well as on-premises accounts. Here are ways organizations can harden cloud environments:

Cloud identity hardening

Multifactor authentication (MFA)

  • Enforce MFA on all accounts, remove users excluded from MFA, and strictly require MFA from all devices, in all locations, at all times.
  • Enable passwordless authentication methods (for example, Windows Hello, FIDO keys, or Microsoft Authenticator) for accounts that support passwordless. For accounts that still require passwords, use authenticator apps like Microsoft Authenticator for MFA. Refer to this article for the different authentication methods and features.
  • Identify and secure workload identities to secure accounts where traditional MFA enforcement does not apply.
  • Ensure that users are properly educated on not accepting unexpected two-factor authentication (2FA).
  • For MFA that uses authenticator apps, ensure that the app requires a code to be typed in where possible, as many intrusions where MFA was enabled (including those by DEV-0537) still succeeded due to users clicking “Yes” on the prompt on their phones even when they were not at their computers. Refer to this article for an example.
  • Disable legacy authentication.

Cloud admins

Addressing security blind spots

In almost every observed ransomware incident, at least one system involved in the attack had a misconfigured security product that allowed the attacker to disable protections or evade detection. In many instances, the initial access for access brokers is a legacy system that isn’t protected by antivirus or EDR solutions. It’s important to understand that the lack of security controls on these systems that have access to highly privileged credentials acts as a blind spot that allows attackers to perform the entire ransomware and exfiltration attack chain from a single system without being detected. In some instances, this is specifically advertised as a feature that access brokers sell.

Organizations should review and verify that security tools are running in their most secure configuration and perform regular network scans to ensure appropriate security products are monitoring and protecting all systems, including servers. If this isn’t possible, make sure that your legacy systems are either physically isolated through a firewall or logically isolated by ensuring they have no credential overlap with other systems.
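A simple way to operationalize that coverage check is to diff an asset inventory against the list of hosts reporting to your antivirus or EDR console. The sketch below assumes two hypothetical CSV exports with a placeholder column name; adapt the file and column names to your own tooling.

```python
import csv

def hostnames(path: str, column: str) -> set[str]:
    """Read one column of hostnames from a CSV export and normalize them."""
    with open(path, newline="", encoding="utf-8") as handle:
        return {row[column].strip().lower()
                for row in csv.DictReader(handle) if row.get(column)}

if __name__ == "__main__":
    # Both file names and the 'hostname' column are placeholders for your own exports.
    inventory = hostnames("asset_inventory.csv", "hostname")      # every system you own
    protected = hostnames("edr_enrolled_hosts.csv", "hostname")   # systems reporting to AV/EDR
    for host in sorted(inventory - protected):
        print("Potential blind spot (no AV/EDR coverage):", host)
```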

For Microsoft 365 Defender customers, the following checklist helps eliminate security blind spots:

  • Turn on cloud-delivered protection in Microsoft Defender Antivirus to cover rapidly evolving attacker tools and techniques, block new and unknown malware variants, and enhance attack surface reduction rules and tamper protection.
  • Turn on tamper protection features to prevent attackers from stopping security services.
  • Run EDR in block mode so that Microsoft Defender for Endpoint can block malicious artifacts, even when a non-Microsoft antivirus doesn’t detect the threat or when Microsoft Defender Antivirus is running in passive mode. EDR in block mode also blocks indicators identified proactively by Microsoft Threat Intelligence teams.
  • Enable network protection to prevent applications or users from accessing malicious domains and other malicious content on the internet.
  • Enable investigation and remediation in full automated mode to allow Microsoft Defender for Endpoint to take immediate action on alerts to resolve breaches.
  • Use device discovery to increase visibility into the network by finding unmanaged devices and onboarding them to Microsoft Defender for Endpoint.
  • Protect user identities and credentials using Microsoft Defender for Identity, a cloud-based security solution that leverages on-premises Active Directory signals to monitor and analyze user behavior to identify suspicious user activities, configuration issues, and active attacks.

Reducing the attack surface

Microsoft 365 Defender customers can turn on attack surface reduction rules to prevent common attack techniques used in ransomware attacks. These rules, which can be configured by all Microsoft Defender Antivirus customers and not just those using the EDR solution, offer significant hardening against attacks. In observed attacks from several ransomware-associated activity groups, Microsoft customers who had the following rules enabled were able to mitigate the attack in its initial stages and prevent hands-on-keyboard activity:

In addition, Microsoft has changed the default behavior of Office applications to block macros in files from the internet, further reducing the attack surface for many human-operated ransomware attacks and other threats.
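For Windows endpoints running Microsoft Defender Antivirus, defenders can at least enumerate which attack surface reduction rules are currently configured and in what mode. The following read-only Python sketch shells out to the built-in Defender PowerShell cmdlet Get-MpPreference; the action-code labels reflect commonly documented values, and mapping each GUID to its rule name is left to Microsoft's published ASR rules reference.

```python
import json
import subprocess

# Read-only query of configured attack surface reduction (ASR) rules on a Windows
# endpoint with Microsoft Defender Antivirus and its built-in PowerShell module.
PS = ("$p = Get-MpPreference; "
      "@{ids = $p.AttackSurfaceReductionRules_Ids; "
      "actions = $p.AttackSurfaceReductionRules_Actions} | ConvertTo-Json")

# Action codes as commonly documented for ASR rules; verify against current documentation.
ACTION_LABELS = {0: "Disabled", 1: "Block", 2: "Audit", 6: "Warn"}

def configured_asr_rules():
    out = subprocess.run(["powershell", "-NoProfile", "-Command", PS],
                         capture_output=True, text=True, check=True).stdout
    data = json.loads(out)
    ids = data.get("ids") or []
    actions = data.get("actions") or []
    # PowerShell serializes single-element arrays as scalars; normalize to lists.
    if isinstance(ids, str):
        ids = [ids]
    if isinstance(actions, int):
        actions = [actions]
    return list(zip(ids, actions))

if __name__ == "__main__":
    rules = configured_asr_rules()
    if not rules:
        print("No ASR rules are configured on this endpoint.")
    for rule_id, action in rules:
        # Look up each GUID in Microsoft's ASR rules reference to get the rule name.
        print(f"{rule_id}: {ACTION_LABELS.get(action, action)}")
```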

Hardening internet-facing assets and understanding your perimeter

Organizations must identify and secure perimeter systems that attackers might use to access the network. External attack surface scanning services, such as RiskIQ, can be used to augment an organization’s own data. Some systems that should be considered of interest to attackers and therefore need to be hardened include:

  • Secure Remote Desktop Protocol (RDP) or Windows Virtual Desktop endpoints with MFA to harden against password spray or brute force attacks.
  • Block remote IT management tools such as TeamViewer, Splashtop, Remote Manipulator System, AnyDesk, Atera Remote Management, and ngrok.io via network blocking, such as perimeter firewall rules, if they aren’t in use in your environment. If these systems are used in your environment, enforce security settings where possible to implement MFA (a log-review sketch follows this list).
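As a complement to network blocking, the sketch below (referenced in the list above) reviews a hypothetical DNS log export for lookups of remote-management tool domains. The file name, column names, and domain list are placeholders; verify the vendors' published domains and extend the list for your environment.

```python
import csv

# Domains to watch for: ngrok.io is named in this article; the others are example
# vendor domains that you should verify and extend before relying on this list.
WATCHED_DOMAINS = ("ngrok.io", "teamviewer.com", "anydesk.com")

def flag_remote_tool_lookups(path: str):
    """Yield (client, query) pairs for DNS lookups that match the watched domains.

    Assumes a CSV export with 'client' and 'query' columns; adjust to your log format.
    """
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            query = row.get("query", "").lower().rstrip(".")
            if any(query == d or query.endswith("." + d) for d in WATCHED_DOMAINS):
                yield row.get("client", "unknown"), query

if __name__ == "__main__":
    for client, query in flag_remote_tool_lookups("dns_queries.csv"):
        print(f"{client} resolved {query}; confirm whether this remote access tool is sanctioned")
```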

Ransomware attackers and access brokers also use unpatched vulnerabilities, whether already disclosed or zero-day, especially in the initial access stage. Even older vulnerabilities were implicated in ransomware incidents in 2022 because some systems remained unpatched or partially patched, or because access brokers had established persistence on a previously compromised system despite it later being patched.

Some observed vulnerabilities used in campaigns between 2020 and 2022 that defenders can check for and mitigate include:

Ransomware attackers also rapidly adopt new vulnerabilities. To further reduce organizational exposure, Microsoft Defender for Endpoint customers can use the threat and vulnerability management capability to discover, prioritize, and remediate vulnerabilities and misconfigurations.

Microsoft 365 Defender: Deep cross-domain visibility and unified investigation capabilities to defend against ransomware attacks

The multi-faceted threat of ransomware requires a comprehensive approach to security. The steps we outlined above defend against common attack patterns and will go a long way in preventing ransomware attacks. Microsoft 365 Defender is designed to make it easy for organizations to apply many of these security controls.

Microsoft 365 Defender’s industry-leading visibility and detection capabilities, demonstrated in the recent MITRE Engenuity ATT&CK® Evaluations, automatically stop most common threats and attacker techniques. To equip organizations with the tools to combat human-operated ransomware, which by nature takes a unique path for every organization, Microsoft 365 Defender provides rich investigation features that enable defenders to seamlessly inspect and remediate malicious behavior across domains.

Learn how you can stop attacks through automated, cross-domain security and built-in AI with Microsoft 365 Defender.

In line with the recently announced expansion into a new service category called Microsoft Security Experts, we’re introducing the availability of Microsoft Defender Experts for Hunting for public preview. Defender Experts for Hunting is for customers who have a robust security operations center but want Microsoft to help them proactively hunt for threats across Microsoft Defender data, including endpoints, Office 365, cloud applications, and identity.

Join our research team at the Microsoft Security Summit digital event on May 12 to learn what developments Microsoft is seeing in the threat landscape, as well as how we can help your business mitigate these types of attacks. Ask your most pressing questions during the live chat Q&A. Register today.