Happy Holidays and Happy New Year from Elite Paradigm LLC

During this season, we find ourselves reflecting on the past year and the customers who’ve helped shape our success. In this spirit, the team at ELITE PARADIGM wishes you and yours a happy holiday season!

Get connected with Elite Paradigm LLC for your 2024 IT and cybersecurity needs.

Enable Unified Analytics

HPE GreenLake edge-to-cloud platform rolls out industry’s first cloud-native unified analytics and data lakehouse cloud services optimized for hybrid environments

IN THIS ARTICLE

  • First cloud-native solution to bring Kubernetes-based Apache Spark analytics and the simplicity of unified data lakehouses using Delta Lake on-premises 
  • Only data fabric to combine S3-native object store, files, streams and databases in one scalable data platform
  • Cloud-native unified analytics platform enables customers to modernize legacy data lakes and warehouses without complex data migration, application rewrites or lock-in
  • 37 solution partners support HPE Ezmeral with 15 joining the HPE Ezmeral Partner Program in the past 60 days

Built on HPE Ezmeral software, the new cloud services give analytics and data science teams frictionless access to data from edge to cloud and a unified platform for accelerated Apache Spark and SQL.

In the Age of Insight, data has become the heart of every digital transformation initiative in every industry, and data analytics has become critical to building successful enterprises. Simply put, data drives competitive advantage. However, for most organizations, significant challenges remain in successfully executing data-first modernization initiatives. Until now, organizations have been stuck with legacy analytics platforms that were either built for a pre-cloud era and lack cloud-native capabilities, or require complex migrations to public clouds, risking vendor lock-in and high costs and forcing the adoption of new processes. This situation has left the big data and analytics software market1 (which IDC forecasts will reach $110 billion by 2023) ripe for disruption.

Today, I am excited to announce two disruptive HPE GreenLake cloud services that will enable customers to overcome these trade-offs.  There are four big value propositions we optimized for:
 

1. A seamless experience for a variety of analytics, SQL, and data science users

2. Top-notch performance

3. Choice and an open ecosystem by leveraging pure open source in a hybrid environment

4. An intense focus on reducing TCO by up to 35% for many of the workloads we are targeting

Built from the ground up to be open and cloud-native, our new HPE GreenLake for analytics cloud services will help enterprises unify, modernize, and analyze all of their data, from edge-to-cloud, in any and every place it’s stored. Now analytics and data science teams can leverage the industry’s first cloud-native solution on-premises, scale up Apache Spark lakehouses, and speed up AI and ML workflows. Today’s news is part of a significant set of new cloud services for the HPE GreenLake edge-to-cloud platform, announced today in a virtual launch event from HPE. The new HPE GreenLake for analytics cloud services include the following:

HPE Ezmeral Unified Analytics

HPE now offers an alternative for customers previously limited to solutions in a hyperscale environment by delivering modern analytics on-premises, with up to 35%2 greater cost efficiency than the public cloud for data-intensive, long-running jobs typical in mission-critical environments. Available on the HPE GreenLake edge-to-cloud platform, HPE Ezmeral Unified Analytics is the industry’s first unified, modern, hybrid analytics and data lakehouse platform.

We believe it is the first solution in the industry to architecturally optimize for and leverage three key advancements simultaneously:
 

1. Optimize for a Kubernetes-based Spark environment for on-premises deployment, providing the cloud-native elasticity and agility customers want

2. Handle diverse data types (files, tables, streams, and objects) in one consistent platform to avoid silos and make data engineering easier

3. Embrace the edge by enabling a data platform environment that can span from edge to hybrid cloud

Instead of requiring all of your data to live in a public cloud, HPE Ezmeral Unified Analytics is optimized for on-premises and hybrid deployments, and uses open source software to ensure as-needed data portability. We designed our solution with the flexibility and scale to accommodate enterprises’ large data sets, or lakehouses, so customers have the elasticity they need for advanced analytics, everywhere.
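
To make the Kubernetes-based Spark and Delta Lake lakehouse concepts above concrete, here is a minimal PySpark sketch rather than an HPE-specific recipe. It assumes a reachable Kubernetes API server, a Spark container image, and the open-source delta-spark package on the classpath; the master URL, image name, and storage path are placeholders.

```python
from pyspark.sql import SparkSession

# Minimal sketch: Spark on Kubernetes writing a Delta Lake table.
# The master URL, container image, and storage path below are placeholders.
spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .master("k8s://https://kubernetes.example.internal:6443")  # hypothetical API server
    .config("spark.kubernetes.container.image", "example/spark:3.3")  # hypothetical image
    .config("spark.executor.instances", "4")
    # Delta Lake integration (requires the delta-spark package on the classpath)
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

events = spark.createDataFrame(
    [("sensor-1", 21.5), ("sensor-2", 19.8)], ["device", "temperature"]
)

# Write a Delta table to a shared data-fabric path (placeholder location).
events.write.format("delta").mode("append").save("/mnt/datafabric/events")

# Batch and streaming jobs can then read the same lakehouse table.
spark.read.format("delta").load("/mnt/datafabric/events").show()
```

Having batch and streaming jobs share one Delta table is the essence of the lakehouse pattern referenced above.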

Just a few key advantages of HPE Ezmeral Unified Analytics include:

  • Dramatic performance acceleration: Together, the NVIDIA RAPIDS Accelerator for Apache Spark and HPE Ezmeral can accelerate Spark data prep, model training, and visualization by up to 29x3, allowing data scientists and engineers to build, develop, and deploy analytics solutions into production at scale, faster (a configuration sketch follows this list).
  • Next-generation architecture: We have built on Kubernetes and added value through an orchestration plane to make it easy to get the scale-out elasticity customers want. Our multi-tenant Kubernetes environment supports a compute-storage separation cloud model, providing the combined performance and elasticity required for advanced analytics, while enabling users to create unified real-time and batch analytics lakehouses with Delta Lake integration.
  • Optimized for data analytics: Enterprises can create a unified data repository for use by data scientists, developers, and analysts, including usage and sharing controls, creating the foundation for a silo-free digital transformation that scales with the business as it grows and reaches new data sources. Support for NVIDIA Multi-Instance GPU technology enables enterprises to support a variety of workload requirements and maximize efficiency with up to seven instances per GPU.
  • Enhanced collaboration: Integrated workflows from analytics to ML/AI span hybrid clouds and edge locations, including native open-source integrations with Airflow, MLflow, and Kubeflow technologies to help data science, data engineering, and data analytics teams collaborate and deploy models faster.
  • Choice and no vendor lock-in: On-premises Apache Spark workloads offer the freedom to choose the deployment environments, tools, and partners needed to innovate faster.
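
As an illustration of how the GPU acceleration called out in the first bullet is typically switched on, the following sketch enables the open-source RAPIDS Accelerator plugin in a PySpark session. The plugin class and configuration keys come from the spark-rapids project; jar packaging, image contents, and GPU scheduling details are environment-specific assumptions rather than anything specified in this announcement.

```python
from pyspark.sql import SparkSession

# Sketch: enabling the RAPIDS Accelerator for Apache Spark.
# Assumes the spark-rapids plugin jar and a compatible GPU runtime are already
# available to the cluster; packaging details vary by environment.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-spark-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # RAPIDS plugin entry point
    .config("spark.rapids.sql.enabled", "true")              # run supported SQL/DataFrame ops on GPU
    .config("spark.executor.resource.gpu.amount", "1")       # one GPU per executor
    .getOrCreate()
)

# A plain DataFrame job; supported operators are transparently executed on the GPU.
df = spark.range(0, 10_000_000).selectExpr("id", "id % 10 AS bucket")
df.groupBy("bucket").count().show()
```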

“Today’s news provides the market with more choice in deploying their modern analytics initiatives with a hybrid-native solution, enabling faster access to data, edge to cloud,” said Carl Olofson, Research Vice President, IDC. “HPE Ezmeral is advancing the data analytics market with continued innovations that fill a gap in the market for an on-premises unified analytics platform, helping enterprises unlock insights to outperform the competition.”

HPE Ezmeral Data Fabric Object Store

Our second disruptive new solution is the HPE Ezmeral Data Fabric Object Store: the industry’s first Data Fabric to combine S3-native object store, files, streams and databases in one scalable data platform that spans edge-to-cloud. Available on bare metal and Kubernetes-native deployments, HPE Ezmeral Data Fabric Object Store provides a global view of an enterprise’s dispersed data assets and unified access to all data within a cloud-native model, securely accessible to the most demanding data engineering, data analytics, and data science applications. Designed with native S3 API, and optimized for advanced analytics, HPE Ezmeral Data Fabric Object Store enables customers to orchestrate both apps and data in a single control plane, while delivering the best price for outstanding performance.
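
Because the object store exposes a native S3 API, standard S3 tooling should be able to talk to it. The sketch below uses the boto3 library against a placeholder endpoint; the endpoint URL, credentials, and bucket name are illustrative assumptions, not published product values.

```python
import boto3

# Sketch: using a standard S3 client against an S3-compatible object store.
# Endpoint, credentials, and bucket name are placeholders for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://datafabric.example.internal:9000",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.create_bucket(Bucket="analytics-staging")

# Upload a small object, then read it back.
s3.put_object(Bucket="analytics-staging", Key="raw/readings.csv",
              Body=b"device,temperature\nsensor-1,21.5\n")
obj = s3.get_object(Bucket="analytics-staging", Key="raw/readings.csv")
print(obj["Body"].read().decode())
```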

We are proud of the innovation that has resulted in what we believe is an industry first: A consistent data platform which is able to handle a diversity of data types, is optimized for analytics, and is able to span from edge to cloud.

Several key features include:

  • Optimized performance for analytics: Designed as a scalable object store, HPE Ezmeral Data Fabric Object Store is the industry’s only solution that supports files, streams, databases, and now objects within a common persistent store, optimized for best performance across edge-to-cloud analytics workloads.
  • Globally synchronized edge-to-cloud data: Clusters and data are orchestrated together to support dispersed edge operations, and a single global namespace provides simplified access to edge-to-cloud topologies from any application or interface. While data can be mirrored, snapshotted, and replicated, advanced security and policies ensure the right people and applications have access to the right data when they need it.
  • Continuous scaling: Enterprises can grow as needed by adding nodes and configuring policies for data persistence while the data store handles the rest.
  • Performance and cost balance: Adapting to small or large objects, auto-tiering policies automatically move data from high-performance storage to low-cost storage.

Expanding the HPE Ezmeral Partner Ecosystem

We first introduced the HPE Ezmeral Partner Program in March 2021, enabling the rapid creation of streamlined, customized analytics engines and environments based on full-stack solutions validated by trusted ISV partners. With 76% of enterprises expecting to be using on-premises, third-party-managed private cloud infrastructure for data and analytics workloads within the next year4, we’re excited to announce six new ISV partners today, including NVIDIA NGC, Pepperdata, Confluent, Weka, Ahana and gopaddle.

“NVIDIA’s contributions to Apache Spark enable enterprises to process data orders of magnitude faster while significantly lowering infrastructure costs,” said Manuvir Das, head of Enterprise Computing, NVIDIA. “Integrating the NVIDIA RAPIDS Accelerator for Apache Spark and NVIDIA Triton Inference Server into the HPE Ezmeral Unified Analytics Platform streamlines the development and deployment of high-performance analytics, helping customers gain immediate results at lower costs.” 

“Today, companies are using Spark to build their high-performance data applications, accelerating tens to thousands of terabytes of data transitioning from data lakes to AI data modeling,” said Joel Stewart, Vice President Customer Success, Pepperdata. “Pepperdata on HPE Ezmeral Runtime Enterprise can help reduce operating costs and provide deep insights into their Spark applications to improve performance and reliability.”

Since the HPE Ezmeral Partner Program launched, we’ve added 37 solution partners5 to support our customers’ core use cases and workloads, including big data and AI/ML use cases. The Partner Program is also adding support today for open-source projects such as Apache Spark, offering enterprises the ability to transition workloads to a modern, cloud-native architecture.

HPE GreenLake edge-to-cloud platform and HPE Ezmeral are transforming enterprises – and HPE

As an important component of HPE GreenLake cloud services, the HPE Ezmeral software portfolio helps enterprises such as GM Financial and Bidtellect advance modern data analytics initiatives. Since it was first introduced in June 2020, HPE Ezmeral has secured dozens of new customers, with significant competitive wins over both traditional big data players and public cloud vendors.

Since vast volumes of applications and data will remain on-premises and at the edge as enterprises continue their digital transformations, our elastic, unified analytics solutions will help customers extract maximum value from their data, wherever it lives and moves, from edge to cloud. We look forward to working with you to make the most of your data as the Age of Insight continues to reshape enterprises around the world.

Availability and Additional Resources

HPE Ezmeral Unified Analytics and HPE Ezmeral Data Fabric Object Store will be available as HPE GreenLake cloud services beginning November 2021 and Q1 2022, respectively.

Learn more about today’s news from the experts in our deep-dive sessions.

HPE and the HPE logo are trademarks or registered trademarks of HPE and/or its affiliates in the U.S. and other countries.  Third-party trademarks mentioned are the property of their respective owners. 

1 IDC, Worldwide Big Data and Analytics Software Forecast, 2021–2025, July 2021

2 Based on internal HPE competitive analysis, September 2021

3 Technical Paper: HPE Ezmeral for Apache Spark with NVIDIA GPU, published September 2021

4 451 Research, Voice of the Enterprise: Data & Analytics, Data Platforms 2021

5 Internal HPE documentation on list of partners maintained by the group

Networks Are Becoming Cloud-centric. Network Security Must Adapt.

Today’s digital journey is long and complex, creating equal parts opportunity and risk for organizations. The pandemic has added complexity to an already complicated world, and the digital landscape has been no exception. Networks have further expanded into the cloud, and organizations have reinvented themselves even while reacting and responding to new circumstances – and new cyberthreats. One question is top of mind: Where do we go from here? It’s clear that cybersecurity is no longer simply a defense. In a world that’s moving from cloud-ready to cloud-centric, cybersecurity has become a critical component in the foundation of the enterprise.

The physical world and the digital world have never been more interconnected and interdependent. You’ve no doubt seen the evidence – employees moving out of their offices, sensitive data and workloads leaving the friendly confines of the data center, legacy and SaaS applications needing to peacefully coexist, and every “thing” connecting to the Internet of Things. Network security is evolving to meet these challenges, and it’s critical to have the right cybersecurity strategy and partner.

Limitations of Legacy Approaches in a Cloud-Centric World

Legacy approaches to securing the network and cloud applications are broken due to several critical limitations:

  • Disjointed, complex SaaS security: Current cloud access security broker (CASB) solutions are complex to deploy and maintain, exist separately from the rest of the security infrastructure, and result in a high total cost of ownership (TCO). They also offer subpar security as threats morph and more data and applications reside in a “distributed cloud” spread across thousands of SaaS applications, multiple cloud providers, and on-premises locations.
  • Reactive security: Legacy network security solutions still rely on a signature-based approach that requires security analysts to hunt down zero-day attacks in retrospect, rather than placing machine learning (ML) inline for real-time prevention. Meanwhile, attackers are using automation and the computing power of the cloud to constantly morph threats. Over the last decade, the number of new malware variants has increased from thousands per day to millions per day. In addition, hundreds of thousands of new malicious URLs are created daily, and security based on URL databases must evolve.
  • Lack of holistic identity-based security: The identity of users is no longer confined to on-premises directories; 87% of organizations use or plan to move to a cloud-based directory service to store user identities. Organizations need to configure, maintain, and synchronize their network security ecosystem with the multiple identity providers an enterprise uses, which can be time-consuming and resource-intensive. Network security tools don’t apply identity-based security controls consistently, which creates a significant barrier to adopting Zero Trust measures to protect organizations against data breaches. As more people work from anywhere, they require fast, always-on access to data and applications in the distributed cloud, regardless of location.
  • Trading performance for security: Users are accessing more data-rich applications hosted in the cloud, and the performance of network security devices degrades severely when legacy security services and decryption are enabled. That’s why, too often in the past, organizations have been forced to choose between performance, to deliver a good user experience, and security, to keep data and users safe.

Where Network Security Will Go From Here

Today’s distributed cloud operates at hyperscale – storing vast amounts of data and applications, and using near-infinite nodes to store that data. Traffic, especially web traffic, flowing between users and this distributed cloud is growing tremendously. The latest numbers from Google show that up to 98% of this traffic is being encrypted. In order to offer agility and flexibility, organizations moving toward this distributed cloud model aspire to become “cloud like,” providing on-demand access to resources and applications at hyperscale.

To meet the new challenges, security teams need cloud-centric network security solutions that:

a. See and control all applications, including thousands of SaaS applications that employees access daily – and the many new ones that keep becoming available at an incredible cloud velocity – using a risk-based approach for prioritization that takes into account data protection and compliance.

b. Stop known and unknown threats, including zero-day web attacks, in near real time.

c. Enable access for the right users, irrespective of where user identity data is stored – on-premises, in the cloud or a hybrid of both.

d. Offer comprehensive security, including decryption, without compromising performance, allowing security to keep pace with growing numbers of users, devices and applications.

e. Have integrated, inline and simple security controls that are straightforward to set up and operate.

Palo Alto Networks has a 15-year history of delivering best-in-class security. We’re here to help secure the next steps on the digital journey, wherever they take us. Whether you’re a seasoned traveler or just starting out, we can help our customers find a new approach to network security – one that better matches today’s cloud-centric networks. What’s next for us will be revealed soon. Follow us on LinkedIn to be the first to know about our upcoming events.

Transforming energy at Suncor

Canada’s largest integrated energy company is using cloud to reinvent every aspect of its business.

It’s not too much of a stretch to say that at my company, Suncor Energy, we are using technology to rethink and reinvent every aspect of our business.

Headquartered in Calgary, the largest city in Canada’s beautiful western Prairie Province of Alberta, Suncor is Canada’s premier integrated energy company and the fifth largest energy company in North America.

Our energy operations range from traditional fossil fuels to significant investments in wind, biofuels, and other alternative, renewable energy sources. Perhaps most noteworthy is that we feature a unique, highly integrated business model from upstream to downstream and basically everything in between. Not only do we get the oil, gas, and renewables from their sources, but we also operate a network of more than 1,500 retail and wholesale gas outlets through our subsidiary, Petro-Canada. In fact, our presence is so ubiquitous north of the 49th parallel that it has earned our subsidiary the nickname Canada’s Gas Station.

We take pride in our leadership in the energy sector. So it is important for us that we remain on the leading edge of technology transformation as well—not from a technology for technology’s sake perspective but from a business perspective, to ensure that our technology provides the agility and responsiveness we need to support the speed of business in a digital world.

Ultimately, it comes down to how being people-focused, technology-enabled, and data-informed is transforming our business and our organization.

Our transformation has been—and continues to be—a journey. And, as we’ve advanced, we’ve learned some valuable lessons that can be carried forward. Here are a few of them.

People, culture, and leadership

When we first started our cloud transformation, we learned quickly that success requires a different approach than the traditional IT business model.

It was apparent to those of us in the IT domain that investing in public cloud was the right thing to do. But when we started to interact with our business leaders, we found that the biggest pain point the business had was less about the technologies involved and more about how IT operated itself.

They claimed that we moved too slowly, that we did not move at the speed of business. So our initial experiments with public cloud drove us to make changes to how we work so that we could build the capabilities that would make IT more agile and demonstrate the benefits of that new agility to the business.

This kind of change to the way an organization works demands a cultural shift that has to catch up to the technology transformation underway. To execute a change of this magnitude, you need buy-in from all the stakeholders at every level of the organization.

Leadership and communications have proved critical to establishing and maintaining this buy-in. With the benefit of hindsight, we could, as a group, have better explained the changes we were driving toward. As a result, there was some hesitancy across the organization and even a lack of full commitment.

Please read: Why compute management is moving to the cloud

This was not necessarily a bad thing. Sometimes the best things a business does are from the grassroots up, and you need some of that friction to finally have the right person in the right position make it known across the organization that this is the direction we’re taking. Those opinion leaders have been an amazing source of motivation for the rest of the population.

The key role of governance

Our transformation partners at Hewlett Packard Enterprise helped us establish an interdisciplinary team to help guide and manage our cloud strategy. This cloud governance office brought together stakeholders from every side of the IT organization—security, application, traditional infrastructure, and architecture—as well as business and communications leaders.

I’ve been a part of past cloud transformations where that governance was more of an afterthought. Within a year, those programs were a mess, with no direction or understanding of what they were doing or why—just out of control. It was such a breath of fresh air that we had that governance piece figured out in advance. As you progress through your journey, things will change. But ultimately, having that governance perspective, that definition, has been a huge part of our progress.

HPE GreenLake’s edge to cloud platform empowers organizations to harness the power of all their data, regardless of location.

Learn more

The right mix of application platforms

When we started our cloud journey, we had five data centers and 4,000 servers, all on premises. Now, we have three public clouds, with 18 containerized applications and three data centers.

We’ve decided that a hybrid approach is right for us, for a variety of reasons. In some cases, it’s latency. We’re in the middle of the prairie here in Alberta, and there are no public cloud data centers that are sub-80 milliseconds anywhere near us. There are also regulatory issues around where we maintain integrations with the electrical grid.

That said, our on-prem estate needs to be as close as possible to the public cloud in terms of being consumption-based, flexible, and highly automated. So one of our goals is to have a true, next-gen on-prem cloud by the end of the year, which will incorporate nearly all our remaining on-prem systems.

This is where our new muscles in cloud governance and culture will help us. Since cloud-first culture, skills, and ways of working are just as important as, if not more important than, the technology platforms, having that cloud-first approach across our IT operations is really a critical enabler of the success of our on-prem cloud.

Please read: How to optimize databases for a hybrid cloud world

In terms of modernizing the technology, we’ve done everything from classic “lift and shift” to some more modern re-platforming projects. It’s been interesting to see how we can take a modern approach to even some of the older estate, like Unix mainframes and so on.

But that’s one of the unexpected benefits of cloud: It reveals opportunities you never thought you had. A perfect illustration of this point is what we discovered when we needed to prepare our environment for the initial public cloud migrations. We hadn’t looked at our estate in probably 10 years because we used a managed service provider. Whatever we had was just a black box. So we needed to take a cold, hard look at what we had—those 4,000-plus servers—to gain a real data-driven view of what the right mix of platforms should be for us.

And guess what? When we looked at the first 600, we decommissioned 400 of them because we found they were not being used.

So a huge and unexpected opportunity our cloud journey presented was an understanding of how our estate was being used, what should go to the public cloud, what should stay in our on-prem cloud, and what was just no longer needed—saving us in operating costs, licensing fees, management overhead, and technical debt.

Bringing automation and visibility to operations

Automation is a key goal, and it’s a necessity if you want to move at the speed of cloud. Our high-level goal is to have at least 80 percent of all provisioning or requests handled by self-service. My team might audit them, but they will be handled automatically. We realize that, as a large business, we’re not going to be able to go beyond around 85 percent self-service because there are just too many complex requirements, integrations, multiple networks, and so on.

We are also well on the way to creating an in-house platform team. Ever since we began our cloud journey, we have been moving away from what was a managed services environment. We are transitioning to an entirely in-house IT organization, and already, it’s removing a huge roadblock from our ability to move faster and get things done—the original issues the business side of the house had with our IT organization when our cloud journey first began.

One of the early key benefits we saw in our public cloud migration was that it opened our eyes to the value of visibility and insights. The sheer process of going to the cloud also gave us a “tagging” strategy. We now understand exactly what each business unit is paying for or using every month. That understanding is huge for us and addresses what was probably our No. 1 gap prior to launching our cloud journey.
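
As a hedged illustration of that tag-based visibility, the short sketch below aggregates a hypothetical billing export by a business-unit tag. The file name, column names, and tag values are invented for the example; any cloud provider’s cost report could play the same role.

```python
import pandas as pd

# Sketch: summarizing monthly cloud spend by a "business_unit" tag.
# "billing_export.csv" and its columns are hypothetical placeholders for
# whatever cost report a cloud provider or FinOps tool produces.
billing = pd.read_csv("billing_export.csv")  # columns: resource_id, business_unit, month, cost

monthly_by_unit = (
    billing.groupby(["month", "business_unit"])["cost"]
    .sum()
    .reset_index()
    .sort_values(["month", "cost"], ascending=[True, False])
)

print(monthly_by_unit.to_string(index=False))
```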

Please read: Enterprise security moves to the edge

Now, we want to apply that visibility across our operations so that we can track our assets and know who the owners are and their lifecycle plans. And as a result, we can make better predictions.

Innovation

A huge play for us right now is our investment in advanced analytics and IoT.

We currently have use cases in our pipeline areas and our refineries where we can respond very quickly, in an automated way, to sensor data. We’ve extended this same IoT initiative all the way downstream to our Petro-Canada retail gas pump sites. On the retail side, it creates the capability to use sensor data to do real Amazon-like things, such as micro-targeted marketing, and to pursue other meaningful opportunities for business revenue and efficiency.

That’s a huge benefit, and it’s magnified even further by the fact that Suncor now has all this data basically in the same place. We have pipeline sensor data, IT data, finance data, and all sorts of consumer and retail user data. It’s all in different types of data lakes and places, but ultimately, it’s all in the same usable space, and that opens a host of opportunities for Suncor’s businesses. We now have the capability to assemble all this data and, with our growing analytics capabilities, derive new data-informed value from it.

Final thoughts

When we first embarked on our cloud journey, we said we wanted to transform our organization from top to bottom. We made it part of an initiative we called Suncor 4.0. Our goal was to use our cloud journey and digital transformation to become a new and better organization characterized as people-focused, data-informed, and technology-enabled.

As a person who loves to play guitar and considers himself a musician first and an IT professional second, I can confidently say that when it comes to meeting our business goals and our cloud journey, we’re right in tune.

Evolve your operating model to succeed at digital transformation

You don’t need to be born in the cloud to have a cloud operating model.

While organizations embrace a cloud strategy for many reasons, one that stands out is a desire to get IT operations working more efficiently. Companies want to streamline value chains. They want to do more work and do it faster, with less friction.

Cloud-based operating models, done the right way, enable just that. In fact, for many organizations we work with, it is the adoption of the new ways of working that public cloud demands that creates the greatest benefits to business agility, and not merely the technology platform itself.

With this in mind, one of the top initiatives our customers prioritize is the development of cloud operating practices across their IT organizations and their entire IT ecosystem. Ironically, however, the operations domain is also one of the areas in which organizations struggle the most to generate momentum.

Breaking with the past

The biggest challenge organizations face is the need to shed a troublesome piece of baggage: their legacy operating models. Too many companies try to adopt cloud platforms without changing the way they work. They assume they can follow the same procedures they always have and make simple tweaks.

What they end up with is a collection of bad habits, tribal knowledge, and sets of processes, procedures, and tools that don’t respond well to the demands of a modern ecosystem. And, in turn, they fail to leverage operational learnings across their operations, missing opportunities to deliver agility improvements across the entire organization.

Organizations that are born in the cloud sometimes avoid developing these bad habits. They tailor their operating models to the way business is done in the cloud. They align their operations with other organizational domains, incorporating DevOps, enabling innovation, sustaining applications, and driving strategy and governance.

But companies that have lengthy histories and may not have the luxury of spinning up new models from scratch have work to do to get their operations functioning at such a level. They face steep learning curves, including getting comfortable with new tooling, shared-responsibility models and practices, and job functions.

Please read: New tools boost automation, self-optimization of IT operational services

They have to learn how complex cloud transformations are so they can equip their teams with the resources needed to carry out the necessary work. Rather than relegate the work of getting to a cloud operating model to a side-of-desk task, they need to dedicate a group to adopting the new cloud-everywhere operating model.

If IT operations don’t prioritize the optimization of a comprehensive cloud operating model, some application owners will head to the cloud by themselves.

Where things stand

Based on our engagements with customers, we’ve evaluated enterprise progress in the eight domains making up the HPE Edge to Cloud Adoption Framework.

Figure 1: The Operations domain in the HPE Edge to Cloud Adoption Framework

Operations is a domain that most customers prioritize. It’s a tangible and significant part of most IT organizations, and one where progress is being made overall. Operations is one of the domains where we see the greatest overall progress in capability across our engagements, with an average maturity of 2.1 on a scale of 1 to 5, where a score of 3 indicates a cloud-ready organization.

However, there are some subdomains within operations where organizations are experiencing greater difficulty in achieving traction (see Figure 2):

  • Service operations
  • Platform operations
  • Pipeline operations

Figure 2: Organizational maturity in the Operations domain

Let’s take a look at what’s behind these challenges and what can be done to overcome them.

Service operations: Slow, ineffective incident response

Many of the organizations we engage with have struggled to advance their incident response capability to the point where it can work at the velocity of cloud. Though incident response is considered a security function, the operational component is an important measure of a company’s overall success in the cloud.

Companies with immature service operations often fall short in their ability to be proactive and prevent events from happening in the first place. And if a security event takes place, they will often exhibit slow, ineffective incident responses. The time it takes to detect, address, and resolve an issue often places them outside of their service-level agreement (SLA) targets due to a lack of automation and orchestration.

Please read: The fundamentals of security incident response—during a pandemic and beyond

To be proactive, companies need to set up systems that leverage metadata and characteristics from historical events. They can then learn from those events and build out proactive incident responses. Techniques like comprehensive logging, event curation and correlation, and forensics help determine root causes and prevent issues from recurring.

The flip side is that when an event has taken place, organizations take a long time to respond. Without automation and orchestration, you can’t scale to the volume of elements that need to be remediated during an incident. If the response is automated, remediation can happen within a relatively short window.
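
To ground the automation point, here is a minimal sketch of an automated triage loop: it groups events by a shared indicator and dispatches a remediation step once a threshold is crossed. The event records, threshold, and isolate_host helper are hypothetical stand-ins, not any specific product’s API.

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical event records; in practice these would come from a SIEM or log pipeline.
events: List[Dict] = [
    {"host": "web-01", "indicator": "203.0.113.50", "type": "failed_login"},
    {"host": "web-01", "indicator": "203.0.113.50", "type": "failed_login"},
    {"host": "db-02",  "indicator": "203.0.113.50", "type": "port_scan"},
]

ALERT_THRESHOLD = 3  # correlated events before the automated response kicks in

def isolate_host(host: str) -> None:
    # Placeholder remediation step; a real playbook would call an EDR or firewall API.
    print(f"[action] isolating {host} pending analyst review")

def triage(incoming: List[Dict]) -> None:
    by_indicator: Dict[str, List[Dict]] = defaultdict(list)
    for event in incoming:
        by_indicator[event["indicator"]].append(event)

    for indicator, related in by_indicator.items():
        if len(related) >= ALERT_THRESHOLD:
            hosts = sorted({e["host"] for e in related})
            print(f"[alert] {indicator} seen in {len(related)} events across {hosts}")
            for host in hosts:
                isolate_host(host)

triage(events)
```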

Platform operations: Lack of clearly defined infrastructure patterns to deliver consistent services

Organizations with a low level of cloud operational maturity lack a set of finite and clearly defined standards for developing applications and corresponding infrastructure. Without these patterns, they struggle to master the art of automating the provisioning of infrastructure and supporting the corresponding services that make applications tick.

This ties back to the legacy hangover issue. Organizations that are used to creating applications outside of cloud-native environments by default model their application architectures to support customized apps that may or may not generate high degrees of value. These architectures can’t scale to satisfy the emerging demands of the business.

As a result, operators find themselves having to provide white-glove service and custom work on an ongoing basis for customized applications. If they started from scratch, they’d be ready to scale and could focus on the application archetypes that have a certain amount of critical mass. What starts to happen is that organizations learn how to establish enterprise standards and policies and to leverage cloud-native tools and that newfound operating model. It becomes easier to bring others on board because they have that consistency and they generate results.

The new operating model creates its own gravity and starts to attract the attention of application and business owners. Operations teams can provide a valuable service as long as application owners play by a set of standards and rules that support scale. Otherwise, organizations are just translating bad habits into the new cloud model.
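
As a sketch of what a finite, clearly defined set of infrastructure patterns can look like in code, the snippet below models a small catalog of approved patterns that a provisioning pipeline could validate requests against. The pattern names, sizes, and policy fields are illustrative assumptions, not an HPE or customer standard.

```python
from dataclasses import dataclass

# Sketch: a small catalog of approved infrastructure patterns.
# Names, sizes, and policies are illustrative placeholders.
@dataclass(frozen=True)
class InfraPattern:
    name: str
    cpu_cores: int
    memory_gb: int
    storage_class: str
    backup_policy: str

CATALOG = {
    "web-stateless": InfraPattern("web-stateless", 4, 8, "standard-ssd", "none"),
    "api-standard":  InfraPattern("api-standard", 8, 16, "standard-ssd", "daily"),
    "db-tier1":      InfraPattern("db-tier1", 16, 64, "replicated-nvme", "hourly"),
}

def validate_request(pattern_name: str) -> InfraPattern:
    """Reject any request that is not one of the approved patterns."""
    try:
        return CATALOG[pattern_name]
    except KeyError:
        raise ValueError(
            f"'{pattern_name}' is not an approved pattern; choose from {sorted(CATALOG)}"
        )

print(validate_request("db-tier1"))
```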

Pipeline operations: Inconsistent management of container images

Containers have changed the way organizations develop and deploy applications. Their lightweight structure and portability make them ideal packaging tools for organizations looking to add new features quickly and cost effectively. Still, a container operation is only as sound as the images that make up the containers themselves.

Organizations that are early in their cloud transformations tend to struggle with image integrity. They haven’t set up systems for hardening container images in a timely and repeatable fashion. They haven’t set up an automated testing and compliance certification process. They haven’t created secure container registries that identify images for use in the continuous integration and continuous delivery (CI/CD) pipeline, retire out-of-state images, and manage artifacts in various states of transit.
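
The sketch below illustrates one piece of that pipeline discipline: a simple policy gate that blocks images that are unsigned, too old, or pulled from an unapproved registry before they enter the CI/CD pipeline. The metadata fields, registry name, and 30-day window are hypothetical examples rather than a reference to any particular tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Sketch: a policy gate for container images in a CI/CD pipeline.
# Fields and registry names are hypothetical placeholders.
APPROVED_REGISTRIES = {"registry.example.internal"}
MAX_IMAGE_AGE = timedelta(days=30)

@dataclass
class ImageMetadata:
    reference: str       # e.g. "registry.example.internal/payments/api:1.4.2"
    signed: bool
    built_at: datetime

def violations(image: ImageMetadata, now: datetime) -> list:
    problems = []
    registry = image.reference.split("/")[0]
    if registry not in APPROVED_REGISTRIES:
        problems.append(f"registry '{registry}' is not approved")
    if not image.signed:
        problems.append("image is not signed")
    if now - image.built_at > MAX_IMAGE_AGE:
        problems.append("image is older than the 30-day hardening window")
    return problems

image = ImageMetadata(
    reference="registry.example.internal/payments/api:1.4.2",
    signed=True,
    built_at=datetime(2021, 10, 1),
)
issues = violations(image, now=datetime(2021, 10, 20))
print("PASS" if not issues else f"BLOCKED: {issues}")
```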

Pockets of skill

Some organizations see their cloud operations mature in a scattershot manner. In other words, it’s common to see cloud operations showing up as pockets of individual skills concentrated among certain people and certain groups. This can be helpful. Companies that promote a learning agenda can create first movers that act as a force multiplier. They can become change agents, with an eye toward centralization and supporting federation when the enterprise is ready.

Please read: How to achieve extreme quality with DevOps

But it isn’t always a good thing. If IT operations don’t prioritize the optimization of a comprehensive cloud operating model, some application owners will head to the cloud by themselves. The way these “experts” support their own applications won’t support all of the other application archetypes and the larger ecosystem out there. Costs spike, and the C-suite gets frustrated with the lack of progress. This is a big reason why cloud initiatives fail.

HPE GreenLake’s edge to cloud platform empowers organizations to harness the power of all their data, regardless of location.

Learn more

Where leaders succeed

A deliberate, well-planned approach toward operational maturity is the most effective way to succeed in the cloud. Across Hewlett Packard Enterprise’s base of cloud transformation engagements, leaders display this kind of discipline in these four areas:

  • Service operations: The good news is that organizations have grown more sophisticated in their ability to log a wide variety of activities. The more advanced organizations are doing a better job of curating and identifying the right pieces of metadata to turn into insights. They have meta-models in place, but they need to add intelligence around them to put the data to work.
  • Architecture and governance: Formal governing bodies have been established, but many organizations are still working to streamline decision-making processes so they can reduce lead time.
  • Availability management: Leaders are developing their ability to define business availability requirements in SLAs. They have well-thought-out plans to improve metrics such as RTO (recovery time objective) and RPO (recovery point objective), especially for tier-one and tier-two applications. Where organizations still need help is in meeting those requirements. Many establish partnerships with vendors that can take on management functions to help achieve these objectives.
  • Platform operations: Leaders have created a mechanism for metadata-based visibility across their ecosystems. This is important when they integrate with CI/CD pipelines; they know which components to pull from their repositories and where to put them. Infrastructure patterns and standards are important here too. A major bank we worked with was looking to increase adoption of its own internal private cloud. By using well-defined infrastructure patterns and corresponding storage standards, it was able to reduce infrastructure provisioning time, which in turn improved the reputation of and buy-in for its initiative.

Getting your house in order

Moving to the cloud presents significant opportunities for companies to transform their operations—to make them more efficient and more focused on delivering business value. But to mature to the point where they’re accomplishing meaningful transformation, organizations need to get their own houses in order.

They need to commit to a new operating model, assess their strengths and weaknesses, and forge a plan to set their operations up for success. This operating model will yield benefits not only for the portion of operations that move to public cloud but across the entire enterprise, edge to cloud.

What Is Extended Detection and Response (XDR)?

Extended detection and response (XDR) is a security solution that delivers end-to-end visibility, detection, investigation and response across multiple security layers. Core components of an XDR architecture include federation of security signals, higher-level behavioral and cross-correlated analytics, and closed-loop and highly automated responses. This creates a truly unified experience supported by a solutions architecture that equals more than the sum of its parts. Security teams are able to get more value from an XDR that meets the following criteria:

  • Supports open standards
  • Delivers advanced analytics
  • Provides a simpler, unified analyst experience
  • Streamlines operations
  • Enhances response through automation

An XDR solution can achieve improved visibility across an organization's networks, endpoints, SIEM and more via open-system integration. Open source standards can help move the industry away from expensive and wasteful ‘rip and replace’ programs. Instead, an open approach helps organizations get more out of their existing investments.

An XDR solution can also offer more automation and AI enrichments at all levels of detection, analytics, investigation and response. Automation throughout the threat life cycle can dramatically reduce mean time to detect (MTTD) and mean time to recovery (MTTR). Not only does reducing these metrics have a direct relationship to mitigating the cost of a data breach, it also frees up time for analysts to do more human-led activities like investigation. An XDR solution also bolsters investigation with a unified view of threat activity, a single search and investigation experience and consistent enrichment with threat intelligence and domain expertise.
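
For readers who want those metrics pinned down, here is a small sketch of how MTTD and MTTR are commonly computed from incident timestamps; the incident records are invented for illustration.

```python
from datetime import datetime
from statistics import mean

# Sketch: computing MTTD and MTTR from incident timestamps (hypothetical data).
incidents = [
    {"occurred": datetime(2021, 9, 1, 8, 0), "detected": datetime(2021, 9, 1, 9, 30),
     "resolved": datetime(2021, 9, 1, 13, 0)},
    {"occurred": datetime(2021, 9, 5, 22, 0), "detected": datetime(2021, 9, 6, 1, 0),
     "resolved": datetime(2021, 9, 6, 9, 0)},
]

# Mean time to detect: average gap between occurrence and detection.
mttd_hours = mean(
    (i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents
)
# Mean time to recovery: average gap between detection and resolution.
mttr_hours = mean(
    (i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents
)

print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")
```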

Is XDR Just Another Acronym or a Fundamental Market Shift?

For many decades now, emerging threats have put organizations at risk. As the IT landscape evolved and threat actors found new ways to attack, security teams needed to find new ways to detect and respond to threats.

Today, this evolving theme of complexity continues. And the list of point solutions being deployed to overcome these burgeoning threats goes on and on — from SIEM, to cloud workload protection, to endpoint detection and response (EDR), to network detection and response (NDR) and more. While these investments each do their part to solve immediate and dire issues, in combination they’ve created a bigger challenge: how to use and get value from them together.

This is why we call them point tools; they were made to address specific challenges. Now that security teams face a myriad of challenges, it’s never been more critical to have them work in concert. Without doing so, limited security operations resources will continue to be spread thin, total cost of ownership will continue to increase and the process of pinpointing and responding to threats will continue to be time-consuming and inefficient.

Extended detection and response (XDR) is the beginning of a shift towards uniting multiple siloed solutions and reducing the complexity that impedes fast detection and response. As stated in the blog Gartner Top 9 Security and Risk Trends for 2020: “The primary goals of an XDR solution are to increase detection accuracy and improve security operations efficiency and productivity.” Gartner identified XDR as the number one security and risk trend at the end of 2020, suggesting now is the moment when all this complexity — too many tools, too many alerts, too little time — is coming to a head, with XDR as a response.

What Are The Different Approaches to XDR?

Industry analysts have outlined two different approaches to extended detection and response: native and hybrid. Native XDR is a suite that integrates with a vendor’s other solutions to collect telemetry and execute response actions. Hybrid or open XDR is a platform that integrates with various vendors and third parties to collect telemetry and execute response actions.

Vendors have been taking different approaches to what is under the hood of XDR, so to speak. For instance, does XDR = EDR plus additional capabilities? Or is it EDR plus NDR, or some other combination? It might be too soon to tell where the market will land as the technology is nascent, but the delineation between native and hybrid XDR is one thing the industry seems to agree on.

How Does XDR ‘Extend’ SIEM?

For some readers, SIEM may have immediately come to mind while perusing the qualities of XDR, but there are some key differences between the two. In an XDR solution, correlation and alerting tend to be fully automated, employing use cases that are provided and tuned by the vendor. Incident response, likewise, tends to focus on highly repeatable actions that can be automated, such as sending a suspicious file to a sandbox for detonation, enriching an alert with threat intelligence or blocking an email sender tied to phishing emails. This approach also differs from SOAR, which can be broadly customized with custom playbooks and used to unite people in addition to technology.

XDR in many ways can extend the detection and response capabilities that are today enabled by SIEM. In fact, SIEM can play an integral role to support an XDR architecture in gathering, organizing and assessing massive amounts of data for SOC analysts. In this capacity, XDR builds on the data and events flowing through your SIEM solution. By bringing together the capabilities of multiple point solutions, XDR can take SIEM analytics one step further and amplify the outcome. As an example, when you receive analytics from a SIEM, endpoints and networks separately, it can be like having three different witnesses to an attack. XDR helps you immediately bring all three witnesses together and create one complete story — helping an analyst see more clearly across multiple sources.

XDR is not just a place where you consolidate security signals but a place where you can run more advanced, correlated analytics. As The Forrester Wave for Security Analytics Platforms, Q4 2020 asserts, security analytics and endpoint detection and response have been on a “collision course” for some time. Bringing together these capabilities with XDR can provide “highly enriched telemetry, speedy investigations, and automated response actions.” Behavioral analytics or machine learning analytics can also enrich content, increase accuracy and lead to automated response actions.
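
As a hedged sketch of what cross-correlated analytics can mean in practice, the snippet below merges endpoint, network, and SIEM alerts that share a common indicator into a single incident view, echoing the "three witnesses" analogy above. The alert fields and values are invented for the example.

```python
from collections import defaultdict

# Sketch: merging endpoint, network, and SIEM alerts that share an indicator
# into one incident timeline. All records are invented examples.
endpoint_alerts = [{"indicator": "evil.example.com", "detail": "suspicious process beacon", "source": "EDR"}]
network_alerts  = [{"indicator": "evil.example.com", "detail": "DNS tunneling pattern", "source": "NDR"}]
siem_alerts     = [{"indicator": "evil.example.com", "detail": "impossible-travel login", "source": "SIEM"}]

def correlate(*feeds):
    incidents = defaultdict(list)
    for feed in feeds:
        for alert in feed:
            incidents[alert["indicator"]].append(alert)
    return incidents

for indicator, alerts in correlate(endpoint_alerts, network_alerts, siem_alerts).items():
    sources = sorted({a["source"] for a in alerts})
    print(f"Incident for {indicator} ({', '.join(sources)}):")
    for alert in alerts:
        print(f"  - [{alert['source']}] {alert['detail']}")
```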

How Does XDR Compare to MDR?

Even though XDR vendors are striving to untangle the complexity problem, it will take time to make inroads. Compounding this challenge is the skills shortage. The dire need for talent to run security analysis and investigations leads many organizations to utilize a partner for managed detection and response (MDR) services.

MDR is an approach to managing sophisticated detection and response tools — whether via endpoint, network or SIEM technology. Some MDR providers include proactive threat hunting to reveal undetected threats faster. Research from EMA conducted in 2020 found that 94% of respondents not already using an MDR service were currently evaluating or had plans to evaluate MDR services over the next 18 months.

MDR services can provide critical skills and advanced threat intelligence, plus rapid response with 24/7 SOC coverage. As Jon Oltsik, senior principal analyst at ESG, stated, “XDR success still seems to be based on human expertise,” making MDR an invaluable companion to XDR for customers who could use a helping hand.

How Does XDR Support Customers With Zero Trust Aspirations?

If you set up a game of security buzzword bingo, there’s no doubt you’d come across both zero trust and XDR. There’s industry chatter around these powerful security frameworks with good reason — one concept can help enforce the other.

Zero trust is a framework that starts with an assumption of compromise, then continuously validates the conditions for connection between users, data, and resources to determine authorization and need. XDR provides an essential function for zero trust by continuously monitoring for incidents and responding in the most targeted way possible to avoid disruption to the business.

How so? XDR enables analysts to determine if their organization is under attack and figure out as quickly as possible what’s happening, what will happen next and how to prevent that from unfolding. Instead of placing blind trust in a system and saying the controls are enough, with XDR you constantly monitor the places where things could go wrong.

In this way, XDR is ensuring that zero trust security controls are working. The ‘never trust, always verify’ zero trust methodology is supported by verification. When it comes to detecting and responding to threats, as well as improving protection policies based on insights, a zero trust framework and an XDR solution can work hand in hand. And it’s exactly why identity tools, such as identity and access management (IAM), will play a critical role tying into XDR solution architectures to ensure the appropriate user-centric context is being employed for threat detection and response.

What Should Customers Look for in an XDR Solution?

Your XDR should be an open, extensible solution that enables your organization to get more from its existing investments. Look for integrations with third parties that will save your organization from a costly and unrealistic rip-and-replace approach. Cloud-native solutions are also critical for extending cloud visibility.

XDR goes far beyond being an improved EDR solution; it should instead be your end game for threat detection and response activities — as part of a unified platform. Reaching that level of maturity is a goal that takes time, with the basics in place and a clear strategy as prerequisites for how to get started with XDR. With powerful automation, artificial intelligence and expert-built detection and prescribed response actions available through a unified user experience, security teams can counter attacks across silos — mitigating risk and resolving threats fast.

Ultimately, XDR makes it easier for the people managing and responding to threats on a daily basis to do the work. Open standards mean we can better serve customers and the community, preventing time and dollars lost to ripping and replacing technology. Advanced analytics, constantly updated threat intelligence and a streamlined user workflow empower analysts to be more efficient and spend their valuable time on investigations, not gathering the data.

People and culture are the keys to the SOC. By uniting threat detection data and tools and strengthening ability and context for fast incident response, XDR enables the collaboration and openness that helps teams thrive.

Learn more about realizing a vision for XDR in ‘Beyond Endpoints: A Case for Open XDR’, presented at the RSA Conference 2021.