
Security Basics That Still Get Overlooked in 2026

In 2026, most breaches are not caused by zero-day exploits, and they never really were. Yet we continue to spend heavily on cutting-edge controls while basic security hygiene is neglected, and sloppiness undermines the very tools dominating our budgets. Incidents are far more often caused by misconfigured, ignored, or poorly implemented controls, combined with people who aren’t kept engaged in security best practices.


There are probably too many to cover, but let’s discuss some of our (least) favourite security basics that still get overlooked.


Asset Inventory & Attack Surface Awareness



Organisations continue to struggle with basic visibility into what they actually own and operate. Shadow IT is increasing, whether it's SaaS sprawl, developer tools spun up without security review, or employees using AI services to process business data. Forgotten cloud assets, abandoned test environments, and legacy VPN appliances left exposed to the internet remain common findings during security assessments.


The reality is simple: you can’t protect what you don’t know exists. Many organisations rely on internal CMDBs or inventory tools that don’t match the reality of what is deployed. Attackers don’t care about internal documentation, they care about what responds on the internet. This disconnect is why continuous discovery matters more than annual audits. Infrastructure is dynamic. Cloud resources, SaaS tools, and externally facing services change constantly, and asset visibility needs to keep pace.
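
As a rough illustration of the gap between documentation and reality, here is a minimal sketch, assuming simple CSV exports from a CMDB and an external discovery scan (the file and column names below are hypothetical), that diffs the two views to surface internet-facing hosts nobody documented and inventory entries that no longer respond:

import csv

def load_hostnames(path, column):
    # Read one column of hostnames from a CSV export into a normalised set.
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f) if row.get(column)}

# Hypothetical exports: cmdb.csv from the inventory tool, scan.csv from external discovery.
documented = load_hostnames("cmdb.csv", "hostname")
observed = load_hostnames("scan.csv", "hostname")

print("Observed on the internet but missing from the CMDB (likely shadow or forgotten assets):")
for host in sorted(observed - documented):
    print("  " + host)

print("In the CMDB but no longer observed (candidates for decommission review):")
for host in sorted(documented - observed):
    print("  " + host)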


A theme we’ll develop with all of these overlooked security basics is that none of them exists in isolation; each bleeds into other security gaps and dependencies, with a much broader impact than we usually realize. Asset visibility, for example, isn’t an isolated control. It underpins almost every other security program. It’s difficult to confidently review privileged access if you don’t have a complete inventory of systems. Orphaned systems often mean orphaned service accounts, API tokens, and credentials, quietly expanding the identity attack surface.


Data classification and endpoint protection lose effectiveness when unmanaged devices or shadow IT environments exist outside visibility. Patch management and vulnerability scanning assume you know what needs to be maintained in the first place. During incident response, incomplete asset inventories consistently slow containment and increase dwell time because teams don’t have a clear picture of what systems are impacted (or even exist). When asset discovery is incomplete, every downstream control becomes partially blind. So, on that bombshell, on to our next topic…

Privileged Access Hygiene


Privileged access remains one of the most consistent enablers of serious security incidents. Advanced detection tools and zero-trust architectures do little good when basic access hygiene is ignored. Common issues still show up everywhere: too many administrators with broad permissions, administrators performing daily work from privileged accounts, shared admin credentials across teams, and standing privileges that never expire instead of just-in-time elevation. Each of these patterns increases the likelihood that a single compromised account can escalate into a much larger incident. If, as it’s said, identity is the new perimeter, then privilege sprawl is the new flat network or overly permissive trust zone (or whatever the most apt metaphor is for the triviality of lateral movement once attackers gain a foothold).

Improving privileged access hygiene starts with enforcing least privilege by default and treating administrative access as temporary, auditable, and exception-based. This means separating standard user and admin accounts, reducing the total number of privileged users, eliminating shared credentials, and implementing just-in-time access with approval and session logging.
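
To make the just-in-time idea concrete, here is a minimal sketch (not tied to any particular PAM product; the file name and record layout are our own invention) of what exception-based elevation looks like: every grant requires a named approver, carries an expiry, and leaves an audit record behind:

import json
from datetime import datetime, timedelta, timezone

AUDIT_LOG = "privilege_grants.jsonl"  # hypothetical local audit trail

def grant_admin(user, role, approver, minutes=60):
    # Elevation is exception-based: no named approver, no grant.
    if not approver:
        raise PermissionError("JIT elevation requires a named approver")
    now = datetime.now(timezone.utc)
    record = {
        "user": user,
        "role": role,
        "approver": approver,
        "granted_at": now.isoformat(),
        "expires_at": (now + timedelta(minutes=minutes)).isoformat(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def is_still_valid(record):
    # Standing privilege is avoided by checking expiry on every use.
    return datetime.now(timezone.utc) < datetime.fromisoformat(record["expires_at"])

grant = grant_admin("jsmith", "hypervisor-admin", approver="it-manager", minutes=30)
print(grant["user"], "elevated until", grant["expires_at"], "- valid now:", is_still_valid(grant))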


Regular privilege reviews are critical, not as checkbox exercises, but as active efforts to remove access that is no longer required. The goal isn’t zero admins, it’s minimising standing trust and shrinking blast radius when something inevitably goes wrong.


Privileged access issues rarely exist in isolation either; as we pointed out with asset inventory, the gaps spill over into many other controls. Excessive or persistent admin rights increase blast radius during endpoint compromise and make lateral movement significantly easier once an attacker gains an initial foothold. Weak privilege controls also undermine identity monitoring, since abnormal behaviour becomes harder to distinguish when elevated access is common. Incident response is impacted as well when overprivileged accounts complicate containment, because disabling a single identity can disrupt business-critical systems.



MFA Coverage Gaps (Having It vs. Enforcing It)

Most organizations now “have MFA”, but far fewer have it implemented consistently and enforced where it actually matters. Common gaps include VPNs and legacy applications that still rely on password-only authentication, service accounts that never had MFA applied, executive exceptions created for convenience, and administrator accounts that aren’t subject to the same authentication requirements as standard users. MFA fatigue attacks, where attackers repeatedly trigger push notifications until a user approves one, remain effective in environments without rate limiting or challenge-based approval methods. SMS-based MFA also continues to be widely used, despite being vulnerable to common real-time phishing tactics.


Closing MFA coverage gaps starts with consistency and enforcement. MFA should be mandatory for all external access points, privileged accounts, and cloud identity logins, without permanent exceptions. Authentication is only going to be as strong as its weakest link, and exceptions intentionally weaken otherwise strong authentication policies, allowing attackers to exploit weaker authentication methods. Push bombing is still working, and conditional access policies matter far more than simply checking the “MFA enabled” box. Context-aware authentication that factors in device trust, location, risk signals, and session behaviour is what actually determines whether MFA meaningfully reduces risk.
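
As a toy illustration of context-aware enforcement (the signal names are invented, not any vendor’s policy engine), the decision function below returns a different authentication requirement depending on device trust, location, and risk, and never offers a weaker path for privileged accounts:

def required_auth(user_is_privileged, device_managed, known_location, risk_score):
    # Illustrative stand-ins for what an IdP's conditional access engine evaluates:
    # device compliance, geo/IP reputation, and behavioural risk scoring.
    if risk_score >= 0.8:
        return "block"                      # high risk: deny and investigate
    if user_is_privileged:
        return "phishing_resistant_mfa"     # admins never get a weaker factor
    if not device_managed or not known_location:
        return "phishing_resistant_mfa"     # unknown context: strongest factor
    if risk_score >= 0.4:
        return "mfa_with_number_matching"   # resists push-fatigue approvals
    return "mfa"                            # baseline: never password-only

# Example: an admin on an unmanaged device is always challenged with the strongest factor.
print(required_auth(user_is_privileged=True, device_managed=False, known_location=True, risk_score=0.2))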


In 2026 we have plenty of great and powerful tools and advanced protections, but when someone can walk through the front door because of MFA exceptions or poor implementation, those gaps undermine otherwise strong endpoint, network, and detection controls by allowing attackers to authenticate legitimately instead of exploiting vulnerabilities. This shifts incidents from “intrusion” to “authorised access,” making identity-based detection harder and delaying response. MFA exceptions, legacy protocol allowances, and weak authentication methods also increase reliance on downstream controls like EDR and SIEM to catch activity that should have been blocked at the door.

Patch Management (Reality vs. Policy, Especially for Edge Devices)


Many organizations still operate with patching strategies that look good on paper but fail in practice. Patch plans exist, tools are deployed, yet coordination between vulnerability management and remediation teams is weak, prioritization is inconsistent, and risk-based decision-making is often absent. Patch management failures remain one of the most consistent root causes of real-world breaches, especially when perimeter and edge devices are excluded from standard patching workflows. Attackers aren’t exploiting zero-days; they’re exploiting vulnerabilities that already have patches available.


While endpoints and servers often receive the most attention, critical infrastructure frequently gets overlooked: firewalls, VPN appliances, hypervisors, NAS devices, and OT/IoT management platforms. These systems sit directly on the perimeter, process sensitive traffic, and often provide privileged network access, making them high-value targets. Servers get patches, endpoint patching gets enforced, but the perimeter is where exploitation is rising fastest, and it is often treated as “infrastructure maintenance” rather than a frontline security control. Organizations need to move from policy-based patching to risk-based, exposure-driven patching, prioritizing internet-facing and high-impact systems first. Patching needs to be driven by exposure and business risk, not just monthly maintenance windows.
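
One way to make exposure-driven patching concrete is a simple scoring pass over the backlog, sketched below with made-up findings; in practice the rows would come from your scanner and known-exploited-vulnerability feeds, but the ordering logic is the point: internet-facing edge devices with actively exploited flaws rise to the top regardless of the maintenance calendar.

# Hypothetical findings; real rows would come from a vulnerability scanner export.
findings = [
    {"asset": "vpn-appliance-01", "cve": "CVE-XXXX-0001", "cvss": 8.1, "internet_facing": True,  "known_exploited": True},
    {"asset": "file-server-12",   "cve": "CVE-XXXX-0002", "cvss": 9.8, "internet_facing": False, "known_exploited": False},
    {"asset": "hypervisor-03",    "cve": "CVE-XXXX-0003", "cvss": 7.5, "internet_facing": True,  "known_exploited": False},
]

def priority(finding):
    # Exposure and active exploitation dominate; raw CVSS only breaks ties.
    score = finding["cvss"]
    if finding["internet_facing"]:
        score += 10
    if finding["known_exploited"]:
        score += 20
    return score

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f['asset']:<18} {f['cve']}")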


Patch management gaps quickly cascade into other security failures. Unpatched perimeter systems often become the initial access vector, bypassing endpoint protections and identity controls entirely. Vulnerability management loses effectiveness when remediation isn’t prioritized based on exposure and business impact.

Backup & Recovery Testing (Not Just Having Backups)

Ransomware remains a viable business model for attackers largely because many organizations still rely on fragile recovery processes. Common issues include backups that are not immutable, no offline or isolated copies, lack of restore testing, and recovery objectives that don’t align with actual business tolerance for downtime or data loss. Backups may exist, but existence alone does not guarantee recoverability.


Many organizations discover backup failures during an incident, not before. This is something we routinely surface during tabletop exercises, where recovery assumptions often don’t match operational reality. Reviewing backup posture on paper is not enough. What matters is whether critical systems can actually be restored within defined RTOs and RPOs, and whether recovered data is complete and usable. The practical next step is simple: regularly test restores for your highest-impact systems and validate recovery outcomes, not just backup success reports.
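
Restore testing can also be scripted rather than left to memory. The sketch below is a minimal harness, assuming you can trigger a restore into an isolated environment and read back the timestamp of the newest recovered record (the helper passed in is a stand-in); it times the restore against the system’s RTO and checks data age against its RPO, flagging loudly when either objective is missed:

import time
from datetime import datetime, timedelta, timezone

def run_restore_test(system, rto_minutes, rpo_minutes, restore_fn):
    # Validate recovery objectives, not just backup job success reports.
    started = time.monotonic()
    newest_record_time = restore_fn()  # assumed to return the newest restored record's timestamp
    elapsed = timedelta(seconds=time.monotonic() - started)
    data_age = datetime.now(timezone.utc) - newest_record_time

    results = {
        "system": system,
        "rto_met": elapsed <= timedelta(minutes=rto_minutes),
        "rpo_met": data_age <= timedelta(minutes=rpo_minutes),
        "restore_time": str(elapsed),
        "data_age": str(data_age),
    }
    print("RECOVERY OBJECTIVE MISSED:" if not (results["rto_met"] and results["rpo_met"]) else "Restore validated:", results)
    return results

# Example with a stand-in restore that "recovers" 20-minute-old data instantly (fails a 15-minute RPO).
run_restore_test("erp-db", rto_minutes=60, rpo_minutes=15,
                 restore_fn=lambda: datetime.now(timezone.utc) - timedelta(minutes=20))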


Backup failures don’t just impact recovery, they amplify every other security incident. When restoration processes are unreliable, ransomware incidents become business continuity crises instead of containment events. Poor backup validation also complicates incident response timelines, increases pressure on security teams to negotiate or rush remediation, and exposes gaps in asset inventory and data classification when critical systems can’t be prioritized effectively. Resilience depends on more than detection; it depends on proven recovery.


Vendor & SaaS Risk Blind Spots

Supply chain breaches are not rare, and third-party risk is no longer limited to a handful of strategic vendors. SaaS sprawl, outsourced IT services, and embedded integrations mean most organizations now depend on dozens or hundreds of external entities that can impact security posture.


Supply chain breaches rarely stay contained to the vendor. When a SaaS platform or software supplier is compromised, attackers can inherit trusted access into customer environments, bypassing perimeter controls and security reviews, as seen yet again with the recent Salesloft/Salesforce compromise. The downstream impact often includes credential exposure, data leakage, forced system shutdowns, and emergency access revocation across multiple business units. For many organizations, the most disruptive part isn’t the breach itself; it’s the operational fallout of untangling third-party access and restoring trust relationships at scale.


Meanwhile, many organizations still rely on outdated risk assessment approaches. Vendor reviews may be skipped entirely or reduced to self-attested questionnaires that are time-consuming to complete and already outdated by the time they’re reviewed. Risk is often accepted by default because switching vendors is inconvenient or politically difficult.


Additional blind spots continue to grow. Many environments lack SaaS security posture management entirely, leaving IT teams unaware of what applications are connected, what data they access, and how permissions are scoped. OAuth applications are frequently overprivileged, granting broad access to mailboxes, files, and APIs without meaningful oversight. Ownership gaps also persist, with security, procurement, IT, and legal teams each holding partial responsibility but no single group accountable for the full vendor risk lifecycle.


Addressing vendor and SaaS risk starts with visibility and prioritization. Organizations need a clear inventory of third-party services and integrations, an understanding of what data each vendor can access, and a tiered risk model that focuses attention on the suppliers that matter most. Security reviews should move beyond one-time questionnaires toward continuous validation of access, permissions, security posture, and breach susceptibility. Just as importantly, ownership needs to be clearly defined so vendor risk is actively managed instead of fragmented across security, procurement, IT, and legal teams.
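
A tiered model doesn’t need to be elaborate. The sketch below, using invented vendor records, buckets suppliers by the data they touch and whether they hold standing access into the environment, which is usually enough to decide who gets continuous monitoring and who gets an annual check-in:

# Hypothetical vendor inventory; real entries would come from procurement records and SSO/OAuth logs.
vendors = [
    {"name": "payroll-saas",   "data": "pii",      "standing_access": True,  "integration": "oauth"},
    {"name": "marketing-tool", "data": "public",   "standing_access": False, "integration": "none"},
    {"name": "msp-helpdesk",   "data": "internal", "standing_access": True,  "integration": "vpn"},
]

def tier(vendor):
    # Sensitive data plus persistent access earns the highest scrutiny.
    if vendor["data"] in ("pii", "regulated") and vendor["standing_access"]:
        return 1
    if vendor["standing_access"] or vendor["data"] != "public":
        return 2
    return 3

for v in sorted(vendors, key=tier):
    print(f"Tier {tier(v)}: {v['name']} ({v['data']} data, integration={v['integration']})")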


Third-party risk gaps don’t stay contained to vendors. Overprivileged SaaS integrations and unmanaged suppliers expand the effective attack surface and weaken identity controls by introducing external access paths that bypass traditional perimeter defenses. Incident response becomes more complex when third-party access must be audited, revoked, and revalidated at scale. Vendor blind spots also undermine data protection efforts, since sensitive information often flows outside organizational boundaries without consistent classification or monitoring. While supply chain risk is a fact of life and a cost of doing business to some degree, we can’t just throw up our hands and say, “Well, we were always going to do business with [insert ubiquitous software company] anyway.” Understanding, reviewing, and mitigating third-party risks is our best hope to make a material difference when the inevitable happens downstream and across vendor ecosystems.

Email Security Configuration Basics (Still #1 After All These Years 🏆)

Email remains the number one initial access vector for attackers. Despite years of investment in email security, phishing remains the easiest way for attackers to get a foothold inside enterprise environments.


Many organizations still struggle with basic configuration hygiene. DMARC is often left in monitoring mode indefinitely, allowing spoofed emails to continue reaching inboxes. SPF and DKIM configurations are misaligned or incomplete, weakening sender authentication. Phishing-resistant MFA is not enforced for cloud email access, allowing attackers to bypass standard MFA using real-time phishing tools. User reporting workflows are slow, fragmented, or cumbersome, reducing early detection and allowing malicious messages to spread internally.


Inbound filtering controls are often misconfigured or underutilised as well. Link rewriting, attachment sandboxing, and malicious content detection exist in most enterprise email platforms, but default policies are frequently too permissive, poorly tuned, or inconsistently applied across user groups. This creates gaps where technically “legitimate” emails still deliver harmful payloads or phishing links into inboxes. The point being, email security failures are usually configuration and policy enforcement failures, not product failures.


The most effective improvement is to move beyond “enabled” email security settings and validate that controls are actually enforced: DMARC in reject mode; properly aligned SPF and DKIM; phishing-resistant authentication for email access; fast, automated user reporting workflows; and inbound filtering policies for links, attachments, and suspicious content that are tuned for real-world threat activity rather than default settings. Small configuration changes in this area consistently deliver outsized risk reduction.
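
Checking whether DMARC is actually at enforcement, rather than parked at p=none, takes a single DNS lookup. Here is a small sketch using the dnspython library (assumed to be installed) against a placeholder domain, flagging anything still in monitoring mode:

import dns.resolver  # pip install dnspython

def dmarc_policy(domain):
    # Return the published DMARC policy tag (p=) for a domain, or None if no record exists.
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip().lower()
    return None

policy = dmarc_policy("example.com")  # substitute your own domains
if policy in (None, "none"):
    print(f"DMARC not enforced (policy={policy}); spoofed mail can still reach inboxes")
else:
    print(f"DMARC policy: {policy}")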


Email security misconfigurations ripple across multiple control layers and even across organizations. A compromised mailbox lets a threat actor masquerade as a legitimate business contact, sending and receiving mail within existing business relationships and targeting partners, vendors, and clients under the reputation of the compromised organization. Weak domain authentication allows brand impersonation that feeds phishing campaigns, while non-phishing-resistant MFA enables attackers to turn successful phishing into full account compromise. Poor reporting workflows slow detection and increase internal spread, forcing security teams into reactive cleanup instead of rapid containment. Because email is often the initial access vector, small configuration gaps can have disproportionate downstream impact on identity, endpoint, and incident response efforts.


Honourable Mention, As Always: User Awareness Training



Human risk is probably eternal, at least until AI fully takes over in the coming months (that was a joke, sort of). Training quality is part of the problem, and moving from annual checkbox videos to role-based training and realistic simulations with feedback absolutely matters. But even good training struggles when people are overloaded, rushed, and constantly interrupted. The real challenge isn’t just teaching people what’s dangerous; it’s creating an environment where employees have the space, confidence, and incentive to make good security decisions. Attackers understand this psychology and continuously adapt phishing techniques to mirror real workflows, communication styles, and business urgency.


The takeaway is simple: user awareness training works best when it’s treated as part of organizational culture, not a compliance exercise. Security outcomes improve when people are empowered to question unusual requests, report issues without friction, and feel like they’re part of the defense—not the weakest link.

Improving human risk isn’t about finding more entertaining training platforms. It’s about changing how security fits into daily work. That means giving employees time to slow down instead of rewarding constant urgency, making reporting frictionless instead of bureaucratic, and reinforcing that cautious behaviour is valued even when it introduces small delays. It also means leadership modelling the same behaviour they expect from staff, rather than treating security as something that only applies when it’s convenient.


This is where the human element intersects with every other “basic” control. In addition to our earlier refrain about email misconfiguration, the inbox remains the most common attack vector not just because it’s technically vulnerable, but because it exploits real workplace pressures: speed, trust, multitasking, and interruption. The same dynamics undermine MFA approvals, encourage risky exceptions, delay patching, and normalize privilege sprawl. Controls matter, but culture determines whether those controls are consistently respected or quietly bypassed. In the end, security maturity isn’t just about what tools are deployed; it’s about whether the organization creates the conditions for people to find value in risk reduction.


In summary

The common thread across all of these overlooked security basics is that none of them fail because the technology doesn’t exist. They fail because visibility is incomplete, controls are inconsistently enforced, and security decisions are shaped more by convenience and operational pressure than by actual risk. Asset inventories drift out of date, privileges quietly accumulate, MFA exceptions become permanent, patching falls behind reality, backups go untested, vendors multiply without ownership, and email remains misconfigured despite years of hard-earned lessons. Individually, each gap may seem manageable. Together, they create the conditions for most real-world incidents.


What makes these issues especially persistent is that they sit in the space between tools and people. They require not just products, but continuous attention, coordination across teams, and a willingness to treat security as an ongoing operational discipline rather than a one-time implementation.



If you want help taking an honest look at where your organization stands, our team works with organizations to review these exact security fundamentals in real-world environments. No doom, no vendor bingo, just a practical look at what’s working, what’s drifting, and where small changes can have outsized impact.





