This page was created programmatically; to read the article in its original location you may visit the link below:
https://gadget.co.za/mismatchincybersecurity8m/
and if you wish to remove this article from our site please contact us
A critical gap is emerging in African cybersecurity: the disconnect between what leaders believe about employee readiness and what employees actually experience.
A glimpse into this mismatch is revealed in the KnowBe4 Africa Human Risk Management Report 2025. The results show that many leaders are overestimating their employees' preparedness, and underestimating the gaps in trust, training, and action.
As organisations strengthen defences and invest in security awareness training (SAT), this overlooked divide poses a growing risk.
“It’s not just that awareness alone isn’t enough – it’s that the level of employees’ awareness is being misunderstood by the organisational leaders responsible for it,” says Anna Collard, SVP of content strategy and evangelist at KnowBe4 Africa.
The perception gap is growing, but measurable
While 50% of decision-makers in 2025 rate employee cyber threat-reporting confidence at 4 out of 5, in 2024 only 43% of employees said that they felt confident recognising a threat, while one-third disagreed that their training was sufficient.
More than two-thirds of decision-makers (68%) believe that SAT within their organisations is tailored by role. However, only 33% of employees in 2024 felt that to be true – with 16% actively disagreeing.
The implications are serious, because a workforce that appears educated and aware on paper may in fact be uncertain, unsupported, and vulnerable.
“This discrepancy between perception and experience is exactly where human risk thrives,” says Collard. “If leaders don’t correct course, they’re building security strategies on false confidence.”
Why measuring awareness is not enough
One of the most frequently cited challenges in the report is deceptively simple: measuring whether SAT works. More than four in ten respondents said that they struggle to track whether their security awareness programmes translate into safer behaviours.
A key contributing factor, identified in the report, is that many organisations still rely on one-size-fits-all SAT, often delivered only annually or biannually, without role-specific customisation or behavioural feedback loops.
While the report finds that 68% of organisations offer role-based training, this claim is undermined by the fact that a lack of role alignment remains one of the top challenges. The discrepancy is clearest in sectors like manufacturing and healthcare, where generic SAT is most common.
Larger organisations are consistently less confident in employee readiness, train less frequently, and struggle more to measure outcomes.
“Awareness without action is like an alarm that no one responds to,” says Collard. “Organisations are investing in security awareness training, but without the structure, tailoring, and follow-through to translate that into secure behaviour.”
Beyond BYOD: The new blind spot is AI
One of the most urgent themes to emerge is the rapid rise of “shadow AI” use. With nearly half of all organisations still busy developing formal AI policies, yet as many as 80% of employees using personal devices for work, the risk of unmonitored, unsanctioned AI usage is growing fast.
“Technology has moved faster than policy,” says Collard. “And unless AI tools are properly governed, they become as much a risk vector as they are an asset.”
East Africa is leading the way with more proactive AI governance, while Southern Africa, despite topping training frequency, lags behind on AI policy implementation.
This lack of oversight is echoed in the South African Generative AI Roadmap 2025, a recent report by World Wide Worx in partnership with Dell Technologies and Intel. It found that 67% of large South African enterprises are already using generative AI (GenAI), yet fewer than one in seven have a comprehensive strategy to manage its use. Even more concerning, 59% either have no governance in place or are still in the planning phases.
While the GenAI boom reflects technological ambition, it also highlights a growing human risk. The report reveals that only 13% of organisations have implemented security, privacy, and bias safeguards – meaning most employees may be engaging with powerful tools without clear guidance or accountability. Untrained or unauthorised AI use doesn’t just threaten productivity – it introduces new cyber risks.
The road ahead: Action, alongside awareness
The KnowBe4 Africa Human Risk Management Report 2025 outlines five imperatives for African organisations:
- Customise SAT by role and risk exposure.
- Track what matters – not just participation, but behavioural outcomes.
- Formalise reporting structures employees trust and understand.
- Close the AI policy gap before misuse becomes systemic.
- Contextualise strategies based on region and sector – because resilience is not one-size-fits-all.
“The human element is often spoken about, but rarely measured in ways that lead to action that acknowledges context,” says Collard. “Our aim is to help organisations stop guessing and start structuring their defences around real, contextual insights.
“This is a moment to move from compliance-driven box-ticking to culture-driven resilience. We have the data. Now we need the will.”
* Download the ‘KnowBe4 Africa Human Risk Management Report 2025’ here.

