Top CCPA/GDPR Implementation Pitfall and How to Avoid It
Here is the top pitfall we’ve encountered implementing CCPA/GDPR in large organizations across the U.S., Europe, and APAC, along with what we’ve learned and what you can do to avoid it.
“Find it if you can”
Many large organizations are misled by vendor offerings into believing that discovery and classification alone are the critical piece of compliance.
Do you believe in a “magical AI discovery wand” that will identify and classify all sensitive data, everywhere, within a few weeks?
And even if such a tool existed – knowing where personal information is located is important, but do you really need to discover and map the hundreds of thousands of cryptic columns that might contain personal data, while having no idea how, when, and by whom those columns are actually accessed?
Reality bites back…
- Even with the best “AI”, discovery and classification accuracy reaches only 75%–85%, so the personal data you miss keeps your risk of data loss high.
- It takes about a month to discover and scan all the personal data repositories you already know about.
- After that month of work, coverage degrades as new repositories and clones are created and data models change – creating new blind spots.
- By investing all your “IT capital” in a discovery ghost chase, you have no capacity left for more important compliance work, such as implementing the “Right of erasure”, consent management, and minimizing access to personal data on a “need-to-know” basis.
- Most importantly, boiling the ocean with an endless discovery project gives you a false sense of progress and a misleading picture of true compliance. You still have no idea who is accessing the data, how frequently, or from which application or tool – and that is before you even start implementing the “Right of erasure”, consent, or the access controls needed to enforce “need-to-know” access on these systems.
Recommendation: a risk-based compliance initiative
We’ve learned that what worked best for our customers was a top-down, risk-based compliance project, approached in the following way:
Identify your top sources of personal data processing.
Naturally, your CRM, customer-facing applications, data warehouse, and big-data platforms are where you start. A list of a dozen systems where most of your personal data resides and is processed is a good starting point.
This can be done simply by sending your application owners a questionnaire asking for estimated personal data volumes and exposure, both to end users (internal end users, IT staff, third parties, and customers) and via APIs.
- The end-users column can be broken down into internal users, external users, IT staff, and external API calls.
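As a minimal sketch of how such a questionnaire could be tallied to shortlist your top-risk systems, the example below models each owner's response and ranks systems by a simple volume-times-access-surface heuristic. All names, fields, numbers, and the scoring formula are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class SystemSurvey:
    """One application owner's questionnaire response (hypothetical fields)."""
    name: str
    est_records: int      # estimated personal-data records held
    internal_users: int   # internal end users with access
    external_users: int   # customers / external end users
    it_staff: int         # DBAs, ops, support staff
    api_consumers: int    # third-party / external API callers

    def exposure_score(self) -> int:
        # Simple heuristic: total access surface weighted by data volume.
        surface = (self.internal_users + self.external_users
                   + self.it_staff + self.api_consumers)
        return surface * self.est_records

# Illustrative responses from three application owners.
surveys = [
    SystemSurvey("CRM", est_records=2_000_000, internal_users=300,
                 external_users=0, it_staff=12, api_consumers=5),
    SystemSurvey("Data warehouse", est_records=50_000_000, internal_users=80,
                 external_users=0, it_staff=20, api_consumers=2),
    SystemSurvey("HR portal", est_records=40_000, internal_users=25,
                 external_users=0, it_staff=4, api_consumers=0),
]

# Rank to pick which dozen or so systems to bring into scope first.
shortlist = sorted(surveys, key=lambda s: s.exposure_score(), reverse=True)
for s in shortlist:
    print(s.name, s.exposure_score())
```

Even a rough ranking like this is enough to decide scope; precision matters far less than agreeing on which systems come first.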
For every system in scope, perform the following process:
- Discover sensitive data – either by using a crawler and/or Screen-based Discovery (where you simply click through the application screens that present personal data, and SecuPi automatically classifies the underlying tables and columns).
- Monitor access to personal data and apply anomaly detection/behavior analytics to detect insider privilege abuse and hacker credential theft.
- Apply remediation controls, including personal data access filters (row- or column-level), alerts when abnormal behavior is detected, encryption at rest of specific columns, and masking of sensitive data to enforce the “need-to-know” principle.
- Implement data-subject rights, including the “Right of erasure”, consent, and DSR extracts.
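To make the “need-to-know” masking in the remediation step concrete, here is a minimal sketch of role-based column masking. This is an illustration of the idea only – not SecuPi's implementation – and the roles, policy table, and masking formats are all invented for the example.

```python
# Hypothetical policy: which roles may see each column in the clear.
# Any role not listed for a column receives a masked value instead.
CLEAR_ACCESS = {
    "email": {"support", "dpo"},
    "ssn":   {"dpo"},
    "name":  {"support", "sales", "dpo"},
}

def mask_email(value: str) -> str:
    # Keep the first character and the domain: "ada@x.com" -> "a***@x.com"
    local, _, domain = value.partition("@")
    return f"{local[:1]}***@{domain}"

def mask_ssn(value: str) -> str:
    # Keep only the last four digits.
    return "***-**-" + value[-4:]

MASKERS = {"email": mask_email, "ssn": mask_ssn}

def apply_need_to_know(row: dict, role: str) -> dict:
    """Return a copy of `row` with columns masked for roles outside the policy."""
    out = {}
    for col, value in row.items():
        if role in CLEAR_ACCESS.get(col, set()):
            out[col] = value
        else:
            out[col] = MASKERS.get(col, lambda v: "****")(value)
    return out

record = {"name": "Ada Lovelace", "email": "ada@example.com", "ssn": "123-45-6789"}
print(apply_need_to_know(record, role="sales"))
```

The key design point is that the policy is declarative and centralized: the same record yields different views per role, so applications need no code changes to honor it – which is the property the four steps above rely on.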
From our experience, all four steps can be completed within days per system, and without developers changing application code or database configurations.
Benefits of the risk-based compliance approach:
- Comply with all major requirements across your most important systems within a few weeks, with measurable outcomes and a clear path forward.
- Gain sensitive data usage visibility, real-time access monitoring, forensics and assurance as required by auditors.
- Optimal usage of your budget and available IT resources.