Regulatory Risk Reporting: Common Data Challenges And How to Overcome Them

by Susmi Sengupta, on November 29, 2016

Do you often find yourself spending more time creating regulatory risk reports than analyzing your data for risk? If so, you’re not alone! Each new regulation brings fresh complexity and a demand for more data: new rules for extraction, reference, aggregation, governance, transformation, and normalization.


At its core, the ask for risk reporting is straightforward: create one view of risk for the entire organization to support proactive risk management and effective decision making. So why is it so difficult to collect and present this information? Why do more and more financial institutions and their IT departments grapple with the challenges of meeting these requirements? The answer lies in the way data is managed and structured across the organization. Here are a few symptoms to look out for:

  1. Data is stored in multiple disparate sources across multiple departments

  2. Data is conflicting

  3. Data lineage is unknown

  4. Data quality is questionable

  5. Data definitions are misunderstood

  6. Manual interventions are performed to fill data gaps

  7. Data is not easily accessible, so it is purchased from outside

  8. Data silos are created; everyone has their “own” dataset

  9. Reference data is lacking

  10. Data solutions take too long to implement; meanwhile, regulations change

 

OK, so we have these data issues and more; what should we do? It’s probably a no-brainer that most of our problems would be solved if only we could consolidate all required data into a data warehouse, mart, or lake to create a single version of the truth within set timelines. All of this is achievable with the plethora of BI tools at our disposal these days. However, before starting your DW implementation, take the data management pre-steps below to shorten the delivery cycle and reduce the angst of failing to be compliant:

  1. Understand the data requirements and implications, including upstream and downstream impacts

  2. Understand the sources of your data and raise awareness within Risk

  3. Appoint data custodian(s) and define:

    1. Who owns the data?

    2. Who has the right to edit and manage the data?

    3. Who uses this data? What are their use cases?

    4. Who reconciles the data?

    5. What is the source system of record?

  4. Write clear and concise data quality rules; build an upstream DQ rules engine

  5. Create data lineage so you can enhance it with your DW implementation

  6. Establish enterprise-wide data definitions and create a data dictionary

  7. Establish data reconciliation processes

  8. Consolidate source systems where possible

  9. Eliminate manual inputs wherever possible

  10. Build a roadmap for incremental data development
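To make pre-steps 4 and 7 concrete, here is a minimal sketch of an upstream data quality rules engine and a simple reconciliation check. The field names (`trade_id`, `notional`, `counterparty`) and the specific rules are hypothetical examples, not a prescribed schema; a production engine would load rules from a governed rule catalog rather than hard-coding them.

```python
# Minimal sketch of an upstream DQ rules engine (pre-step 4) and a
# reconciliation check (pre-step 7). Field names and rules are
# illustrative assumptions, not a real regulatory schema.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the record passes


# Example rule set; in practice these come from a governed rule catalog.
RULES = [
    Rule("trade_id is populated", lambda r: bool(r.get("trade_id"))),
    Rule("notional is non-negative", lambda r: r.get("notional", 0) >= 0),
    Rule("counterparty is known", lambda r: r.get("counterparty") in {"CPTY_A", "CPTY_B"}),
]


def run_rules(record: dict) -> list[str]:
    """Return the names of all rules a record fails (empty list = clean)."""
    return [rule.name for rule in RULES if not rule.check(record)]


def reconcile(source_a: list[dict], source_b: list[dict], key: str = "trade_id") -> set[str]:
    """Return keys present in one source but not the other."""
    ids_a = {r[key] for r in source_a}
    ids_b = {r[key] for r in source_b}
    return ids_a ^ ids_b  # symmetric difference: unmatched records


good = {"trade_id": "T1", "notional": 1_000_000, "counterparty": "CPTY_A"}
bad = {"trade_id": "", "notional": -5, "counterparty": "CPTY_X"}
```

Running rules at the point of ingestion, before data lands in the warehouse, means quality failures are caught and attributed to a source system rather than discovered downstream in a regulatory report.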

 

Too tough to follow in practice? Too many pre-steps? There is an easy (but not holistic!) way out: 

  1. Build what you need; don’t overbuild, and certainly don’t build everything

  2. Don’t just build “the perfect data warehouse”; build one that works and delivers on time

  3. Adopt agile data delivery methods; cut your delivery life cycle in half

  4. Encourage early development of reports



Topics: Data Science, Data Analytics

The Arkatechture Blog

A place for visualization veterans, analytics enthusiasts, and self-aware artificial intelligence to binge on all things data. 
