Auditing Risk Exposure For Client Portfolios


A major financial institution, Fin Bank, maintains its clients in a parent-child hierarchy that can be up to 8 levels deep; legal and sales maintain their hierarchies separately. Fin Bank needs to proactively communicate with clients and generate action plans when a client becomes over-allocated in a specific product type or industry. To accomplish this, Fin Bank must be able to perform periodic and ad hoc audits of its clients’ financial positions at different levels of the client hierarchy, and compare market exposures across various time series.
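A parent-child client hierarchy like the one described above can be sketched as child-to-parent links, with each client's roll-up at a given level resolved by walking the chain upward. The entity names and structure below are purely illustrative, not Fin Bank's actual hierarchy:

```python
# Hypothetical sketch of a parent-child client hierarchy (up to 8 levels
# deep) and a roll-up lookup at a chosen level. All names are invented.

# child -> parent links; the root entity has parent None
PARENT = {
    "desk_a": "region_emea",
    "desk_b": "region_emea",
    "region_emea": "global_corp",
    "global_corp": None,
}

def ancestry(client):
    """Return the chain from a client entity up to its root."""
    chain = [client]
    while PARENT.get(chain[-1]) is not None:
        chain.append(PARENT[chain[-1]])
    return chain

def rollup_at_level(client, level):
    """Ancestor of `client` at `level` (0 = root), or None if the
    client sits shallower than that level."""
    chain = list(reversed(ancestry(client)))  # root first
    return chain[level] if level < len(chain) else None
```

With this shape, an audit at "level 1" simply groups every leaf client by `rollup_at_level(client, 1)` before aggregating positions.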

Market exposure refers to the amount of funds, or the percentage of a portfolio, that an investor can lose from the risks unique to a particular investment.
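As a minimal numeric illustration of that definition, exposure to one product can be expressed as its share of the total portfolio value. The positions and figures below are made up:

```python
# Illustrative only: market exposure as the fraction of a portfolio's
# value tied to one product. Position values are invented.

positions = {  # product -> market value in USD
    "corp_bonds": 4_000_000,
    "equities": 5_000_000,
    "commodities": 1_000_000,
}

def exposure_pct(positions, product):
    """Fraction of total portfolio value held in `product`."""
    total = sum(positions.values())
    return positions.get(product, 0) / total

# e.g. exposure_pct(positions, "equities") -> 0.5
```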


The solutions that the Fin Bank technology department had implemented did not scale well: risk analyses took too long and queries frequently crashed. Their workaround was to split each analysis into several queries, each running over a subset of clients and limited to the past two years.

Below are the simplified tables used in this project.
Fin Bank is currently restricted to producing client reports that are rigid, time-consuming, and cover only two years of history. None of the technologies they tried could run computations on bi-temporal tables.
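For context, a bi-temporal table tracks two time dimensions per row: a valid-time interval (when the fact was true in the world) and a transaction-time interval (when the system believed it). A minimal sketch of an "as-of" lookup over such rows, with a hypothetical schema and invented dates, might look like:

```python
# Minimal bi-temporal "as-of" lookup sketch. Schema, client names,
# and dates are hypothetical, for illustration only.
from datetime import date

rows = [
    # (client, exposure, valid_from, valid_to, tx_from, tx_to)
    ("acme", 0.30, date(2016, 1, 1), date(2017, 1, 1),
     date(2016, 1, 2), date(2016, 6, 1)),   # belief later corrected
    ("acme", 0.35, date(2016, 1, 1), date(2017, 1, 1),
     date(2016, 6, 1), date(9999, 12, 31)), # current belief
]

def as_of(rows, client, valid_at, known_at):
    """Exposure valid at `valid_at`, as the system knew it at `known_at`."""
    for name, exp, vf, vt, tf, tt in rows:
        if name == client and vf <= valid_at < vt and tf <= known_at < tt:
            return exp
    return None
```

The same historical date can therefore return different answers depending on when the question is asked, which is exactly the capability the bank's reporting was missing.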

To meet the growing demands from regulators and to increase revenue, Fin Bank must improve its data processing capabilities. The technology department needed to build a system that enables periodic audits over a seven-year time frame, as well as ad hoc analyses requested by clients, regulators, internal sales teams, marketing, and trading management. For example, the system must be able to perform historical analysis between two dates using the latest client hierarchy. This analysis enables Fin Bank to:
  1. Compute change of exposure to:
    • A specific product for a specific client entity in the client hierarchy.
    • All products for a specific client entity in the client hierarchy and identify how the client’s portfolio should be rebalanced.
    • A specific product for all client roll-ups at a specific level, and find clients with the highest exposure levels.
  2. Generate a time series of exposures to different products at different levels of the client hierarchy, over different time intervals (by year, by month, etc.)
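The two report types above can be sketched in a few lines: a change-of-exposure computation between two dates, and a yearly exposure series. The snapshot data, client, and product names below are invented for illustration:

```python
# Hedged sketch of the reports described above. All data is invented.

# (date, client, product, exposure_pct) exposure snapshots
snapshots = [
    ("2015-12-31", "acme", "equities", 0.40),
    ("2016-12-31", "acme", "equities", 0.55),
    ("2017-12-31", "acme", "equities", 0.50),
]

def exposure_change(snaps, client, product, start, end):
    """Change of exposure to `product` for `client` between two dates."""
    by_date = {d: e for d, c, p, e in snaps
               if c == client and p == product}
    return by_date[end] - by_date[start]

def yearly_series(snaps, client, product):
    """Year -> exposure time series for one client and product."""
    return {d[:4]: e for d, c, p, e in snaps
            if c == client and p == product}
```

In practice each snapshot would itself be derived from positions rolled up to the requested hierarchy level, with the grouping key changed (month, quarter) to vary the time interval.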

The Xcalar Solution: Xcalar Data Platform

Realizing that their existing legacy data warehouse could not handle the increase in analytics workloads, the services group chose Xcalar Data Platform, which meets the bank’s requirements as follows:
  • Xcalar Compute Engine handles time series analytics on growing datasets from multiple sources.
  • Analytical applications are integrated with Xcalar REST services for ad hoc queries, reducing the dependency on data engineers and the need to turn every ad hoc query into a project.
  • Xcalar’s scale-out architecture provides the capacity to expand compute or storage as needed by simply adding nodes to the cluster.
  • With Xcalar Design, algorithm creation and workflow are visual, which enables less technical personnel such as business analysts to build data models. It also provides dataflow graphs for data lineage analysis and operationalization on petabytes of data.
  • True Data in Place™ technology reduces the size of raw storage space by over 10 times as compared to the bank’s legacy system.
Xcalar’s field team helped this customer navigate the roadblocks of deploying this solution on-premise. Extensive customer testing showed superior performance and resiliency, with minimal latency impact.

Demonstrated Key Benefits of Xcalar Technology

  1. Data Access without Movement
    • Xcalar can be deployed near the data source, on-prem or in the cloud. For this particular project, data source files were located on prem.
    • Xcalar True Data in Place™ technology ensures that duplicate versions of master data do not proliferate, while simplifying governance and data quality procedures.
  2. Performance and Scalability
    • Batch jobs are created from dataflow models that run with optimized performance, whether periodically or on demand.
    • Generated analytics reports can be used by non-technical users for better decision making.
    • Xcalar’s scale-out architecture allows Fin Bank to process bi-temporal reference data across extensive time periods.
    • Alternative big data technologies take days for what Xcalar can do in hours, making iterations faster and cost effective.
  3. Extensibility
    • Complex operations can be simplified through reusable User-Defined Functions (UDFs). Extensions can be built to encapsulate a series of operations.
    • Batch dataflow operations can be parameterized.
  4. Simplicity and Transparency
    • Simplifying complex data transformations through an intuitive visual design tool makes analysis accessible to non-technical users.
    • Xcalar delivers clear auditability and data lineage, which shows the sequence of transformations and intermediate results from processing data in the petabyte range.
  5. Operationalize Models on Full Datasets for Production Scale
    • Modeling algorithms can be saved as batch dataflows that can run on the production cluster against full datasets.
    • The batch dataflow is optimized for end-to-end execution efficiency, which consumes a fraction of the memory utilized in modeling.