IQVIA Payer Flexible Data Bridging

IQVIA Payer Flexible Data Bridging Overview

In the FDB end-to-end flow, reference and source payer data are loaded into the DataMart through pipelines. Views are created from the RM hierarchy and the generic source data; the reference and source view data are then matched using iterative bridging, and the matched results are sent back to RM as leaf nodes.

Load Payer Data and Generate Hierarchy

Import Pipeline Template

To Import Pipeline Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the IQVIA_Payer_Inherit_Hierarchy_<version>.json template file to a local Windows folder.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Open the pipeline template IQVIA_Payer_Inherit_Hierarchy_<version>.json in any text editor and replace all occurrences of the <__DATA_SOURCE_NAME__> placeholder with the environment's Data Source Name. For example, in the dev environment, replace every <__DATA_SOURCE_NAME__> with IDP_IQVIADEV_OA_DEV_SANDBOX_ENV_DWH.

  4. Log in to the IDP platform with valid credentials.

  5. Go to the Data Pipeline app, click Task Group from Template, and import the updated template file.

  6. After a successful import, a task group named IQVIA_Payer_Inherit_Hierarchy is created.

  7. Download and import the pipeline template file IQVIA_Payer_Inherit_Hierarchy_Template_<version>.json; see Import RM Template.

  8. Create the folder root/iqvia_payer/input in the S3 bucket, if it does not exist.
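The placeholder replacement in step 3 can be scripted instead of edited by hand. A minimal Python sketch, assuming the template file name and data source name from the example above (the file path is illustrative):

```python
from pathlib import Path

def replace_placeholder(template_path: str, data_source_name: str) -> str:
    """Replace every <__DATA_SOURCE_NAME__> placeholder in a pipeline
    template with the environment's Data Source Name."""
    text = Path(template_path).read_text(encoding="utf-8")
    return text.replace("<__DATA_SOURCE_NAME__>", data_source_name)

# Illustrative usage (file name and value as in the steps above):
# updated = replace_placeholder(
#     "IQVIA_Payer_Inherit_Hierarchy_1.0.json",
#     "IDP_IQVIADEV_OA_DEV_SANDBOX_ENV_DWH",
# )
```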

Operational Steps

The pipeline task group consists of the following steps:

  1. Excel to CSV: If the input file is already in .csv format, it is placed in root/iqvia_payer/input and this step is skipped. If the file is in .xlsx format, it is placed in root/iqvia_payer; this step converts the imported .xlsx file into .csv format and saves it in the root/iqvia_payer/input folder.

  2. Stage: Creates the IQVIA_PAYER table from the converted .csv file. The first row of the file is treated as the table column names, and the remaining rows populate the table data. During this process, two tables are also created internally: IQVIA_PAYER_HIST in the ODP_CORE_STAGING schema and IQVIA_PAYER_LND in the ODP_CORE_LANDING schema.

    Note:   

    The input to this step is the .csv file generated by the previous step (Excel to CSV). After successful execution of this step, the input .csv file is moved to the processed folder.

  3. Create Table: Creates the tables required for RM using CREATE TABLE IF NOT EXISTS statements (for example, RM_PAYER_CANONICAL, RM_PAYER_CANONICAL_BK).

    Note:   

    (A view is created from the IQVIA_PAYER_HIST table, and a stored procedure creates the required tables using CREATE TABLE IF NOT EXISTS statements.)

  4. Canonical Datamart: Populates IQVIA_PAYER_HIST data into the RM_PAYER_CANONICAL table using the information map plugin with type SCD (Incremental).

  5. Import RM Template: See Import RM Template to import the RM application template. This is a one-time step; skip it if the relationship is already present in the IDP RM application.

  6. Generate Hierarchy: Using the OA App Connector, configure and sync the relationship. Select the relationship name and generate the hierarchy.

  7. Publish Hierarchy: Using the OA App Connector, configure and sync the relationship. Select the relationship name and publish the hierarchy.
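The Stage step's loading rule (first CSV row becomes the column list, remaining rows become data) can be sketched as follows. This is an illustrative stand-in using SQLite, not the actual IDP staging plugin; only the table name IQVIA_PAYER comes from the step above.

```python
import csv
import sqlite3

def stage_csv(conn: sqlite3.Connection, table: str, csv_path: str) -> int:
    """Create `table` from the CSV header row and load the remaining rows.
    Returns the number of data rows inserted."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)                  # first row -> column names
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        placeholders = ", ".join("?" for _ in header)
        rows = list(reader)                    # remaining rows -> table data
        conn.executemany(
            f'INSERT INTO "{table}" VALUES ({placeholders})', rows
        )
    return len(rows)
```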

Troubleshooting

Make sure the input .xlsx or .csv file is placed in the correct location.

For the first run only, this task group must be executed step by step (Excel to CSV (optional) → Stage → Create Table → Canonical Datamart → Generate Hierarchy → Publish Hierarchy).

Import RM Template

To Import RM Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the pipeline template file IQVIA_Payer_Inherit_Hierarchy_Template_<version>.json.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Log in to the IDP platform with valid credentials.

  4. Go to RM present under Data Management.

  5. Click Import and upload template file.

  6. After a successful import, the relationship is created.

(If it already exists, delete the IQVIA Payer Inherit Configuration and its related relationship before importing this template.)

Load Source Data and Move to RM Datamart

Import Pipeline Template

To Import Pipeline Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the pipeline template file GENERIC_SOURCE_PAYER_To_RM_<version>.json.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Edit the file GENERIC_SOURCE_PAYER_To_RM_<version>.json and replace the following placeholders.

    1. <__DATABASE_NAME__> with the Snowflake database name. This placeholder is required to access the correct database for the current environment.

    2. <___DATA_SOURCE_NAME__> with the client's name, for example, BMS. This placeholder must be changed.

    3. <__STAGING_TABLE_NAME__> with the desired table name, suffixed with _PAYER. For example, if the staging table name is BMS, the value for this placeholder should be BMS_PAYER.

  4. Log in to the IDP platform with valid credentials.

  5. Go to the Data Pipeline app, click Task Group from Template, and import the updated template file.

  6. After a successful import, a task group named GENERIC_SOURCE_PAYER_To_RM is created.

  7. Create the folder root/generic_payer/input in the S3 bucket, if it does not exist.

Operational Steps

The pipeline task group consists of the following steps:

  1. Load Source Data: Creates a table named <__STAGING_TABLE_NAME__> from the input .csv file, where the first row is treated as the table column names and the subsequent rows populate the data. During this process, two tables are also created internally: <__STAGING_TABLE_NAME__>_HIST in the ODP_CORE_STAGING schema and <__STAGING_TABLE_NAME__>_LND in the ODP_CORE_LANDING schema.

  2. Copy to Canonical: <__STAGING_TABLE_NAME__>_HIST is used to populate the RM_PAYER_CANONICAL table using the information map plugin with type Default.

    Note:   

    After successful execution, the input file is moved to the processed folder.
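The Copy to Canonical step is performed by the information map plugin, but its effect (a default, non-SCD load from the _HIST table into the canonical table) can be illustrated with a plain INSERT ... SELECT. This SQLite sketch is a stand-in for the warehouse; the staging table name BMS_PAYER_HIST and the column names are illustrative:

```python
import sqlite3

# Illustrative stand-in for the warehouse; the real load is done by the
# information map plugin with type Default.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE BMS_PAYER_HIST (payer_id TEXT, payer_name TEXT)")
conn.execute("CREATE TABLE RM_PAYER_CANONICAL (payer_id TEXT, payer_name TEXT)")
conn.executemany(
    "INSERT INTO BMS_PAYER_HIST VALUES (?, ?)",
    [("1", "Acme Health"), ("2", "Beta Care")],
)
# Default load: copy all staged history rows into the canonical table.
conn.execute(
    "INSERT INTO RM_PAYER_CANONICAL "
    "SELECT payer_id, payer_name FROM BMS_PAYER_HIST"
)
```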

Troubleshooting

Make sure the input file is placed in the correct location.

For the first run only, this task group must be executed step by step (Load Source Data → Copy to Canonical).

Payer Iterative Data Bridging and Publish Back to RM

Import Pipeline Template: Iterative Data Bridging

To Import Pipeline Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the pipeline template file Payer_Flexible_Data_Bridging_Hierarchy_<version>.json.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Log in to the IDP platform with valid credentials.

  4. Go to the Data Pipeline app, click Task Group from Template, and import the updated template file.

  5. After the successful import, a task group with name Payer_Flexible_Data_Bridging_Hierarchy is created.

  6. Configure the following parameters:

    Step Name: Create Reference Views
    Parameter Variable: Relationship_Name
    Parameter Value Placeholder: <__Relationship_Name__>
    Replaceable String: Always set this value as 'IQVIA Payer Inherit Hierarchy'

    Step Name: Create Reference Views
    Parameter Variable: Source_Name
    Parameter Value Placeholder: <__Source_Name__>
    Replaceable String: Always set this value as 'IQVIAPayer'

    Step Name: Create Source Views
    Parameter Variable: Client_Source_Name
    Parameter Value Placeholder: <__Client_Source_Name__>
    Replaceable String: Replace this placeholder with the appropriate client source name. Set multiple source names using a comma separator. Example: 'Novatis','BMS'

    Step Name: Create Source Views
    Parameter Variable: Relationship_Name
    Parameter Value Placeholder: <__Relationship_Name__>
    Replaceable String: Always set this value as 'IQVIA Payer Inherit Hierarchy'

    Note:   

    Every value must be enclosed in single quotation marks.
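The single-quote requirement for these parameter values can be captured with a small helper. A minimal sketch; the `quote` helper and the dict are illustrative, and the client source names are the examples from the table above:

```python
def quote(value: str) -> str:
    """Enclose a parameter value in single quotation marks, as required."""
    return f"'{value}'"

# Parameter values from the table above (client sources are examples):
params = {
    "Relationship_Name": quote("IQVIA Payer Inherit Hierarchy"),
    "Source_Name": quote("IQVIAPayer"),
    "Client_Source_Name": ",".join(quote(s) for s in ["Novatis", "BMS"]),
}
```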

Operational Steps

The pipeline task group consists of the following steps:

  1. Create Reference Views: Creates four views to serve as the reference source for bridging.

  2. Create Source Views: Creates four source views to serve as the source data for bridging. Each view excludes data that has already been matched at a lower level or already used as a leaf node in RM.

  3. After steps 1 and 2 complete successfully, connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  4. Download the latest version of the pipeline template file DB-project-Payer_Flexible_Data_Bridging_<version>.json.

  5. Log in to the IDP platform with valid credentials.

  6. Go to Data-Bridging present under Data Management.

  7. Click Import and upload the template file.

  8. After a successful import, the bridging project is created.

  9. Go to the project edit page and set the Data Pipeline Task group name to Extract_Bridging_to_RM, if not already configured.

  10. If Extract_Bridging_to_RM is not present in the bridging Data Pipeline Task group name dropdown, download and import the pipeline template file Extract_Bridging_to_RM_<version>.json in the Data Pipeline app; see Import Pipeline Template: RM Extract.

  11. Publish: In this step, check whether any source data has already been used manually as a leaf node. If so, publish that data as leaf nodes in RM and exclude the published data from the generic source views.

  12. Run Iterative Bridging: Run the project bridges one by one, sequentially.

  13. RM DB Integration: Sends the bridging match results back to RM.

  14. Publish: Sets and publishes the bridging match results as leaf nodes of the existing hierarchy.

    For the first run only, the first three steps of this task group must be executed step by step (Create Reference Views → Create Source Views → Import Bridging Template).
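The exclusion rule in Create Source Views (drop rows already matched at a lower level or already used as leaf nodes in RM) can be sketched as a view with NOT EXISTS subqueries. This is an illustrative SQLite stand-in with hypothetical table names, not the views the pipeline actually generates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_payer  (payer_id TEXT, payer_name TEXT);
    CREATE TABLE matched_payer (payer_id TEXT);  -- matched at a lower level
    CREATE TABLE rm_leaf_nodes (payer_id TEXT);  -- already leaf nodes in RM
    INSERT INTO source_payer  VALUES ('1','Acme'), ('2','Beta'), ('3','Gamma');
    INSERT INTO matched_payer VALUES ('1');
    INSERT INTO rm_leaf_nodes VALUES ('2');
    -- Source view for bridging: exclude matched and leaf-node rows.
    CREATE VIEW source_payer_for_bridging AS
    SELECT s.* FROM source_payer s
    WHERE NOT EXISTS (SELECT 1 FROM matched_payer m
                      WHERE m.payer_id = s.payer_id)
      AND NOT EXISTS (SELECT 1 FROM rm_leaf_nodes r
                      WHERE r.payer_id = s.payer_id);
""")
```

Only rows not yet consumed by a lower bridging level or by RM remain visible to the next bridging iteration.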

Troubleshooting

Make sure the import is configured properly and the parameters are set correctly. Run the iterative bridges one by one.

Import Pipeline Template: RM Extract

To Import Pipeline Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the pipeline template file Extract_Bridging_to_RM_<version>.json.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Log in to the IDP platform with valid credentials.

  4. Go to the Data Pipeline app, click Task Group from Template, and import the updated template file.

  5. After a successful import, a task group named Extract_Bridging_to_RM is created with a datetime suffix. Keep the name Extract_Bridging_to_RM without the appended datetime part; the bridging project references only the bare Extract_Bridging_to_RM name (no datetime).
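The renaming rule in step 5 amounts to stripping a trailing timestamp from the generated task group name. A small sketch, assuming the suffix is an underscore followed by digits:

```python
import re

def strip_datetime_suffix(task_group_name: str) -> str:
    """Remove a trailing _<digits> datetime suffix, if present, so the
    name matches what the bridging project expects."""
    return re.sub(r"_\d+$", "", task_group_name)
```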

Operational Steps

The pipeline task group consists of the following steps:

  1. RM DB Integration: Pulls the matched records and generates hierarchy leaf nodes where they matched.

  2. Publish: Publishes the relationship, which generates the HIER, LOG, REF, and FLAT tables.

    Note:   

    After successful execution, the relationship can be published.

Import Pipeline Template: Payer Flexible Data Bridging Hierarchy Extract to Client

Note:   

Currently, only full-refresh data extraction is supported. Delta changes may be implemented later.

To Import Pipeline Template

  1. Connect to the IDP default S3 bucket and go to the folder <bucket_name>/templates/product.

  2. Download the pipeline template file Payer Flexible Data Bridging Hierarchy Extract to Client<version>.json.

    Note:   

    If multiple versions of the same pipeline exist, use the latest version.

  3. Log in to the IDP platform with valid credentials.

  4. Go to the Data Pipeline app, click Task Group from Template, and import the updated template file.

  5. After a successful import, a task group named Payer Flexible Data Bridging Hierarchy Extract to Client is created. Verify the task group and task parameters below and configure them according to your requirements.

  6. Open the task group and configure the following parameters, present under the File_Extract task, according to your requirements.

    Parameter Name: DATABASE_NAME
    Parameter Level: Task (File_Extract)
    Parameter Value Placeholder: <__DATABASE_NAME__>
    Description: Replace this placeholder with the appropriate database name. Example: IDP_IQVIADEV_OA_DEV_SANDBOX_ENV_DWH

    Parameter Name: Client_Source_Name
    Parameter Level: Task (File_Extract)
    Parameter Value Placeholder: <__Client_Source_Name__>
    Description: Replace this placeholder with the appropriate client source name. Example: Novatis

    Parameter Name: EXTRACT_FOLDER
    Parameter Level: Task (File_Extract)
    Parameter Value Placeholder: <__EXTRACT_FOLDER__>
    Description: Replace this placeholder with an appropriate folder name in S3. Example: Payer_Flexible_Data_Bridging_Hierarchy

Operational Steps

The pipeline task group consists of the following steps:

  1. Publish: Publishes the relationship, which generates the HIER, LOG, REF, and FLAT tables.

  2. File_Extract: Extracts the files to S3.

    Note:   

    After successful execution, three files appear in the S3 folder, named like PAYER_FDB_Entities_20211223110327.csv, PAYER_FDB_Hierarchy Summary_20211223110327.csv, and PAYER_FDB_Relations_20211223110327.csv.
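The extract file names follow the pattern PAYER_FDB_<Entity>_<timestamp>.csv, where the timestamp is YYYYMMDDHHMMSS as in the examples above. A sketch of building and validating such names (the helper and regex are illustrative, not part of the pipeline):

```python
import re
from datetime import datetime

def extract_file_name(entity: str, ts: datetime) -> str:
    """Build an extract file name like PAYER_FDB_Entities_20211223110327.csv."""
    return f"PAYER_FDB_{entity}_{ts.strftime('%Y%m%d%H%M%S')}.csv"

# Pattern the generated names should match (the entity part may contain
# a space, e.g. "Hierarchy Summary"):
NAME_RE = re.compile(r"^PAYER_FDB_.+_\d{14}\.csv$")
```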

See the Reference Views Layout For Bridging DID, Source View Layout For Bridging DID, and Load GENERIC_SOURCE_PAYER_To_RM Mapping DID in the Data Interface Documents.