IICS Random Interview Questions
Tell me about yourself, Tools and Technologies, Roles and Responsibilities
Hi, this is Nataraj from Hindupur, Andhra Pradesh. I have 4+ years of experience in Informatica PowerCenter and IICS/IDMC, covering both CDI and CAI.
I have worked on different databases such as Oracle, SQL Server, and Snowflake.
I am currently working as an ETL developer at <Company name>. My current project is Wesco (tell your client's name).
Wesco is a major electrical supply vendor in the USA. In this project, we receive the data in the form of flat files and database tables.
We load the data into the staging area using incremental loading logic. From staging, we load the dimension tables, which are in the Snowflake DB, using SCD Type-1 or Type-2 logic, and then load the fact tables based on the requirement.
My roles and responsibilities in the current project are:
- I have created many IICS mappings using almost all transformations, such as Source, Target, Aggregator, Joiner, Normalizer, Sequence, Lookup, Hierarchy Builder, and Hierarchy Parser.
- I have created different kinds of tasks in IICS, such as the Mapping Task, Data Synchronization Task, Dynamic Mapping Task, and Data Transfer Task, using sources such as Oracle DB tables and saved queries, with Snowflake as the target.
- I have created different types of components, such as Saved Query, Hierarchical Schema, Fixed-Width File Format, Shared Sequence, and File Listener.
- I have created input parameters and in-out parameters from the Parameters panel in Data Integration, and created and used parameter files in different mapping tasks (MCTs).
- I have implemented SCD Type-1 and SCD Type-2 mapping logic in IICS to load the dimension tables.
What is the current architecture of your project?
My current project is Wesco, a major electrical supplier in the US.
In this project, the sources come in the form of tables and flat files. We load these tables and files into the staging area using incremental loading logic, driven by in-out parameters.
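Conceptually, the incremental load works like the short Python sketch below. This is only an illustration of the idea, not IICS code: the watermark stands in for a hypothetical in-out parameter, and last_modified is an assumed audit column on the source.

```python
from datetime import datetime

# Illustration of incremental loading driven by a persisted watermark.
# In IICS this watermark would be an in-out parameter that keeps its value
# between runs; here it is just a Python variable.

def incremental_load(source_rows, last_load_ts):
    """Load only the rows that changed since the previous run."""
    delta = [r for r in source_rows if r["last_modified"] > last_load_ts]

    for row in delta:
        print("loading to staging:", row)      # placeholder for the staging write

    # Advance the watermark so the next run starts where this one stopped.
    if delta:
        last_load_ts = max(r["last_modified"] for r in delta)
    return last_load_ts

rows = [
    {"id": 1, "last_modified": datetime(2024, 1, 1)},
    {"id": 2, "last_modified": datetime(2024, 1, 5)},
]
watermark = incremental_load(rows, datetime(2024, 1, 2))   # only id 2 is picked up
```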
From the staging area, we load the data into dimension tables, which are in the Snowflake DB, using SCD Type-1 or SCD Type-2 logic based on the business rules.
From the dimension tables, we load the data into the fact tables.
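The SCD handling can be summarized with the hedged Python sketch below. It only illustrates the Type-1 versus Type-2 decision; the cust_id, city, is_current, and eff_date names are hypothetical, not the actual project mapping.

```python
from datetime import date

# Illustration only: how a changed dimension row is treated under SCD Type-1 vs Type-2.
# dim is a list of dicts standing in for the Snowflake dimension table.

def apply_scd(dim, incoming, scd_type=2):
    current = next(
        (r for r in dim if r["cust_id"] == incoming["cust_id"] and r["is_current"]),
        None,
    )

    if current is None:                            # new key: plain insert
        dim.append({**incoming, "is_current": True, "eff_date": date.today()})
    elif current["city"] != incoming["city"]:      # tracked attribute changed
        if scd_type == 1:
            current["city"] = incoming["city"]     # Type-1: overwrite, no history
        else:
            current["is_current"] = False          # Type-2: expire the old version...
            dim.append({**incoming, "is_current": True,
                        "eff_date": date.today()}) # ...and insert the new version
    return dim

dim_table = [{"cust_id": 101, "city": "Austin", "is_current": True, "eff_date": date(2023, 1, 1)}]
apply_scd(dim_table, {"cust_id": 101, "city": "Dallas"}, scd_type=2)
```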
My roles and responsibilities in the current project are:
Tell any five roles and responsibilities as per your CV.
Have you worked with cloud databases? Can we load data from on-premises to the cloud in PowerCenter?
Yes, I have mainly worked on the Snowflake cloud DB. Yes, we can load data from on-premises databases such as Oracle and SQL Server into the Snowflake cloud DB by using a Data Transfer task.
What are the transformations you have used? Can you give an example of when you would use a connected and an unconnected lookup? How will you decide?
Usually, we use a connected lookup in most cases. We use an unconnected lookup when we want to call the lookup from an expression and pass different columns to it at different times.
How can you make a lookup connected or unconnected?
Drag the Lookup transformation onto the mapping canvas, click on the Lookup transformation, go to the General tab under the lookup, and then select the unconnected lookup option.
How many fields are returned by an unconnected lookup?
The unconnected Lookup transformation returns one column to the calling transformation.
Have you used Expression transformation?
Yes, I have used the Expression transformation to perform non-aggregate calculations and to calculate values within a single row, for example deriving a total amount from quantity and price for each row.
Do you know Macros?
Yes. I have used Expression Macros in IICS to create repetitive or complex expressions in mappings.
How can you use macros?
We can create vertical, horizontal, or hybrid macros in the Expression transformation in IICS.
A Vertical Macro expands an expression vertically. The vertical macro generates a set of similar expressions to perform the same calculation on multiple incoming fields.
A horizontal Macro expands an expression horizontally. The horizontal macro generates one extended expression that includes a set of fields or constants.
A hybrid Macro expands an expression both vertically and horizontally. A hybrid macro generates a set of vertical expressions that also expand horizontally.
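As a rough analogy only (IICS macros are configured in the Expression transformation, not written in Python), a vertical macro is like repeating one calculation over a list of fields, while a horizontal macro is like folding a set of fields into one combined expression. The field names below are made up.

```python
# Analogy only -- not IICS macro syntax.
row = {"first_name": "  Nataraj ", "last_name": " K ", "city": " Hindupur "}
fields = ["first_name", "last_name", "city"]

# Vertical macro analogy: the same calculation (trim) repeated for each field,
# producing one output field per input field.
trimmed = {f + "_out": row[f].strip() for f in fields}

# Horizontal macro analogy: one extended expression built across the set of fields,
# producing a single output.
full_text = " ".join(row[f].strip() for f in fields)

print(trimmed)     # {'first_name_out': 'Nataraj', 'last_name_out': 'K', 'city_out': 'Hindupur'}
print(full_text)   # Nataraj K Hindupur
```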
Have you used parameterization in IICS?
Yes, I have used input parameters to parameterize source, target, and lookup connections.
And I have used in-out parameters, which serve as mapping variables and mapping parameters.
Have you used lookups? What kind of lookup? What are the differences between connected and unconnected lookups?
Yes, I have used the Lookup transformation to look up a table or file and bring the related values through to the target.
I have mostly used connected lookups in my project.
A connected Lookup transformation receives source data, performs a lookup, and returns data.
An unconnected Lookup transformation is not connected to a source or target. A transformation calls the Lookup transformation with a lookup expression. The unconnected Lookup transformation returns one column to the calling transformation.
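As an analogy (not Informatica syntax), a connected lookup sits in the pipeline and fires for every row, while an unconnected lookup behaves like a helper function that an expression calls only when needed and that returns a single value. The rate table and column names below are hypothetical.

```python
# Analogy only: connected vs. unconnected lookup behaviour.
rate_table = {"USD": 1.0, "EUR": 1.08, "INR": 0.012}   # stand-in for the lookup source

def lookup_rate(currency):
    """Unconnected-style lookup: invoked on demand, returns exactly one value."""
    return rate_table.get(currency, 1.0)

rows = [{"amount": 100, "currency": "EUR"}, {"amount": 250, "currency": "USD"}]

for row in rows:
    # Connected-style: the lookup is wired into the data flow and runs for every row.
    row["rate"] = rate_table.get(row["currency"], 1.0)

    # Unconnected-style: called from an expression only when a condition is met,
    # similar to :LKP inside an IIF in Informatica.
    row["usd_amount"] = row["amount"] if row["currency"] == "USD" \
        else row["amount"] * lookup_rate(row["currency"])

print(rows)
```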
What are the Transformations you have used?
I have mainly used different transformations such as Source, Target, Filter, Router, Sorter, Aggregator, Lookup, Joiner, Sequence, Hierarchy Builder, Hierarchy Parser, and Normalizer.
In which scenario have you used the Synchronization task?
We have used the Synchronization task to load the data from the source to the stage tables in our project.
Which tool do you use to get the mapping requirements, and how do you implement them?
My team lead / Scrum Master attaches the mapping document to the task in the Jira tool. In each sprint, we download the mapping document from Jira and start building the mapping.
How can you schedule the mapping?
We have scheduled our mapping tasks using the Informatica IICS scheduler.
After loading the data into the target table, how can you check the target table in the Snowflake database? For example, if you are updating rows, how can you check in the Snowflake database whether those rows were updated or not?
We perform unit testing to validate the target table in the Snowflake database.
In the Dev environment, I insert one new record into the stage table, run the mapping task first, and check whether that record was inserted into the Snowflake target table. Then I update one existing record in the stage table, run the mapping task again, and check that the record was updated in the target table.
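The check itself can also be done with a direct query against the Snowflake target. Below is a minimal sketch using the Python Snowflake connector; the connection details, the dim_customer table, and the column names are all hypothetical.

```python
import snowflake.connector   # requires the snowflake-connector-python package

# Illustration only: verifying that the test record was inserted/updated in the target.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="DEV_WH", database="EDW_DEV", schema="DW",
)
try:
    cur = conn.cursor()
    # Look up the key used in the unit test and review the changed columns.
    cur.execute(
        "SELECT cust_id, city, update_ts FROM dim_customer WHERE cust_id = %s",
        (101,),
    )
    for cust_id, city, update_ts in cur.fetchall():
        print(cust_id, city, update_ts)   # confirm the values reflect the update
finally:
    conn.close()
```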
After updating the target table records, where can you check whether those rows were updated or not?
In the Dev environment, I load a few existing records into the stage table, run the mapping task, and check that these records are updated in the target table. I record this in my unit test case document.
What if a workflow fails in the production environment? How can you resolve this? What is the approach you follow? Explain the step-by-step process.
If any workflow fails in the Prod environment, I follow the steps below in IDMC/IICS:
- Restart the mapping task and check whether it succeeds.
- Check the log file and understand the error.
- Check whether today's source file is present in the source location.
- Check whether any database or IICS server is down; if so, ask the DBA to restart the servers.
- If any mapping change is required, change the mapping in the Dev environment, do the unit testing, and deploy the code to Test and then to Prod.
What is the approach for deploying code to different environments, Dev to QA and QA to Prod? For example, into which folders do we import or export?
In real projects, we have three different IICS login URLs for Dev, Test, and Prod.
We have folders with the same names in each environment. We export the mapping task from Dev and import it into the same folder in the Test environment. After that, the team lead or the deployment team takes care of the Test-to-Prod code deployment.
What are a data warehouse, a data lake, and a data hub?
▹ A data warehouse (DWH) is a centralized repository for structured, filtered data that has already been processed for a specific purpose. It is primarily used for data analysis and reporting, particularly of historical data, and can be considered a cornerstone of business intelligence.
▹ A data lake (DL) is also a single or centralized data repository, but it is used for storing vast amounts of un-processed data in its natural or raw format. How the data will eventually be used can be decided at a later date.
▹ A data hub (DH) can be thought of as a hub-and-spoke approach to storing and managing data. Data is physically moved and reindexed into a new system. This provides more structure to the data and permits diverse business users to access information that they need more rapidly than in a data lake.
When do we use a Synchronization task or a Replication task? In what scenarios can we use these kinds of tasks?
Synchronization task: We use the Synchronization task to synchronize data between a source and a target in order to integrate applications. I have used the Synchronization task to load data from the source to the stage tables.
Replication task: We use the Replication task to copy data from one database to another. I have used the Replication task to take a backup of all the tables in a database and transfer the tables along with their data to another database.
How do you receive your work using the Agile methodology?
In my current project, we use the Agile methodology and follow sprints. Each sprint lasts two weeks.
For every sprint, the Scrum Master creates the tasks in the Jira tool and assigns them. I am expected to complete these tasks within that sprint.
The Scrum Master attaches the mapping document to the task in Jira; I download the mapping document from Jira, go through it, and then start building the mapping.
What is a retrospective? What is the use of a retrospective?
In my current project, we have a sprint retrospective meeting before the sprint completes.
In the sprint retrospective meeting, we discuss the points below:
▹ The Scrum Master arranges the sprint retrospective meeting to discuss the status of the current sprint.
▹ If any team member is unable to complete a task, the Scrum Master moves the task to the next sprint.
▹ The Scrum Master takes the status of the current tasks from each team member.
What are story points in the Agile methodology?
Story Points in Jira are a way to measure the relative workload of an issue. They allow teams to estimate the complexity, priority and size of a task without committing to actual hours.
Story points are numbers that can be assigned to a task based on certain parameters for work completion.
The story point is assigned based on the following:
- Complexity of the task
- Total team experience for the task
- Available resources for the task
The story points for a task may vary from person to person based on experience and other parameters.
If a workflow fails in the Prod environment, can we make the code change for that failed mapping in the Production environment or in the Dev environment? Mention the step-by-step approach.
▹ If a workflow fails in the Prod environment, first check the reason for the failure.
▹ If any code change is required, always change the mapping in the Dev environment, do the unit testing, and then deploy the code to Test and then to Prod.
▹ We should never develop or change a mapping in the Test or Prod environment.
If we are working on a failed workflow, what are the timelines to resolve the Prod issue? What is the process, and whom do we contact for further steps?
If any workflow fails in the Prod environment, I follow the steps below in IDMC/IICS:
- Restart the mapping task and check whether it succeeds.
- Check the log file and understand the error.
- Check whether today's source file has been received in the source location.
- Check whether any database or IICS server is down; if so, ask the DBA to restart the servers.
- If any mapping change is required, we raise a CR (change request) to implement the changes and assign the CR to the L3 support team.
- The L3 support team changes the mapping in Dev, does the unit testing, and deploys the code to Test and then to Prod.