Name of the client you are working for?
Please do a Google search and learn about the client of your project, which is mentioned in your resume. For example, answer like below.
"My client is Wesco. Wesco is a major electrical supplies vendor in the USA."
What type of work do you do in your project?
As I am working as an Informatica Developer in my current project, my day-to-day work is on Informatica PowerCenter, databases like Oracle and SQL Server, and Unix.
I create mappings on a daily basis using different transformations like Filter, Router, Sequence Generator, Lookup, Sorter, Aggregator, Source Qualifier, etc.
If you are searching for an Informatica project, the company you are working for has other projects, right? Why don't you move to one of those projects in the same company, and why are you looking for a different company?
Initially I tried for the internal Informatica projects in my company, but had no luck; there were no openings in those projects. Also, I am looking for a change for better prospects. So, I am looking for a project in a different company.
If you are serving your notice period, with what hope are you serving it without having an offer in hand?
As an Informatica Developer, I have very good experience with Informatica. I am confident that I will get a job within the notice period. That is why I resigned and am searching for another job during the notice period.
Suppose the source file has junk characters, how would you remove them? Will you remove them before importing to Informatica PowerCenter or after importing to Informatica PowerCenter?
We can do it both ways, as mentioned below.
Remove junk characters before importing to PowerCenter: We can remove the junk characters from a source file by writing a Unix command or shell script and calling it in the pre-session command task in Informatica PowerCenter.
Remove junk characters after importing to PowerCenter: After importing the source file, we can remove the junk characters using an Expression transformation.
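For example, a minimal pre-session cleanup sketch in Unix (assuming a hypothetical source file named source_raw.dat) keeps only tabs, newlines, carriage returns and printable ASCII characters:
# keep tab, line feed, carriage return and printable ASCII; drop everything else
tr -cd '\11\12\15\40-\176' < source_raw.dat > source_clean.dat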
In case of a job failure in production, to whom will you report?
As an L2 team member, if any job fails in production, first we try to restart the workflow. If it still fails, we check the session log, try to find the error and resolve it.
If any code change is required for the job failure, we report it to the L3 support team to do the code change.
Are you comfortable if you are placed on Big Data management projects?
As an Informatica Developer, I have very good experience on Informatica projects. If I am given some idea or KT (Knowledge Transfer) about Big Data, I am happy to learn and work on it.
If we don't have an end date, then how do we develop SCD Type 2?
SCD Type 2 can be implemented with 3 types of logic.
- Date effective logic: We should have a Start_Date and an End_Date to implement this logic.
- Flag logic: We use a flag column to implement this logic.
- Version logic: We use a version column to implement this logic.
If we don't have an end date, then we can use the flag or version logic to implement SCD Type 2.
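As an illustration, a minimal SQL sketch of the flag logic (assuming a hypothetical CUSTOMER_DIM table with a CURRENT_FLAG column); in Informatica this is typically built with a Lookup, an Expression and an Update Strategy transformation:
-- expire the currently active version of the changed record
UPDATE customer_dim SET current_flag = 'N' WHERE customer_id = 101 AND current_flag = 'Y';
-- insert the new version of the record as the active row
INSERT INTO customer_dim (customer_id, customer_name, city, current_flag) VALUES (101, 'John', 'Dallas', 'Y');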
What are the output files that Informatica creates while running a session?
Whenever a session runs, the Informatica server creates various output files such as the session log file, bad files, cache files and output files.
If you want more detail, please read below.
The following are the files created by the PowerCenter server.
PowerCenter Server Log: Informatica PowerCenter creates a log for all status and error messages (default name: pm.server.log). It also creates an error log for error messages. These files are created in the Informatica home directory.
Session Log File: The Informatica PowerCenter Server creates a session log file for each session. It writes information about the session into the log file, such as the initialization process, creation of SQL commands for reader and writer threads, errors encountered and the load summary. The amount of detail in the session log file depends on the tracing level.
Session Detail File: This file contains load statistics for each target in the mapping. Session details include information such as the table name and the number of rows written or rejected. This file can be viewed by double-clicking on the session in the Monitor window.
Performance Detail File: This file contains information known as session performance details, which indicates where there is a need for performance improvement. To generate this file, the performance detail option in the session property sheet has to be selected.
Reject File or Bad File: This file contains the rows of data that the writer does not write to target(s).
Control File: The Informatica PowerCenter Server creates a control file and a target file when running a session that uses an external loader. The control file contains information about the target flat file, such as the data format and loading instructions for the external loader.
Post-Session Email: Post-session email automatically communicates information about the session to the designated recipients. Two messages are generated: one when the session completes successfully and one when the session fails.
Indicator File: When a flat file is utilized as a target, the PowerCenter Server can be configured to create an indicator file. For each target row, the indicator file contains a number to indicate whether the row was marked for insert, update, delete or reject.
Output File: If the session writes to a target file, the PowerCenter Server creates the target file based on the file properties entered in the session property sheet.
Cache Files: When the PowerCenter Server creates a memory cache, it also creates cache files. The PowerCenter Server creates cache files (such as index and data caches) for the following transformations:
- Aggregator Transformation
- Joiner Transformation
- Rank Transformation
- Lookup Transformation
- Sorter Transformation
Can we store previous session logs in Informatica PowerCenter? If yes, how?
Yes, we can save the previous session logs by using the below option in Informatica PowerCenter.
1. Open the Workflow Manager.
2. Select Tasks > Session Configuration > Edit
3. Click the Properties tab.
4. For the 'Save Session Log By' attribute, select 'Session runs', and then enter a numeric value for the 'Save session log for these runs' attribute.
I have used an Update Strategy in my mapping, but records are only getting inserted, not updated. Why?
Check the Update Strategy transformation and make sure that you have used DD_UPDATE. Also confirm that the session property 'Treat source rows as' is set to 'Data Driven'.
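For example, a typical condition in the Update Strategy transformation (a sketch assuming a hypothetical lookup return port lkp_EMP_ID, which is NULL when the record does not yet exist in the target):
IIF(ISNULL(lkp_EMP_ID), DD_INSERT, DD_UPDATE)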
Actually, how exactly do you fit into this project?
As this project is an Informatica development project and I have 3+ years of very good experience in Informatica development, I can fit into this project.
How to check the performance of complex queries in your project?
The performance of complex queries can be checked using the below methods.
- Looking at the execution plan
- Rewriting some parts
- Using temporary tables
But in my project, we have a DBA (Database Administrator) who takes care of the complex queries.
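For example, in Oracle the execution plan can be checked as below (a sketch, assuming a hypothetical EMPLOYEE table):
EXPLAIN PLAN FOR SELECT * FROM employee WHERE dept_id = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);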
What are HLD and LLD documents in a data warehouse project?
HLD documents: High-level design documents, such as the SOW (Scope of Work) document, which contain high-level information about the project like the ETL tool, database, tables, relationships, etc.
LLD documents: Low-level design documents, such as the mapping document or requirement document, which contain information like the source, the target and the mapping logic.
What is the MD5 function and how is it used in real time in Informatica?
What is MD5 Function?
- MD5 is a hash function in Informatica.
- It stands for the Message Digest algorithm and has many applications in cryptography.
- It returns a 32-character hexadecimal string (characters 0-9 and a-f). It returns NULL if the input is NULL.
How can we use MD5 in a practical implementation?
- When we need to load only the updated records into the target.
- If you want to see whether the information for a particular Employee_id has been updated or not.
- It would be complex to compare the columns one by one, so this is the benefit of using MD5 here; it also reduces the memory cache needed.
- MD5 is used in the Expression transformation; MD5(string) is the syntax for the function.
- It generates a 32-character hexadecimal value.
- We can use a Lookup transformation to see if any customer information has changed, and insert or update the changed information using an Update Strategy transformation.
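A minimal sketch of the Expression transformation port (assuming hypothetical input ports EMP_ID, FIRST_NAME and CITY); the resulting checksum is compared with the value stored in the target via a Lookup:
MD5(TO_CHAR(EMP_ID) || '|' || FIRST_NAME || '|' || CITY)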
If there is a requirement to support over the weekend due to critical timelines, will you be able to support and manage?
Yes, I am happy to work or provide support over weekends if there are any critical issues in the project.
What is the ETL process to load data to the downstream (target) in Informatica?
Using Informatica PowerCenter, we can easily load the data to the downstream target using the below steps.
- Import the source and target using the Informatica PowerCenter Designer.
- Create a mapping and build it using different transformations as per the requirement.
- Create a session, configure the session, create a workflow and run the workflow using the Informatica Workflow Manager.
- Check the statistics like source and target success rows using the Informatica Workflow Monitor.
What is the customer's (client's) perspective in a data warehouse project?
In data warehousing projects, customers ultimately need business reports. Customers expect different reports like daily, weekly, monthly and yearly reports, and they also expect branch-wise, city-wise, state-wise, zone-wise and country-wide reports from the developers.
How do you raise a ticket (Incident or Service Request)?
In our current project, we raise tickets (Incidents or Service Requests) using the ServiceNow tool. Different ticketing tools like HP ALM and Jira can also be used.
Explain the end-to-end project life cycle from Dev to Production.
We follow the below steps from Dev to Production.
- Dev (Development / Lower) environment: ETL developers do the mapping development in the Dev repository using the tables in the development (DEV) database (which has sample data). Once the mapping development is completed, the developers run the workflows and load the data into the Dev database tables. The developers also complete unit testing and move the code from the Dev to the Test environment.
- Test environment (QA/SIT): ETL testers run the workflows in the testing (QA) repository. When they run the workflows, the data (near real-time data) is loaded into the testing database tables. The ETL testers do thorough testing against the testing database tables. If they find any discrepancies, the testing team raises an Incident/Service Request and assigns it to the Dev team.
The Dev team looks into the issue raised by the testing team; if any code change is required, the developers change the code in the Dev repository, do the unit testing and deploy the code from the Dev to the Test repository again.
The testing team does thorough testing again until they see correct data, as per the logic, in the target table in the testing database. Once the ETL tester completes the testing, he gives the testing sign-off.
- Prod (Production / Higher) environment: A team lead or a senior teammate deploys the code from the Test environment to the Prod environment. First, the team lead does a dry run with 2 to 3 days of data. Once the run completes successfully, he does the full load first, and then the workflow runs on a daily basis in production.
What other tools do you use in your project?
I have used the below tools in my project.
- Informatica PowerCenter Client Tools
- Oracle SQL Developer
- Putty (for UNIX files)
- WinSCP (for UNIX files)
- Jira (For Tasks)
- Service Now (to raise Incidents and Service Requests)
If there are errors in the UAT phase, how will you resolve them, and how are you maintaining versions?
If there are any errors in UAT, the users raise a ticket and assign it to the Dev team. As an ETL developer, I modify the code in the Dev environment, do the unit testing and then deploy the code from the Dev to the Test environment.
In this case, we use the versioning concept (check-in and check-out) to maintain the mapping versions in Dev.
How will you get updates on your project from your client?
The client raises a BR (Business Request) or CR (Change Request) if there is any new requirement. Based on the BR or CR raised by the client, our team lead/scrum master assigns the work through tasks in Jira.
What is your project name?
It depends on your project. Usually, the client name will be the project name.
Ex:
Wesco DW
ICICI DW
AMFAM DW
How do you pass values (variables) between sessions in a workflow? How do you pass one session's parameter to another session?
- Create a workflow variable, e.g. $$wf_var.
- On successful completion of mapping 1 (session 1), assign $$wf_var = $$var1 (post-session on success variable assignment).
- In mapping 2 (session 2), use the pre-session variable assignment and assign $$var2 = $$wf_var.
Find the nth highest salary without using the rank function in Oracle.
Select * From Employee E1
Where N = (Select Count(Distinct(E2.Salary)) From Employee E2 Where E2.Salary >= E1.Salary)
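For example, for the 3rd highest salary (assuming an EMPLOYEE table with a SALARY column):
Select * From Employee E1
Where 3 = (Select Count(Distinct(E2.Salary)) From Employee E2 Where E2.Salary >= E1.Salary)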
Is it necessary to enable the sorted input option in the Joiner transformation in Informatica PowerCenter?
Sorted input is not mandatory in general, but it improves Joiner performance. However, Informatica PowerCenter expects the Joiner to have sorted input if the master and detail inputs come from the same source. When sorted input is selected, ensure that a Sorter transformation (or otherwise sorted data) is provided for each Joiner input on the join keys.
What is incremental aggregation in Informatica?
- Using incremental aggregation, you can apply changes captured from the source to aggregate calculations such as SUM, MIN, MAX, AVG, etc.
- If the source changes incrementally and you can capture changes, you can configure the session to process those changes.
- This allows the Integration Service to update the target incrementally, rather than forcing it to process the entire source and recalculate the same data each time you run the session.
What is a trigger in PL/SQL?
Triggers in Oracle are blocks of PL/SQL code that Oracle executes automatically in response to an insert, update or delete operation against a table. Triggers are created using the PL/SQL CREATE TRIGGER statement.
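For example, a minimal sketch of an AFTER INSERT trigger (assuming hypothetical EMPLOYEE and EMPLOYEE_AUDIT tables):
CREATE OR REPLACE TRIGGER trg_employee_audit
AFTER INSERT ON employee
FOR EACH ROW
BEGIN
  -- record which employee was inserted and when
  INSERT INTO employee_audit (emp_id, action_type, action_date)
  VALUES (:NEW.emp_id, 'INSERT', SYSDATE);
END;
/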
How do you optimize a mapping in Informatica?
Generally, we identify source bottlenecks, target bottlenecks and transformation bottlenecks. Based on these, we optimize the source side, the target side or the transformation side of a mapping.
What type of data are you getting for your project?
Usually, we get business transactional data in the form of source flat files or source tables.
In how many ways can we override the query in Informatica PowerCenter?
In Informatica PowerCenter, we can override the query in the ways mentioned below.
- By using SQL Query option in Source Qualifier Transformation.
- By using Lookup SQL Override in Lookup Transformation.
- By using Pre-SQL and Post-SQL on the source and target sides.
- By using SQL Transformation in Informatica.
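For example, a simple SQL override in the Source Qualifier transformation (a sketch assuming a hypothetical EMP table) that pulls only active employees:
SELECT EMP.EMP_ID, EMP.EMP_NAME, EMP.DEPT_ID
FROM EMP
WHERE EMP.ACTIVE_FLAG = 'Y'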
How will you do error handling in Informatica?
There are many ways to do error handling in Informatica, but in our project we followed the below steps as part of error handling.
- Set 'Stop on Errors' to 1 in the session properties, so that the session does not succeed when any records are rejected.
- Use the options 'Fail parent if this task fails' and 'Fail parent if this task does not run' to avoid the workflow showing success when any session fails.
- Set the session property 'Recovery Strategy' to 'Restart task'.
If you have special characters in the FIRST_NAME column, how will you remove them in Informatica?
REG_REPLACE(FIRST_NAME, '[^\w,-. ]', '')
Use the above function in an Expression transformation; it removes the special characters and retains only alphanumeric characters, commas, dashes, periods and spaces.
How do you remove duplicates in a file using a UNIX command?
sort duplicates.txt | uniq > noDuplicates.txt
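Equivalently, this can be done with a single command:
sort -u duplicates.txt > noDuplicates.txt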
Oracle query to split FullName into first name, middle name and last name (3 columns).
Consider the NAME column holding the name "Adhya Nataraj Virupaksham". Now write the below SQL query:
select substr(name, 1, instr(name, ' ', 1, 1) - 1) as fname,
       substr(name, instr(name, ' ', 1, 1) + 1, instr(name, ' ', 1, 2) - instr(name, ' ', 1, 1) - 1) as middlename,
       substr(name, instr(name, ' ', 1, 2) + 1) as lname
from tablename;
What is Persistent cache in Informatica?
- You can configure the cache to permanently save the data by configuring a Lookup transformation to use a persistent (permanent) cache. When you use a persistent cache, the Integration Service saves and reuses the cache files from session run to session run.
- By default, the cache is created as non-persistent; that is, the cache is deleted once the session run is complete. If the lookup table or file does not change across session runs, you can reuse the existing persistent cache.
What is parallel processing in Informatica?
Parallel processing is the processing of data through multiple pipelines. We use partitioning, a performance tuning technique in Informatica, which optimizes performance by allowing parallel processing of data through multiple pipelines.
How will you concatenate strings using UNIX?
Use the below syntax to concatenate two strings in Unix.
string1="hello"
string2="world"
mystring="$string1$string2"
echo $mystring
If the Informatica workflow rejected 50 records in production, what is your next step?
If a workflow rejects any records in production, follow the below steps.
- Check the session log and identify the Error, which is causing the rejection.
- Check the target table for any datatype issues, constraint issues, missing required fields, etc.
- Validate the Rejected records in the source data to verify the reason for rejection.
What is the target override in Informatica?
By default, the Integration Service updates the target tables based on key values.
If you want to update the target table when the target table does not have a primary key, you can use the target update override. The update override query can be written on the target at the mapping level, like below.
UPDATE T_SALES
SET
EMP_NAME = :TU.EMP_NAME,
DATE_SHIPPED = :TU.DATE_SHIPPED,
TOTAL_SALES = :TU.TOTAL_SALES
WHERE
EMP_ID = :TU.EMP_ID