
Frequently asked Apache Pig Interview Questions and Answers

by Subashini, on Jul 20, 2022 8:23:13 PM


Q1. Define Apache Pig

Ans

Apache Pig is a platform for analyzing large data sets by representing them as data flows. It is designed to provide an abstraction over MapReduce, reducing the complexity of writing MapReduce jobs in Java.
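
For illustration, here is a minimal Pig Latin data flow (a word count); the input file and field names are hypothetical:

lines = LOAD 'input.txt' AS (line:chararray);
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grouped = GROUP words BY word;
counts = FOREACH grouped GENERATE group, COUNT(words);
DUMP counts;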

Q2. Why Do We Need Apache Pig?

Ans

While performing MapReduce tasks, programmers who are not well versed in Java often used to struggle to work with Hadoop. Pig is a boon for all such programmers, because:

  • Using Pig Latin, programmers can perform MapReduce tasks easily, without having to type complex Java code.
  • Since Pig uses a multi-query approach, it also helps reduce the length of the code.
  • Pig is easy to learn for anyone familiar with SQL, because Pig Latin is an SQL-like language.
  • It offers many built-in operators to support data operations, such as joins, filters, and ordering, and it provides nested data types that are missing from MapReduce, such as tuples, bags, and maps.

Q3. What is the difference between Apache Pig and Hive?

Ans

Criteria     | Pig                                  | Hive
Language     | Pig Latin                            | SQL-like
Application  | Programming purposes                 | Report creation
Operation    | Client side                          | Server side
Data support | Semi-structured                      | Structured
Connectivity | Can be called by other applications  | JDBC & BI tool integration

Q4. Explain the uses of PIG.

Ans

We can use Pig in three categories:

  • ETL data pipelines: Pig helps populate the data warehouse. Pig can pipeline data to an external application, wait until that application finishes, receive the processed data, and continue from there. This is the most common use case for Pig.
  • Research on raw data.
  • Iterative processing.

Q5. What is the difference between Pig and SQL?

Ans

  • Pig Latin differs from SQL: it is procedural in style, whereas Hive's query language is declarative and similar to SQL.
  • Pig runs on top of Hadoop; in principle, it could also sit on top of other execution engines such as Dryad.
  • Both Hive and Pig commands compile to MapReduce jobs.


Q6. Explain the requirement of MapReduce while we program in Apache Pig.

Ans

Apache Pig programs are written in a language called Pig Latin, which is analogous to SQL. To carry out a query, we need an execution engine: the Pig engine converts all the queries into MapReduce jobs, so MapReduce acts as the primary execution engine needed to run the programs.

Q7. Explain BloomMapFile.

Ans

BloomMapFile is a class that extends the MapFile class. It is generally used in HBase table format to provide a quick membership test for keys, using dynamic Bloom filters.

Q8. What is a bag in Pig?

Ans

In Apache Pig, a bag is a collection of tuples.

Q9. Why do we need the for each operation in Pig scripts?

Ans

The FOREACH operation in Apache Pig is used to apply a transformation to each element in a data bag, so that the corresponding action is performed on each record to generate new data items.
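
A minimal sketch, assuming a hypothetical comma-delimited file employee.txt with name and salary fields:

employees = LOAD 'employee.txt' USING PigStorage(',') AS (name:chararray, salary:int);
annual = FOREACH employees GENERATE name, salary * 12 AS annual_salary;
DUMP annual;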

Q10. Explain the different data types in Pig.

Ans

Following are the three complex data types that are supported by Apache Pig:

  • Map, a key-value store where the key and value are joined together using #.
  • Tuple, similar to a row in a table, with comma-separated fields. A tuple may hold any number of fields.
  • Bag, an unordered collection of tuples, which may contain duplicate tuples.
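
In literal notation, with hypothetical values:

(John, 25)                  -- tuple: an ordered set of fields
{(John, 25), (Jane, 30)}    -- bag: an unordered collection of tuples
[name#John, age#25]         -- map: keys and values joined by #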

Q11. What is the function of Flatten in Pig?

Ans

Often the data in a tuple or bag is nested, and removing a level of that nesting is needed. In those cases FLATTEN, a modifier built into Pig, is used. FLATTEN un-nests bags and tuples: for a tuple, it substitutes the tuple's fields in place of the tuple itself, whereas un-nesting a bag is more complex because it requires creating new tuples, one per element of the bag.
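
A minimal sketch, assuming a hypothetical relation A whose records hold an id and a nested bag of tags:

B = FOREACH A GENERATE id, FLATTEN(tags);
-- a record (1, {(x), (y)}) becomes two records: (1, x) and (1, y)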

Q12. What are describe & explain in Apache Pig scripts?

Ans

Explain & Describe are important utilities for debugging in Apache Pig.

Describe is helpful to developers when writing Pig scripts because it displays the schema of a relation in the script. Developers who are new to Apache Pig use this utility to understand how each operator modifies the data. A Pig script can contain multiple DESCRIBE statements.
Explain is extremely helpful to Hadoop developers when they are trying to optimize Pig Latin scripts or debug errors. Explain can be applied to a specific alias in a script, or applied to the entire script in the interactive Grunt shell. It produces several text-based plan graphs, which can be printed to files.
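
For example, with a hypothetical employee.txt file:

employees = LOAD 'employee.txt' USING PigStorage(',') AS (name:chararray, salary:int);
DESCRIBE employees;   -- prints the schema: employees: {name: chararray, salary: int}
EXPLAIN employees;    -- prints the logical, physical, and MapReduce plans for the alias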

Q13. How does the user communicate with shell in Apache Pig?

Ans

Users interact with HDFS or the local file system through Grunt, which is Apache Pig's interactive shell. To start Grunt, users invoke Apache Pig with no command, as follows:

  • Executing the command "pig -x local" gives the prompt: grunt>
  • Pig Latin scripts can run either in local mode or in cluster mode by setting up the configuration in PIG_CLASSPATH.
  • To exit the Grunt shell, press CTRL+D or simply type exit.
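
A short sample session (file name is hypothetical):

$ pig -x local
grunt> emp = LOAD 'employee.txt' AS (name:chararray);
grunt> DUMP emp;
grunt> quit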

Q14. What is the function of illustrating in Apache Pig?

Ans

Running Pig scripts on vast data sets is generally time-consuming, so developers execute them on sample data first; however, the chosen sample may not exercise the script correctly. For example, if the script contains a join operator, there must be at least a few records in the sample data that share the same key, or the join will return no results. To manage these issues, developers use ILLUSTRATE, which takes sample data and, whenever it encounters operators like FILTER or JOIN that can remove data, ensures that some records pass through while others are restricted, by modifying records so that they meet the condition. ILLUSTRATE shows the output of every step but does not run actual MapReduce jobs.
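
A minimal sketch with hypothetical relation and field names:

A = LOAD 'visits.txt' AS (user:chararray, url:chararray);
B = FILTER A BY url MATCHES '.*pig.*';
ILLUSTRATE B;   -- shows concise sample records flowing through each step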

Q15. What do we know about the case sensitivity of Pig?

Ans

It is hard to say flatly whether Pig is case sensitive or insensitive. User-defined functions, field names, and relation names in Pig are case sensitive: the function COUNT is not the same as count, and X = load 'foo' is not the same as x = load 'foo'. Keywords in Pig, on the other hand, are case insensitive: LOAD is the same as load.

Q16. Distinguish between physical & logical plans in an Apache Pig script.

Ans

Both physical and logical plans are generated during the execution of a Pig script. The logical plan is produced after semantic checking and parsing; no data is processed while a logical plan is generated. The logical plan consists of a collection of operators but does not contain the edges between those operators. After the logical plan is generated, script execution moves to the physical plan. The physical plan is a description of the physical operators Pig will use to execute the script. It is more or less a series of MapReduce jobs, although the plan itself contains no reference to its execution in MapReduce. During generation of the physical plan, the logical operator COGROUP is converted into three physical operators: Local Rearrange, Global Rearrange, and Package.

Q17. Is Co-group a group of more than one data set?

Ans

A group of data sets is referred to as a co-group. When there is more than one data set, COGROUP groups all the data sets and then joins them on a common field. In that sense, a co-group is effectively a group of more than one data set.

Q18. Differentiate between HiveQL & PigLatin.

Ans

  • PigLatin is a procedural language, whereas HiveQL is declarative.
  • In HiveQL it is necessary to specify the schema, whereas in PigLatin it is optional.
  • PigLatin has a nested relational data model, whereas HiveQL has a flat relational data model.

Q19. What are the uses of Apache Pig?

Ans

Pig big data tools are used for iterative processing, traditional ETL data pipelines, and research on raw data. Pig works well in situations where the schema is unknown, incomplete, or inconsistent, and it is used by developers who want to work with the data before it is loaded into the data warehouse. Websites use it to build behavior-prediction models, for instance to detect how visitors respond to a variety of images, ads, articles, and so on.

Q20. Is PigLatin strongly typed language?

Ans

A strongly typed language is one where the user must declare the types of all variables up front. In Pig, when you describe the schema of the data, it expects the data to arrive in that format; if the schema is unknown, the script adapts to the actual data types at runtime. That is why PigLatin can be described as strongly typed in most scenarios, but gently (loosely) typed in others: it keeps working even with data that does not match expectations.

Q21. Distinguish between COGROUP & GROUP operators.

Ans

The GROUP and COGROUP operators are essentially the same and work with one or more relations. GROUP is usually used to group the data in a single relation, for better readability, while COGROUP is used to group the data from two or more relations. COGROUP is a mixture of JOIN and GROUP: it groups the tables on the chosen columns and joins them into grouped pieces. A COGROUP can involve up to 127 relations at a time.
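
A sketch with hypothetical relations sales and returns, each having a region field:

by_region = GROUP sales BY region;                       -- one relation
together = COGROUP sales BY region, returns BY region;   -- two relations, grouped on a common field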

Q22. What do we understand by the outer bag and inner bag in Pig?

Ans

The outer bag is simply any relation in Pig, whereas any relation inside a bag is known as an inner bag.

Q23. Differentiate between COUNT and COUNT_STAR functions in Pig.

Ans

The COUNT_STAR function includes NULL values in its count, whereas the COUNT function does not include NULL values when counting the number of elements in a bag.
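
A minimal sketch, assuming a hypothetical relation data:

g = GROUP data ALL;
counts = FOREACH g GENERATE COUNT(data), COUNT_STAR(data);
-- COUNT skips tuples whose first field is NULL; COUNT_STAR counts every tuple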

Q24. Does Pig support multi-line commands?

Ans

Pig supports both single-line and multi-line commands. With a single-line command it processes the data but does not store the file in the system, whereas with multi-line commands it stores the data in HDFS.

Q25. If I have a relation R then how can I get the top 10 tuples from the relation R?

Ans

The TOP() function returns the top N tuples from a bag of tuples or from a relation. N is passed as a parameter to TOP(), along with the column whose values are to be compared, and the relation R.
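
A sketch, assuming the values to compare sit in column 1 (the second field) of R:

grouped = GROUP R ALL;
top10 = FOREACH grouped GENERATE FLATTEN(TOP(10, 1, R));   -- top 10 tuples, compared on column 1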

Q26. How can we combine the contents of two or more relations, and then divide a single relation into two or more relations?

Ans

The operation can be easily done by using the SPLIT and UNION operators.
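
A sketch with hypothetical relations a and b sharing a numeric value field:

combined = UNION a, b;                                           -- merge two relations
SPLIT combined INTO small IF value < 10, large IF value >= 10;   -- divide one relation into two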

Q27. What are the various types of UDF’s in Java supported by Apache Pig?

Ans

The types of User-Defined Functions supported in Pig are Eval, Algebraic, and Filter functions.

Q28. What are the standard functionalities between Pig and Hive?

Ans

Both PigLatin and HiveQL convert their commands into MapReduce jobs, and neither can be used for online transaction processing, since executing low-latency queries is extremely difficult.

Q29. If we have a file employee.txt in the Hadoop Data File System directory with minimum 100 records, & want to see the first 25 records only from the employee.txt file. How can we do this?

Ans

First we load the file employee.txt under the relation name Employee. Then we can pull the first 25 records of the data from the employee file by using the LIMIT operator: Result = LIMIT Employee 25;
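
Put together, a sketch (the schema is hypothetical):

Employee = LOAD 'employee.txt' USING PigStorage(',') AS (name:chararray, dept:chararray);
Result = LIMIT Employee 25;
DUMP Result;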

Q30. What are the limitations of Pig Script?

Ans

Following are some of the Limitations of the Apache Pig:

  • Apache Pig is not preferable for analytics on a single record within huge data sets.
  • The Pig platform is specifically designed for ETL-type use cases; it is not a good choice for synchronized or real-time scenarios.
  • Apache Pig is built on top of MapReduce, which is itself batch-processing oriented.

Q31. Why do we use Filters in Apache Pig?

Ans

Like the WHERE clause in SQL, Apache Pig has FILTER to extract records based on a predicate or specified condition. Records are passed down the pipeline if the condition holds true. A predicate can contain a variety of operators such as ==, <=, !=, and >=. For instance:

X = LOAD 'inputs' AS (name, address);
Y = FILTER X BY name MATCHES 'Mr.*';

Q32. What is UDF in Pig?

Ans

If the built-in operators do not provide some basic function, developers can implement that functionality by writing User Defined Functions (UDFs) in programming languages like Java, Python, or Ruby. The UDFs are then embedded into the Pig Latin script.

Q33. How to write Java UDF?

Ans

UDFs can be developed by extending the EvalFunc class and overriding the exec() method.
Example: this UDF replaces a given string with another string.

package kelly.training.pig.udf;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.UDFContext;

public class Transform extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        // read the search and replacement strings from the job configuration
        Configuration conf = UDFContext.getUDFContext().getJobConf();
        String from = conf.get("replace.string");
        if (from == null) {
            throw new IOException("replace.string should not be null");
        }
        String to = conf.get("replace.by.string");
        if (to == null) {
            throw new IOException("replace.by.string should not be null");
        }
        try {
            String str = (String) input.get(0);
            return str.replace(from, to);
        } catch (Exception e) {
            throw new IOException("caught exception processing input row", e);
        }
    }
}
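
A sketch of how such a UDF might be registered and invoked from Pig Latin; the jar name, input file, and property values are assumptions:

REGISTER transform.jar;
SET replace.string 'Mr';
SET replace.by.string 'Mister';
A = LOAD 'names.txt' AS (name:chararray);
B = FOREACH A GENERATE kelly.training.pig.udf.Transform(name);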

Q34. What is Grunt Shell?

Ans

Grunt Shell is Pig's interactive shell: each command is executed as soon as it is entered, and the result, whether success or failure, is shown immediately.

Q35. What is Pigstorage?

Ans

PigStorage loads or stores relations using a field-delimited text format.

Each line is broken into fields using a configurable field delimiter (the default is the tab character) and stored in the tuple's fields. It is the default storage function when none is specified.
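
A short sketch (file names are hypothetical):

A = LOAD 'data.csv' USING PigStorage(',') AS (name:chararray, age:int);
STORE A INTO 'out' USING PigStorage('\t');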

Q36. Where Does Pig Live?

Ans

  1. Pig is installed on the user machine.
  2. There is no need to install anything on the Hadoop cluster.
  3. Pig and Hadoop versions must be compatible.
  4. Pig submits and executes jobs on the Hadoop cluster.

Q37. What types of applications is Hive used for?

Ans

  • Summarization
    Ex: daily/weekly aggregations of impression/click counts
  • Complex measures of user engagement
  • Ad hoc analysis
    Ex: how many group admins, broken down by state/country
  • Data mining (assembling training data)
    Ex: user engagement as a function of user attributes
  • Spam detection
  • Anomalous patterns for site integrity
  • Application API usage patterns
  • Ad optimization
  • Document indexing
  • Customer-facing business intelligence (e.g., Google Analytics), predictive modeling, hypothesis testing

Q38. What is Hive QL?

Ans

  • Supports an SQL-like query language called HiveQL, with SELECT, JOIN, aggregation, UNION ALL, and subqueries in the FROM clause.
  • Supports DDL statements such as CREATE TABLE with serialization format, partitioning, and bucketing columns.
  • Provides commands to load data from external sources and INSERT into Hive tables.
  • Does not support UPDATE and DELETE.
  • Supports multi-table INSERT.
  • Supports user-defined column transformation (UDF) and aggregation (UDAF) functions written in Java.

Q39. What is the Difference Between Pig & SQL?

Ans

Pig                              | SQL
Pig is procedural                | SQL is declarative
Nested relational data model     | Flat relational data model
Schema is optional               | Schema is required
Scan-centric analytic workloads  | OLTP + OLAP workloads
Limited query optimization       | Significant opportunity for query optimization

Q40. What is the Difference Between Mapreduce & Pig?

Ans

  • MapReduce expects programming-language (Java) skills for writing the business logic, whereas Pig needs little programming skill: the whole logic is expressed using Pig transformations and operators.
  • If we make any change to a MapReduce program, we must repeat the entire process: changing the code, compiling the program, packaging it up, deploying it to the cluster environment, and executing it. In Pig, we deal with a simple script, so this whole turnaround is avoided.
  • As a general saying, a Hadoop MapReduce program runs to about 200 lines of code; in Pig the same program can be written in about 10 lines. That is roughly 5% of the MapReduce code and 5% of the MapReduce development time, around 25% of the MapReduce execution time, and it increases programmer productivity.
  • MapReduce requires multiple stages, leading to long development life cycles, whereas Pig's rapid prototyping increases productivity and suits log analysis and ad hoc queries across various large data sets.

Q41. Explain The Difference Between Count_star And Count Functions In Apache Pig?

Ans

The COUNT function does not include NULL values when counting the number of elements in a bag, whereas the COUNT_STAR function includes NULL values while counting.

Q42. What Are The Various Diagnostic Operators Available In Apache Pig?

Ans

  • Dump Operator: used to display the output of Pig Latin statements on the screen, so that developers can debug the code.
  • Describe Operator: explained in Q12 above.
  • Explain Operator: explained in Q12 above.
  • Illustrate Operator: explained in Q14 above.

Q43. How Will You Merge The Contents Of Two Or More Relations And Divide A Single Relation Into Two Or More Relations?

Ans

This can be accomplished using the UNION and SPLIT operators.

Q44. What Are The Different Types Of Udf’s In Java Supported By Apache Pig?

Ans

Algebraic, Eval, and Filter functions are the various types of UDFs supported in Pig.

Q45. Explain About The Scalar Datatypes In Apache Pig.?

Ans 

int, long, float, double, chararray, and bytearray are the available scalar datatypes in Apache Pig.
