i) Data Ingestion – The first step in deploying a big data solution is to extract data from different sources: an ERP system such as SAP, a CRM such as Salesforce or Siebel, an RDBMS such as MySQL or Oracle, or log files, flat files, documents, images, and social media feeds. This data then needs to be stored in HDFS. Data can be ingested either through batch jobs that run at intervals (say, every 15 minutes or once every night) or through real-time streaming, with latencies ranging from about 100 ms to 120 seconds.
ii) Data Storage – The next step is to store the ingested data either in HDFS or in a NoSQL database such as HBase. HBase works well for random read/write access, whereas HDFS is optimized for sequential access.
iii) Data Processing – The final step is to process the data using a processing framework such as MapReduce, Spark, Pig, or Hive.
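The processing step above can be illustrated with the MapReduce programming model that frameworks like Hadoop MapReduce implement at cluster scale. The following is a minimal local Python sketch of the map, shuffle, and reduce phases as a word count; the input lines and the word-count job are illustrative assumptions, not part of any Hadoop API.

```python
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) pairs, as a Hadoop Mapper would
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word, as a Hadoop Reducer would
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical input standing in for files already ingested into HDFS
lines = ["big data needs big storage", "hadoop stores big data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

On a real cluster the shuffle is performed by the framework across machines; here it is a single in-memory grouping, which keeps the three phases visible in a few lines.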
Posted Date: 2021-08-31 05:21:54
What is the command used to copy data from the local system onto HDFS?
Explain the usage of Context Object.
Whenever a client submits a hadoop job, who receives it?
What is rack awareness, and on what basis is data stored in a rack?
How can you skip the bad records in Hadoop?
What is the process to change the files at arbitrary locations in HDFS?
Where does Hive store the table data by default?
Explain the process of inter-cluster data copying.
What is the default replication factor?
Mention the features of Apache Sqoop.
What do you mean by SerDe in Hive? Explain.
Mention a business use case where you worked with the Hadoop ecosystem.
What is meant by over- and under-replicated blocks in Hadoop?
Explain the difference between HBase and Hive.
Explain the three core methods of a reducer.
What are the benefits of using ZooKeeper?
How can you restart the NameNode in Hadoop?
What makes Hadoop Fault tolerant?
How will you choose among various file formats for storing and processing data using Apache Hadoop?
What are the steps involved in deploying a big data solution?
What are the most commonly defined input formats in Hadoop?
What is the best hardware configuration to run Hadoop?
What are the main components of a Hadoop Application?
On what concept does the Hadoop framework work?
What is the port number for NameNode?
Explain how data is stored in a rack.
Compare HDFS with Network Attached Storage (NAS).
What is meant by a block and block scanner?
What is indexing? How is indexing done in HDFS?
Explain the difference between HDFS and regular FileSystem.
What is shuffling in MapReduce?
What are the limitations of Hadoop 1.0?
Name some companies that use Hadoop.
What are some limitations of Hadoop?
What are the three modes in which Hadoop runs?
How does big data analysis help businesses increase their revenue? Give an example.
What do the four V’s of Big Data denote?
Explain the Storage Unit In Hadoop (HDFS).
What are the challenges faced with Big Data, and why do we use Hadoop for Big Data?