Spark stores data in memory, whereas Hadoop stores data on disk. Hadoop uses replication to achieve fault tolerance, whereas Spark uses a different data storage model: the RDD (Resilient Distributed Dataset). RDDs achieve fault tolerance through the notion of lineage: if a partition of an RDD is lost, the RDD retains enough information about how it was derived to rebuild just that partition. This removes the need for replication to achieve fault tolerance.
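The lineage idea above can be sketched in a few lines of plain Python (this is an illustration of the concept, not Spark's actual implementation; the `LineageRDD` class and its methods are invented for this example). Each dataset records its parent and the transformation that produced it, so a lost partition can be recomputed on demand instead of being restored from a replica.

```python
# Minimal illustration of RDD-style lineage (plain Python, not Spark code).
# Each "RDD" keeps a reference to its parent and the transformation that
# produced it, so a lost partition can be rebuilt by replaying the lineage
# rather than by reading a replicated copy.

class LineageRDD:
    def __init__(self, partitions=None, parent=None, fn=None):
        self.parent = parent            # upstream RDD in the lineage chain
        self.fn = fn                    # transformation applied to the parent
        self._partitions = partitions   # only the base RDD holds real data

    def map(self, fn):
        # Record the transformation; nothing is computed yet (lazy evaluation).
        return LineageRDD(parent=self, fn=fn)

    def compute_partition(self, i):
        # Base RDD: the data for partition i is already materialized.
        if self.parent is None:
            return self._partitions[i]
        # Derived RDD: recompute just partition i from the parent's lineage.
        return [self.fn(x) for x in self.parent.compute_partition(i)]

# Base data split into two partitions.
base = LineageRDD(partitions=[[1, 2], [3, 4]])
doubled = base.map(lambda x: x * 2)

# Simulate losing partition 1 of `doubled`: rebuild only that partition.
print(doubled.compute_partition(1))  # [6, 8]
```

Note that only the lost partition is recomputed; the other partitions are untouched, which is what keeps lineage-based recovery cheap compared with full replication.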
Posted Date:- 2021-10-22 04:38:36
What is the difference between persist() and cache()?
Is Apache Spark a good fit for Reinforcement learning?
What are the analytic algorithms provided in Apache Spark GraphX?
What are the different types of operators provided by the Apache GraphX library?
How can you compare Hadoop and Spark in terms of ease of use?
What are the different levels of persistence in Spark?
Does Apache Spark provide checkpoints?
How is Spark SQL different from HQL and SQL?
Explain the types of operations supported by RDDs.
What do you understand by Lazy Evaluation?
When running Spark applications, is it necessary to install Spark on all the nodes of YARN cluster?
What API is used for Graph Implementation in Spark?
How are automatic clean-ups triggered in Spark for handling the accumulated metadata?
Why is there a need for broadcast variables when working with Apache Spark?
How do you convert a Spark RDD into a DataFrame?
What is the significance of Sliding Window operation?
What are the types of Transformation on DStream?
How does Spark achieve fault tolerance as compared to Hadoop?
What do you understand by caching RDDs in Spark? Name the function calls for caching an RDD.
What is the use of VectorAssembler in Spark MlLib?
How is machine learning carried out in Spark?
Which languages can Spark be integrated with?
What are the benefits of Spark over MapReduce?
What is meant by in-memory processing in Spark?
What is the difference between DataFrame and RDD?
On which port is the Spark UI available?
What are the benefits of using Spark with Apache Mesos?
Explain the major libraries that constitute the Spark ecosystem.
Illustrate some demerits of using Spark.
What do you understand by worker node?
What makes Spark good at low latency workloads like graph processing and Machine Learning?
List the functions of Spark SQL.
How can the data transfers be minimized while working with Spark?