
Databricks Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0 Exam Questions and Answers


Databricks Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0 Exam Overview:

Exam Name: Databricks Certified Associate Developer for Apache Spark 3.0 Exam
Exam Code: Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0
Vendor: Databricks
Certification: Databricks Certification
Questions: 180
Question 20

Which of the following code blocks uses a schema fileSchema to read a parquet file at location filePath into a DataFrame?

Options:

A.

spark.read.schema(fileSchema).format("parquet").load(filePath)

B.

spark.read.schema("fileSchema").format("parquet").load(filePath)

C.

spark.read().schema(fileSchema).parquet(filePath)

D.

spark.read().schema(fileSchema).format(parquet).load(filePath)

E.

spark.read.schema(fileSchema).open(filePath)
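
For reference, a minimal PySpark sketch of reading a parquet file with an explicit schema; the SparkSession, schema, and path below are hypothetical stand-ins for the question's fileSchema and filePath:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

spark = SparkSession.builder.appName("schema-read-sketch").getOrCreate()

# Hypothetical schema and path standing in for fileSchema and filePath
fileSchema = StructType([
    StructField("itemId", IntegerType(), True),
    StructField("itemName", StringType(), True),
])
filePath = "/tmp/items.parquet"

# spark.read is a property returning a DataFrameReader; the schema is passed as an
# object (not a string) and the format name as a string
df = spark.read.schema(fileSchema).format("parquet").load(filePath)

# Equivalent shorthand using the parquet convenience method
df = spark.read.schema(fileSchema).parquet(filePath)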

Question 21

The code block shown below should add a column itemNameBetweenSeparators to DataFrame itemsDf. The column should contain arrays of at most 4 strings, composed of the values in column itemName, which are separated at "-" or whitespace characters. Choose the answer that correctly fills the blanks in the code block to accomplish this.

Sample of DataFrame itemsDf:

+------+----------------------------------+-------------------+
|itemId|itemName                          |supplier           |
+------+----------------------------------+-------------------+
|1     |Thick Coat for Walking in the Snow|Sports Company Inc.|
|2     |Elegant Outdoors Summer Dress     |YetiX              |
|3     |Outdoors Backpack                 |Sports Company Inc.|
+------+----------------------------------+-------------------+

Code block:

itemsDf.__1__(__2__, __3__(__4__, "[\s\-]", __5__))

Options:

A.

1. withColumn

2. "itemNameBetweenSeparators"

3. split

4. "itemName"

5. 4

(Correct)

B.

1. withColumnRenamed

2. "itemNameBetweenSeparators"

3. split

4. "itemName"

5. 4

C.

1. withColumnRenamed

2. "itemName"

3. split

4. "itemNameBetweenSeparators"

5. 4

D.

1. withColumn

2. "itemNameBetweenSeparators"

3. split

4. "itemName"

5. 5

E.

1. withColumn

2. itemNameBetweenSeparators

3. str_split

4. "itemName"

5. 5
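
For reference, a minimal PySpark sketch of the pattern being tested: adding a column whose values come from splitting itemName on "-" or whitespace, with the array length capped by the limit argument of split. The one-row DataFrame below is an illustrative stand-in for itemsDf:

from pyspark.sql import SparkSession
from pyspark.sql.functions import split

spark = SparkSession.builder.getOrCreate()
itemsDf = spark.createDataFrame(
    [(1, "Thick Coat for Walking in the Snow", "Sports Company Inc.")],
    ["itemId", "itemName", "supplier"],
)

# split(str, pattern, limit): with limit > 0, the resulting array contains at most `limit` elements
itemsDf = itemsDf.withColumn(
    "itemNameBetweenSeparators",
    split("itemName", r"[\s\-]", 4),
)
itemsDf.show(truncate=False)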

Question 22

In which order should the code blocks shown below be run to create a DataFrame that shows the mean of column predError of DataFrame transactionsDf per column storeId and productId, where productId should be either 2 or 3, and the returned DataFrame should be sorted in ascending order by column storeId, leaving out any nulls in that column?

DataFrame transactionsDf:

+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
|            4|     null| null|      3|        2|null|
|            5|     null| null|   null|        2|null|
|            6|        3|    2|     25|        2|null|
+-------------+---------+-----+-------+---------+----+

1. .mean("predError")

2. .groupBy("storeId")

3. .orderBy("storeId")

4. transactionsDf.filter(transactionsDf.storeId.isNotNull())

5. .pivot("productId", [2, 3])

Options:

A.

4, 5, 2, 3, 1

B.

4, 2, 1

C.

4, 1, 5, 2, 3

D.

4, 2, 5, 1, 3

E.

4, 3, 2, 5, 1
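
For reference, a minimal PySpark sketch chaining these building blocks: filter out rows with a null storeId, group by storeId, pivot on productId restricted to the values 2 and 3, take the mean of predError, and sort by storeId. The DataFrame below is a small illustrative stand-in for transactionsDf (the all-null column f is omitted so that schema inference works):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 3, 4, 25, 1), (2, 6, 7, 2, 2), (3, 3, None, 25, 3), (6, 3, 2, 25, 2)],
    ["transactionId", "predError", "value", "storeId", "productId"],
)

# pivot("productId", [2, 3]) keeps only the pivoted columns for productId values 2 and 3
result = (transactionsDf
          .filter(transactionsDf.storeId.isNotNull())
          .groupBy("storeId")
          .pivot("productId", [2, 3])
          .mean("predError")
          .orderBy("storeId"))
result.show()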

Question 23

The code block displayed below contains an error. The code block is intended to return all columns of DataFrame transactionsDf except for columns predError, productId, and value. Find the error.

Code block:

transactionsDf.select(~col("predError"), ~col("productId"), ~col("value"))

Options:

A.

The select operator should be replaced by the drop operator and the arguments to the drop operator should be column names predError, productId and value wrapped in the col operator so they should be expressed like drop(col(predError), col(productId), col(value)).

B.

The select operator should be replaced with the deselect operator.

C.

The column names in the select operator should not be strings and wrapped in the col operator, so they should be expressed like select(~col(predError), ~col(productId), ~col(value)).

D.

The select operator should be replaced by the drop operator.

E.

The select operator should be replaced by the drop operator and the arguments to the drop operator should be column names predError, productId and value as strings.

(Correct)
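
For reference, a minimal PySpark sketch of dropping columns by passing their names as strings to drop; the one-row DataFrame is an illustrative stand-in for transactionsDf:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 3, 4, 25, 1)],
    ["transactionId", "predError", "value", "storeId", "productId"],
)

# drop accepts column names as strings; ~col("x") is a boolean NOT expression,
# not a way to exclude a column from a selection
remainingDf = transactionsDf.drop("predError", "productId", "value")
remainingDf.show()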
