
Amazon Web Services Updated DAS-C01 Exam Questions and Answers by erik

Page: 4 / 14

Amazon Web Services DAS-C01 Exam Overview:

Exam Name: AWS Certified Data Analytics - Specialty
Exam Code: DAS-C01
Vendor: Amazon Web Services
Certification: AWS Certified Data Analytics
Questions: 207 Q&As
Shared By: erik
Question 16

A central government organization is collecting events from various internal applications by using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to keep the data separated. For security reasons, the Kafka cluster accepts only TLS-encrypted traffic and encrypts the data at rest.

A recent application update revealed that one of the applications was configured incorrectly and was writing to a Kafka topic that belongs to another application. This caused multiple errors in the analytics pipeline because data from different applications appeared on the same topic. After this incident, the organization wants to prevent each application from writing to any topic other than the one assigned to it.

Which solution meets these requirements with the least amount of effort?

Options:

A.

Create a different Amazon EC2 security group for each application. Configure each security group to have access to a specific topic in the Amazon MSK cluster. Attach the security group to each application based on the topic that the applications should read and write to.

B.

Install Kafka Connect on each application instance and configure each Kafka Connect instance to write to a specific topic only.

C.

Use Kafka ACLs and configure read and write permissions for each topic. Use the distinguished name of the clients' TLS certificates as the principal of the ACL.

D.

Create a different Amazon EC2 security group for each application. Create an Amazon MSK cluster and Kafka topic for each application. Configure each security group to have access to the specific cluster.

Discussion
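Option C points at Kafka's native authorization layer: when an MSK cluster uses TLS client authentication, the distinguished name (DN) on each application's client certificate can serve as the ACL principal, so a topic accepts writes only from the certificate it was assigned to. The sketch below is a minimal illustration using the open-source kafka-python admin client; the broker address, certificate paths, topic name, and DN are hypothetical placeholders.

```python
# Minimal sketch: allow only the holder of a specific TLS client certificate to
# write to one MSK topic. Broker address, certificate paths, topic name, and the
# certificate DN below are illustrative placeholders, not values from the question.
from kafka.admin import (
    KafkaAdminClient,
    ACL,
    ACLOperation,
    ACLPermissionType,
    ResourcePattern,
    ResourceType,
)

admin = KafkaAdminClient(
    bootstrap_servers="b-1.example-msk.kafka.eu-west-2.amazonaws.com:9094",  # TLS listener
    security_protocol="SSL",
    ssl_cafile="ca.pem",
    ssl_certfile="admin-cert.pem",
    ssl_keyfile="admin-key.pem",
)

# The ACL principal is the DN of the producing application's client certificate.
allow_app_a_write = ACL(
    principal="User:CN=app-a,OU=analytics,O=example-gov,C=GB",
    host="*",
    operation=ACLOperation.WRITE,
    permission_type=ACLPermissionType.ALLOW,
    resource_pattern=ResourcePattern(ResourceType.TOPIC, "app-a-events"),
)

admin.create_acls([allow_app_a_write])
```

With Kafka's usual default of denying access to a resource that has ACLs but no matching allow rule, a producer presenting any other certificate is rejected on that topic, which is the behavior the organization is after.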
Question 17

A manufacturing company wants to create an operational analytics dashboard to visualize metrics from equipment in near-real time. The company uses Amazon Kinesis Data Streams to stream the data to other applications. The dashboard must automatically refresh every 5 seconds. A data analytics specialist must design a solution that requires the least possible implementation effort.

Which solution meets these requirements?

Options:

A.

Use Amazon Kinesis Data Firehose to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard.

B.

Use Apache Spark Streaming on Amazon EMR to read the data in near-real time. Develop a custom application for the dashboard by using D3.js.

C.

Use Amazon Kinesis Data Firehose to push the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Visualize the data by using a Kibana dashboard.

D.

Use AWS Glue streaming ETL to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard.

Discussion
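Option C can be set up as a single Firehose delivery stream that reads from the company's existing Kinesis data stream and delivers into an Amazon ES domain, after which a Kibana dashboard on that domain can auto-refresh every 5 seconds. The boto3 call below is only a sketch; every ARN, name, and role is a placeholder, and the IAM roles are assumed to already grant Firehose the required permissions.

```python
# Illustrative sketch: create a Firehose delivery stream that consumes an existing
# Kinesis data stream and indexes records into an Amazon Elasticsearch Service
# (Amazon ES) domain. All ARNs, names, and roles are hypothetical placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="equipment-metrics-to-es",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/equipment-metrics",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-kinesis",
    },
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-es",
        "DomainARN": "arn:aws:es:us-east-1:111122223333:domain/ops-dashboard",
        "IndexName": "equipment-metrics",
        "IndexRotationPeriod": "OneDay",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 1},
        "S3BackupMode": "FailedDocumentsOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-es",
            "BucketARN": "arn:aws:s3:::equipment-metrics-failed-docs",
        },
    },
)
```

The Kibana dashboard's own auto-refresh interval is what satisfies the 5-second requirement; Firehose's buffering only controls how quickly new records land in the index.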
Question 18

A company's system operators and security engineers need to analyze activity in AWS CloudTrail logs within specific date ranges. All log files are stored in an Amazon S3 bucket, and the logs total more than 5 TB. The solution must be cost-effective and must maximize query performance.

Which solution meets these requirements?

Options:

A.

Copy the logs to a new S3 bucket with a prefix structure of . Use the date column as a partition key. Create a table on Amazon Athena based on the objects in the new bucket. Automatically add metadata partitions by using the MSCK REPAIR TABLE command in Athena. Use Athena to query the table and partitions.

B.

Create a table on Amazon Athena. Manually add metadata partitions by using the ALTER TABLE ADD PARTITION statement, and use multiple columns for the partition key. Use Athena to query the table and partitions.

C.

Launch an Amazon EMR cluster and use Amazon S3 as a data store for Apache HBase. Load the logs from the S3 bucket to an HBase table on Amazon EMR. Use Amazon Athena to query the table and partitions.

D.

Create an AWS Glue job to copy the logs from the S3 source bucket to a new S3 bucket and create a table that uses the Apache Parquet file format with Snappy compression and is partitioned by date. Use Amazon Athena to query the table and partitions.

Discussion
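Option D's gains come from combining a columnar format (Parquet), Snappy compression, and date partitioning, so Athena scans only the partitions and columns a query touches. The fragment below sketches the core of such a Glue (PySpark) job; the catalog database, table, bucket, and the lowercased eventtime column are assumptions for illustration.

```python
# Sketch of the core of an AWS Glue (PySpark) job that rewrites CloudTrail logs as
# Snappy-compressed Parquet partitioned by date. Database, table, bucket names, and
# the "eventtime" column are illustrative assumptions, not values from the question.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw CloudTrail logs (assumed to be cataloged already).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="cloudtrail_raw", table_name="logs"
).toDF()

# Derive a date partition column from the event timestamp, then write Parquet.
partitioned = raw.withColumn("event_date", F.to_date(F.col("eventtime")))
(
    partitioned.write.mode("append")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("s3://analytics-cloudtrail-parquet/logs/")
)

job.commit()
```

A Glue crawler, or an Athena CREATE TABLE followed by MSCK REPAIR TABLE, can then register the date partitions so they are queryable.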
Question 19

A company analyzes historical data and needs to query data that is stored in Amazon S3. New data is generated daily as .csv files that are stored in Amazon S3. The company's data analysts are using Amazon Athena to perform SQL queries against a recent subset of the overall data.

The amount of data that is ingested into Amazon S3 has increased to 5 PB over time. The query latency also has increased. The company needs to segment the data to reduce the amount of data that is scanned.

Which solutions will improve query performance? (Select TWO.)

Options:

A.

Configure Athena to use S3 Select to load only the files of the data subset.

B.

Create the data subset in Apache Parquet format each day by using the Athena CREATE TABLE AS SELECT (CTAS) statement. Query the Parquet data.

C.

Run a daily AWS Glue ETL job to convert the data files to Apache Parquet format and to partition the converted files. Create a periodic AWS Glue crawler to automatically crawl the partitioned data each day.

D.

Create an S3 gateway endpoint. Configure VPC routing to access Amazon S3 through the gateway endpoint.

E.

Use MySQL Workbench on an Amazon EC2 instance. Connect to Athena by using a JDBC connector. Run the query from MySQL Workbench instead of Athena directly.

Discussion
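Option B leans on Athena's CREATE TABLE AS SELECT (CTAS), which converts the day's subset to Parquet and registers it as a new table in a single statement. A minimal boto3 sketch, with hypothetical database, table, workgroup, and bucket names:

```python
# Illustrative sketch: run an Athena CTAS statement from Python to materialize
# yesterday's subset of the raw CSV table as Snappy-compressed, partitioned Parquet.
# Database, table, workgroup, column, and bucket names are hypothetical placeholders.
import boto3

CTAS_SQL = """
CREATE TABLE analytics.daily_subset_parquet
WITH (
    format = 'PARQUET',
    parquet_compression = 'SNAPPY',
    external_location = 's3://analytics-curated/daily_subset_parquet/',
    partitioned_by = ARRAY['event_date']
) AS
SELECT *, date(from_iso8601_timestamp(event_time)) AS event_date
FROM analytics.raw_csv_events
WHERE date(from_iso8601_timestamp(event_time)) = date_add('day', -1, current_date)
"""

athena = boto3.client("athena", region_name="us-east-1")
response = athena.start_query_execution(
    QueryString=CTAS_SQL,
    QueryExecutionContext={"Database": "analytics"},
    WorkGroup="primary",
    ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},
)
print(response["QueryExecutionId"])
```

Analysts then query the Parquet table instead of the raw CSV data, so Athena scans only the converted subset.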
