
[AWS] Solutions Architect Associate SAA-C03 Dump Questions & Explanations (1)

by HYUNHP 2025. 4. 19.

Hello, this is HELLO.

 

While preparing for the Solutions Architect Associate exam, I compiled questions and explanations from exam-dump question banks. Nothing was organized in one place, which made studying difficult, so I hope this compilation helps others with their studies.

 

■ Solutions Architect Associate Dump Summary

 

1. This page (Questions 1-20)

- Future parts to be added -

 


 

#1. A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.


Which solution meets these requirements?

 

  • A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
  • B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
  • C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
  • D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

 


Selected Answer: A

 

General line: collect a huge volume of files from sites across multiple continents
Conditions: each site has a high-speed Internet connection
Task: aggregate the data from all sites into a single S3 bucket
Requirements: as quickly as possible, with minimal operational complexity

Correct answer A: S3 Transfer Acceleration, because it:
- is ideal for long-distance object transfers (it routes uploads through edge locations)
- can speed up content transfers to and from S3 by 50-500%
- fits these use cases: mobile and web application uploads and downloads, distributed office transfers, and data exchange with trusted partners. For sharing large data sets between companies, customers can set up special access to their S3 buckets with accelerated uploads to speed data exchanges.

B - Incorrect: uploading to regional buckets, replicating cross-Region, and then cleaning up the origin buckets adds operational complexity; Cross-Region Replication is better suited to compliance and disaster-recovery scenarios.

C - Incorrect: Snowball Edge is for offline transfers from sites with limited connectivity; shipping devices daily would be far slower than using each site's high-speed Internet connection.

D - Incorrect: staging data on EC2/EBS and copying snapshots between Regions is a slow, operationally heavy pattern closer to disaster recovery than to data ingestion.
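To make option A concrete, here is a minimal boto3 sketch (the bucket name and file path are hypothetical): it enables Transfer Acceleration on the destination bucket, then uploads through the accelerated endpoint, with multipart handled automatically above a size threshold.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

bucket = "global-telemetry-bucket"  # hypothetical destination bucket

# One-time setup: enable Transfer Acceleration on the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the accelerated edge endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# upload_file switches to multipart automatically above the threshold.
cfg = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                     multipart_chunksize=64 * 1024 * 1024)
s3_accel.upload_file("site_data.tar", bucket, "site-1/site_data.tar", Config=cfg)
```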


#2. A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.


What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

 

  • A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
  • B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
  • C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
  • D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

 


Selected Answer: C


Keyword:
- Queries will be simple and will run on-demand.
- Minimal changes to the existing architecture.

 

A: Incorrect - This takes two steps: load all the content into Redshift, then run the SQL queries. The queries here are simple, so Athena is sufficient; Redshift is suited to complex analytical queries.


B: Incorrect - The logs already live in S3; moving them into CloudWatch Logs changes the existing architecture with no benefit for simple on-demand queries.


C: Correct - The queries are simple, so Athena can run them directly against the JSON logs in S3 with no data movement.


D: Incorrect - This takes two steps: catalog the logs with AWS Glue, then run SQL queries on a transient EMR Spark cluster - far more operational overhead than Athena.
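To illustrate option C, a minimal boto3 sketch, assuming a table named app_logs has already been defined over the JSON logs (the database, table, and results bucket names are hypothetical):

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString=(
        "SELECT status, COUNT(*) AS hits "
        "FROM app_logs WHERE status >= 500 GROUP BY status"
    ),
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution() for completion
```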


#3. A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.

Which solution meets these requirements with the LEAST amount of operational overhead?

 

  • A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
  • B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
  • C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
  • D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

 

 


Selected Answer: A


Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions. The following condition keys are especially useful with AWS Organizations:

aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an organization, you can specify the organization ID in the Condition element.

aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified organization path. A path is a text representation of the structure of an AWS Organizations entity.
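A minimal sketch of the bucket policy from option A, applied with boto3 (the bucket name and organization ID are placeholders):

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::project-reports-bucket/*",
        # Grants access only when the caller belongs to this organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="project-reports-bucket",
    Policy=json.dumps(policy),
)
```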


#4. An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.


Which solution will provide private network connectivity to Amazon S3?

 

  • A. Create a gateway VPC endpoint to the S3 bucket.
  • B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
  • C. Create an instance profile on Amazon EC2 to allow S3 access.
  • D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

 


Selected Answer: A


Keywords:
- EC2 in VPC
- EC2 instance needs to access the S3 bucket without connectivity to the internet

A: Correct - A gateway VPC endpoint connects the VPC to S3 privately over the AWS network, with no internet connectivity required and at no additional cost.


B: Incorrect - An interface VPC endpoint for CloudWatch Logs keeps EC2-to-CloudWatch traffic private, but log data can take up to 12 hours to become available for export to S3, and the requirement is simply private EC2-to-S3 access.


C: Incorrect - An instance profile only grants permissions; it does not provide private network connectivity to S3.


D: Incorrect - API Gateway is a proxy that receives requests from outside and forwards them to AWS Lambda, Amazon EC2, Elastic Load Balancing (Application or Classic Load Balancers), Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS endpoint - it is not a private path to S3.
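A minimal sketch of option A with boto3 (the VPC ID, route table ID, and Region are placeholders); the gateway endpoint adds S3 routes to the route table so the instance reaches S3 without internet access:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # The endpoint installs S3 prefix-list routes into these route tables.
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```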


#5. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.


What should a solutions architect propose to ensure users see all of their documents at once?

 

  • A. Copy the data so both EBS volumes contain all the documents
  • B. Configure the Application Load Balancer to direct a user to the server with the documents
  • C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
  • D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server

 

 


Selected Answer: C


The answer is C: copy the data from both EBS volumes to Amazon EFS and modify the application to save new documents to Amazon EFS.

The current architecture uses two separate EBS volumes, one per EC2 instance, so each instance holds only a subset of the documents. When a user refreshes the website, the Application Load Balancer directs the request to one of the two instances; whichever documents live on the other instance are invisible. Amazon EFS is a shared network file system that both instances can mount across Availability Zones, so every document becomes visible from either server.
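For illustration, a minimal boto3 sketch of the shared storage in option C (the subnet and security group IDs are placeholders): create the EFS file system and one mount target per Availability Zone; each EC2 instance then NFS-mounts the file system.

```python
import boto3

efs = boto3.client("efs")

# Shared, multi-AZ network file system for the documents.
fs_id = efs.create_file_system(
    PerformanceMode="generalPurpose",
    Encrypted=True,
)["FileSystemId"]

# One mount target per AZ so both instances can mount it locally.
for subnet_id in ("subnet-0aaa1111bbb22223a", "subnet-0ccc3333ddd44445b"):
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )
```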

 


#6. A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.


Which solution will meet these requirements?

 

  • A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
  • B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
  • C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
  • D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

 


Selected Answer: B


B. On a Snowball Edge device you can copy files at up to 100 Gbps, so 70 TB takes roughly 5,600 seconds - under 2 hours. The downside is that it takes 4-6 working days to receive the device and another 2-3 working days to ship it back and for AWS to import the data into S3. Total time: 6-9 working days. Network bandwidth used: zero.

C. File Gateway uses the Internet, so the speed will be at most around 1 Gbps; the transfer takes a minimum of about 6.5 days and consumes 70 TB of Internet bandwidth.

D. Direct Connect can reach 10 Gbps: total time about 15.5 hours, using 70 TB of bandwidth. Interestingly, the question does not specify what type of bandwidth. Direct Connect does not use your Internet bandwidth - it is a dedicated connection between your on-premises network and the AWS Cloud - so technically you are not consuming your "public" bandwidth.

D might also be defensible if the bandwidth in question refers strictly to public connectivity, but B uses the least network bandwidth outright (see the arithmetic sketch below).
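The transfer-time figures above are simple arithmetic; a small sketch to reproduce them:

```python
# Back-of-the-envelope transfer times for 70 TB at different link speeds.
DATA_BITS = 70 * 10**12 * 8  # 70 TB expressed in bits

for label, gbps in [("Snowball Edge local copy", 100),
                    ("Internet (File Gateway)", 1),
                    ("Direct Connect", 10)]:
    seconds = DATA_BITS / (gbps * 10**9)
    print(f"{label:25s} {seconds / 3600:8.1f} hours")
# -> roughly 1.6, 155.6, and 15.6 hours respectively
```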


#7. A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability.


Which solution meets these requirements?

 

  • A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
  • B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
  • C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
  • D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

 


Selected Answer: D


Keywords:
- The number of messages varies drastically
- Sometimes increases suddenly to 100,000 each second

A: Incorrect - Don't confuse Kinesis Data Analytics with Kinesis Data Streams. Kinesis Data Analytics reads from Kinesis Data Streams, Kinesis Data Firehose, or MSK (Managed Streaming for Apache Kafka) for analytics purposes; it cannot ingest messages and deliver them to consumer applications.


B: Incorrect - Given the keywords, an Auto Scaling group driven by CPU metrics scales too slowly: it needs time to observe the metric and to boot new EC2 instances while message volume swings drastically. Scaling from, say, 10 to 100 instances leaves a window where the service is overwhelmed.


C: Incorrect - Kinesis Data Streams could handle this load, but only with many shards; a single shard is capped at 1 MB/s or 1,000 records per second, far below 100,000 messages per second.


D: Correct - The fan-out pattern (one SNS topic with multiple SQS queue subscriptions) handles high workloads well and decouples producers from consumers (see the fan-out sketch below) - a good fit when:
- the number of messages varies drastically
- volume sometimes spikes suddenly to 100,000 per second
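A minimal boto3 sketch of option D's fan-out (the topic and queue names are hypothetical; each queue also needs a queue policy allowing sns.amazonaws.com to deliver messages, omitted here for brevity):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="orders-ingest")["TopicArn"]

# One queue per consumer; every consumer receives its own copy of each message.
for consumer in ("billing", "inventory", "analytics"):
    queue_url = sqs.create_queue(QueueName=f"{consumer}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```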


#8. A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.


How should a solutions architect design the architecture to meet these requirements?

 

  • A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
  • B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
  • C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
  • D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

 


Selected Answer: B


1. Amazon SQS for decoupling:
Amazon SQS provides a fully managed message queue that decouples the primary server from the compute nodes. Jobs are sent to the queue, and compute nodes process them independently.
This architecture eliminates the single point of failure (the primary server) and increases resilience.

2. Auto Scaling group for compute nodes:
EC2 instances managed in an Auto Scaling group process the jobs in the SQS queue. Auto Scaling dynamically adjusts the number of instances based on the queue size, ensuring scalability for variable workloads.

3. Scaling based on queue size:
Scaling on the size of the SQS queue ensures that the system adjusts to workload demands efficiently: when the queue grows, more instances are launched; when it shrinks, instances are terminated (see the policy sketch after this list).
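As a rough sketch of point 3, a target-tracking policy keyed to queue depth via boto3 (the ASG name, queue name, and target value are assumptions; AWS's documented pattern refines this into a "backlog per instance" custom metric):

```python
import boto3

boto3.client("autoscaling").put_scaling_policy(
    AutoScalingGroupName="job-workers",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # desired queue depth; tune per workload
    },
)
```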


#9. A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.


Which solution will meet these requirements?

 

  • A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
  • B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
  • C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
  • D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

 


Selected Answer: B


Keywords:
- After 7 days the files are rarely accessed.
- The total data size is increasing and is close to the company's total storage capacity.
- Increase available storage space without losing low-latency access to the most recently accessed files (rarely accessed files may tolerate higher latency).
- Must also provide file lifecycle management to avoid future storage issues.

A: Incorrect - DataSync copies the old data to AWS but does not address how to keep increasing the available storage space or provide lifecycle management.


B: Correct - An S3 File Gateway extends storage space while keeping fast access to recently used files (it caches them locally); the S3 Lifecycle policy then reduces cost by moving data to S3 Glacier Deep Archive after 7 days (see the lifecycle sketch below).


C: Incorrect - FSx for Windows File Server extends storage space but does not handle file lifecycle management.


D: Incorrect - Does not address increasing the company's available storage space, and installing a utility on every user's computer is operationally heavy.
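A minimal sketch of the lifecycle rule behind option B (the bucket name is a placeholder):

```python
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="file-gateway-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```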


#10. A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.


Which solution will meet these requirements?

 

  • A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
  • B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
  • C. Use an API Gateway authorizer to block any requests while the application processes an order.
  • D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

 


Selected Answer: B

 

Explanation:
FIFO Queues: Amazon SQS FIFO queues are specifically designed to maintain the order of messages. This ensures that messages are processed in the exact sequence they were sent, which is crucial for order processing in an e-commerce application.

Why other options are less suitable:
A. Amazon SNS: SNS is a publish-subscribe service. It doesn't guarantee message order.


C. API Gateway Authorizer: Authorizers are used for authentication and authorization, not for managing message order.


D. Amazon SQS Standard Queue: Standard SQS queues do not guarantee message order.
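A minimal boto3 sketch of option B's ordering guarantee (the queue name is hypothetical): the .fifo suffix and a MessageGroupId are what preserve processing order.

```python
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": 1001}',
    MessageGroupId="orders",  # messages in one group are processed in order
)
```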


#11. A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.


What should a solutions architect do to accomplish this goal?

 

  • A. Use AWS Secrets Manager. Turn on automatic rotation.
  • B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  • C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  • D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

 


Selected Answer: A


Option A: Using AWS Secrets Manager and enabling automatic rotation is the recommended solution for minimizing the operational overhead of credential management. AWS Secrets Manager provides a secure and centralized service for storing and managing secrets, such as database credentials. By leveraging Secrets Manager, the application can retrieve the database credentials programmatically at runtime, eliminating the need to store them locally in a file. Enabling automatic rotation ensures that the database credentials are regularly rotated without manual intervention, enhancing security and compliance.
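From the application side, option A looks roughly like this (the secret name is a placeholder): credentials are fetched at runtime instead of being read from a local file.

```python
import json
import boto3

secret = boto3.client("secretsmanager").get_secret_value(
    SecretId="prod/aurora/app-credentials"  # hypothetical secret name
)
creds = json.loads(secret["SecretString"])
# creds["username"] and creds["password"] are handed to the DB driver;
# with automatic rotation enabled, fresh values appear here transparently.
```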


#12. A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.


What should a solutions architect do to meet these requirements?

 

  • A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
  • B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint Configure Route 53 to route traffic to the CloudFront distribution.
  • C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
  • D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.

 


Selected Answer: A


A single CloudFront distribution can use both the S3 bucket (static content) and the ALB (dynamic content) as origins, routing requests to each via cache behaviors - one service, one domain name in Route 53, and lower latency for both data types. Global Accelerator is unnecessary here, as the AWS FAQ explains:

Q: How is AWS Global Accelerator different from Amazon CloudFront?

A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.


#13. A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.


Which solution will meet these requirements with the LEAST operational overhead?

 

  • A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
  • B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
  • C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
  • D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.

Selected Answer: A


AWS Secrets Manager is a secrets management service that enables you to store, manage, and rotate secrets such as database credentials, API keys, and SSH keys. Secrets Manager can help you minimize the operational overhead of rotating credentials for your Amazon RDS for MySQL databases across multiple Regions.

 

With Secrets Manager, you can store the credentials as secrets and use multi-Region secret replication to replicate the secrets to the required Regions. You can then configure Secrets Manager to rotate the secrets on a schedule so that the credentials are rotated automatically without the need for manual intervention. This can help reduce the risk of secrets being compromised and minimize the operational overhead of credential management.
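A minimal boto3 sketch of option A (the secret name, Regions, and rotation Lambda ARN are placeholders): create the secret with a replica Region, then schedule rotation.

```python
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")

sm.create_secret(
    Name="prod/mysql/credentials",
    SecretString='{"username": "admin", "password": "change-me"}',
    AddReplicaRegions=[{"Region": "eu-west-1"}],  # multi-Region replication
)

sm.rotate_secret(
    SecretId="prod/mysql/credentials",
    RotationLambdaARN=(
        "arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql"
    ),
    RotationRules={"ScheduleExpression": "rate(30 days)"},
)
```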


#14. A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.


The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.


Which solution will meet these requirements?

 

  • A. Use Amazon Redshift with a single node for leader and compute functionality.
  • B. Use Amazon RDS with a Single-AZ deployment Configure Amazon RDS to add reader instances in a different Availability Zone.
  • C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
  • D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

 


Selected Answer: C


Option C, using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas, would be the best solution to meet the requirements.

Aurora is a fully managed, MySQL-compatible relational database that is designed for high performance and high availability. Aurora Multi-AZ deployments automatically maintain a synchronous standby replica in a different Availability Zone to provide high availability. Additionally, Aurora Auto Scaling allows you to automatically scale the number of Aurora Replicas in response to read workloads, allowing you to meet the demand of unpredictable read workloads while maintaining high availability. This would provide an automated solution for scaling the database to meet the demand of the application while maintaining high availability.


Option A, using Amazon Redshift with a single node for leader and compute functionality, would not provide high availability.

Option B, using Amazon RDS with a Single-AZ deployment and configuring RDS to add reader instances in a different Availability Zone, would not provide high availability and would not automatically scale the number of reader instances in response to read workloads.

Option D, using Amazon ElastiCache for Memcached with EC2 Spot Instances, would not provide a database solution and would not meet the requirements.
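For illustration, option C's Aurora Replica Auto Scaling is configured through Application Auto Scaling; a minimal boto3 sketch (the cluster name, capacity limits, and CPU target are assumptions):

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Add or remove Aurora Replicas to hold average reader CPU near the target.
aas.put_scaling_policy(
    PolicyName="scale-readers-on-cpu",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```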

 


#15. A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.


Which solution will meet these requirements?

 

  • A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
  • B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
  • C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
  • D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

 


Selected Answer: C


Option C: Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.

AWS Network Firewall is a managed firewall service that provides filtering for both inbound and outbound network traffic. It allows you to create rules for traffic inspection and filtering, which can help protect your production VPC.

Option A: Amazon GuardDuty is a threat detection service, not a traffic inspection or filtering service.

Option B: Traffic Mirroring is a feature that allows you to replicate and send a copy of network traffic from a VPC to another VPC or on-premises location. It is not a service that performs traffic inspection or filtering.

Option D: AWS Firewall Manager is a security management service that helps you to centrally configure and manage firewalls across your accounts. It is not a service that performs traffic inspection or filtering.


#16. A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.


Which solution will meet these requirements?

 

  • A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
  • B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
  • C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
  • D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.

 


Selected Answer: B

 

Keywords:
- Data lake on AWS.
- Consists of data in Amazon S3 and Amazon RDS for PostgreSQL.
- The company needs a reporting solution that provides data VISUALIZATION and includes ALL the data sources within the data lake.

A - Incorrect: Amazon QuickSight dashboards are shared with QuickSight users (Standard edition) and groups (Enterprise edition). Users and groups exist only within QuickSight; dashboards cannot be shared with IAM roles.


B - Correct: As explained under A, sharing is done with QuickSight users and groups, and QuickSight can build dashboards from S3, RDS, Redshift, Aurora, Athena, OpenSearch, Timestream, and more.


C - Incorrect: Provides no visualization and does not address how to process the RDS data.


D - Incorrect: Provides no visualization and does not explain how to combine the RDS and S3 data.


#17. A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.


What should the solutions architect do to meet this requirement?

 

  • A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
  • B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
  • C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
  • D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.

 


Selected Answer: A


The correct option to meet this requirement is A: Create an IAM role that grants access to the S3 bucket and attach the role to the EC2 instances.

An IAM role is an AWS resource that allows you to delegate access to AWS resources and services. You can create an IAM role that grants access to the S3 bucket and then attach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the documents stored within it.

Option B is incorrect because an IAM policy only defines permissions; it cannot be attached directly to an EC2 instance. It must be attached to a role (or a user or group), and the role is what the instance assumes.

Option C is incorrect because an IAM group is used to group together IAM users and policies, not to grant access to resources.

Option D is incorrect because an IAM user is used to represent a person or service that interacts with AWS resources, not to grant access to resources.
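A minimal boto3 sketch of option A (the role, policy, and bucket names are placeholders): create the role with an EC2 trust policy, grant S3 access, and wrap it in an instance profile for attachment to the instances.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: let EC2 assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="app-s3-access",
                AssumeRolePolicyDocument=json.dumps(trust))

# Permissions: read/write the document bucket.
iam.put_role_policy(
    RoleName="app-s3-access",
    PolicyName="read-write-docs-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["s3:GetObject", "s3:PutObject"],
                       "Resource": "arn:aws:s3:::app-documents-bucket/*"}],
    }),
)

# Instance profile is the container that attaches the role to EC2.
iam.create_instance_profile(InstanceProfileName="app-s3-access")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-access",
                                 RoleName="app-s3-access")
# Finally, associate the profile with each instance
# (ec2.associate_iam_instance_profile).
```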


#18. An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket. A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.


Which combination of actions will meet these requirements? (Choose two.)

 

  • A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
  • B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
  • C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
  • D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
  • E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.

 


Selected Answer: AB


To design a solution that uses durable, stateless components to process images automatically, a solutions architect could consider the following actions:

Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded. This allows the application to decouple the image upload process from the image processing process and ensures that the image processing process is triggered automatically when a new image is uploaded.

Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the image is not processed multiple times.

 

Option C is incorrect because it involves storing state (the file name) in memory, which is not a durable or scalable solution.

Option D is incorrect because it involves launching an EC2 instance to monitor the SQS queue, which is not a stateless solution.

Option E is incorrect because it involves using Amazon EventBridge (formerly Amazon CloudWatch Events) to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic, which is not related to the image processing process.
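A minimal boto3 sketch combining options A and B (the bucket, queue ARN, and function names are placeholders; the SQS queue policy allowing S3 to send messages is omitted for brevity):

```python
import boto3

# Option A: the bucket notifies the queue on every new upload.
boto3.client("s3").put_bucket_notification_configuration(
    Bucket="raw-images-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-jobs",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Option B: the queue invokes the Lambda function; Lambda deletes each
# message automatically when the function returns successfully.
boto3.client("lambda").create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:image-jobs",
    FunctionName="compress-image",
    BatchSize=10,
)
```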


#19. A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.


Which solution will meet these requirements with the LEAST operational overhead?

 

  • A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
  • B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
  • C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
  • D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.

 


Selected Answer: D


The solution that will meet these requirements with the least operational overhead is D: deploy a Gateway Load Balancer in the inspection VPC and create a Gateway Load Balancer endpoint to receive the incoming packets and forward them to the appliance.

A Gateway Load Balancer is purpose-built for deploying, scaling, and managing third-party virtual appliances such as firewalls. It operates transparently at the network layer (using the GENEVE protocol) and distributes traffic flows across a fleet of appliance instances. A Gateway Load Balancer endpoint in the application's VPC receives the incoming packets and forwards them to the appliance in the inspection VPC, so all traffic to the web application is inspected with minimal operational overhead.

 

Option A is incorrect because a Network Load Balancer handles traffic at the connection level and is not designed to insert an inline inspection appliance.

Option B is incorrect because an Application Load Balancer handles traffic at the request level and is not suitable for packet inspection.

Option C is incorrect because a transit gateway connects multiple VPCs and on-premises networks to each other; it is not suitable for packet inspection by itself.
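A minimal boto3 sketch of the endpoint side of option D (the IDs and the endpoint service name are placeholders):

```python
import boto3

boto3.client("ec2").create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-app0123456789abcde",  # the application's VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
# Route tables then send ingress traffic to this endpoint, which hands the
# packets to the appliance fleet behind the GWLB in the inspection VPC.
```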


#20. A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.


A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?

 

  • A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
  • B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
  • C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
  • D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.

 


Selected Answer: D


Keywords:
- Modifications to the cloned data must not affect the production environment.
- Minimize the time that is required to clone the production data into the test environment.

A: Incorrect - EBS snapshots are restored to new EBS volumes, not to instance store volumes; instance store is ephemeral and unsuitable here in any case.


B: Incorrect - This attaches the same EBS volumes to both production and test, so modifications in the test environment would affect production.


C: Incorrect - Restoring a snapshot always creates a new EBS volume; you cannot restore a snapshot into an existing, pre-attached volume.


D: Correct - Turning on EBS fast snapshot restore means volumes created from the snapshots are fully initialized at creation, delivering full performance with no latency on first use (see the sketch below).
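A minimal boto3 sketch of option D (the snapshot ID and Availability Zone are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Pre-warm the snapshot so restored volumes deliver full performance at once.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

# New, independent volume for the test environment; production is untouched.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId="snap-0123456789abcdef0",
    VolumeType="gp3",
)
```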


■ Wrap-up

That wraps up this set of 'Solutions Architect Associate' questions.

 

I hope the rest of your day is a pleasant one.

Please leave a like and a comment :)

 

Thank you.

 
