AWS Solutions Architect Associate Exam Questions
These AWS questions all came from my Solutions Architect Udemy course and the certificationexams.pro certification site.
AWS Solutions Architect Practice Exam Questions
If you’re looking for AWS Solutions Architect Associate practice exams and realistic AWS certification exam questions, you’ve come to the right place.
And to be clear, "realistic" does not mean actual exam questions. These aren't Solutions Architect exam dumps or AWS braindumps.
Every AWS exam question here is carefully written to reflect the style, tone, and technical depth of the real AWS Solutions Architect Associate exam.
All exam questions come from my AWS Solutions Architect Udemy course and the certificationexams.pro website, which has helped thousands of candidates earn their AWS certifications.
Study consistently, take plenty of AWS practice tests, and review key AWS architecture principles like fault tolerance, scalability, and cost optimization. With the right preparation, you’ll be ready to pass the AWS Solutions Architect Associate exam on your first attempt.
A research startup is deploying a data processing agent on a single Amazon EC2 instance that needs a durable block device attached. Compliance requires encryption at rest and hands-free, scheduled volume-level backups, such as daily snapshots retained for 35 days. What is the most appropriate approach to meet these requirements?
-
❏ A. Create an encrypted Amazon EFS file system and use Amazon EventBridge to trigger AWS Lambda to copy data for backups
-
❏ B. Enable server-side encryption on an Amazon S3 bucket and configure Cross-Region Replication for redundancy
-
❏ C. Attach an encrypted Amazon EBS volume and configure Amazon Data Lifecycle Manager to run automated snapshots
-
❏ D. Use encrypted instance store volumes and schedule a cron job to copy data to another EC2 instance
Within about 36 hours what should you implement to accelerate delivery of a dynamic web application to users in Europe while keeping the origin servers in the United States?
-
❏ A. Amazon CloudFront with Lambda@Edge
-
❏ B. Amazon CloudFront with a custom origin to on-prem servers
-
❏ C. AWS Global Accelerator in front of the US origin
-
❏ D. Amazon S3 static website + Route 53 latency routing
A regional nonprofit plans to deploy a lightweight reporting service on a single Amazon EC2 instance with an attached EBS volume. Usage is occasional, with brief surges twice per day lasting about 90 minutes during commute hours. Disk activity is uneven and can spike to roughly 2,800 IOPS. Which EBS volume type should the solutions architect choose to minimize cost while still handling these bursts?
-
❏ A. Amazon EBS Throughput Optimized HDD (st1)
-
❏ B. Amazon EBS General Purpose SSD (gp2)
-
❏ C. Amazon EBS Provisioned IOPS SSD (io2)
-
❏ D. Amazon EBS Cold HDD (sc1)
Which approach ensures personally identifiable information is encrypted at rest for both EC2 EBS volumes and Amazon RDS?
-
❏ A. Deploy CloudHSM and encrypt only the database
-
❏ B. Enable EBS default encryption and turn on RDS encryption with KMS
-
❏ C. Use Secrets Manager and TLS; leave storage unchanged
-
❏ D. Enable RDS encryption; use AWS Backup Vault Lock for EC2 volumes
A retail analytics startup runs a three-tier web application with ten front-end Amazon EC2 instances in an Auto Scaling group placed in a single Availability Zone behind an Application Load Balancer. The company wants to improve resiliency without changing any application code. What architectural change should the solutions architect implement to achieve high availability?
-
❏ A. Create a second Auto Scaling group in a different AWS Region and use Amazon Route 53 failover routing
-
❏ B. Enable cross-zone load balancing on the existing Application Load Balancer only
-
❏ C. Reconfigure the Auto Scaling group to span at least two Availability Zones and register the instances with the existing Application Load Balancer
-
❏ D. Add additional subnets from the same Availability Zone to the Auto Scaling group
Which AWS database supports relational SQL and automatically scales to handle unpredictable traffic spikes?
-
❏ A. Amazon DynamoDB on-demand
-
❏ B. Amazon Aurora Serverless v2
-
❏ C. Amazon RDS for PostgreSQL (provisioned)
-
❏ D. Amazon Redshift Serverless
A renewable energy startup ingests turbine telemetry from hundreds of wind towers into Amazon Kinesis Data Streams. Several consumer services read the same stream concurrently, and the team is noticing growing delays between events written by producers and events read by consumers during traffic surges. What should a solutions architect do to increase read performance for this workload?
-
❏ A. Replace Amazon Kinesis Data Streams with Amazon SQS FIFO queues
-
❏ B. Enable Enhanced Fan-Out for Amazon Kinesis Data Streams
-
❏ C. Replace Amazon Kinesis Data Streams with Amazon Kinesis Data Firehose
-
❏ D. Replace Amazon Kinesis Data Streams with Amazon SQS Standard queues
What is the most cost effective way to enable EC2 instances in 10 separate AWS accounts to communicate using only private IP addresses?
-
❏ A. Attach every VPC to a single AWS Transit Gateway and route traffic through it
-
❏ B. Use AWS RAM to share subnets from a central VPC so all accounts launch into one VPC
-
❏ C. Establish a full mesh of VPC peering connections among all VPCs
-
❏ D. Use AWS Cloud WAN to interconnect the VPCs across accounts
EduCast Labs runs an on-prem telemetry pipeline that gathers clickstream and playback metrics from smart TV and mobile video apps. The system ingests roughly 300,000 events per minute, and stakeholders expect dashboards to reflect changes within about 10 seconds. The team wants to migrate to AWS using fully managed, AWS-native services for ingestion, storage, search, and visualization with minimal operational overhead. Which architecture should they implement?
-
❏ A. Use Amazon EC2 instances to ingest and process the event stream into Amazon S3 for storage, then use AWS Glue to catalog the data and Amazon Athena for querying, and visualize with Amazon QuickSight
-
❏ B. Use Amazon MSK (Managed Streaming for Apache Kafka) for ingestion, land the data in Amazon Redshift for analysis, run external queries with Redshift Spectrum, and build visuals in Amazon QuickSight
-
❏ C. Use Amazon Kinesis Data Streams to ingest the events and process them with AWS Lambda, index the results in Amazon OpenSearch Service for search and analytics, and create dashboards with Amazon Managed Grafana
-
❏ D. Use Amazon EMR to handle stream processing, store records in Amazon DynamoDB, query with DynamoDB, and create dashboards in Amazon CloudWatch
Which security controls should be enabled for a new AWS account root user? (Choose 2)
-
❏ A. Use a long, complex root password
-
❏ B. Use AWS Identity Center for root sign-in
-
❏ C. Create root access keys and store them in Secrets Manager
-
❏ D. Turn on MFA for the root user
-
❏ E. Rotate root access keys every 45 days
An online sports analytics company operates a compact reporting service on a single Amazon EC2 On-Demand instance. The application is stateless and resilient, optimized for rapid chart rendering. When viral games or breaking scores cause sudden surges, users see increased latency and occasional 5xx responses. The team needs a scalable approach that keeps performance steady during unpredictable spikes while minimizing spend during quiet periods without maintaining idle capacity. Which solution should they implement?
-
❏ A. Containerize the service on Amazon ECS with Fargate and run one task, using an Amazon CloudWatch alarm to raise the task’s vCPU and memory reservations when average CPU exceeds 65 percent
-
❏ B. Create a golden AMI and a launch template, then use an EC2 Auto Scaling group with Spot Instances and attach an Application Load Balancer to distribute traffic across scaled-out instances
-
❏ C. Build an Auto Scaling group that uses only On-Demand instances and place it behind an Application Load Balancer to handle peak load
-
❏ D. Use Amazon EventBridge to monitor EC2 metrics and invoke an AWS Lambda function to redeploy the instance in another Availability Zone when CPU utilization goes above 80 percent
Which AWS solution stores the full dataset in Amazon S3 while keeping a small on premises cache of frequently accessed logs?
-
❏ A. AWS DataSync
-
❏ B. AWS Storage Gateway File Gateway
-
❏ C. AWS Direct Connect
-
❏ D. AWS Storage Gateway Volume Gateway cached volumes
A multinational e-commerce marketplace runs its order status API on Amazon EC2 instances in the eu-central-1 Region. Shoppers call the API for near real-time updates on their orders. Users in Southeast Asia and Latin America are experiencing slower responses than customers in Central Europe. The company wants to lower latency for these distant users without building full duplicate stacks in additional Regions and to keep costs low. Which approach should the company choose?
-
❏ A. Create identical EC2-based API stacks in ap-southeast-1 and sa-east-1 and use Amazon Route 53 latency-based routing to direct users to the nearest Regional endpoint
-
❏ B. Use AWS Global Accelerator to front the API and route client traffic over the AWS global network to the closest edge location with endpoint groups targeting the API in eu-central-1
-
❏ C. Provision AWS Direct Connect public VIFs from international customer networks to eu-central-1 and send API requests over those dedicated links
-
❏ D. Put Amazon CloudFront in front of the API and enable aggressive caching with the CachingOptimized policy for dynamic responses
Which AWS service provides a high-performance POSIX compliant shared file system for a Linux high performance computing workload running across about 96 EC2 instances and offers seamless import and export to low-cost durable storage?
-
❏ A. Amazon EFS with EFS Infrequent Access
-
❏ B. AWS DataSync with Amazon S3 Intelligent-Tiering
-
❏ C. Amazon FSx for Lustre linked to Amazon S3
-
❏ D. Amazon FSx for ONTAP with Amazon S3
A digital media company, Vireo Media, runs its web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. In the last day, automated traffic has surged beyond 240 requests per second, while typical users generate around 7 requests per second. You want a straightforward way to stop abusive sources from overwhelming the application without changing the application code. What should you implement?
-
❏ A. AWS Shield Advanced
-
❏ B. Configure sticky sessions on the Application Load Balancer
-
❏ C. Configure AWS WAF on the ALB with a rate-based rule
-
❏ D. Amazon CloudFront
Which serverless architecture options can expose DynamoDB data to a simple web dashboard while requiring minimal operational effort? (Choose 2)
-
❏ A. Application Load Balancer to EC2 querying DynamoDB
-
❏ B. Amazon API Gateway REST direct integration with DynamoDB
-
❏ C. Amazon CloudFront calling DynamoDB directly
-
❏ D. Application Load Balancer targeting DynamoDB
-
❏ E. Amazon API Gateway invoking AWS Lambda to read DynamoDB
PolarRay VFX recently moved approximately 85 TB of project footage and assets into Amazon S3 to lower storage costs. Its render farm still runs in the company’s on-premises facility and frequently reads large source files and writes intermediate frames during business hours. The team needs a solution that provides the on-premises render nodes with low-latency, file-based access to the data in S3 while keeping overall costs minimal. Which approach should they implement?
-
❏ A. Deploy Amazon FSx for Lustre and load data from S3 with AWS DataSync, then mount it to the render farm over a VPN
-
❏ B. Configure an Amazon S3 File Gateway to expose the S3 bucket as NFS or SMB shares to the render nodes
-
❏ C. Use Mountpoint for Amazon S3 on the on-premises render hosts for direct access to the bucket
-
❏ D. Amazon File Cache
How can you enforce a 60 day retention period for all Amazon RDS for MySQL backups including automated backups and manual snapshots with minimal cost and effort?
-
❏ A. Enable cross-Region automated backups with 60-day retention
-
❏ B. Use AWS Backup 60-day policy for automated backups and Lambda to purge manual snapshots
-
❏ C. Set RDS automated backup retention to 60 days; run a small script to delete manual snapshots older than 60 days
-
❏ D. Disable automated backups and rely on AWS Backup daily jobs with 60-day retention
A logistics company, Polar Route Logistics, must move about 300 TB of on-premises file data into Amazon S3. The data center already uses a 10 Gbps AWS Direct Connect link to AWS. The team wants a fully managed approach that keeps traffic private over the existing link, automates recurring synchronizations, and accelerates transfers into AWS storage services. What should the solutions architect propose?
-
❏ A. Deploy an AWS Storage Gateway file gateway with a local cache and store the primary dataset in Amazon S3
-
❏ B. Amazon S3 Transfer Acceleration
-
❏ C. Use AWS DataSync with an on-premises agent and a VPC interface endpoint over Direct Connect
-
❏ D. Use AWS DataSync with an on-premises agent and the public service endpoint
How can an EC2 instance in a VPC upload data to Amazon S3 over the AWS private network without using public endpoints?
-
❏ A. Create an S3 access point alias
-
❏ B. Gateway VPC endpoint for S3 with route table and endpoint/bucket policy
-
❏ C. Enable S3 Transfer Acceleration
-
❏ D. Use a NAT gateway and the public S3 endpoint
An architecture firm operates a private web portal for employees. The portal is hosted on Amazon EC2 instances behind an Application Load Balancer, with an EC2 Auto Scaling group spanning multiple Availability Zones. Overnight the group scales down to 3 instances, then during business hours it rises to as many as 24 instances. Users report the site is sluggish at the start of the workday but performs normally later in the morning. What changes to the scaling approach will resolve the morning slowdown while keeping expenses low?
-
❏ A. Create a scheduled scaling action to set the Auto Scaling group desired capacity to 24 just before business hours
-
❏ B. Create a scheduled scaling action to set both the minimum and maximum capacity to 24 just before business hours
-
❏ C. Configure a target tracking scaling policy with a lower CPU utilization target and shorten the cooldown period
-
❏ D. Use step scaling with a reduced CPU threshold and a shorter cooldown to scale out earlier
What method lets you parallelize 90 minute cron jobs that currently run on a single Amazon EC2 instance while requiring minimal changes and keeping operational overhead low?
-
❏ A. Run jobs with AWS Batch scheduled by Amazon EventBridge
-
❏ B. Use AWS Lambda with Amazon EventBridge schedules
-
❏ C. Bake an AMI and scale with an Auto Scaling group
-
❏ D. Use Amazon EKS CronJobs with EventBridge
BlueOrbit Logistics operates a RESTful microservice on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Most API requests finish in about 150 ms, but one route that aggregates reports can run for 2 to 3 minutes, and these long calls are consuming web resources and slowing responses for other clients. What should the architect implement to lessen the impact of these prolonged requests on the rest of the API?
-
❏ A. Increase the Application Load Balancer idle timeout to several minutes
-
❏ B. Use Amazon SQS to offload the long-running work and process it asynchronously
-
❏ C. AWS Step Functions
-
❏ D. Migrate the EC2 instances to a Nitro-based family with enhanced networking
An AWS data warehouse is connected to the corporate network via AWS Direct Connect. Average query results are 100 MB and each dashboard page without caching transfers 300 KB. Where should the business intelligence tool run to minimize data transfer egress costs?
-
❏ A. Host the BI tool on-premises; fetch results from AWS over Direct Connect
-
❏ B. Host the BI tool in AWS; users access via Site-to-Site VPN
-
❏ C. Host the BI tool in AWS in the same Region as the warehouse; users access via Direct Connect
-
❏ D. Host the BI tool in AWS; users access over the public internet
Meridian Care, a healthcare startup, is planning a cross-Region disaster recovery strategy for its transactional workload. The team needs a relational database that can meet a Recovery Point Objective of 2 seconds and a Recovery Time Objective of 45 seconds across multiple AWS Regions. Which AWS solution should they implement?
-
❏ A. Amazon RDS Multi-AZ
-
❏ B. Amazon DynamoDB global tables
-
❏ C. Amazon Aurora Global Database
-
❏ D. Amazon RDS cross-Region read replica
CSV files stored in Amazon S3 must be encrypted at rest and analysts need to run serverless SQL queries directly on those files. Which solution satisfies these requirements?
-
❏ A. Enable S3 SSE; query via Redshift Spectrum
-
❏ B. Encrypt S3 with KMS; use Amazon EMR Serverless for SQL
-
❏ C. Enable S3 SSE-KMS; query with Amazon Athena
-
❏ D. Use S3 SSE; load into Aurora Serverless for SQL
A live sports analytics startup needs to boost the availability and latency of a worldwide real-time feed processor that communicates over UDP. They require rapid failover between Regions if an AWS Region becomes unavailable, and they plan to keep using their own external DNS provider. Which AWS service should they adopt to meet these needs?
-
❏ A. Amazon CloudFront
-
❏ B. AWS Global Accelerator
-
❏ C. Amazon Route 53
-
❏ D. AWS Elastic Load Balancing (ELB)
How can you quickly obtain one-minute EC2 instance metrics with minimal maintenance during a sudden load surge?
-
❏ A. Install the CloudWatch agent and analyze via Logs with Athena
-
❏ B. CloudWatch Synthetics
-
❏ C. Enable EC2 Detailed Monitoring and view 1-minute metrics in CloudWatch
-
❏ D. Stream EC2 OS logs to OpenSearch and visualize in Dashboards
At NovaWell Health, five self-managed MySQL databases run on Amazon EC2 across two Availability Zones. The engineering team configures replication and scales instances manually as traffic fluctuates. The company needs the database tier to automatically add and remove compute capacity as needed while improving performance, scalability, and durability with minimal operational effort. What should they do?
-
❏ A. Consolidate the schemas into one MySQL instance and run on larger EC2 instances
-
❏ B. Migrate the MySQL workloads to Amazon Aurora Serverless v2 (Aurora MySQL)
-
❏ C. Shift the database layer to an EC2 Auto Scaling group and rebuild the environment before migrating the current MySQL data
-
❏ D. Migrate the databases to Amazon Aurora Serverless (Aurora PostgreSQL)
Which approaches provide a managed SFTP endpoint into Amazon S3 with federated per-partner access control and minimal operational overhead? (Choose 2)
-
❏ A. AWS DataSync with an SFTP location to move files into S3
-
❏ B. AWS Transfer Family SFTP server backed by S3 with per-user IAM role mapping
-
❏ C. AWS Storage Gateway File Gateway for partners to upload via NFS/SMB
-
❏ D. Integrate Transfer Family with an identity provider and enforce S3 bucket policies per prefix
-
❏ E. Self-managed SFTP on EC2 that syncs uploads to S3
Norhaven Logistics operates smart card readers at every secured doorway across 24 distribution hubs. Each tap causes the reader to send a small JSON message over HTTPS identifying the user and entry point. A solutions architect must design a highly available ingestion and processing path that scales and stores the processed outcomes in a durable data store for later reporting. Which architecture should be proposed?
-
❏ A. Create a single Amazon EC2 instance to host an HTTPS endpoint that processes messages and writes results to Amazon S3
-
❏ B. Use Amazon Route 53 to direct device requests straight to an AWS Lambda function that stores data in Amazon DynamoDB
-
❏ C. Expose an HTTPS API with Amazon API Gateway that invokes AWS Lambda to process events and persist results in Amazon DynamoDB
-
❏ D. Place an Amazon CloudFront distribution in front and use Lambda@Edge to handle POST requests and write to Amazon DynamoDB
A Linux application will scale across multiple EC2 instances that must read and write the same dataset, which currently resides on a 3 TB EBS gp3 volume. Which service provides a highly available POSIX shared file system with minimal application changes?
-
❏ A. Amazon FSx for Lustre
-
❏ B. Amazon EFS
-
❏ C. EBS Multi-Attach (io2)
-
❏ D. Amazon S3 with s3fs
A geospatial mapping startup stores regulated datasets in Amazon S3 across more than 350 buckets to keep data isolated by project. The team discovered that bucket lifecycle rules are inconsistent and difficult to manage at scale, leading to increased storage expenses. Which approach will lower S3 costs while requiring minimal ongoing effort from the IT staff?
-
❏ A. Amazon S3 on Outposts
-
❏ B. Amazon S3 Intelligent-Tiering storage class
-
❏ C. Amazon S3 Glacier Deep Archive
-
❏ D. Amazon S3 One Zone-Infrequent Access
Which approach most effectively reduces read latency for a read-heavy Amazon RDS for MySQL workload during traffic spikes while requiring minimal changes?
-
❏ A. Amazon RDS Proxy
-
❏ B. Add Amazon ElastiCache to cache hot reads
-
❏ C. Amazon RDS MySQL read replicas with app read/write split
-
❏ D. Scale up RDS instance and enable Multi-AZ
A media analytics firm has operated only in eu-central-1 and stores objects in Amazon S3 encrypted with SSE-KMS. To strengthen security and improve disaster recovery, the team wants an encrypted copy in eu-west-3. The security policy requires that objects be encrypted and decrypted with the same key material and key ID in both Regions. What is the most appropriate solution?
-
❏ A. Enable replication on the existing bucket in eu-central-1 to a bucket in eu-west-3 and share the current AWS KMS key from eu-central-1 with eu-west-3
-
❏ B. Use an Amazon EventBridge schedule to trigger an AWS Lambda function daily that copies objects from eu-central-1 to eu-west-3 with permissions to encrypt and decrypt using the KMS key
-
❏ C. Convert the existing single-Region AWS KMS key to a multi-Region key and use Amazon S3 Batch Replication to backfill historical objects to eu-west-3
-
❏ D. Create a new S3 bucket in eu-central-1 with SSE-KMS using a multi-Region KMS key, configure replication to a bucket in eu-west-3 that uses the related replica key, and copy current data into the new source bucket
Solutions Architect Exam Questions Answered
A research startup is deploying a data processing agent on a single Amazon EC2 instance that needs a durable block device attached. Compliance requires encryption at rest and hands-free, scheduled volume-level backups, such as daily snapshots retained for 35 days. What is the most appropriate approach to meet these requirements?
-
✓ C. Attach an encrypted Amazon EBS volume and configure Amazon Data Lifecycle Manager to run automated snapshots
The correct choice is Attach an encrypted Amazon EBS volume and configure Amazon Data Lifecycle Manager to run automated snapshots. This option provides an EC2-attached durable block device with encryption at rest and hands-free, scheduled volume-level snapshots that can be retained for the required 35 day window.
Amazon EBS is the block storage service designed to attach to EC2 instances and it supports encryption at rest with AWS managed or customer managed keys. Amazon Data Lifecycle Manager lets you create snapshot lifecycle policies that run on a schedule and enforce retention rules so backups are automated and retained for the specified period without manual intervention.
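If you want to see what that automation looks like in practice, here is a minimal boto3 sketch of a Data Lifecycle Manager policy for daily snapshots with 35-day retention. The IAM role ARN, tag key, and schedule time are placeholder assumptions rather than values from the question.

```python
import boto3

dlm = boto3.client("dlm")

# Minimal sketch: daily snapshots of tagged EBS volumes, retained for 35 days.
# The execution role ARN and the Backup=true target tag are placeholder assumptions.
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily encrypted-volume snapshots with 35-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "daily-35-day-retention",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 35},
            }
        ],
    },
)
print(response["PolicyId"])
```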
Create an encrypted Amazon EFS file system and use Amazon EventBridge to trigger AWS Lambda to copy data for backups is incorrect because Amazon EFS is network file storage accessed over NFS rather than an EC2-attached block device and orchestrating Lambda copy jobs produces file level copies rather than consistent volume snapshots.
Enable server-side encryption on an Amazon S3 bucket and configure Cross-Region Replication for redundancy is incorrect because Amazon S3 is object storage and cross region replication provides object redundancy rather than volume level backups of an EC2 root or data volume.
Use encrypted instance store volumes and schedule a cron job to copy data to another EC2 instance is incorrect because instance store is ephemeral and data can be lost when an instance stops or fails and a cron based copy is an ad hoc file level process that does not provide automated, consistent volume snapshots.
When a question asks for EC2-attached persistent block storage with encryption at rest and automated volume snapshots choose Amazon EBS and configure Data Lifecycle Manager to schedule and retain snapshots.
Within about 36 hours what should you implement to accelerate delivery of a dynamic web application to users in Europe while keeping the origin servers in the United States?
-
✓ B. Amazon CloudFront with a custom origin to on-prem servers
Amazon CloudFront with a custom origin to on-prem servers is correct because CloudFront can front an on-prem HTTP(S) origin, terminate TLS and cache where allowed while keeping the origin in the United States so you can accelerate delivery to European users quickly.
CloudFront with a custom origin routes requests to the nearest edge and then uses optimized connections to the origin to reduce transatlantic latency, and it can cache responses that are cacheable while forwarding dynamic requests to the origin so you do not need to migrate the application to another region for rapid deployment.
Amazon CloudFront with Lambda@Edge is unnecessary for this requirement because Lambda@Edge is for request and response customization and header manipulation and it does not by itself solve transatlantic latency since the acceleration comes from CloudFront edges.
AWS Global Accelerator in front of the US origin is not appropriate because Global Accelerator speeds TCP and UDP traffic to AWS endpoints and does not cache HTTP content, and it cannot directly target an on-prem origin without adding AWS infrastructure which increases complexity and time.
Amazon S3 static website + Route 53 latency routing will not work because S3 static sites cannot host a dynamic application and DNS latency routing only steers clients to endpoints and does not reduce the physical round trip to a single US origin.
When you must keep the origin in the US and deliver quickly to Europe choose CloudFront with a custom origin to gain edge acceleration and caching, and do not confuse CloudFront caching with Global Accelerator TCP acceleration.
A regional nonprofit plans to deploy a lightweight reporting service on a single Amazon EC2 instance with an attached EBS volume. Usage is occasional, with brief surges twice per day lasting about 90 minutes during commute hours. Disk activity is uneven and can spike to roughly 2,800 IOPS. Which EBS volume type should the solutions architect choose to minimize cost while still handling these bursts?
-
✓ B. Amazon EBS General Purpose SSD (gp2)
The correct choice is Amazon EBS General Purpose SSD (gp2). This volume type balances cost and performance and can handle occasional bursts up to about 3,000 IOPS which fits the reported spikes near 2,800 IOPS.
Amazon EBS General Purpose SSD (gp2) delivers low latency SSD performance and uses a credit based burst model that lets volumes exceed their baseline IOPS for short periods. For an intermittent reporting service with two daily 90 minute surges and otherwise light usage, gp2 avoids the higher ongoing cost of provisioned IOPS while still covering the peak IOPS needs when burst credits are available.
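As a rough sanity check, you can estimate how long a gp2 volume could sustain the 2,800 IOPS spike from its burst credit bucket. The 500 GiB size below is an assumed example, while the 3 IOPS per GiB baseline and 5.4 million credit bucket reflect the documented gp2 burst model.

```python
# Rough gp2 burst estimate, assuming a 500 GiB volume (illustrative only).
volume_gib = 500
baseline_iops = max(100, 3 * volume_gib)   # gp2 baseline: 3 IOPS per GiB, minimum 100
burst_iops = 2_800                         # observed spike from the scenario
credit_bucket = 5_400_000                  # gp2 I/O credit bucket

drain_rate = burst_iops - baseline_iops    # net credits consumed per second above baseline
burst_minutes = credit_bucket / drain_rate / 60 if drain_rate > 0 else float("inf")
# Credits also refill at the baseline rate between spikes, so uneven activity
# lasts longer than this worst-case figure.
print(f"Baseline {baseline_iops} IOPS; a full bucket sustains {burst_iops} IOPS "
      f"for about {burst_minutes:.0f} minutes")
```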
Amazon EBS Throughput Optimized HDD (st1) is designed for large, sequential throughput and not for low latency random IOPS, so it will not reliably serve spiky, latency sensitive database or reporting I/O.
Amazon EBS Cold HDD (sc1) targets infrequently accessed, throughput oriented workloads and it does not provide the random IOPS needed for interactive reporting, so it will underperform for this pattern.
Amazon EBS Provisioned IOPS SSD (io2) can deliver sustained, very high IOPS and low latency, but it is more expensive and is intended for workloads that require guaranteed, continuous IOPS. For intermittent bursts the additional cost makes io2 an uneconomical choice.
Match the EBS type to the workload and choose burst capable SSD for infrequent spikes and provisioned IOPS only when you need sustained, guaranteed performance.
Which approach ensures personally identifiable information is encrypted at rest for both EC2 EBS volumes and Amazon RDS?
-
✓ B. Enable EBS default encryption and turn on RDS encryption with KMS
Enable EBS default encryption and turn on RDS encryption with KMS is correct because it ensures encryption at rest for both EC2 attached EBS volumes and Amazon RDS storage.
EBS default encryption makes new volumes and snapshots encrypted automatically and RDS encryption is applied at instance creation or by restoring from an encrypted snapshot. Both services integrate with AWS KMS for centralized key management and audit logging which simplifies rotation and access control and meets most compliance needs.
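A minimal boto3 sketch of both controls follows. The database identifier, instance class, storage size, and KMS key alias are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Turn on Region-level default encryption so every new EBS volume is encrypted.
ec2.enable_ebs_encryption_by_default()

# RDS storage encryption is chosen at creation time (or by restoring an encrypted snapshot).
# Identifier, class, and KMS alias below are placeholder assumptions.
rds.create_db_instance(
    DBInstanceIdentifier="pii-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # let RDS manage the password in Secrets Manager
    StorageEncrypted=True,
    KmsKeyId="alias/pii-data-key",
)
```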
Deploy CloudHSM and encrypt only the database is incorrect because it would leave EBS volumes unencrypted and adds significant operational overhead. CloudHSM is appropriate only when dedicated hardware key custody or specialized cryptographic controls are explicitly required.
Use Secrets Manager and TLS and leave storage unchanged is incorrect because Secrets Manager protects credentials and TLS protects data in transit and neither provides encryption at rest for disks and snapshots.
Enable RDS encryption and use AWS Backup Vault Lock for EC2 volumes is incorrect because Vault Lock enforces backup immutability but does not encrypt active EBS volumes attached to EC2 instances. Active volume encryption requires EBS encryption to be enabled.
Remember that RDS encryption is typically enabled at creation and EBS default encryption covers new volumes automatically. Use KMS for centralized key management unless the question explicitly requires CloudHSM.
A retail analytics startup runs a three-tier web application with ten front-end Amazon EC2 instances in an Auto Scaling group placed in a single Availability Zone behind an Application Load Balancer. The company wants to improve resiliency without changing any application code. What architectural change should the solutions architect implement to achieve high availability?
-
✓ C. Reconfigure the Auto Scaling group to span at least two Availability Zones and register the instances with the existing Application Load Balancer
The correct choice is Reconfigure the Auto Scaling group to span at least two Availability Zones and register the instances with the existing Application Load Balancer. This option moves instances into separate fault domains and lets the load balancer route traffic to healthy targets if one Availability Zone becomes unavailable.
Reconfigure the Auto Scaling group to span at least two Availability Zones and register the instances with the existing Application Load Balancer improves resilience because Auto Scaling can launch replacements in other zones within the same Region and the Application Load Balancer can distribute traffic across those zones without any application code changes. Using multiple Availability Zones provides high availability at the infrastructure level while keeping the architecture simple.
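In practice this is a one-call change to the group's subnet list, sketched below with placeholder group and subnet identifiers.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the existing group across subnets in two Availability Zones.
# The group name and subnet IDs are placeholder assumptions.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="frontend-asg",
    VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",  # AZ-a and AZ-b
    MinSize=2,
    DesiredCapacity=10,
    MaxSize=20,
)
```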
Create a second Auto Scaling group in a different AWS Region and use Amazon Route 53 failover routing is incorrect because it introduces cross-Region complexity and it requires data replication and failover planning which goes beyond the stated requirement to improve availability without changing application code.
Enable cross-zone load balancing on the existing Application Load Balancer only is incorrect because cross-zone load balancing does not help if all instances are still located in a single Availability Zone. If that AZ fails there will be no healthy targets to receive traffic even with cross-zone balancing enabled.
Add additional subnets from the same Availability Zone to the Auto Scaling group is incorrect because adding more subnets in the same AZ leaves the deployment within a single fault domain and does not protect against an AZ outage.
For EC2 fleets behind an ALB think Multi-AZ first and ensure the Auto Scaling group includes subnets in multiple Availability Zones rather than adding subnets in the same zone.
Which AWS database supports relational SQL and automatically scales to handle unpredictable traffic spikes?
-
✓ B. Amazon Aurora Serverless v2
The correct choice is Amazon Aurora Serverless v2. It is a relational SQL database that automatically scales capacity in fine grained increments to handle unpredictable traffic spikes without manual capacity planning or intervention.
Amazon Aurora Serverless v2 provides a fully relational SQL engine with compatibility for Aurora MySQL and Aurora PostgreSQL and it supports ACID transactions and typical SQL features. Its serverless v2 architecture lets capacity grow and shrink quickly so it can absorb sudden traffic surges while maintaining transactional consistency.
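For illustration, a minimal boto3 sketch of an Aurora Serverless v2 cluster is shown below. The identifiers and the capacity range in ACUs are assumed example values.

```python
import boto3

rds = boto3.client("rds")

# Aurora Serverless v2: a cluster with a serverless capacity range plus a
# "db.serverless" instance. Identifiers and ACU limits are placeholder assumptions.
rds.create_db_cluster(
    DBClusterIdentifier="orders-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 32},
)
rds.create_db_instance(
    DBInstanceIdentifier="orders-writer",
    DBClusterIdentifier="orders-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```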
Amazon DynamoDB on-demand is designed to scale automatically for unpredictable load but it is a NoSQL key value and document store and it does not offer a relational SQL engine or the same relational ACID semantics.
Amazon RDS for PostgreSQL (provisioned) is a relational SQL database but it relies on fixed instance sizing and manual scaling or instance changes which can be slower to react to sudden spikes.
Amazon Redshift Serverless is a serverless data warehouse tuned for analytic OLAP queries and it is not intended as a primary transactional OLTP relational database.
When a question pairs relational SQL with automatic scaling for spikes choose Amazon Aurora Serverless v2 as the best fit.
A renewable energy startup ingests turbine telemetry from hundreds of wind towers into Amazon Kinesis Data Streams. Several consumer services read the same stream concurrently, and the team is noticing growing delays between events written by producers and events read by consumers during traffic surges. What should a solutions architect do to increase read performance for this workload?
-
✓ B. Enable Enhanced Fan-Out for Amazon Kinesis Data Streams
The correct choice is Enable Enhanced Fan-Out for Amazon Kinesis Data Streams. This setting gives each registered consumer its own dedicated read pipeline so concurrent readers do not contend for the same shard throughput.
By default each shard provides a shared read throughput which can cause lag when many consumers read the same data. Enhanced Fan-Out allocates a dedicated 2 MB per second pipe per consumer per shard and uses HTTP/2 push. This reduces per consumer latency and improves end to end read performance during traffic surges.
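Here is a hedged boto3 sketch of registering and reading with an enhanced fan-out consumer. The stream ARN, consumer name, and shard ID are placeholder assumptions.

```python
import boto3

kinesis = boto3.client("kinesis")

# Register a dedicated-throughput (enhanced fan-out) consumer on the stream.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:eu-west-1:123456789012:stream/turbine-telemetry",
    ConsumerName="dashboard-service",
)["Consumer"]

# In practice, wait until the consumer status is ACTIVE before subscribing.
subscription = kinesis.subscribe_to_shard(
    ConsumerARN=consumer["ConsumerARN"],
    ShardId="shardId-000000000000",
    StartingPosition={"Type": "LATEST"},
)
for event in subscription["EventStream"]:
    records = event["SubscribeToShardEvent"]["Records"]
    # process records pushed over HTTP/2 ...
```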
Replace Amazon Kinesis Data Streams with Amazon Kinesis Data Firehose is not appropriate because Firehose is a managed delivery service that buffers and delivers data to destinations like S3 and Redshift. It is designed for delivery and storage pipelines and it does not support multiple independent real time consumers reading the same stream.
Replace Amazon Kinesis Data Streams with Amazon SQS Standard queues is not suitable because SQS implements a queue model where messages are typically consumed by a single reader and delivery semantics are at least once and unordered. That model does not provide the multi consumer fan out needed for this workload.
Replace Amazon Kinesis Data Streams with Amazon SQS FIFO queues is not suitable because FIFO queues prioritize strict ordering and deduplication at lower throughput. FIFO still follows a queue semantics and it cannot provide multiple independent consumers with dedicated concurrent reads of the same data stream which is required here.
When multiple applications must read the same stream with low latency register Enhanced Fan-Out consumers so each reader receives dedicated throughput and avoids per shard read contention.
What is the most cost effective way to enable EC2 instances in 10 separate AWS accounts to communicate using only private IP addresses?
-
✓ B. Use AWS RAM to share subnets from a central VPC so all accounts launch into one VPC
Use AWS RAM to share subnets from a central VPC so all accounts launch into one VPC is the correct and most cost effective option to enable EC2 instances in ten separate accounts to communicate using only private IP addresses.
Use AWS RAM to share subnets from a central VPC so all accounts launch into one VPC allows multiple accounts to place EC2 instances into shared subnets inside a single VPC so all traffic stays on private IPs. AWS Resource Access Manager does not add a service charge and shared subnets avoid per attachment or per GB data processing fees that alternatives can incur. This approach also reduces operational overhead because you do not need to manage many peering links or multiple Transit Gateway attachments.
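A minimal sketch of the subnet share with AWS RAM might look like the following, with placeholder subnet ARNs and account IDs.

```python
import boto3

ram = boto3.client("ram")

# Share subnets from the central networking account with the other accounts.
# The subnet ARNs and account IDs are placeholder assumptions.
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0aaa1111bbbb2222c",
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0ddd3333eeee4444f",
    ],
    principals=["222233334444", "333344445555"],  # consuming accounts (or an OU ARN)
    allowExternalPrincipals=False,                # keep sharing within the AWS Organization
)
```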
The option Attach every VPC to a single AWS Transit Gateway and route traffic through it is more expensive because Transit Gateway has attachment hourly charges and per GB data processing fees. Transit Gateway is appropriate when you need a hub and spoke at large scale or cross region connectivity but it costs more for a simple private connectivity requirement.
The option Establish a full mesh of VPC peering connections among all VPCs does not scale operationally because the number of peering connections grows rapidly as VPCs increase. It also introduces inter VPC data transfer and management complexity that make it less cost efficient than sharing subnets for this use case.
The option Use AWS Cloud WAN to interconnect the VPCs across accounts is intended for large global network overlays and brings additional cost and operational complexity that are unnecessary for straightforward private instance communication within AWS.
For many to many private EC2 connectivity across accounts choose VPC sharing via AWS RAM when accounts can trust a central VPC. Use Transit Gateway when you need hub and spoke at scale or cross region capabilities.
EduCast Labs runs an on-prem telemetry pipeline that gathers clickstream and playback metrics from smart TV and mobile video apps. The system ingests roughly 300,000 events per minute, and stakeholders expect dashboards to reflect changes within about 10 seconds. The team wants to migrate to AWS using fully managed, AWS-native services for ingestion, storage, search, and visualization with minimal operational overhead. Which architecture should they implement?
-
✓ C. Use Amazon Kinesis Data Streams to ingest the events and process them with AWS Lambda, index the results in Amazon OpenSearch Service for search and analytics, and create dashboards with Amazon Managed Grafana
Use Amazon Kinesis Data Streams to ingest the events and process them with AWS Lambda, index the results in Amazon OpenSearch Service for search and analytics, and create dashboards with Amazon Managed Grafana is correct because it pairs a managed, low latency ingest service with serverless processing and a search engine and it supports seconds‑level freshness with minimal operational overhead.
Amazon Kinesis Data Streams provides high throughput and low end to end latency so it can handle 300,000 events per minute and meet a 10 second freshness target and AWS Lambda scales processing without managing servers so you can perform record transformation and enrichment in near real time. Amazon OpenSearch Service is built for near real time indexing and full text search and it supports aggregations and analytics for interactive queries. Amazon Managed Grafana integrates with OpenSearch and provides rich, managed dashboards so stakeholders can view searchable, seconds fresh dashboards with low operational burden.
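A rough sketch of the Lambda processing step is shown below. It assumes the function is packaged with the opensearch-py library and that client authentication (SigV4) is configured separately; the endpoint, index name, and record shape are illustrative assumptions.

```python
import base64
import json

from opensearchpy import OpenSearch, helpers  # assumes opensearch-py is bundled with the function

# Client construction is environment specific; SigV4 auth setup is omitted in this sketch.
client = OpenSearch(
    hosts=[{"host": "search-telemetry.example.eu-west-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

def handler(event, context):
    # Kinesis delivers records base64 encoded; decode each and build a bulk index action.
    actions = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        actions.append({"_index": "playback-events", "_source": payload})
    helpers.bulk(client, actions)
    return {"indexed": len(actions)}
```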
Use Amazon EC2 instances to ingest and process the event stream into Amazon S3 for storage, then use AWS Glue to catalog the data and Amazon Athena for querying, and visualize with Amazon QuickSight is not ideal because EC2 increases operational overhead and storing to S3 with Athena is a query on read pattern that is best for batch or microbatch workloads and it will struggle to deliver sub 10 second searchable dashboards.
Use Amazon MSK (Managed Streaming for Apache Kafka) for ingestion, land the data in Amazon Redshift for analysis, run external queries with Redshift Spectrum, and build visuals in Amazon QuickSight can handle high volume streaming but Redshift and Spectrum are designed for analytics over larger, batched or microbatched datasets and Redshift is not a search engine so it will not provide low latency, full text search and the same interactive search experience as OpenSearch.
Use Amazon EMR to handle stream processing, store records in Amazon DynamoDB, query with DynamoDB, and create dashboards in Amazon CloudWatch is a poor match because EMR adds operational complexity, DynamoDB is a key value and document database that is not optimized for full text search and complex aggregations, and CloudWatch dashboards are metric oriented and do not provide the same searchable, analytics focused dashboards as a Grafana plus OpenSearch combination.
When the exam requires seconds level freshness and searchable dashboards pick a streaming ingest service with serverless processors and a search engine and prefer managed services to reduce operational overhead.
Which security controls should be enabled for a new AWS account root user? (Choose 2)
-
✓ A. Use a long, complex root password
-
✓ D. Turn on MFA for the root user
The correct controls to enable for a new AWS account root user are Turn on MFA for the root user and Use a long, complex root password.
Turn on MFA for the root user adds a second authentication factor and it greatly reduces the chance of compromise if the password is exposed or phished.
Use a long, complex root password ensures the highest privilege account resists brute force and credential stuffing attacks and it provides a strong secret that protects account recovery and billing actions.
Use AWS Identity Center for root sign-in is incorrect because the root user cannot be federated through Identity Center and Identity Center does not replace root authentication.
Create root access keys and store them in Secrets Manager is incorrect because AWS recommends not creating root access keys at all and it is better to use IAM users or roles for API operations.
Rotate root access keys every 45 days is incorrect because it assumes root access keys should exist and the recommended practice is to avoid creating root keys and to rotate nonroot credentials instead.
Keep the root account locked down and use MFA and a strong unique password. Do not create root access keys and use IAM or Identity Center accounts for daily tasks.
An online sports analytics company operates a compact reporting service on a single Amazon EC2 On-Demand instance. The application is stateless and resilient, optimized for rapid chart rendering. When viral games or breaking scores cause sudden surges, users see increased latency and occasional 5xx responses. The team needs a scalable approach that keeps performance steady during unpredictable spikes while minimizing spend during quiet periods without maintaining idle capacity. Which solution should they implement?
-
✓ B. Create a golden AMI and a launch template, then use an EC2 Auto Scaling group with Spot Instances and attach an Application Load Balancer to distribute traffic across scaled-out instances
The best choice is Create a golden AMI and a launch template, then use an EC2 Auto Scaling group with Spot Instances and attach an Application Load Balancer to distribute traffic across scaled-out instances. The app is stateless so this design lets the fleet scale out for sudden spikes and scale in when traffic drops while keeping compute costs low.
This works because a launch template with a golden AMI gives consistent instances and makes replacements fast. An EC2 Auto Scaling group automates scale out and scale in so capacity matches demand. Using Spot Instances is suitable for stateless workloads that can tolerate interruption because it greatly reduces cost during busy periods. An Application Load Balancer spreads traffic across healthy instances and performs health checks so users see steady latency during surges.
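A hedged boto3 sketch of such a group is shown below. The launch template name, subnet IDs, and target group ARN are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Sketch of a Spot-based group launched from a golden-AMI launch template and
# registered with an ALB target group. Names, ARNs, and subnet IDs are assumptions.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="charts-asg",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "charts-golden-ami",
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,   # all Spot for the stateless fleet
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
    MinSize=1,
    MaxSize=12,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/charts/abc123"],
    HealthCheckType="ELB",
)
```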
Containerize the service on Amazon ECS with Fargate and run one task, using an Amazon CloudWatch alarm to raise the task’s vCPU and memory reservations when average CPU exceeds 65 percent is not a good match because it relies on vertical scaling of a single task which remains a bottleneck. Proper elasticity requires multiple tasks and an ALB and you cannot resize a running Fargate task without replacing it.
Build an Auto Scaling group that uses only On-Demand instances and place it behind an Application Load Balancer to handle peak load does provide scalability but it does not meet the scenario goal of minimizing spend because an On-Demand only fleet is more expensive than using Spot for a stateless service.
Use Amazon EventBridge to monitor EC2 metrics and invoke an AWS Lambda function to redeploy the instance in another Availability Zone when CPU utilization goes above 80 percent is not appropriate because redeploying a single instance is reactive and it does not add capacity or distribute load so latency and 5xx errors will persist during spikes.
When an app is stateless and traffic is spiky think Auto Scaling behind an ALB and prefer Spot Instances to reduce cost while avoiding idle capacity.
Which AWS solution stores the full dataset in Amazon S3 while keeping a small on premises cache of frequently accessed logs?
-
✓ B. AWS Storage Gateway File Gateway
The correct choice is AWS Storage Gateway File Gateway. This option stores the full dataset in Amazon S3 while keeping a small on premises cache of frequently accessed logs so reads are low latency and S3 remains the durable authoritative store.
AWS Storage Gateway File Gateway presents file shares over NFS and SMB that map files to S3 objects and it keeps a local hot cache of recently accessed files. This design lets you serve log files locally from cache for performance while File Gateway handles uploads and metadata so S3 contains the complete dataset.
AWS DataSync is incorrect because it is a data transfer and synchronization service for moving or copying data and it does not provide a continuous local cache backed by S3.
AWS Direct Connect is incorrect because it provides a dedicated network connection to AWS and it does not include storage or caching capabilities.
AWS Storage Gateway Volume Gateway cached volumes is incorrect because it exposes block storage through iSCSI and it is intended for block workloads. Cached volumes keep primary block data in the cloud while caching hot blocks locally but they do not provide the S3 native file interface that File Gateway offers.
When a question mentions a local cache plus S3 as the authoritative store for files choose File Gateway. If the focus is on block storage think Volume Gateway. If the task is one time or scheduled transfers consider DataSync.
A multinational e-commerce marketplace runs its order status API on Amazon EC2 instances in the eu-central-1 Region. Shoppers call the API for near real-time updates on their orders. Users in Southeast Asia and Latin America are experiencing slower responses than customers in Central Europe. The company wants to lower latency for these distant users without building full duplicate stacks in additional Regions and to keep costs low. Which approach should the company choose?
-
✓ B. Use AWS Global Accelerator to front the API and route client traffic over the AWS global network to the closest edge location with endpoint groups targeting the API in eu-central-1
Use AWS Global Accelerator to front the API and route client traffic over the AWS global network to the closest edge location with endpoint groups targeting the API in eu-central-1 is correct because it accelerates dynamic, non-cacheable API traffic by routing clients to the nearest AWS edge and then over the AWS backbone to the eu-central-1 endpoints without requiring duplicate Regional stacks.
Global Accelerator reduces long-haul internet latency and jitter by moving traffic onto the AWS global network at an edge location and keeping the single Region deployment in place. This approach improves performance for real time API calls and keeps operational and infrastructure costs lower than running full EC2 environments in multiple Regions.
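The setup is only a few API calls, sketched below with a placeholder ALB ARN for the existing eu-central-1 endpoint.

```python
import boto3

# Global Accelerator is a global service; its API endpoint is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="order-status-api", Enabled=True)["Accelerator"]
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group pointing at the existing ALB in eu-central-1 (ARN is a placeholder).
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-central-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-central-1:123456789012:loadbalancer/app/api/abc123",
        "Weight": 128,
    }],
)
```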
Create identical EC2-based API stacks in ap-southeast-1 and sa-east-1 and use Amazon Route 53 latency-based routing to direct users to the nearest Regional endpoint can reduce latency for distant users but it forces you to build and operate full environments in each Region which increases cost and complexity and therefore does not meet the requirement to keep costs low.
Provision AWS Direct Connect public VIFs from international customer networks to eu-central-1 and send API requests over those dedicated links is designed for predictable enterprise connectivity and is expensive and impractical to provision for many disparate consumer networks so it does not solve the global consumer latency problem effectively.
Put Amazon CloudFront in front of the API and enable aggressive caching with the CachingOptimized policy for dynamic responses is not suitable for near real time, non-cacheable API traffic because aggressive caching cannot preserve correctness for dynamic responses and therefore offers limited latency improvement compared with Global Accelerator.
When you need to speed up dynamic, non-cacheable API traffic to a single Region without multi-Region stacks choose AWS Global Accelerator and reserve CloudFront for cacheable content.
Which AWS service provides a high-performance POSIX compliant shared file system for a Linux high performance computing workload running across about 96 EC2 instances and offers seamless import and export to low-cost durable storage?
-
✓ C. Amazon FSx for Lustre linked to Amazon S3
The correct choice is Amazon FSx for Lustre linked to Amazon S3. This service provides a high performance POSIX compliant parallel file system that is designed for HPC workloads and it can transparently import data from S3 and export results back to S3 so compute nodes see a fast shared namespace while S3 handles long term storage.
Amazon FSx for Lustre delivers low latency and very high throughput for parallel I/O so it suits clusters of tens or hundreds of EC2 instances such as a 96-instance HPC job. The S3 data repository integration lets you stage large datasets from S3 into the Lustre namespace for fast processing and then push output back to S3 for cost effective retention and lifecycle management.
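A minimal boto3 sketch of a Lustre file system linked to an S3 data repository follows. The subnet ID, bucket paths, capacity, and deployment type are assumed example values.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch of a Lustre file system linked to an S3 data repository.
# Subnet ID, bucket names, capacity, and deployment type are placeholder assumptions.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=4800,                    # GiB; SCRATCH_2 sizes are 1200, 2400, or multiples of 2400
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-input-data",           # lazy-load objects from S3 into the file system
        "ExportPath": "s3://hpc-input-data/results",   # write results back to S3
    },
)
```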
The option Amazon EFS with EFS Infrequent Access is a general purpose POSIX file system, but it is not optimized for extreme parallel I/O at HPC scale and it does not provide native S3 import and export semantics in the same way as Lustre.
The option AWS DataSync with Amazon S3 Intelligent-Tiering is focused on efficient data transfer and object storage tiering, and it does not present a mountable POSIX shared filesystem to compute nodes so it is not suitable when applications require a directly mounted parallel file system.
The option Amazon FSx for ONTAP with Amazon S3 offers enterprise NAS features and can tier data to S3, but it does not provide the Lustre parallel semantics and the same level of HPC throughput that a Lustre filesystem provides for large scale parallel workloads.
When a scenario mentions Linux HPC and a mounted POSIX shared filesystem with the need to export to S3, choose the service built for parallel I/O and S3 integration such as Amazon FSx for Lustre.
A digital media company, Vireo Media, runs its web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. In the last day, automated traffic has surged beyond 240 requests per second, while typical users generate around 7 requests per second. You want a straightforward way to stop abusive sources from overwhelming the application without changing the application code. What should you implement?
-
✓ C. Configure AWS WAF on the ALB with a rate-based rule
The correct choice is Configure AWS WAF on the ALB with a rate-based rule. This option allows you to block or throttle abusive clients at the load balancer so your EC2 Auto Scaling group is not overwhelmed and you do not need to change application code.
WAF integrates natively with an Application Load Balancer and supports rate based rules that count requests over a five minute window and then block or limit offending IP addresses. Using WAF is a straightforward layer seven control that stops automated traffic patterns before they reach your instances.
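A hedged sketch of the rate-based rule is shown below. The 5,000-request limit, names, and ALB ARN are assumptions chosen to fit the scenario's traffic numbers rather than values from the question; typical users at about 7 requests per second stay well under the limit while the abusive sources at 240 requests per second are blocked.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="eu-west-1")  # REGIONAL scope for an ALB

# Block any single IP that exceeds 5,000 requests in the evaluation window.
acl = wafv2.create_web_acl(
    Name="alb-rate-limit",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 5000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "RateLimitPerIP"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "AlbRateLimitAcl"},
)["Summary"]

# Attach the web ACL to the existing ALB (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/vireo/abc123",
)
```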
AWS Shield Advanced provides managed DDoS protection and enhanced telemetry but it does not provide configurable per IP rate based rules for layer seven request throttling. Shield Advanced is useful for larger DDoS mitigation but it is not the granular request policing tool required here.
Configure sticky sessions on the Application Load Balancer only controls session affinity and target routing and it does not police request rates per client. Enabling stickiness can concentrate traffic on individual targets and can make load problems worse.
Amazon CloudFront can absorb and cache traffic at the edge and it can reduce origin load but by itself it does not enforce per IP rate limits. To get per client rate based blocking at the edge you would pair CloudFront with WAF.
For layer seven request floods without code changes use AWS WAF rate based rules on the ALB and add CloudFront with WAF when you need edge caching and protection.
Which serverless architecture options can expose DynamoDB data to a simple web dashboard while requiring minimal operational effort? (Choose 2)
-
✓ B. Amazon API Gateway REST direct integration with DynamoDB
-
✓ E. Amazon API Gateway invoking AWS Lambda to read DynamoDB
The best choices are Amazon API Gateway REST direct integration with DynamoDB and Amazon API Gateway invoking AWS Lambda to read DynamoDB. Both approaches are serverless and fully managed which minimizes operational effort while giving secure and scalable access to DynamoDB for a simple web dashboard.
With Amazon API Gateway REST direct integration with DynamoDB you can map HTTP requests to DynamoDB actions without provisioning servers or managing runtime environments which reduces operational overhead and simplifies the architecture for straightforward read patterns.
With Amazon API Gateway invoking AWS Lambda to read DynamoDB you can implement any necessary business logic, transformations, or authorization checks in Lambda while still avoiding server management because Lambda scales automatically and is fully managed.
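A minimal Lambda handler for that pattern might look like the sketch below; the table name and key attribute are placeholder assumptions.

```python
import json
import boto3

# Minimal Lambda handler behind API Gateway that reads items for the dashboard.
# The table name and the "pk" key attribute are placeholder assumptions.
table = boto3.resource("dynamodb").Table("dashboard-data")

def handler(event, context):
    item_id = event["pathParameters"]["id"]          # e.g. GET /items/{id}
    result = table.get_item(Key={"pk": item_id})
    return {
        "statusCode": 200 if "Item" in result else 404,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result.get("Item", {}), default=str),
    }
```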
Application Load Balancer to EC2 querying DynamoDB increases operational burden because it requires managing EC2 instances, scaling, patching, and capacity planning which contradicts the requirement for minimal operational effort.
Application Load Balancer targeting DynamoDB is not supported because ALB target groups accept instances, IP addresses, or Lambda functions and they cannot target a database directly.
Amazon CloudFront calling DynamoDB directly is invalid because CloudFront requires an HTTP or HTTPS origin and it cannot invoke DynamoDB as a backend data store directly.
When a question emphasizes least operational overhead choose managed serverless integrations such as API Gateway direct service integration or API Gateway with Lambda instead of EC2 or custom load balancer solutions.
PolarRay VFX recently moved approximately 85 TB of project footage and assets into Amazon S3 to lower storage costs. Its render farm still runs in the company’s on-premises facility and frequently reads large source files and writes intermediate frames during business hours. The team needs a solution that provides the on-premises render nodes with low-latency, file-based access to the data in S3 while keeping overall costs minimal. Which approach should they implement?
-
✓ B. Configure an Amazon S3 File Gateway to expose the S3 bucket as NFS or SMB shares to the render nodes
Configure an Amazon S3 File Gateway to expose the S3 bucket as NFS or SMB shares to the render nodes is the correct choice because it gives on premises render nodes file based access with a local cache while keeping the primary data in S3 to minimize storage cost.
The Configure an Amazon S3 File Gateway to expose the S3 bucket as NFS or SMB shares to the render nodes solution installs a Storage Gateway appliance on premises and maintains a local cache of frequently accessed objects for low latency reads and fast writes, and it presents standard NFS or SMB shares so render processes can use familiar file semantics without extensive application changes.
Deploy Amazon FSx for Lustre and load data from S3 with AWS DataSync, then mount it to the render farm over a VPN is a strong option for high performance compute inside AWS, but it adds network hops and VPN latency for on premises clients and increases operational complexity and cost without providing the local on premises cache that the render farm needs.
Use Mountpoint for Amazon S3 on the on-premises render hosts for direct access to the bucket is not appropriate because mountpoint solutions are intended for direct S3 access without a persistent local cache and they may not provide full POSIX compatibility for heavy render workflows, so latency and compatibility can suffer for on premises render nodes.
Amazon File Cache locates the cache in AWS rather than in the customer data center, so on premises clients must traverse the WAN which reduces performance and raises network and egress costs compared to a local file gateway cache.
Remember that when on premises compute needs file semantics and local caching against S3 you should choose S3 File Gateway for low latency and lower total cost of ownership.
How can you enforce a 60 day retention period for all Amazon RDS for MySQL backups including automated backups and manual snapshots with minimal cost and effort?
-
✓ C. Set RDS automated backup retention to 60 days; run a small script to delete manual snapshots older than 60 days
Set RDS automated backup retention to 60 days; run a small script to delete manual snapshots older than 60 days is the correct choice because it uses native RDS automated backup retention for point in time recovery and enforces the same 60 day retention on manual snapshots with a lightweight scheduled job so cost and operational effort remain low.
The Set RDS automated backup retention to 60 days; run a small script to delete manual snapshots older than 60 days approach relies on RDS automated backups to provide point in time recovery and to retain automated backups for 60 days. A simple scheduled script using the AWS CLI or a small automation runbook can list manual snapshots and delete those older than 60 days. This solution minimizes additional services and cross region transfer or storage charges while keeping operations simple.
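A cleanup job of that kind could be as small as the following sketch, which assumes it runs on a schedule such as an EventBridge rule. The 60 day cutoff mirrors the scenario and the snapshot filtering is illustrative.

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")
CUTOFF = datetime.now(timezone.utc) - timedelta(days=60)

# Page through manual snapshots and delete any that are older than the cutoff.
paginator = rds.get_paginator("describe_db_snapshots")
for page in paginator.paginate(SnapshotType="manual"):
    for snap in page["DBSnapshots"]:
        created = snap.get("SnapshotCreateTime")
        if created and created < CUTOFF and snap["Status"] == "available":
            print(f"Deleting {snap['DBSnapshotIdentifier']} created {created}")
            rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])
```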
The Enable cross-Region automated backups with 60-day retention option is not ideal because cross region copies incur additional storage and data transfer costs and it still does not manage manual snapshots that users may create.
The Use AWS Backup 60-day policy for automated backups and Lambda to purge manual snapshots option adds AWS Backup and a Lambda function which increases complexity and cost when RDS automated backup retention can already satisfy the automated backup requirement.
The Disable automated backups and rely on AWS Backup daily jobs with 60-day retention option removes native point in time recovery and typically increases operational overhead and expense by replacing built in capabilities with external scheduled jobs.
Prefer native service features when they meet the requirement and use a small scheduled script or automation to enforce lifecycle on manual snapshots.
A logistics company, Polar Route Logistics, must move about 300 TB of on-premises file data into Amazon S3. The data center already uses a 10 Gbps AWS Direct Connect link to AWS. The team wants a fully managed approach that keeps traffic private over the existing link, automates recurring synchronizations, and accelerates transfers into AWS storage services. What should the solutions architect propose?
-
✓ C. Use AWS DataSync with an on-premises agent and a VPC interface endpoint over Direct Connect
Use AWS DataSync with an on-premises agent and a VPC interface endpoint over Direct Connect is correct because it provides a fully managed, accelerated transfer method that can keep traffic private over the existing Direct Connect link.
DataSync uses a purpose built protocol to maximize throughput and it supports scheduled and incremental tasks so recurring synchronizations are automated and easy to monitor. The service uses an on premises agent to read files and a VPC interface endpoint to ensure traffic stays on the private Direct Connect path rather than the public internet.
Deploy an AWS Storage Gateway file gateway with a local cache and store the primary dataset in Amazon S3 is not ideal because File Gateway is designed to expose SMB and NFS access to S3 and provide local caching for hybrid access. It does not focus on accelerating large bulk migrations or orchestrating high throughput replication tasks.
Amazon S3 Transfer Acceleration is not suitable because it relies on the public AWS edge network and internet based uploads and cannot be constrained to the Direct Connect private path. It also does not provide built in, recurring synchronization workflows from on premises.
Use AWS DataSync with an on-premises agent and the public service endpoint violates the requirement for private connectivity because using the public endpoint would route data over the internet rather than over the Direct Connect link.
When you need to move large on premises datasets and keep traffic private prefer DataSync with a VPC interface endpoint so transfers use Direct Connect and tasks can be scheduled for recurring syncs.
How can an EC2 instance in a VPC upload data to Amazon S3 over the AWS private network without using public endpoints?
-
✓ B. Gateway VPC endpoint for S3 with route table and endpoint/bucket policy
Gateway VPC endpoint for S3 with route table and endpoint/bucket policy is correct because it routes S3 traffic over the AWS private network so EC2 instances do not traverse the public internet.
The Gateway VPC endpoint for S3 with route table and endpoint/bucket policy works by adding routes for S3 prefixes into your subnet route tables so traffic to S3 uses the AWS backbone. You can apply an endpoint policy or a bucket policy that validates aws:SourceVpce to restrict which endpoints or VPCs can access the bucket. This combination keeps transfers private and avoids routing data through a NAT gateway or an internet gateway to the public S3 endpoint.
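For reference, a gateway endpoint can be created with a couple of SDK calls. The VPC ID, route table ID, and bucket name below are placeholders.

```python
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for S3 and add routes to the given route tables.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123abcd"],
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-private-bucket/*",
        }],
    }),
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```

A bucket policy that checks the aws:SourceVpce condition key against the returned endpoint ID can then require requests to arrive only through that endpoint.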
Create an S3 access point alias is incorrect because access points focus on access management and custom hostnames and they do not by themselves create a private VPC path to S3. You can use access points with VPC restricted policies but the alias alone does not change network routing.
Enable S3 Transfer Acceleration is incorrect because transfer acceleration uses public edge endpoints to speed transfers across long distances and it does not keep traffic inside your VPC or on the AWS private network.
Use a NAT gateway and the public S3 endpoint is incorrect because NAT routes traffic to the internet so S3 access goes over public endpoints and not exclusively over the AWS backbone. NAT also incurs internet egress and does not provide the private connectivity required by the question.
For private EC2 to S3 transfers choose a gateway VPC endpoint and update subnet route tables and policies to restrict access.
An architecture firm operates a private web portal for employees. The portal is hosted on Amazon EC2 instances behind an Application Load Balancer, with an EC2 Auto Scaling group spanning multiple Availability Zones. Overnight the group scales down to 3 instances, then during business hours it rises to as many as 24 instances. Users report the site is sluggish at the start of the workday but performs normally later in the morning. What changes to the scaling approach will resolve the morning slowdown while keeping expenses low?
-
✓ C. Configure a target tracking scaling policy with a lower CPU utilization target and shorten the cooldown period
Configure a target tracking scaling policy with a lower CPU utilization target and shorten the cooldown period is correct because it causes the Auto Scaling group to add instances earlier as CPU climbs so the portal warms up before the main workday traffic arrives while still avoiding constant overprovisioning.
Target tracking adjusts capacity automatically as the metric approaches the target which produces smoother scaling than waiting for thresholds to be breached. A lower CPU target triggers scale out sooner and a shorter cooldown lets the group respond to successive increases more quickly while scale in still removes excess capacity after demand falls.
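A sketch of such a policy with the SDK is shown below. The group name, CPU target, warmup, and cooldown values are illustrative only.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Track average CPU at a lower target so scale-out starts earlier in the morning.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="portal-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    EstimatedInstanceWarmup=120,   # shorter warmup lets successive scale-outs happen sooner
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,       # lower target triggers scale-out earlier
        "DisableScaleIn": False,
    },
)

# Shorten the group's default cooldown, mirroring the option's wording.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="portal-web-asg",
    DefaultCooldown=120,
)
```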
Create a scheduled scaling action to set the Auto Scaling group desired capacity to 24 just before business hours can prevent the slow start but it may overprovision if morning load is lower than peak and that increases cost compared with demand driven scaling.
Create a scheduled scaling action to set both the minimum and maximum capacity to 24 just before business hours guarantees capacity but it is the most expensive option because it prevents any scale down while the schedule is active.
Use step scaling with a reduced CPU threshold and a shorter cooldown to scale out earlier can help but step scaling reacts when thresholds are exceeded which can be slower and more brittle than tracking a metric. AWS generally recommends target tracking for common, predictable load patterns because it is simpler to tune and self adjusting.
Use target tracking for most demand driven cases and reserve scheduled scaling for fixed known needs. Lower the CPU target and shorten cooldowns to preempt predictable morning spikes without running instances all night.
What method lets you parallelize 90 minute cron jobs that currently run on a single Amazon EC2 instance while requiring minimal changes and keeping operational overhead low?
-
✓ C. Bake an AMI and scale with an Auto Scaling group
Bake an AMI and scale with an Auto Scaling group is the correct choice because it clones the existing runtime and lets you run the same 90 minute cron jobs in parallel across multiple identical EC2 instances while requiring minimal changes and keeping operational overhead low.
Bake an AMI and scale with an Auto Scaling group preserves the current environment so you do not need to refactor scripts or repackage artifacts and you can use managed scaling to add instances when you need parallelism. This approach supports long running processes and relies on familiar EC2 tooling for image baking and group scaling which reduces operational complexity.
Run jobs with AWS Batch scheduled by Amazon EventBridge is powerful for batch workloads but it typically requires containerizing or packaging jobs and defining job queues and compute environments which adds setup and ongoing configuration compared to reusing an AMI.
Use AWS Lambda with Amazon EventBridge schedules is not suitable because Lambda functions are limited to 15 minutes which is far below the 90 minute job duration.
Use Amazon EKS CronJobs with EventBridge would require containerizing the jobs and operating an EKS control plane and worker nodes which introduces more operational overhead than necessary for simply parallelizing existing EC2 based cron jobs.
When a question stresses minimal changes and low operational overhead for long running jobs prefer reusing AMIs and Auto Scaling rather than moving to containers or serverless because of packaging work and runtime limits.
BlueOrbit Logistics operates a RESTful microservice on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Most API requests finish in about 150 ms, but one route that aggregates reports can run for 2 to 3 minutes, and these long calls are consuming web resources and slowing responses for other clients. What should the architect implement to lessen the impact of these prolonged requests on the rest of the API?
-
✓ B. Use Amazon SQS to offload the long-running work and process it asynchronously
Use Amazon SQS to offload the long-running work and process it asynchronously is the correct choice because it decouples the slow aggregation route from the synchronous web tier so API requests return quickly while background workers handle heavy processing.
Use Amazon SQS to offload the long-running work and process it asynchronously lets the API enqueue a job and respond immediately and worker instances or containers can pull messages from the queue and scale independently to complete report aggregation. This pattern prevents long running tasks from consuming web server threads or connections and reduces contention at the Application Load Balancer and Auto Scaling group level.
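The pattern might be sketched roughly as follows, with the queue URL and the aggregation function standing in as placeholders for the real application code.

```python
import json
import uuid

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/report-jobs"  # placeholder


def enqueue_report(request_params: dict) -> str:
    """Called by the API route: enqueue the job and return a job id right away."""
    job_id = str(uuid.uuid4())
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"jobId": job_id, "params": request_params}),
    )
    return job_id  # the client polls a status endpoint or is notified when done


def run_report_aggregation(params: dict) -> None:
    """Placeholder for the existing 2 to 3 minute aggregation logic."""


def worker_loop() -> None:
    """Runs on separate worker instances or containers that scale on queue depth."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            run_report_aggregation(job["params"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```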
Increase the Application Load Balancer idle timeout to several minutes is incorrect because increasing the timeout only permits longer active connections and it does not remove the heavy work from the request path or free web resources.
AWS Step Functions is not the best single answer for this scenario because it is intended for orchestrating workflows and state machines and it will not by itself decouple synchronous API requests unless you design an asynchronous invocation around it.
Migrate the EC2 instances to a Nitro-based family with enhanced networking is not the right remedy because improved network performance does not address the root cause which is the long execution time of the aggregation task and it will not stop requests from occupying web workers.
When one endpoint runs long think asynchronous patterns and prefer decoupling with a queue so the API can respond quickly while background workers scale independently.
An AWS data warehouse is connected to the corporate network via AWS Direct Connect. Average query results are 100 MB and each dashboard page without caching transfers 300 KB. Where should the business intelligence tool run to minimize data transfer egress costs?
-
✓ C. Host the BI tool in AWS in the same Region as the warehouse; users access via Direct Connect
Host the BI tool in AWS in the same Region as the warehouse, users access via Direct Connect is correct because the large 100 MB query results remain inside AWS and only the much smaller 300 KB rendered dashboard pages traverse out to users.
The key reason this Host the BI tool in AWS in the same Region as the warehouse, users access via Direct Connect choice minimizes cost is that the heavy data movement stays internal to AWS where there is no per query egress charge. Only the rendered pages leave AWS over Direct Connect and those bytes are orders of magnitude smaller. Direct Connect typically yields lower effective egress cost per gigabyte than public internet so sending small pages this way reduces total spend.
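A quick back of the envelope comparison makes the difference concrete. The assumed 1,000 queries per day is an illustrative figure, not something stated in the scenario.

```python
# Rough daily egress comparison for 1,000 dashboard queries (illustrative numbers).
queries_per_day = 1_000
query_result_mb = 100           # leaves AWS only if the BI tool runs on premises
dashboard_page_kb = 300         # leaves AWS when the BI tool runs in AWS

bi_on_premises_gb_out = queries_per_day * query_result_mb / 1024
bi_in_aws_gb_out = queries_per_day * dashboard_page_kb / 1024 / 1024

print(f"BI on premises: ~{bi_on_premises_gb_out:.1f} GB egress per day")
print(f"BI in AWS:      ~{bi_in_aws_gb_out:.2f} GB egress per day")
# Roughly 97.7 GB versus 0.29 GB, so far fewer bytes leave AWS when the tool runs in AWS.
```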
Host the BI tool on-premises, fetch results from AWS over Direct Connect is wrong because every 100 MB query result would have to egress AWS to on premises and that multiplies egress costs dramatically compared to keeping results inside AWS.
Host the BI tool in AWS, users access over the public internet is wrong because although only small pages leave AWS the public internet egress rates are generally higher than Direct Connect which increases cost for the same bytes out.
Host the BI tool in AWS, users access via Site-to-Site VPN is wrong because VPN traffic typically traverses the internet and does not capture the Direct Connect egress advantage and it can add overhead that makes the path more expensive than using Direct Connect for the same traffic volume.
Quick tip: identify which bytes actually leave AWS and compare their sizes and egress paths. Keep large transfers inside AWS and prefer Direct Connect over the public internet when egress cost is a factor.
Meridian Care, a healthcare startup, is planning a cross-Region disaster recovery strategy for its transactional workload. The team needs a relational database that can meet a Recovery Point Objective of 2 seconds and a Recovery Time Objective of 45 seconds across multiple AWS Regions. Which AWS solution should they implement?
-
✓ C. Amazon Aurora Global Database
The correct choice is Amazon Aurora Global Database. This option is the only relational AWS service purpose built for multi Region disaster recovery with near real time replication and rapid regional failover, and it meets the stated RPO and RTO targets.
Amazon Aurora Global Database typically sustains cross Region replication lag under a second and it allows fast promotion of a secondary Region so you can achieve an RPO of a couple of seconds and an RTO within the 45 second target for transactional workloads.
Amazon RDS Multi-AZ provides synchronous standby replicas for high availability but those standbys are in the same Region and they do not provide cross Region disaster recovery, so this option cannot meet the cross Region RPO and RTO requirement.
Amazon DynamoDB global tables offer global replication but DynamoDB is a NoSQL key value store and it is not a relational database, so it does not satisfy the requirement for a relational database solution.
Amazon RDS cross-Region read replica uses asynchronous replication and it can experience replication lag and a slower promotion process, and those characteristics make it unreliable for a 2 second RPO and a 45 second RTO.
When a question demands a relational database with sub-second RPO and sub-minute RTO across Regions select Amazon Aurora Global Database.
CSV files stored in Amazon S3 must be encrypted at rest and analysts need to run serverless SQL queries directly on those files. Which solution satisfies these requirements?
-
✓ C. Enable S3 SSE-KMS; query with Amazon Athena
The correct option is Enable S3 SSE-KMS; query with Amazon Athena. This option meets the requirement for encryption at rest using AWS KMS managed keys and it provides serverless SQL directly over CSV files stored in S3 so analysts can run queries without provisioning servers.
With Enable S3 SSE-KMS; query with Amazon Athena the S3 objects are encrypted with KMS keys and Athena can read and query those encrypted objects when the KMS key policy and IAM permissions allow access. Athena is fully serverless and billed per query so it supports ad hoc analysis on files in place without ETL or infrastructure management.
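Running such a query from the SDK might look like the sketch below, assuming the CSV files are already registered as a table in the Glue Data Catalog. The database, table, output bucket, and KMS key ARN are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Ad hoc SQL over CSV files in S3; results are written encrypted with SSE-KMS.
response = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM sales_csv GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",
        "EncryptionConfiguration": {
            "EncryptionOption": "SSE_KMS",
            "KmsKey": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        },
    },
)
print(response["QueryExecutionId"])
```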
Enable S3 SSE; query via Redshift Spectrum is not ideal because Spectrum depends on a Redshift cluster and adds a data warehouse layer and operational overhead compared with querying in place with Athena.
Encrypt S3 with KMS; use Amazon EMR Serverless for SQL can execute SQL through engines like Spark or Hive but it requires submitting and managing jobs and is more complex and often costlier for simple ad hoc queries than Athena.
Use S3 SSE; load into Aurora Serverless for SQL is unsuitable because it forces you to load and transform files into a relational database and maintain schemas which adds unnecessary ETL, storage, and management when the goal is to query CSVs directly in S3.
Focus on the words serverless and query directly on S3 to map to Athena and ensure encryption at rest maps to SSE-KMS.
A live sports analytics startup needs to boost the availability and latency of a worldwide real-time feed processor that communicates over UDP. They require rapid failover between Regions if an AWS Region becomes unavailable, and they plan to keep using their own external DNS provider. Which AWS service should they adopt to meet these needs?
-
✓ B. AWS Global Accelerator
AWS Global Accelerator is the correct choice because it offers static anycast IP addresses on the AWS global backbone, supports UDP, and enables rapid cross Region failover while allowing the startup to continue using its external DNS provider by pointing A or AAAA records to the accelerator’s static IPs.
The accelerator provides continuous health checks and shifts traffic to the nearest healthy Regional endpoint which reduces failover time and lowers latency for a worldwide real time feed processor. It supports both UDP and TCP which is necessary for applications that communicate over UDP and it uses the AWS global network rather than routing over the public internet.
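Provisioning that setup with the SDK might look roughly like the following, where the listener port and the Network Load Balancer endpoint ARN are placeholders. The static anycast addresses returned with the accelerator are what the external DNS provider's A records would point to.

```python
import boto3

# The Global Accelerator control plane lives in us-west-2 regardless of endpoint Regions.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="feed-ingest", IpAddressType="IPV4", Enabled=True
)
# accelerator["Accelerator"]["IpSets"] holds the static anycast IPs for external DNS.

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 9000, "ToPort": 9000}],   # placeholder feed port
)

# One endpoint group per Region; health checks drive fast cross-Region failover.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/feed-nlb/abc123",
        "Weight": 128,
    }],
)
```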
Amazon CloudFront is focused on HTTP and HTTPS acceleration and caching and it does not proxy arbitrary UDP traffic nor deliver deterministic UDP failover across Regions.
Amazon Route 53 provides DNS based routing and failover but the scenario requires keeping a third party DNS provider and DNS TTLs can delay failover compared to an anycast edge proxy.
AWS Elastic Load Balancing (ELB) is Regional and cannot provide global ingress or anycast IPs which means it cannot perform automatic inter Region failover without an additional global traffic layer.
Remember to check protocol support and static anycast IPs when the requirement is UDP plus fast global failover and external DNS.
How can you quickly obtain one-minute EC2 instance metrics with minimal maintenance during a sudden load surge?
-
✓ C. Enable EC2 Detailed Monitoring and view 1-minute metrics in CloudWatch
Enable EC2 Detailed Monitoring and view 1-minute metrics in CloudWatch is correct because it raises EC2 metric resolution from five minutes to one minute with a simple toggle and minimal ongoing maintenance.
Enable EC2 Detailed Monitoring and view 1-minute metrics in CloudWatch provides native instance metrics such as CPUUtilization, NetworkIn, and NetworkOut directly in CloudWatch and it can be enabled from the EC2 console or API without installing additional agents. This gives near real time visibility with little operational overhead compared to solutions that require log ingestion or heavy configuration.
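Turning this on programmatically is a single call, sketched below with a placeholder instance ID.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Enable 1-minute Detailed Monitoring on the busy instance (placeholder ID).
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# CPUUtilization is then available from CloudWatch at 60-second granularity.
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(minutes=30),
    EndTime=end,
    Period=60,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```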
Install the CloudWatch agent and analyze via Logs with Athena is unnecessary for the stated need because installing the agent and routing logs to Athena involves more setup and query overhead and it does not provide the simple one minute EC2 metric resolution that Detailed Monitoring gives out of the box.
CloudWatch Synthetics is intended for synthetic endpoint and API monitoring and it does not collect instance level EC2 performance metrics at 60 second intervals, so it will not meet the requirement.
Stream EC2 OS logs to OpenSearch and visualize in Dashboards relies on log ingestion and parsing and it does not natively produce minute by minute EC2 metrics, so it is slower to enable and more maintenance heavy than enabling Detailed Monitoring.
When a question mentions 1-minute metrics and minimal maintenance for EC2 prefer enabling Detailed Monitoring unless the exam asks for sub minute or custom metrics which require the CloudWatch agent or PutMetricData.
At NovaWell Health, five self-managed MySQL databases run on Amazon EC2 across two Availability Zones. The engineering team configures replication and scales instances manually as traffic fluctuates. The company needs the database tier to automatically add and remove compute capacity as needed while improving performance, scalability, and durability with minimal operational effort. What should they do?
-
✓ B. Migrate the MySQL workloads to Amazon Aurora Serverless v2 (Aurora MySQL)
The correct choice is Migrate the MySQL workloads to Amazon Aurora Serverless v2 (Aurora MySQL). This option preserves MySQL compatibility while providing on demand compute scaling, Multi AZ durability, automatic failover, and managed replication which reduces operational effort and improves performance and scalability.
Migrate the MySQL workloads to Amazon Aurora Serverless v2 (Aurora MySQL) delivers on demand capacity so the database tier can add and remove compute resources automatically as traffic changes. It is a managed service so backups, failover, and replication are handled by the service and the environment avoids the manual scaling and replication management that self managed MySQL on EC2 requires.
Shift the database layer to an EC2 Auto Scaling group and rebuild the environment before migrating the current MySQL data is incorrect because relational databases are stateful and cannot be treated like stateless web servers. Auto Scaling groups help with stateless compute but they do not eliminate the complexity of replication, backups, and consistent failover for database instances.
Consolidate the schemas into one MySQL instance and run on larger EC2 instances is incorrect because vertical scaling creates a single point of failure and does not provide the elastic capacity or managed high availability that the requirement asks for. Larger EC2 instances still require manual scaling and operational maintenance.
Migrate the databases to Amazon Aurora Serverless (Aurora PostgreSQL) is incorrect because it forces a cross engine migration from MySQL to PostgreSQL which requires schema and application changes. That approach is unnecessary when the goal is to retain MySQL compatibility and reduce operational effort.
Match the engine to avoid cross engine migrations and prefer Aurora Serverless v2 when you need automatic, fine grained compute scaling with managed durability for MySQL workloads.
Which approaches provide a managed SFTP endpoint into Amazon S3 with federated per-partner access control and minimal operational overhead? (Choose 2)
-
✓ B. AWS Transfer Family SFTP server backed by S3 with per-user IAM role mapping
-
✓ D. Integrate Transfer Family with an identity provider and enforce S3 bucket policies per prefix
The correct choices are AWS Transfer Family SFTP server backed by S3 with per-user IAM role mapping and Integrate Transfer Family with an identity provider and enforce S3 bucket policies per prefix. These options provide a fully managed SFTP endpoint into S3 and support federated per partner access control with minimal operational overhead.
AWS Transfer Family SFTP server backed by S3 with per-user IAM role mapping offers a managed SFTP interface that stores files directly in S3 so you avoid running and maintaining servers. You can map each user to a scoped IAM role so partners only see their assigned buckets or prefixes and you retain least privilege access controls.
Integrate Transfer Family with an identity provider and enforce S3 bucket policies per prefix lets you federate authentication to existing partner identities while combining IAM role mapping and S3 bucket policies to enforce access at the prefix level. This keeps operational effort low because AWS manages the endpoint and standard identity and policy mechanisms enforce per partner access.
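A minimal sketch of that setup with the SDK follows. The IAM role ARN and bucket prefix mapping are placeholders, and a federated setup would swap the service managed identity provider type for API_GATEWAY or AWS_LAMBDA so the identity provider response supplies each partner's role and home directory.

```python
import boto3

transfer = boto3.client("transfer")

# Managed SFTP endpoint that writes directly to S3 (values are placeholders).
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",   # API_GATEWAY or AWS_LAMBDA for federation
)

# Scope each partner to its own prefix with a dedicated IAM role and home directory.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-a",
    Role="arn:aws:iam::111122223333:role/partner-a-transfer-role",
    HomeDirectoryType="LOGICAL",
    HomeDirectoryMappings=[{"Entry": "/", "Target": "/example-partner-bucket/partner-a"}],
)
```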
AWS DataSync with an SFTP location to move files into S3 is incorrect because DataSync transfers data from an existing SFTP server and does not provide a managed SFTP endpoint for partners to connect to.
AWS Storage Gateway File Gateway for partners to upload via NFS/SMB is incorrect because File Gateway exposes NFS and SMB protocols and it does not offer an SFTP interface for partners.
Self-managed SFTP on EC2 that syncs uploads to S3 is incorrect because running your own SFTP servers requires ongoing operations such as patching, scaling, and high availability, which increases operational overhead compared with a managed service.
Pick the option that is fully managed and writes directly to S3 while supporting identity federation and per user IAM role mapping.
Norhaven Logistics operates smart card readers at every secured doorway across 24 distribution hubs. Each tap causes the reader to send a small JSON message over HTTPS identifying the user and entry point. A solutions architect must design a highly available ingestion and processing path that scales and stores the processed outcomes in a durable data store for later reporting. Which architecture should be proposed?
-
✓ C. Expose an HTTPS API with Amazon API Gateway that invokes AWS Lambda to process events and persist results in Amazon DynamoDB
Expose an HTTPS API with Amazon API Gateway that invokes AWS Lambda to process events and persist results in Amazon DynamoDB is correct because it provides a managed HTTPS entry point that scales automatically and integrates directly with serverless processing and durable storage.
API Gateway gives a highly available HTTPS front door with built in throttling and authentication and it removes the need to manage servers. Lambda enables event driven processing so each reader tap can be validated, transformed, and recorded without provisioning compute. DynamoDB delivers multi AZ durability and low latency for the many small writes and later reporting that this ingestion pattern requires.
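As an illustration, a handler for the ingestion path might look like this. The table name and key attributes are assumptions, not details given in the question.

```python
import json
import os
from datetime import datetime, timezone

import boto3

# Assumed table name and attribute names, for illustration only.
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "door-events"))


def handler(event, context):
    """API Gateway proxy integration: record one card reader tap."""
    body = json.loads(event.get("body") or "{}")

    table.put_item(Item={
        "readerId": body["readerId"],                        # partition key (assumed)
        "tappedAt": datetime.now(timezone.utc).isoformat(),  # sort key (assumed)
        "userId": body["userId"],
        "entryPoint": body.get("entryPoint", "unknown"),
    })

    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```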
Create a single Amazon EC2 instance to host an HTTPS endpoint that processes messages and writes results to Amazon S3 is not highly available because a single instance is a single point of failure and it forces you to manage scaling patching and recovery.
Use Amazon Route 53 to direct device requests straight to an AWS Lambda function that stores data in Amazon DynamoDB is incorrect because Route 53 is a DNS service and it cannot directly invoke Lambda. You need a routable HTTPS endpoint such as API Gateway or an Application Load Balancer to receive device POSTs.
Place an Amazon CloudFront distribution in front and use Lambda@Edge to handle POST requests and write to Amazon DynamoDB is a poor fit because CloudFront requires an origin and Lambda@Edge has execution and networking constraints that make it unsuitable for high volume write APIs. Lambda@Edge limits and the need for an origin complicate an ingest pipeline compared with a native API Gateway entry point.
For device to cloud HTTPS ingestion favor API Gateway + Lambda + DynamoDB for automatic scaling, durability, and low operational overhead.
A Linux application will scale across multiple EC2 instances that must read and write the same dataset which currently resides on a 3 TB EBS gp3 volume, and which service provides a highly available POSIX shared file system with minimal application changes?
-
✓ B. Amazon EFS
Amazon EFS is correct because it provides a managed, highly available POSIX compliant NFS file system that multiple EC2 instances can mount concurrently across Availability Zones with minimal or no application changes.
Amazon EFS scales elastically for capacity and throughput and offers multi Availability Zone durability and concurrent mounts so migrating from a single 3 TB EBS volume to an Auto Scaling group does not require rewriting application file system logic. It preserves POSIX semantics including standard file locking and permissions which lets common Linux applications operate without major modification.
Amazon FSx for Lustre is designed for high performance computing and for workloads that need very high throughput or tight integration with S3 backed datasets. It is not aimed at general purpose, durable shared POSIX storage for typical applications that need a simple, cost conscious multi AZ solution.
EBS Multi-Attach (io2) can attach a single block volume to multiple instances but it is constrained to a single Availability Zone and it requires cluster aware coordination to avoid corruption. That adds operational complexity and it does not satisfy multi AZ high availability requirements.
Amazon S3 with s3fs exposes object storage via a FUSE mount and it does not provide full POSIX semantics, reliable file locking, or consistent POSIX performance. It is not a drop in replacement for a POSIX shared file system used by multiple EC2 instances.
When multiple EC2 instances need a POSIX shared file system across Availability Zones remember that Amazon EFS provides concurrent mounts, POSIX semantics, and built in high availability so it is the minimal change solution.
A geospatial mapping startup stores regulated datasets in Amazon S3 across more than 350 buckets to keep data isolated by project. The team discovered that bucket lifecycle rules are inconsistent and difficult to manage at scale, leading to increased storage expenses. Which approach will lower S3 costs while requiring minimal ongoing effort from the IT staff?
-
✓ B. Amazon S3 Intelligent-Tiering storage class
The correct choice is Amazon S3 Intelligent-Tiering storage class because it automatically optimizes costs by monitoring object access and moving objects between access tiers without manual lifecycle rules and without affecting performance.
Amazon S3 Intelligent-Tiering storage class monitors access patterns and shifts objects into frequent and infrequent access tiers and into optional archive tiers to lower storage costs while charging a small monitoring fee. It removes the need for per-bucket lifecycle configuration which makes it well suited for hundreds of buckets and for datasets with unpredictable or variable access.
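By way of illustration, new uploads can target the class directly and an existing object can be transitioned with an in-place copy. Bucket and key names are placeholders, and objects over 5 GB would need a multipart copy instead.

```python
import boto3

s3 = boto3.client("s3")

# New uploads can land in Intelligent-Tiering directly.
s3.put_object(
    Bucket="example-project-bucket",
    Key="surveys/site-042/lidar.laz",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# An existing object can be moved with a self-copy that changes only the storage class.
s3.copy_object(
    Bucket="example-project-bucket",
    Key="surveys/site-042/lidar.laz",
    CopySource={"Bucket": "example-project-bucket", "Key": "surveys/site-042/lidar.laz"},
    StorageClass="INTELLIGENT_TIERING",
    MetadataDirective="COPY",
)
```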
Amazon S3 One Zone-Infrequent Access lowers cost but stores data in a single Availability Zone which reduces resilience and is not ideal for regulated or business critical datasets. It also does not remove the need for lifecycle planning when access patterns change.
Amazon S3 on Outposts is intended for local data residency and hybrid workloads and it adds hardware and operational overhead. It does not address regional S3 cost optimization and it increases management responsibilities.
Amazon S3 Glacier Deep Archive offers the lowest storage price but retrievals can take hours and retrieval workflows usually depend on lifecycle rules and planning. That makes it a poor fit when the goal is minimal ongoing IT involvement and when access may be unpredictable.
When access patterns are unknown and you need to minimize operational work choose Amazon S3 Intelligent-Tiering so AWS automatically moves objects and you avoid per-bucket lifecycle maintenance.
Which approach most effectively reduces read latency for a read-heavy Amazon RDS for MySQL workload during traffic spikes while requiring minimal changes?
-
✓ B. Add Amazon ElastiCache to cache hot reads
The correct choice is Add Amazon ElastiCache to cache hot reads. Using an in-memory cache such as ElastiCache offloads frequent reads from the database and reduces read latency during traffic spikes while requiring relatively small application changes.
An in-memory cache like ElastiCache serves hot data from memory which is orders of magnitude faster than disk based queries. You can implement cache patterns such as cache-aside or read-through to minimize code changes and tune TTL and eviction to balance freshness and cache hit rate. This approach protects RDS from read amplification during sudden surges and provides predictable low latency for repeated reads.
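A cache-aside helper could be as small as the sketch below. The ElastiCache endpoint, TTL, and loader function are placeholders around whatever RDS query the application already runs.

```python
import json
from typing import Callable

import redis  # redis-py client pointed at the ElastiCache endpoint (placeholder host)

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)
TTL_SECONDS = 60  # short TTL balances freshness against cache hit rate


def cached_read(key: str, load_from_rds: Callable[[], dict]) -> dict:
    """Cache-aside: serve from Redis when possible, otherwise query RDS and populate."""
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)

    row = load_from_rds()                      # the existing RDS for MySQL query goes here
    cache.setex(key, TTL_SECONDS, json.dumps(row, default=str))
    return row


# Example usage with a placeholder loader:
# product = cached_read("product:42", lambda: query_rds_for_product(42))
```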
Amazon RDS Proxy is focused on connection pooling and improving application resilience during failover and connection storms and it does not reduce the execution latency of individual read queries.
Amazon RDS MySQL read replicas with app read/write split can increase read capacity but it usually requires application changes to route reads and writes and replicas can lag during spikes which may return stale data.
Scale up RDS instance and enable Multi-AZ can increase capacity and improve availability but Multi-AZ is for high availability rather than read scaling and vertical scaling can be slow or costly when responding to sudden read spikes.
For sudden read heavy spikes choose ElastiCache for hot data when you need minimal code change and the lowest read latency.
A media analytics firm has operated only in eu-central-1 and stores objects in Amazon S3 encrypted with SSE-KMS. To strengthen security and improve disaster recovery, the team wants an encrypted copy in eu-west-3. The security policy requires that objects be encrypted and decrypted with the same key material and key ID in both Regions. What is the most appropriate solution?
-
✓ D. Create a new S3 bucket in eu-central-1 with SSE-KMS using a multi-Region KMS key, configure replication to a bucket in eu-west-3 that uses the related replica key, and copy current data into the new source bucket
The correct answer is Create a new S3 bucket in eu-central-1 with SSE-KMS using a multi-Region KMS key, configure replication to a bucket in eu-west-3 that uses the related replica key, and copy current data into the new source bucket. This option uses AWS KMS multi-Region keys so the same key material and key ID are available in both Regions which meets the security policy requirement for identical key material.
Multi-Region KMS keys provide a primary key and a related replica key that share key material and a consistent key identity across Regions. Creating a new source bucket encrypted with the multi-Region key and copying existing objects into that bucket ensures historical objects are re-encrypted under the multi-Region key so Amazon S3 replication can replicate encrypted objects to the eu-west-3 bucket that uses the related replica key. This gives a native, automated replication path and preserves the requirement that encryption and decryption use the same key material and key ID.
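Creating the key pair is straightforward with the SDK, as sketched below. The description is illustrative, and the replica inherits the same mrk- prefixed key ID and key material as the primary.

```python
import boto3

kms_primary = boto3.client("kms", region_name="eu-central-1")

# Create a multi-Region primary key in eu-central-1.
primary = kms_primary.create_key(
    MultiRegion=True,
    Description="Multi-Region key for replicated regulated S3 data",
)
key_id = primary["KeyMetadata"]["KeyId"]   # same key ID is used by the replica

# Replicate the key into eu-west-3 so the replica shares key material and key ID.
kms_primary.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-3")
```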
Enable replication on the existing bucket in eu-central-1 to a bucket in eu-west-3 and share the current AWS KMS key from eu-central-1 with eu-west-3 is not possible because AWS KMS keys are Regional and you cannot share a single-Region key across Regions. You must use a separate regional key or a multi-Region key pair for cross-Region decryption.
Use an Amazon EventBridge schedule to trigger an AWS Lambda function daily that copies objects from eu-central-1 to eu-west-3 with permissions to encrypt and decrypt using the KMS key is operationally heavier and has a worse recovery point objective than native S3 replication. This pattern also does not automatically satisfy the requirement for identical key material and key ID across Regions without creating and managing multi-Region keys and re-encrypting objects, which defeats the simplicity of the solution.
Convert the existing single-Region AWS KMS key to a multi-Region key and use Amazon S3 Batch Replication to backfill historical objects to eu-west-3 is invalid because you cannot convert an existing single-Region KMS key into a multi-Region key. The supported approach is to create a new multi-Region key and re-encrypt objects so replication can use the related replica key.
When cross-Region S3 replication must preserve identical key material and key ID use AWS KMS multi-Region keys and re-encrypt existing objects so native S3 replication can use the related replica key.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
