Top 50+ AWS Architect Interview Questions In 2023

Why AWS Architect Interview Questions?

The AWS Solution Architect role: with regard to AWS, a Solution Architect designs and defines AWS architecture for existing systems, migrates them to cloud architectures, and develops technical road-maps for future AWS cloud implementations. So, through this AWS Architect interview questions article, I will bring you the top and most frequently asked AWS interview questions. Gain proficiency in designing, planning, and scaling cloud implementations with the AWS Masters Program.

1. What is Cloud Computing? Can you talk about and compare any two popular Cloud Service Providers?

Cloud computing refers to the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (the cloud). Companies offering these computing services are known as cloud providers, and they typically charge users for the services they consume on a pay-as-you-go basis.

There are many cloud service providers to choose from, each with its own unique set of features and services. Two of the most popular cloud service providers are Amazon Web Services (AWS) and Microsoft Azure.

Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon. It offers a wide range of services including computing, storage, networking, database, analytics, machine learning, security, and application development. AWS has a large customer base and is the market leader in the cloud computing industry.

Microsoft Azure is a cloud computing platform and infrastructure created by Microsoft for building, deploying, and managing applications and services through a global network of Microsoft-managed data centers. It offers a range of services including computing, storage, networking, database, analytics, machine learning, the internet of things, and many others. Azure is known for its strong integration with other Microsoft products and technologies and is a popular choice for organizations that use a lot of Microsoft products.

Both AWS and Azure offer a wide range of services and are suitable for a variety of use cases. Choosing the right cloud service provider will depend on your specific needs and requirements.

2. Try the AWS Scenario-based interview question. I have some Private Servers on my premises, also I have distributed some of my workloads on the Public Cloud, what is this Architecture called?

A. Virtual Private Network

B. Private Cloud

C. Virtual Private Cloud

D. Hybrid Cloud

Answer D.

Explanation: This type of architecture would be a hybrid cloud. Why? Because we are using both the public cloud and your on-premises servers, i.e., the private cloud. To make this hybrid architecture easy to use, wouldn’t it be better if your private and public clouds were all on the same (virtual) network? This is established by including your public cloud servers in a virtual private cloud and connecting this virtual cloud with your on-premises servers using a VPN (Virtual Private Network).

Learn to design, develop, and manage a robust, secure, and highly available cloud-based solution for your organization’s needs with the Google Cloud Platform Course.

Amazon EC2 Interview Questions

3. How do you choose an Availability Zone?

An Availability Zone (AZ) consists of one or more physically separate data centers within a region. Regions are independent geographic areas, such as US East (N. Virginia) or Asia Pacific (Mumbai), and each region is made up of multiple Availability Zones.

Let’s understand this through an example, considering there’s a company that has a user base in India as well as in the US.

Let us see how we will choose the region for this use case :

(Image: region comparison for the example, showing pricing and latency for Mumbai vs. North Virginia.)

When choosing an availability zone, there are several factors to consider:

  1. Location: Depending on the location of your users and the data center where your application is hosted, you may want to choose an availability zone that is closer to them in order to reduce latency.
  2. Redundancy: It’s important to consider the level of redundancy offered by the availability zone. This includes factors such as power, networking, and cooling infrastructure, as well as the number of data centers within the zone.
  3. Cost: The cost of using an availability zone may vary depending on the region and the services you are using. It’s important to consider your budget and choose an availability zone that meets your needs at a price that is within your budget.
  4. Services: Different availability zones may offer different services, such as specific types of instances or storage options. Make sure to choose an availability zone that offers the services you need.
  5. Compliance: If you have specific compliance requirements, such as data sovereignty or industry-specific regulations, you may need to choose an availability zone that meets those requirements.

It’s also worth noting that many cloud service providers, such as Amazon Web Services (AWS) and Microsoft Azure, offer tools and features to help you choose the right availability zone for your needs. These tools can provide information on the location, cost, and availability of different availability zones, and help you make an informed decision.
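
As a quick illustration, here is a minimal Boto3 sketch that lists the Availability Zones visible to your account in a given region, which is a common first step when deciding where to place resources. The region name is an assumption; substitute your own.

import boto3

# List the Availability Zones available to this account in a chosen region.
ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region: Mumbai
response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], zone["ZoneId"], zone["State"])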

4. Here is a scenario-based interview question. You have a video transcoding application. The videos are processed according to a queue; if the processing of a video is interrupted on one instance, it is resumed on another instance. Currently there is a huge backlog of videos that need to be processed, so you need to add more instances, but you need these instances only until the backlog is reduced. What would be an efficient way to do this?

In this scenario, using an autoscaling group with a scale-out policy based on the size of the queue would be an efficient way to add more instances as needed to process the video backlog.

Autoscaling groups allow you to automatically increase or decrease the number of instances in a group based on certain triggers, such as the size of a queue or the CPU utilization of the instances. By setting up an autoscaling group with a scale-out policy based on the size of the queue, you can automatically add more instances as needed to process the backlog of videos.

Once the backlog has been reduced and the queue size has returned to a normal level, the autoscaling group can automatically scale down the number of instances to the minimum required to meet your workload needs. This allows you to efficiently use resources and only pay for the instances you need, while also ensuring that your workload is processed in a timely manner.

In addition to using an autoscaling group, there are a few other steps you can take to ensure efficient processing of the video backlog:

  1. Use a distributed processing system, such as a message queue, to distribute the workload across multiple instances.
  2. Use a load balancer to distribute the incoming video workload across multiple instances.
  3. Optimize the processing time of each instance by using the most efficient instance types and configurations.
  4. Monitor the progress of the backlog and the utilization of the instances to ensure that the workload is being processed efficiently.

You should use On-Demand instances for this. Why? First, the workload has to be processed now, meaning it is urgent. Second, you don’t need the instances once the backlog is cleared, so Reserved Instances are out of the picture. And since the work is urgent, you cannot stop work on an instance just because the Spot price spiked, so Spot Instances should not be used either. Hence, On-Demand instances are the right choice in this case.
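
To make the queue-driven scale-out concrete, here is a hedged Boto3 sketch: a step-scaling policy on an Auto Scaling group is triggered by a CloudWatch alarm on the SQS backlog. The group name "video-transcoder-asg", queue name "video-jobs", and threshold are all hypothetical and would be tuned to your workload.

import boto3

cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

# Step-scaling policy that adds capacity when invoked (hypothetical group name).
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="video-transcoder-asg",
    PolicyName="scale-out-on-backlog",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

# Alarm on the SQS backlog that invokes the scaling policy.
cloudwatch.put_metric_alarm(
    AlarmName="transcode-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "video-jobs"}],  # hypothetical queue
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)

A matching scale-in alarm on a low queue depth would then remove the extra instances once the backlog clears.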

5. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way.

Which of the following will meet your requirements?

A. Spot Instances

B. Reserved instances

C. Dedicated instances

D. On-Demand instances

Answer: A

Explanation: Since the work we are addressing here is not continuous, a Reserved Instance would sit idle at times, and the same goes for On-Demand instances. Also, it does not make sense to launch an On-Demand instance whenever work comes up, since it is expensive. Hence, Spot Instances are the right fit because of their low rates and the absence of long-term commitments.

6. How are stopping and terminating an instance different from each other?

Stopping and terminating an Amazon Elastic Compute Cloud (Amazon EC2) instance are two different actions that can be taken on an instance.

Stopping an instance means that the instance is shut down, while the data on its attached Amazon Elastic Block Store (Amazon EBS) volumes is preserved (data on instance store volumes is lost). The instance can be started again at a later time. While an instance is stopped, you are charged for its EBS volumes, but not for the instance itself.

Terminating an instance means that the instance is shut down permanently and cannot be restarted. Any attached EBS volumes that have the “delete on termination” flag set are deleted along with it; the AMI the instance was launched from is not affected. Once an instance is terminated, you are no longer charged for the instance, and charges stop for any volumes that were deleted with it.

In general, stopping an instance is useful if you need to temporarily halt an instance for maintenance or cost optimization purposes, and you plan to restart the instance at a later time. Terminating an instance is useful if you no longer need the instance and want to permanently delete it and release the resources it was using.
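
For reference, a minimal Boto3 sketch of the two operations (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop: compute billing stops, EBS volumes (and their charges) remain, and the
# instance can be started again later.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.start_instances(InstanceIds=[instance_id])

# Terminate: the instance is gone for good; volumes flagged delete-on-termination
# are removed along with it.
ec2.terminate_instances(InstanceIds=[instance_id])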

7. What does the following command do with respect to the Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

A. Groups the user created security groups into a new group for easy access.

B. Creates a new security group for use with your account.

C. Creates a new group inside the security group.

D. Creates a new rule inside the security group.

Answer B.

Explanation: A security group acts like a firewall: it controls the traffic in and out of your instance (in AWS terms, the inbound and outbound traffic). The command mentioned is pretty straightforward: it says create a security group, and that is what it does. Once your security group is created, you can add different rules to it. For example, if you have an RDS instance, to access it you have to add the public IP address of the machine from which you want to access the instance to the RDS instance’s security group.
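
The old `ec2-create-group` CLI tool has long been superseded; as a rough modern equivalent, here is a Boto3 sketch that creates a security group and adds one inbound rule. The group name, VPC ID, and port are assumptions.

import boto3

ec2 = boto3.client("ec2")

# Create the security group in a VPC (hypothetical VPC ID).
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS",
    VpcId="vpc-0abc12345678def90",
)

# Add an inbound rule, e.g. HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)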

8. If I want my instance to run on a single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?

A. Dedicated

B. Isolated

C. One

D. Reserved

Answer A.

Explanation: The Instance tenancy attribute should be set to Dedicated Instance. The rest of the values are invalid.

To run your Amazon Elastic Compute Cloud (EC2) instance on a single-tenant hardware, you need to set the instance’s tenancy attribute to “dedicated”.

To do this, you can specify the “tenancy” parameter when launching the instance using the AWS Management Console, the AWS CLI, or an API. The value for the “tenancy” parameter should be set to “dedicated”.

For example, if you are launching the instance using the AWS Management Console, you can specify the tenancy by selecting the “Dedicated Instance” option in the “Tenancy” dropdown menu under the “Instance Details” section of the “Step 3: Configure Instance Details” page of the “Launch Instance” wizard.

Keep in mind that instances launched with a tenancy of “dedicated” are physically isolated at the host hardware level from instances that belong to other accounts. This means that your instance will run on a host that is dedicated to running only your instances, and no other instances from other customers will be running on the same host. This can be useful if you have compliance or regulatory requirements that mandate that your instances be run on single-tenant hardware. However, it also means that you will be charged a higher hourly rate for the instance compared to an instance with a tenancy of “default”.
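
For illustration, a minimal Boto3 sketch of launching an instance with dedicated tenancy (the AMI ID and instance type are placeholders):

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},  # run on single-tenant hardware
)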

9. When will you incur Costs with an Elastic IP Address (EIP)?

A. When an EIP is allocated.

B. When it is allocated and associated with a running instance.

C. When it is allocated and associated with a stopped instance.

D. Costs are incurred regardless of whether the EIP is associated with a running instance.

Answer C.

Explanation: You are not charged if only one Elastic IP address is attached to your running instance. But you do get charged under the following conditions:

  • When you use more than one Elastic IPs with your instance.
  • When your Elastic IP is attached to a stopped instance.
  • When your Elastic IP is not attached to any instance.

To summarize, under this model you will incur costs for an Elastic IP Address (EIP) in the following situations:

  • When the EIP is allocated but not associated with any instance.
  • When the EIP is associated with a stopped instance.
  • When more than one EIP is associated with a single running instance (only the first one is free).

You are not charged for a single EIP that is associated with a running instance.

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. They allow you to assign a static IP address to a resource in Amazon Elastic Compute Cloud (EC2) or Amazon Virtual Private Cloud (VPC) that can be remapped to any instance in your account. EIPs can be useful for maintaining a consistent DNS name for a resource that is subject to change, such as a server that is behind an Elastic Load Balancer (ELB) or an instance that is part of an Auto Scaling group.

10. How is a Spot instance different from an On-Demand instance or Reserved Instance?

First of all, let’s understand that Spot Instances, On-Demand Instances, and Reserved Instances are all pricing models. Moving along, Spot Instances give customers the ability to purchase compute capacity with no upfront commitment, at hourly rates that are usually lower than the On-Demand rate in the region. Spot Instances work like bidding, and the bidding price is called the Spot Price.

The Spot Price fluctuates based on the supply of and demand for instances, but customers will never pay more than the maximum price they have specified. If the Spot Price moves higher than a customer’s maximum price, the customer’s EC2 instance is shut down automatically. The reverse is not true: if the Spot Price comes down again, the EC2 instance is not launched automatically; one has to do that manually.

In Spot and On-demand instances, there is no commitment for the duration from the user side, however in reserved instances, one has to stick to the time period that he has chosen.
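
As an illustration, here is a hedged Boto3 sketch that requests a Spot Instance with a maximum price through the launch API’s market options. The AMI ID, instance type, and price are all assumptions.

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",           # the most you are willing to pay per hour
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)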

11. How to use the processor state control feature available on the  c4.8xlarge instance?

The processor state control consists of 2 states:

  • The C state – sleep states varying from C0 to C6, with C6 being the deepest sleep state for a core
  • The P state – performance states, with P0 being the highest and P15 the lowest possible frequency

Now, why the C state and P state? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores sit on the same processor, the temperature must be kept at an optimal level so that all the cores can perform at their best.

Now how will these states help with that? If a core is put into a sleep state, it reduces the overall temperature of the processor, and hence the other cores can perform better. The same can be coordinated across cores, so that the processor boosts as many cores as it can by putting other cores to sleep at the right time, and thus gets an overall performance boost.

Concluding, the C and P states can be customized in some EC2 instances like the c4.8xlarge instance, and thus you can customize the processor according to your workload.

12. Are the Reserved Instances available for Multi-AZ Deployments?

Yes, Reserved Instances are available for Multi-AZ deployments.

A. Multi-AZ Deployments are only available for Cluster Compute instances types

B. Available for all instance types

C. Only available for M3 instance types

D. Not Available for Reserved Instances

Answer B.

Explanation: Reserved Instances are a pricing model, which is available for all instance types in EC2.

EC2 Reserved Instances are a pricing model that lets you commit to a one- or three-year term in exchange for a discounted hourly rate. When you purchase a Reserved Instance, you choose the instance type, Availability Zone (or regional scope), and tenancy that best fit your needs.

Multi-AZ deployments are a feature of Amazon RDS that provide high availability by maintaining a synchronous standby replica of your database in a different Availability Zone within the region; if the primary instance or its Availability Zone fails, RDS fails over to the standby automatically. RDS Reserved Instances can be purchased with the Multi-AZ deployment option, so the discounted rate covers both the primary and the standby instance.

13. What kind of network performance parameters can you expect when you launch instances in a cluster placement group?

When you launch instances in a cluster placement group, you can expect to achieve low network latency and high network throughput.

A cluster placement group is a logical grouping of instances within a single Availability Zone. When you launch instances in a cluster placement group, they are placed in close proximity to each other, providing low-latency, high-bandwidth networking to support high-performance computing (HPC) workloads.

To achieve optimal performance, it’s important to ensure that the instances in a cluster placement group are the same instance type and are placed in the same Availability Zone. Additionally, all instances in the placement group must be in a VPC that has at least one subnet in the same Availability Zone.

The specific network performance parameters you can expect will depend on the instance type and size that you choose. For example, some instance types, such as the C5, M5, and R5 instances, have higher network performance than others. You can use the AWS Documentation to review the network performance characteristics of different instance types to help you choose the best one for your needs.

It’s important to note that cluster placement groups are not supported for all instance types and are not available in all regions. You should review the AWS documentation for a list of supported instance types and regions before launching instances in a cluster placement group.
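
A minimal Boto3 sketch of creating a cluster placement group and launching instances into it (the AMI ID, instance type, and count are assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create the placement group with the 'cluster' strategy for low-latency networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch identical instances into the group within a single Availability Zone.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # hypothetical AMI
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)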

14. To deploy a 4-node cluster of Hadoop in AWS which instance type can be used?

First, let’s understand what actually happens in a Hadoop cluster. A Hadoop cluster follows a master-slave architecture: the master node coordinates the cluster (it runs services such as the NameNode and ResourceManager), while the slave nodes store the data as DataNodes and run the processing tasks. Since the storage happens on the slaves, higher-capacity disks are recommended there, and since the master coordinates the whole cluster, more RAM and a better CPU help on that node. You can therefore select the configuration of each machine depending on your workload.

For example, in this case a compute-optimized instance such as c4.8xlarge could be used for the master machine, whereas storage-optimized instances such as i2.xlarge could be used for the slave machines. If you don’t want to deal with configuring your instances and installing the Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) cluster, which automatically configures the servers for you. You dump the data to be processed into S3, and EMR picks it up from there, processes it, and writes the results back into S3.

15. Where do you think an AMI fits, when you are designing an architecture for a solution?

AMIs (Amazon Machine Images) are like templates of virtual machines, and an instance is derived from an AMI. AWS offers pre-baked AMIs that you can choose while launching an instance; some AMIs are not free and can be bought from the AWS Marketplace. You can also choose to create your own custom AMI, which can help you save space and cost on AWS.

For example, if you don’t need a set of software on your installation, you can customize your AMI to do that. This makes it cost-efficient since you are removing unwanted things.

Coming back to the region-selection example from question 3: the regions to choose between are Mumbai and North Virginia. Let us first compare the pricing: you have hourly prices, which can be converted to a per-month figure, and here North Virginia emerges as the winner. But pricing cannot be the only parameter to consider; performance should also be kept in mind, so let’s look at latency as well. Latency is basically the time a server takes to respond to your requests, i.e., the response time. North Virginia wins again!

So, concluding, North Virginia should be chosen for this use case.

16. Is one Elastic IP address enough for every instance that I have running?

In most cases, one Elastic IP (EIP) address is sufficient for every instance that you have running.

An Elastic IP address is a static IP address designed for dynamic cloud computing. It allows you to assign a static IP address to a resource in Amazon Elastic Compute Cloud (EC2) or Amazon Virtual Private Cloud (VPC) that can be remapped to any instance in your account. EIPs can be useful for maintaining a consistent DNS name for a resource that is subject to change, such as a server that is behind an Elastic Load Balancer (ELB) or an instance that is part of an Auto Scaling group.

Each Amazon Web Services (AWS) account is allocated a certain number of EIPs, and you can allocate additional EIPs as needed. In most cases, one EIP is sufficient for each instance that you have running, as long as the instance is not being used as a load balancer or as part of an Auto Scaling group.

There are some cases where you may need more than one EIP per instance. For example, if you are running an application that requires multiple network interfaces, or if you are running multiple applications on a single instance that each require their own EIP, you may need to allocate additional EIPs.

It’s important to note that there are limits to the number of EIPs that you can allocate per AWS account, and you may incur additional charges for allocating and using EIPs. You should review the AWS documentation for more information on EIP limits and pricing.
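
For reference, a minimal Boto3 sketch of allocating an Elastic IP and associating it with an instance (the instance ID is a placeholder); releasing the address when it is no longer needed avoids charges for an idle EIP.

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP in the VPC scope and attach it to a running instance.
address = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=address["AllocationId"],
    InstanceId="i-0123456789abcdef0",     # hypothetical instance
)

# When finished, release it so you are not billed for an unattached address.
# ec2.release_address(AllocationId=address["AllocationId"])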

17. What are the best practices for Security in Amazon EC2?

There are several best practices that you can follow to improve security in Amazon Elastic Compute Cloud (EC2). Some of these best practices include:

  1. Use Amazon Machine Images (AMIs) with the latest security patches: It’s important to ensure that the AMI that you use to launch your EC2 instances is up to date with the latest security patches. You can use the AWS Management Console or the AWS CLI to search for and launch AMIs that have the latest patches.
  2. Enable automatic security updates: You can enable automatic security updates for your EC2 instances to ensure that they are automatically patched with the latest security updates. This can help protect your instances from known vulnerabilities and reduce the risk of security breaches.
  3. Use security groups: Security groups are virtual firewalls that control inbound and outbound traffic to and from your EC2 instances. You can use security groups to specify which protocols, ports, and IP ranges are allowed to access your instances. It’s important to carefully consider the rules you specify in your security groups and to regularly review the rules to ensure that they are still appropriate.
  4. Use strong passwords and enable multifactor authentication: It’s important to use strong passwords for your EC2 instances and to enable multifactor authentication to help protect your instances from unauthorized access. You can use tools like Amazon EC2 Systems Manager to help you manage and rotate your instance credentials.
  5. Use encryption: Encrypting data at rest and in transit can help protect it from unauthorized access. You can use tools like Amazon Elastic Block Store (EBS) encryption and AWS Key Management Service (KMS) to help you encrypt your data.
  6. Monitor your instances: Regularly monitoring your EC2 instances can help you identify and address potential security issues before they become a problem. You can use tools like Amazon CloudWatch and AWS Security Hub to monitor your instances and receive alerts about potential security issues.
  7. Use the principle of least privilege: When working with EC2 instances, it’s important to follow the principle of least privilege and only grant the permissions that are necessary to perform a specific task. This can help reduce the risk of unauthorized access to your instances.

By following these best practices, you can help improve the security of your EC2 instances and protect your applications and data from potential security threats.

Questions on Amazon Storage

18. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

To ensure that all objects uploaded to an Amazon Simple Storage Service (S3) bucket are set to public read, you can use a bucket policy.

A bucket policy is a JSON document that defines the access permissions for an S3 bucket. You can use a bucket policy to grant permissions to specific users or groups of users to perform specific actions on the objects in your bucket.

To set all objects in your bucket to public read, you can use the following bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Action": "s3:GetObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Principal": "*"
        }
    ]
}

This bucket policy grants the “s3:GetObject” action to all users for all objects in the “my-bucket” bucket. The “GetObject” action allows users to download objects from the bucket, and the “Principal” element grants permissions to all users.

To apply this bucket policy, you can use the AWS Management Console or the AWS CLI. You can find more information on how to use bucket policies in the AWS documentation.

It’s important to note that bucket policies are a powerful tool for controlling access to your S3 buckets, and you should carefully consider the permissions you grant in your policies. You should also regularly review your policies to ensure that they are still appropriate and that they do not grant unnecessary permissions.
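
If you prefer to apply the policy programmatically, here is a minimal Boto3 sketch; the bucket name "my-bucket" is the same placeholder used above.

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForGetBucketObjects",
        "Action": "s3:GetObject",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Principal": "*",
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(bucket_policy))

Note that if the account or bucket has S3 Block Public Access enabled (the default for new buckets), that setting has to be relaxed before a public-read policy like this can be applied.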

19. Can S3 be used with EC2 instances, if yes, how?

Yes, Amazon Simple Storage Service (S3) can be used with Amazon Elastic Compute Cloud (EC2) instances.

There are several ways that you can use S3 with EC2 instances:

  1. Store and retrieve data: You can use S3 to store and retrieve data from your EC2 instances. For example, you can use the AWS SDK for Python (Boto3) to upload and download files from S3 from your EC2 instances.
  2. Serve static assets: You can use S3 to serve static assets, such as images, JavaScript files, and CSS files, for your web application. To do this, you can configure your S3 bucket as a static website and then link to the objects in your bucket from your web application.
  3. Backup data: You can use S3 to back up data from your EC2 instances. For example, you can use tools like AWS Data Pipeline or AWS Backup to schedule regular backups of your EC2 instances to S3.
  4. Use S3 as a data source: You can use S3 as a data source for your EC2 instances. For example, you can use tools like Amazon EMR or Amazon Athena to process data stored in S3 using EC2 instances.

To use S3 with EC2 instances, you will need to ensure that your EC2 instances have the appropriate permissions to access S3. You can do this by creating an IAM role and attaching it to your EC2 instances. The IAM role should have permission to perform the actions that you want to perform on S3, such as “s3:GetObject” to download objects from S3 or “s3:PutObject” to upload objects to S3.

You can find more information on how to use S3 with EC2 instances in the AWS documentation.
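
As a small illustration of option 1, here is a Boto3 sketch you could run on an EC2 instance whose IAM role grants S3 access; the bucket name and file paths are placeholders.

import boto3

# Credentials are picked up automatically from the instance's IAM role.
s3 = boto3.client("s3")

# Upload a local file to S3 and download an object back to local disk.
s3.upload_file("/var/log/app.log", "my-bucket", "logs/app.log")
s3.download_file("my-bucket", "config/app.json", "/tmp/app.json")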

20. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?

There are several methods that can enable a branch office to access data stored in an AWS Storage Gateway with a gateway-cached volume when the link between the main and branch office is offline:

  1. Use the local cache: The gateway-cached volume stores a copy of the data locally on the Storage Gateway appliance, so the branch office can access the data from the local cache while the link is offline. The local cache is updated periodically from the S3 bucket, so the data in the cache may not be the most recent version of the data.
  2. Use Amazon S3 Transfer Acceleration: You can enable Amazon S3 Transfer Acceleration to improve the speed of data transfer between the branch office and the S3 bucket. Transfer Acceleration uses Amazon CloudFront’s globally distributed edge locations to accelerate transfers over the public internet.
  3. Use a VPN connection: If the branch office has a VPN connection to the main office, they can use the VPN connection to access the data stored in the S3 bucket. This will require setting up a VPN connection between the main and branch office and configuring the Storage Gateway to use the VPN connection.
  4. Use Amazon S3 Transfer Acceleration and a VPN connection: You can use both Amazon S3 Transfer Acceleration and a VPN connection to improve the speed and reliability of data transfer between the branch office and the S3 bucket. This will provide the best performance and reliability, but will also require setting up a VPN connection and enabling Transfer Acceleration.

It’s important to note that the availability of these methods will depend on the specific configuration of your Storage Gateway and the network infrastructure at your main and branch offices. You should review the AWS documentation and work with your network administrator to determine the best solution for your needs.

21. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third-party software to only the Amazon S3 bucket named “company-backup”?

To limit the access of third-party software to only the Amazon Simple Storage Service (S3) bucket named “company-backup”, you can use an Amazon Identity and Access Management (IAM) policy.

An IAM policy is a JSON document that defines the permissions for an IAM user, group, or role. You can use an IAM policy to grant permissions to specific actions on specific resources, such as an S3 bucket.

To limit the access of the third-party software to only the “company-backup” bucket, you can use the following IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessToCompanyBackupBucket",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::company-backup/*"
        },
        {
            "Sid": "DenyAccessToOtherBuckets",
            "Action": "s3:*",
            "Effect": "Deny",
            "NotResource": "arn:aws:s3:::company-backup/*"
        }
    ]
}

22. How can you speed up data transfer in Snowball?

There are several ways that you can speed up data transfer in Amazon Snowball:

  1. Use multiple Snowballs: If you have a large amount of data to transfer, you can use multiple Snowballs to parallelize the transfer and increase the overall transfer speed.
  2. Use Amazon S3 Transfer Acceleration: You can enable Amazon S3 Transfer Acceleration to improve the speed of data transfer between your on-premises data center and the S3 bucket. Transfer Acceleration uses Amazon CloudFront’s globally distributed edge locations to accelerate transfers over the public internet.
  3. Use a faster network connection: If the network connection between your on-premises data center and the Snowball is slow, you can use a faster network connection, such as a direct network connection or a higher-bandwidth internet connection, to speed up the transfer.
  4. Use a larger Snowball appliance: Snowball appliances come in different sizes, with different storage capacities and transfer speeds. Using a larger Snowball appliance with a higher storage capacity and transfer speed can help speed up the data transfer.
  5. Use a faster data transfer protocol: Snowball supports several data transfer protocols, including Network File System (NFS) and Server Message Block (SMB). Using a faster protocol, such as NFS, can reduce transfer overhead and speed up the copy to the appliance.

23. When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?

To move data over long distances using the internet to an Amazon Simple Storage Service (S3) bucket, you can use Amazon S3 Transfer Acceleration.

Amazon S3 Transfer Acceleration is a network acceleration feature that uses Amazon CloudFront’s globally distributed edge locations to accelerate the transfer of data to and from S3 over the public internet. Transfer Acceleration can improve the speed of data transfer by up to 50% compared to transferring data over a standard internet connection.

To use S3 Transfer Acceleration, you will need to enable the feature for your S3 bucket and then use the Transfer Acceleration endpoint when transferring data to or from the bucket. You can use the AWS Management Console, the AWS CLI, or the S3 API to enable Transfer Acceleration and transfer data using the Transfer Acceleration endpoint.

It’s important to note that S3 Transfer Acceleration is designed to accelerate transfers over long distances and may not provide significant performance improvements for transfers within a single region. Additionally, there are additional charges for using Transfer Acceleration, which is based on the amount of data transferred. You can review the AWS documentation for more information on S3 Transfer Acceleration pricing.
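
Here is a hedged Boto3 sketch of enabling Transfer Acceleration on a bucket and then uploading through the accelerate endpoint; the bucket and file names are placeholders.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (one-time configuration).
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Use the accelerate endpoint for subsequent transfers.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("dataset.tar.gz", "my-bucket", "archives/dataset.tar.gz")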

Questions on AWS VPC

24. Can I Connect my Corporate Data Center to the Amazon Cloud?

Yes, you can connect your corporate data center to the Amazon cloud. There are several ways that you can do this:

  1. Direct Connect: Amazon Direct Connect is a network service that allows you to establish a dedicated network connection from your data center to the AWS cloud. Direct Connect uses private network connections, such as Ethernet or AWS Direct Connect Partner Network connections, to provide low-latency, high-bandwidth connectivity between your data center and AWS.
  2. Virtual Private Network (VPN): You can use a VPN connection to establish a secure, encrypted connection between your data center and the AWS cloud. VPN connections can be used over the public internet or over a dedicated network connection, such as a Direct Connect connection.
  3. AWS Transit Gateway: AWS Transit Gateway is a network service that allows you to connect your on-premises data centers, AWS accounts, and VPCs in a single, scalable network. Transit Gateway simplifies the process of connecting your data center to the AWS cloud and allows you to easily manage the connectivity between your resources.
  4. AWS Direct Connect Gateway: AWS Direct Connect Gateway is a service that allows you to connect your on-premises data centers and AWS accounts using a single Direct Connect connection. Direct Connect Gateway simplifies the process of connecting your data center to the AWS cloud and allows you to easily manage the connectivity between your resources.

You can review the AWS documentation for more information on how to connect your corporate data center to the AWS cloud.

25. Is it possible to change the private IP addresses of an EC2 while it is Running/Stopped in a VPC?

The primary private IP address of an Amazon Elastic Compute Cloud (EC2) instance cannot be changed, whether the instance is running or stopped: it is assigned to the instance’s primary network interface at launch and stays with that interface until the interface (or the instance) is deleted.

What you can change are the secondary private IP addresses. Secondary private IPs can be assigned, unassigned, or moved between network interfaces (and therefore between instances) at any time, using the AWS Management Console, the “assign-private-ip-addresses” and “unassign-private-ip-addresses” AWS CLI commands, or the equivalent API calls. You can also attach an additional elastic network interface (ENI) carrying the addresses you need.

It’s important to note that changing secondary IP addresses or moving network interfaces may affect connectivity, and the number of private IP addresses you can assign per network interface is limited and depends on the instance type. You can review the AWS documentation for the per-instance-type limits.
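
For illustration, a minimal Boto3 sketch of assigning and removing a secondary private IP on a network interface; the ENI ID and address are placeholders and must fall within the subnet’s CIDR range.

import boto3

ec2 = boto3.client("ec2")

# Add a secondary private IP to an existing network interface (hypothetical ENI).
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",
    PrivateIpAddresses=["10.0.1.25"],
)

# Remove it again (or move it to another interface by assigning it there
# with AllowReassignment=True).
ec2.unassign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",
    PrivateIpAddresses=["10.0.1.25"],
)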

26. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should:

A. Launch the instance from a private Amazon Machine Image (AMI).

B. Assign a group of sequential Elastic IP addresses to the instances.

C. Launch the instances in the Amazon Virtual Private Cloud (VPC).

D. Launch the instances in a Placement Group.

Answer C.

Explanation: The best way of connecting to your cloud resources (for example, EC2 instances) from your own data center (i.e., your private cloud) is a VPC. When you launch an instance into a VPC subnet, you can specify the exact private IP address it should receive from the subnet’s range. Once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address that can be accessed from your data center, so you can access your public cloud resources as if they were on your own network.

27. Which of the following is true?

A. You can attach multiple route tables to a subnet

B. You can attach multiple subnets to a routing table

C. Both A and B

D. None of these.

Answer B.

Explanation: Route tables are used to route network packets, so if a subnet had multiple route tables it would be ambiguous where a packet should go. Therefore, a subnet can be associated with only one route table. A route table, on the other hand, can hold any number of routes and can be associated with many subnets, so attaching multiple subnets to a single route table is possible.

28. In CloudFront what happens when content is NOT present at an Edge location and a request is made to it?

A. An Error “404 not found” is returned

B. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location

C. The request is kept on hold till content is delivered to the edge location

D. The request is routed to the next closest edge location

Answer B. 

Explanation: CloudFront is a content delivery network that caches data at the edge location nearest to the user in order to reduce latency. If the content is not present at an edge location, the first request is served from the origin server and the content is then cached at that edge location, so subsequent requests are served from the cache.

29. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?

Yes, you can use Amazon Direct Connect to transfer objects from your own data center to Amazon CloudFront.

Amazon Direct Connect is a network service that allows you to establish a dedicated network connection from your data center to the AWS cloud. Direct Connect uses private network connections, such as Ethernet or AWS Direct Connect Partner Network connections, to provide low-latency, high-bandwidth connectivity between your data center and AWS.

To use Direct Connect to transfer objects from your data center to CloudFront, you configure your on-premises servers as a custom origin for your CloudFront distribution; CloudFront supports custom origins, including origins that run outside of AWS. With a Direct Connect connection in place, the data leaving your data center is carried over that dedicated connection and billed at the applicable Direct Connect data-transfer rates.

It’s important to note that Direct Connect is a paid service and there are charges for using Direct Connect to transfer data to and from CloudFront. You can review the AWS documentation for more information on Direct Connect pricing and the specific charges that apply.

Using Direct Connect to transfer objects from your data center to CloudFront can provide faster and more reliable data transfer compared to transferring data over the public internet. However, it’s important to carefully consider the cost and complexity of using Direct Connect versus other options, such as transferring data over the public internet or using Amazon S3 Transfer Acceleration.

30. Why do you make subnets?

A. Because there is a shortage of networks

B. To efficiently utilize networks that have a large no of hosts

C. Because there is a shortage of hosts.

D. To efficiently utilize networks that have a small no. of hosts.

Answer B.

Explanation: If there is a network that has a large no. of hosts, managing all these hosts can be a tedious job. Therefore we divide this network into subnets (sub-networks) so that managing these hosts becomes simpler.

31. If my AWS Direct Connect fails, will I lose my connectivity?

If your Amazon Web Services (AWS) Direct Connect connection fails, you will lose connectivity to the AWS cloud unless you have implemented a redundant connection or a failover solution.

AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your data center to the AWS cloud. Direct Connect uses private network connections, such as Ethernet or AWS Direct Connect Partner Network connections, to provide low-latency, high-bandwidth connectivity between your data center and AWS.

If your Direct Connect connection fails, you will lose connectivity to the AWS cloud unless you have implemented a redundant connection or a failover solution. You can use one of the following options to provide redundancy and failover for your Direct Connect connection:

  1. Redundant connections: You can create multiple Direct Connect connections to provide redundancy for your connection to the AWS cloud. If one connection fails, the other connections can continue to provide connectivity.
  2. Failover with Virtual Private Network (VPN): You can use a VPN connection as a failover solution for your Direct Connect connection. If your Direct Connect connection fails, you can use the VPN connection to provide connectivity to the AWS cloud.
  3. Failover with AWS Transit Gateway: AWS Transit Gateway is a network service that allows you to connect your on-premises data centers, AWS accounts, and VPCs in a single, scalable network. You can use Transit Gateway as a failover solution for your Direct Connect connection by connecting your Direct Connect connection to Transit Gateway and using the transit gateway to provide connectivity to the AWS cloud.

It’s important to carefully consider your connectivity requirements and implement a redundant connection or failover solution to ensure that you have reliable and continuous connectivity to the AWS cloud.

Questions on Amazon Database

32. How are Amazon RDS, DynamoDB, and Redshift different?

Amazon Relational Database Service (RDS), Amazon DynamoDB, and Amazon Redshift are all database services provided by Amazon Web Services (AWS), but they are designed to meet different database needs and have different characteristics.

  1. Amazon RDS is a fully managed relational database service that makes it easy to set up, operate, and scale a database in the cloud. RDS supports several database engines, including MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. RDS is designed for use cases that require a traditional, structured, and scalable relational database.
  2. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is designed for use cases that require high performance and the ability to handle large amounts of data that is spread across many servers. DynamoDB supports key-value and document data models.
  3. Amazon Redshift is a fully managed data warehouse service that makes it easy to set up, operate, and scale a data warehouse in the cloud. Redshift is designed for use cases that require fast querying and analysis of large datasets using SQL and business intelligence tools. Redshift is optimized for fast querying and can handle petabyte-scale datasets.

In summary, RDS is a relational database service, DynamoDB is a NoSQL database service, and Redshift is a data warehouse service. Each service is designed to meet different database needs and has its own unique characteristics. You can review the AWS documentation for more information on RDS, DynamoDB, and Redshift and choose the service that best meets your database needs.

33. When would I prefer Provisioned IOPS over Standard RDS storage?

A. If you have batch-oriented workloads

B. If you use production online transaction processing (OLTP) workloads.

C. If you have workloads that are not sensitive to consistent performance

D. All of the above

Answer A.

Explanation: Provisioned IOPS delivers high I/O rates, but it is also expensive. Batch-processing workloads do not require manual intervention and can fully utilize the system, so provisioned IOPS is preferred for batch-oriented workloads.

34. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with the primary DB instance?

A. Yes

B. Only with MySQL-based RDS

C. Only for Oracle RDS instances

D. No

Answer D.

Explanation: No, the Standby DB instance cannot be used with the primary DB instance in parallel, as the former is solely used for standby purposes, it cannot be used unless the primary instance goes down.

35. Your company’s branch offices are all over the world, they use software with a multi-regional deployment on AWS, and they use MySQL 5.6 for data persistence.

The task is to run an hourly batch process and read data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region

B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region

C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region

D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

Answer A.

Explanation: For this, we will take an RDS instance as a master, because it will manage our database for us, and since we have to read from every region, we will put a read replica of this instance in every region where the data has to be read from. Option C is not correct, since a read replica is more efficient than a snapshot: a read replica can be promoted to an independent DB instance if needed, but with a DB snapshot it becomes mandatory to launch a separate DB instance.

36. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

A. Amazon ElastiCache

B. Amazon DynamoDB

C. Amazon Redshift

D. Amazon Elastic MapReduce

Answer B, C.

Explanation: DynamoDB is a fully managed NoSQL database service, so it can be fed any type of semi-structured data, including data from e-commerce websites, and the analysis can then be done on that data using Amazon Redshift. We are not using Elastic MapReduce, since near real-time analysis is needed.

37. Can I run more than one DB instance for Amazon RDS for free?

No, you cannot run more than one database instance for Amazon Relational Database Service (RDS) for free.

Amazon RDS is a fully managed database service that makes it easy to set up, operate, and scale a database in the cloud. RDS supports several database engines, including MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.

To use Amazon RDS, you will need to sign up for an AWS account and create a database instance. There are costs associated with creating and running a database instance in Amazon RDS, including charges for the database instance itself, data storage, and data transfer. You can review the AWS documentation for more information on Amazon RDS pricing and the specific charges that apply.

It’s important to note that Amazon RDS does offer a free tier that allows you to run a single database instance for free for a period of time. The free tier includes 750 hours of database instance usage and 20 GB of storage per month for a select group of database engines. You can review the AWS documentation for more information on the free tier and the specific terms and conditions that apply.

38. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

A. MySQL installed on two Amazon EC2 Instances in a single Availability Zone

B. Amazon RDS for MySQL with Multi-AZ

C. Amazon ElastiCache

D. Amazon DynamoDB

Answer B.

Explanation: The application requires complex queries and table joins, which calls for a relational database; DynamoDB does not support joins. Amazon RDS for MySQL with Multi-AZ provides a managed relational database with automatic failover to a standby instance in another Availability Zone, giving the required high availability without putting an operational burden on the limited staff.

39. What happens to my backups and DB Snapshots if I delete my DB Instance?

If you delete your Amazon Relational Database Service (RDS) database instance, your backups and database snapshots will also be deleted.

Amazon RDS is a fully managed database service that makes it easy to set up, operate, and scale a database in the cloud. RDS supports several database engines, including MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.

When you create a database instance in Amazon RDS, you can configure automated backups to be taken regularly and retain them for a specified number of days. These backups are stored in Amazon S3 and can be used to restore your database to a previous state in the event of data loss or corruption.

You can also manually create database snapshots at any time, which are stored in Amazon S3 and can be used to restore your database to a specific point in time.

If you delete your database instance, both your automated backups and manual database snapshots will be deleted. It’s important to carefully consider the impact of deleting your database instance and ensure that you have a backup or snapshot that you can use to restore your data if necessary. You can review the AWS documentation for more information on backing up and restoring your Amazon RDS database.

40. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers

A. Managing web sessions.

B. Storing JSON documents.

C. Storing metadata for Amazon S3 objects.

D. Running relational joins and complex updates.

Answer B, C.

Explanation: DynamoDB is a NoSQL key-value and document store, so it is well suited to storing JSON documents and to storing metadata for Amazon S3 objects (small items that are looked up by key). Managing web sessions is also a common DynamoDB use case. The one option that is clearly unsuitable is D, because DynamoDB does not support relational joins and complex multi-table updates.

41. Can I retrieve only a specific element of the data, if I have nested JSON data in DynamoDB?

Yes, you can retrieve only a specific element of nested JSON data in Amazon DynamoDB.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports key-value and document data models and allows you to store complex data structures, including nested JSON data.

To retrieve only a specific element of nested JSON data in DynamoDB, you can use the “Projection Expression” feature of the “GetItem” operation. The projection expression allows you to specify the specific elements of the data that you want to retrieve, rather than retrieving the entire item.

For example, if you have a DynamoDB table with a “data” attribute that contains nested JSON data, you can use a projection expression to retrieve only the “name” element of the data like this:

data.name

You can also use the projection expression to retrieve multiple elements of the data by separating the elements with a comma, like this:

data.name, data.age

It’s important to note that the projection expression feature is not supported by all DynamoDB operations, and you will need to use the “GetItem” operation or a similar operation that supports projection expressions to retrieve specific elements of nested JSON data. You can review the AWS documentation for more information on projection expressions and how to use them to retrieve specific elements of data in DynamoDB.
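
As a small illustration, here is a Boto3 sketch that reads only the nested "name" and "age" elements of the item. The table and key names are hypothetical, and expression attribute names are used because "data" and "name" are DynamoDB reserved words.

import boto3

table = boto3.resource("dynamodb").Table("users")   # hypothetical table

response = table.get_item(
    Key={"user_id": "42"},                           # hypothetical key schema
    ProjectionExpression="#d.#n, #d.age",
    ExpressionAttributeNames={"#d": "data", "#n": "name"},
)
item = response.get("Item")   # contains only data.name and data.age
print(item)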

42. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB, and Amazon EC2?

There are several ways to load data to Amazon Redshift from different data sources like Amazon Relational Database Service (RDS), Amazon DynamoDB, and Amazon Elastic Compute Cloud (EC2). Some of the options that you can use to load data to Redshift include:

  1. Using the COPY command: The COPY command is a highly optimized data load operation that allows you to load data to Redshift from a variety of sources, including Amazon S3, Amazon EMR, and remote hosts. You can use the COPY command to load data to Redshift from RDS, DynamoDB, and EC2 by specifying the data source and the relevant connection details.
  2. Using AWS Data Pipeline: AWS Data Pipeline is a cloud service that allows you to automate the movement and transformation of data. You can use Data Pipeline to create a pipeline that extracts data from RDS, DynamoDB, or EC2 and loads it to Redshift. Data Pipeline provides a visual interface and pre-built connectors that make it easy to set up data pipelines without writing code.
  3. Using third-party ETL tools: There are various third-party ETL (extract, transform, load) tools that you can use to load data to Redshift from different data sources. Some examples of ETL tools that support Redshift as a destination include Talend, Alteryx, and Fivetran.

It’s important to carefully consider the data loading needs and requirements of your application and choose the option that best meets your needs. You can review the AWS documentation for more information on loading data to Redshift and the specific options that are available.
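
To illustrate option 1, here is a hedged sketch that issues a COPY statement through the Redshift Data API using Boto3; the cluster identifier, database, user, IAM role ARN, and S3 path are all assumptions.

import boto3

redshift_data = boto3.client("redshift-data")

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",     # hypothetical cluster
    Database="dev",
    DbUser="awsuser",
    Sql="""
        COPY events
        FROM 's3://my-bucket/exports/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV;
    """,
)

A similar COPY statement can read directly from a DynamoDB table by using a 'dynamodb://table-name' source together with a READRATIO setting.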

43. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests, you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

A. Deploy ElastiCache in-memory cache running in each availability zone

B. Implement sharding to distribute the load to multiple RDS MySQL instances

C. Increase the RDS MySQL Instance size and Implement provisioned IOPS

D. Add an RDS MySQL read replica in each availability zone

Answer A, C.

Explanation: Because the site performs a high number of small reads and writes, relying on provisioned IOPS alone would be expensive. Frequently read data can instead be cached with ElastiCache deployed in each Availability Zone, which offloads reads from the database. And since read contention is occurring on RDS MySQL, increasing the instance size and adding provisioned IOPS improves the database's performance.

44. Your application retrieves data from your users’ mobile devices every 5 minutes and stores it in DynamoDB. Every day at a particular time, the data is extracted into S3 on a per-user basis, and your application later visualizes the data for the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

B. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.

C. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.

D. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.

Answer C.

Explanation: The data is read frequently for extraction and visualization, so the table would otherwise need high provisioned read throughput, which is expensive. Caching reads with Amazon ElastiCache lets you lower the provisioned read throughput on the DynamoDB table, reducing cost without affecting performance.

45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4 GB of sensor data is generated. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB standard storage. The pilot was a success, and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which of the following setups would you prefer?

A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance

B. Ingest data into a DynamoDB table and move old data to a Redshift cluster

C. Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage

D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer C.

Explanation: A Redshift cluster is preferred because it scales easily and processes work in parallel across its nodes, which suits the much larger workload. The 100-sensor pilot generates 4 GB per month, or roughly 96 GB over 2 years; scaling from 100 sensors to 100K sensors (a 1,000-fold increase) brings that to approximately 96 TB. Hence option C is the right answer.

Questions on Auto Scaling, Load Balancer

46. Suppose you have an application where you have to render images and also do some general computing. Which of the following services will best fit your need?

A. Classic Load Balancer

B. Application Load Balancer

C. Both of them

D. None of these

Answer B.

Explanation: An Application Load Balancer is the right choice because it supports path-based routing, i.e. it can make routing decisions based on the request URL. Requests for image rendering can therefore be routed to one set of instances, while general computing requests are routed to another.
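For instance, a path-based routing rule could be added with a boto3 call like the sketch below; the listener ARN, target group ARN, and the /render/* path pattern are assumptions made for illustration:

```
import boto3

elbv2 = boto3.client("elbv2")

# Route image-rendering requests to a dedicated target group; everything else
# continues to the listener's default (general-computing) target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/render/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/render-tg/123",
    }],
)
```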

47. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling? From which of the following areas will you change it?

A. Auto Scaling policy configuration

B.  Auto Scaling group

C. Auto Scaling tags configuration

D.   Auto Scaling launch configuration

Answer D.

Explanation: Auto Scaling tags configuration is used to attach metadata to your instances. To change the instance type, you have to create a new Auto Scaling launch configuration with the desired type and associate it with the Auto Scaling group.
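A minimal boto3 sketch of that workflow (the AMI ID, group name, launch configuration names, and instance type are placeholders for illustration):

```
import boto3

autoscaling = boto3.client("autoscaling")

# 1. Create a new launch configuration with the desired instance type.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-tier-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
)

# 2. Point the Auto Scaling group at the new launch configuration; instances
#    launched from now on will use the new instance type.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-tier-asg",
    LaunchConfigurationName="app-tier-lc-v2",
)
```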

48. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce the load on the Amazon EC2 instance?

A. Create a load balancer, and register the Amazon EC2 instance with it

B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin

C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action

D. Create a launch configuration from the instance using the CreateLaunchConfiguration action

Answer B.

Explanation: A CloudFront distribution caches the content served by the instance at edge locations, so repeated requests are answered from the cache instead of hitting the origin, which directly reduces the load on the EC2 instance. A load balancer or an Auto Scaling group built from the single instance does not by itself reduce the load on that instance, and a launch configuration is only a template for launching new instances, so it has no effect on the load either.

49. What is the difference between Scalability and Elasticity?

Scalability and elasticity are related concepts that refer to the ability of a system or service to handle increased workloads or demand. However, there are some key differences between the two concepts:

  1. Scalability refers to the ability of a system or service to handle increased workloads without a decrease in performance. A scalable system is one that can increase its capacity to handle additional workloads without reaching a point of diminishing returns.
  2. Elasticity refers to the ability of a system or service to scale up or down in response to changes in demand. An elastic system is one that can automatically and quickly adjust its capacity to meet the changing needs of the workload.

In summary, scalability is a measure of a system’s ability to handle increased workloads, while elasticity is a measure of a system’s ability to automatically and dynamically adjust its capacity to meet changing workloads.

AWS provides several services and features that allow you to build scalable and elastic systems in the cloud, including Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS). You can review the AWS documentation for more information on these and other services and features that can help you build scalable and elastic systems in the cloud.
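As a concrete illustration of elasticity, here is a hedged boto3 sketch of a target tracking scaling policy that lets an Auto Scaling group grow and shrink around a CPU target; the group name and target value are assumptions:

```
import boto3

autoscaling = boto3.client("autoscaling")

# The group automatically adds or removes instances to keep average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```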

50. When should I use a Classic Load Balancer and when should I use an Application load balancer?

Amazon Web Services (AWS) provides two types of load balancers: Classic Load Balancers and Application Load Balancers. You should use a Classic Load Balancer when you need basic load balancing with support for Transport Layer Security (TLS) offloading, and you should use an Application Load Balancer when you need advanced load balancing with support for containerized applications and microservices.

Here are some key differences between Classic Load Balancer and Application Load Balancer:

  1. Protocol support: Classic Load Balancer can load balance HTTP, HTTPS, TCP, and SSL/TLS traffic (it operates at both layer 4 and layer 7), while Application Load Balancer operates at the application layer and load balances HTTP and HTTPS traffic.
  2. Load balancing algorithm: Classic Load Balancer uses a simple round-robin algorithm to distribute traffic across the registered targets, while Application Load Balancer allows you to use more advanced algorithms, such as least outstanding requests and weighted target groups, to distribute traffic across the registered targets.
  3. Health checks: Both Classic Load Balancer and Application Load Balancer support health checks to ensure that the registered targets are healthy and can handle the traffic. However, Application Load Balancer allows you to use more advanced health check configurations, such as custom health checks and target response time metrics.
  4. Target groups: Application Load Balancer allows you to create target groups, which are logical groupings of registered targets (EC2 instances, IP addresses, or Lambda functions) that listener rules route traffic to, for example based on the request path or host header.

In summary, Classic Load Balancer is a good choice when you need basic load balancing with support for TLS offloading, while Application Load Balancer is a good choice when you need advanced load balancing with support for containerized applications and microservices. You can review the AWS documentation for more information on Classic Load Balancer and Application Load Balancer and choose the load balancer that best meets your needs.

51. What does Connection draining do?

  A. Terminates instances that are not in use.

B. Re-routes traffic from instances that are to be updated or failed a health check.

  C. Re-routes traffic from instances that have more workload to instances that have less workload.

D. Drains all the connections from an instance, with one click.

Answer B.

Explanation: Connection draining is an ELB feature. When an instance is being taken out of service, for example because it failed a health check or has to be patched with a software update, connection draining stops sending new requests to that instance and re-routes them to other instances, while allowing requests already in flight to complete.
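On a Classic Load Balancer, connection draining can be enabled with a call such as the following boto3 sketch; the load balancer name and the 300-second timeout are assumptions:

```
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Give in-flight requests up to 300 seconds to finish before an instance that is
# deregistering or unhealthy stops receiving traffic entirely.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)
```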

52. When an instance is unhealthy, it is terminated and replaced with a new one. Which of the following services does that?

A. Sticky Sessions

B. Fault Tolerance

C. Connection Draining

D.  Monitoring

Answer B.

Explanation: When ELB detects that an instance is unhealthy, it stops sending it traffic and routes incoming requests to the other healthy instances. If all the instances in one Availability Zone become unhealthy and you have instances in other Availability Zones, your traffic is directed to them. Once the original instances become healthy again, traffic is routed back to them.

53. What are lifecycle hooks used for in AutoScaling?

A. They are used to do health checks on instances

B. They are used to put an additional wait time to a scale-in or scale-out event.

C. They are used to shorten the wait time for a scale-in or scale-out event

D. None of these

Answer B.

Explanation: Lifecycle hooks add a wait time before a lifecycle action, i.e., launching or terminating an instance, completes. This wait time can be used for anything from extracting log files before an instance is terminated to installing the necessary software on an instance before it is put into service.
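A minimal boto3 sketch of a termination lifecycle hook (the group name, hook name, and timeout are placeholders for illustration):

```
import boto3

autoscaling = boto3.client("autoscaling")

# Pause terminating instances for up to 5 minutes so logs can be copied off first.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",
    LifecycleHookName="copy-logs-before-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",  # proceed with termination if no response arrives
)
```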

54. A user has set up an Auto Scaling group. Due to some issues, the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?

A. Auto Scaling will keep trying to launch the instance for 72 hours

B. Auto Scaling will suspend the scaling process

C. Auto Scaling will start an instance in a separate region

D. The Auto Scaling group will be terminated automatically

Answer B.

Explanation: Auto Scaling allows you to suspend and then resume one or more of the Auto Scaling processes in your Auto Scaling group. This can be very useful when you want to investigate a configuration problem or other issue with your web application, and then make changes to your application, without triggering the Auto Scaling process.
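Suspending and resuming a scaling process can also be done explicitly, for example with this boto3 sketch (the group name is a placeholder):

```
import boto3

autoscaling = boto3.client("autoscaling")

# Stop Auto Scaling from launching new instances while the issue is investigated.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Launch"],
)

# ... investigate and fix the launch problem ...

# Resume normal scaling behaviour afterwards.
autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Launch"],
)
```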

Questions on CloudTrail, Route 53

55. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? ( Choose 2 answers )

A. Route 53 Record Sets

B. Elastic IP Addresses (EIP)

C. EC2 Key Pairs

D. Launch configurations

E. Security Groups

Answer A.

Explanation: Route 53 record sets are global resources, so there is no need to replicate them; Route 53 is valid across regions.

56. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes, which of the following options should he choose for his application?

A. Enable AWS CloudTrail for the load balancer.

B. Enable access logs on the load balancer.

C. Install the Amazon CloudWatch Logs agent on the load balancer.

D. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.

Explanation: Elastic Load Balancing access logs capture detailed information about every client connection made to the load balancer and can be published to Amazon S3 at 5-minute or 60-minute intervals, which matches the requirement exactly. CloudTrail, by contrast, records API calls made against the load balancer rather than client connections, so it does not capture this information.
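Access logging at a 5-minute interval can be enabled on a Classic Load Balancer with a call like this boto3 sketch; the load balancer name, bucket, and prefix are placeholders:

```
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Publish access logs, which record every client connection, to S3 every 5 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",
            "S3BucketPrefix": "prod",
            "EmitInterval": 5,  # valid values are 5 or 60 minutes
        },
    },
)
```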

57. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer’s requirements?

A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.

B. Enable server access logging for all required Amazon S3 buckets.

C. Enable the Requester Pays option to track access via AWS Billing.

D. Enable Amazon S3 event notifications for Put and Post.

Answer A.

Explanation: AWS CloudTrail is designed for logging and tracking API calls, and it can record Amazon S3 object-level activity, so it meets the requirement for security and access audits.

58. Which of the following is true regarding AWS CloudTrail? (Choose 2 answers)

A. CloudTrail is enabled globally

B. CloudTrail is enabled on a per-region and service basis

C. Logs can be delivered to a single Amazon S3 bucket for aggregation.

D. CloudTrail is enabled for all available services within a region.

Answer B, C.

Explanation: CloudTrail is not enabled for all services by default and is configured on a per-region basis, so option B is correct; the logs can also be delivered to a single S3 bucket for aggregation, hence C is correct as well.

59. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol and then launched several new instances in the same Security Group. The new rules apply:

A. Immediately to all instances in the security group.

B. Immediately to the new instances only.

C. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.

D. To all instances, but it may take several minutes for old instances to see the changes.

Answer A.

Explanation: Any rule added to an EC2 Security Group applies immediately to all instances in that group, irrespective of whether they were launched before or after the rule was added.

60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

If CloudTrail is turned on for your Amazon Web Services (AWS) account but your Amazon Simple Storage Service (S3) bucket is not configured with the correct policy, CloudTrail will not be able to deliver log files to the S3 bucket.

CloudTrail is a web service that records API calls made on your AWS account and delivers the log files to an S3 bucket that you specify. CloudTrail log files contain detailed information about the API calls, including the caller’s identity, the time of the call, the source IP address, the request parameters, and the response elements.

To deliver log files to an S3 bucket, CloudTrail needs the appropriate permissions to access the bucket. You can specify the permissions by attaching an S3 bucket policy to the bucket. The bucket policy defines the permissions that CloudTrail has to access the bucket and should include a statement that allows CloudTrail to perform the “s3:PutObject” action on the bucket.

If CloudTrail is turned on for your account but your S3 bucket is not configured with the correct policy, CloudTrail will not be able to deliver log files to the S3 bucket. This can cause CloudTrail to stop recording API calls or to generate an error. It’s important to ensure that your S3 bucket is properly configured with the correct bucket policy to enable CloudTrail to deliver log files to the bucket. You can review the AWS documentation for more information on configuring an S3 bucket policy for CloudTrail.
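A minimal sketch of such a bucket policy, applied with boto3, could look like the following; the bucket name and account ID are placeholders:

```
import json
import boto3

BUCKET = "my-cloudtrail-logs"   # placeholder bucket name
ACCOUNT_ID = "123456789012"     # placeholder AWS account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Allow CloudTrail to check the bucket's ACL.
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # Allow CloudTrail to write log files under AWSLogs/<account-id>/.
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```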

61. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

To transfer your existing domain name registration to Amazon Route 53 without disrupting your existing web traffic, you will need to follow these steps:

  1. Check the domain’s transfer eligibility: Before you can transfer your domain to Route 53, you will need to ensure that the domain is eligible for transfer. Some domains are not eligible for transfer due to restrictions imposed by the current registrar or by the domain’s registry. You can check the transfer eligibility of your domain by contacting your current registrar or by using the WHOIS lookup tool.
  2. Unlock the domain: You will need to unlock the domain at your current registrar before you can initiate the transfer. Unlocking the domain will allow you to transfer the domain to a new registrar. You can unlock the domain by contacting your current registrar or by using the registrar’s control panel.
  3. Obtain the domain’s authorization code: You will need to obtain the domain’s authorization code (also known as an “EPP code” or “transfer key”) from your current registrar. The authorization code is a unique code that is used to verify your ownership of the domain and authorize the transfer. You can obtain the authorization code by contacting your current registrar or by using the registrar’s control panel.
  4. Initiate the transfer: Once you have checked the domain’s transfer eligibility, unlocked the domain, and obtained the authorization code, you can initiate the transfer to Route 53 by following the steps in the Route 53 documentation. The transfer process typically takes five to seven days to complete.

It’s important to note that transferring your domain to Route 53 will not disrupt your existing web traffic as long as you do not change the DNS settings for your domain. You can continue to use the same DNS settings and web hosting provider after the transfer is complete. You can review the Route 53 documentation for more information on transferring a domain to Route 53.

Questions on SQS, SNS, SES, ElasticBeanstalk

62. Which of the following services would you not use to deploy an app?

A. Elastic Beanstalk

B. Lambda

C. Opsworks

D. CloudFormation

Answer B.

Explanation: Lambda is used for running serverless, event-triggered functions; “serverless” means you do not have to worry about the compute resources running in the background. It is not a service for deploying and managing whole applications the way Elastic Beanstalk, OpsWorks, and CloudFormation are.

63. How is AWS Elastic Beanstalk different than AWS OpsWorks?

AWS Elastic Beanstalk is an application management platform while OpsWorks is a configuration management platform. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment; the application is ready to use without any infrastructure or resource configuration.

In contrast, AWS OpsWorks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.

64. What happens if my application stops responding to requests in beanstalk?

If your application stops responding to requests in Amazon Elastic Beanstalk, there could be several potential causes. Some common causes of application failures in Elastic Beanstalk include:

  1. Resource constraints: If your application is running out of memory, CPU, or other resources, it may stop responding to requests. Elastic Beanstalk allows you to set resource limits for your application, and if the application exceeds these limits, it may stop responding to requests.
  2. Configuration errors: If you have misconfigured your Elastic Beanstalk environment or application, it may cause the application to stop responding to requests. This can include issues with the environment’s platform configuration, the application’s deployment package, or the application’s code.
  3. Application bugs: If your application has bugs or other issues that cause it to crash or hang, it may stop responding to requests. Elastic Beanstalk provides log files and other tools that you can use to diagnose and troubleshoot issues with your application.
  4. Network or connectivity issues: If there are issues with the network or connectivity between Elastic Beanstalk and your application, it may cause the application to stop responding to requests.

If your application stops responding to requests in Elastic Beanstalk, you will need to diagnose and troubleshoot the issue to determine the cause. You can review the Elastic Beanstalk documentation and use the available tools and resources to help you identify and resolve the issue.
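When troubleshooting, the environment's recent error events can be pulled programmatically, for example with this boto3 sketch (the environment name is a placeholder):

```
import boto3

eb = boto3.client("elasticbeanstalk")

# List the most recent ERROR-level events for the environment to narrow down
# whether the failure is a deployment, configuration, or health issue.
events = eb.describe_events(
    EnvironmentName="my-app-env",
    Severity="ERROR",
    MaxRecords=20,
)

for event in events["Events"]:
    print(event["EventDate"], event["Message"])
```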

65. How does Elastic Beanstalk apply updates?

A. By having a duplicate ready with updates before swapping.

B. By updating the instance while it is running

C. By taking the instance down in the maintenance window

D. Updates should be installed manually

Answer A.

Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before applying the update and routes your traffic to the duplicate. If the updated application fails, traffic is switched back to the original instance, so the users of your application experience no downtime.

Questions on AWS OpsWorks, AWS KMS

66. How is AWS OpsWorks different than AWS CloudFormation?

OpsWorks and CloudFormation both support application modeling, deployment, configuration, management, and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building block service that enables customers to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems, and application code.

In contrast, AWS OpsWorks is a higher-level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

67. I created a key in the Oregon region to encrypt my data in the North Virginia region for security purposes. I added two users to the key and an external AWS account. I wanted to encrypt an object in S3, so when I tried, the key that I just created was not listed.  What could be the reason?  

A. External AWS accounts are not supported.

B. AWS S3 cannot be integrated with KMS.

C. The Key should be in the same region.

D. New keys take some time to reflect in the list.

Answer C.

Explanation: The key created and the data to be encrypted should be in the same region. Hence the approach taken here to secure the data is incorrect.
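The working approach, sketched with boto3 below, is to create the key in the same region as the bucket and reference it when uploading the object; the region, bucket, and object names are placeholders:

```
import boto3

REGION = "us-east-1"  # N. Virginia, the same region as the S3 bucket

# Create the KMS key in the bucket's region rather than in Oregon.
kms = boto3.client("kms", region_name=REGION)
key_id = kms.create_key(Description="S3 object encryption key")["KeyMetadata"]["KeyId"]

# Encrypt the object with that key using SSE-KMS.
s3 = boto3.client("s3", region_name=REGION)
s3.put_object(
    Bucket="my-nvirginia-bucket",
    Key="confidential/report.csv",
    Body=b"sample data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```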

68. What automation tools can you use to spin up servers?

There are several automation tools that you can use to spin up servers in the cloud. Some examples of automation tools that are commonly used to spin up servers include:

  1. AWS CloudFormation: AWS CloudFormation is a service that allows you to use templates to create, manage, and delete AWS resources. You can use CloudFormation to automate the process of spinning up servers in the cloud by creating a template that specifies the configuration of the servers and the resources that you want to create.
  2. Ansible: Ansible is an open-source automation platform that allows you to automate the deployment, configuration, and management of your infrastructure. You can use Ansible to spin up servers in the cloud by defining the desired state of the servers in a playbook and executing the playbook to create the servers.
  3. Terraform: Terraform is an open-source infrastructure-as-code tool that allows you to define and provision infrastructure resources using declarative configuration files. You can use Terraform to spin up servers in the cloud by defining the configuration of the servers in a Terraform configuration file and using the Terraform CLI to create the servers.
  4. Chef: Chef is an open-source configuration management tool that allows you to automate the deployment, configuration, and management of your infrastructure. You can use Chef to spin up servers in the cloud by defining the desired state of the servers in a recipe and using the Chef client to execute the recipe and create the servers.

These are just a few examples of automation tools that you can use to spin up servers in the cloud. You can review the documentation and resources for these and other automation tools to choose the tool that best meets your needs.
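As an illustration of the CloudFormation option above, here is a minimal boto3 sketch that spins up a single EC2 server from an inline template; the stack name, AMI ID, and instance type are placeholders:

```
import json
import boto3

# A deliberately tiny template: one EC2 instance and nothing else.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "t3.micro",
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="demo-server",
    TemplateBody=json.dumps(template),
)
```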

69.  A company needs to monitor the read and write IOPS for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this?

A. Amazon Simple Email Service

B. Amazon CloudWatch

C. Amazon Simple Queue Service

D. Amazon Route 53

Answer B.

Explanation: Amazon CloudWatch is AWS’s monitoring service, so it is the right choice for tracking the read and write IOPS metrics of the RDS instance and raising alarms on them. The other options serve different purposes; for example, Route 53 is a DNS service. Hence CloudWatch is the apt choice.
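A hedged boto3 sketch of such an alarm on the WriteIOPS metric, notifying an SNS topic that the operations team subscribes to; the DB instance identifier, threshold, and topic ARN are placeholders:

```
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if average WriteIOPS stays above 1,000 for five consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="rds-write-iops-high",
    Namespace="AWS/RDS",
    MetricName="WriteIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-mysql"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```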

70. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

A. Only for Oracle RDS types

B. Yes

C. Only if it is configured at launch

D. No

Answer D.

Explanation: No. The purpose of a standby instance is to survive an infrastructure failure, so it is kept in a different Availability Zone, which is physically separate, independent infrastructure.

71. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks?

In Amazon Web Services (AWS) OpsWorks, a stack is a container for AWS resources that you want to create and manage as a unit. If one of the resources in a stack cannot be created successfully, the stack creation process will fail.

When a stack creation fails in OpsWorks, the service will roll back any changes that have been made and delete any resources that have been created. This is known as a “rollback on failure” behavior.

If you want to create a stack in OpsWorks and one of the resources cannot be created successfully, you will need to identify and resolve the issue that caused the failure. You can use the OpsWorks console, the AWS CLI, or the AWS API to view the stack’s event history and log files, which can provide more information about the failure. You can then use this information to troubleshoot the issue and resolve the problem.

It’s important to note that not all resource creation failures can be resolved automatically. In some cases, you may need to manually delete the failed resource and recreate it, or you may need to modify the stack’s configuration to resolve the issue. You can review the OpsWorks documentation for more information on troubleshooting stack creation failures.
