AWS Interview Questions
1. Define and explain the three basic types of cloud services and the AWS products that are built on them.
Cloud services can be classified into three main categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
- Infrastructure as a Service (IaaS): IaaS provides access to computing resources such as virtual machines, storage, and networking. These resources can be used to build and run applications and services. In IaaS, the infrastructure is managed by the cloud provider, and the user is responsible for managing the applications and services that run on top of it. Examples of AWS IaaS products include Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon Virtual Private Cloud (VPC).
- Platform as a Service (PaaS): PaaS provides a platform for building, deploying, and managing applications and services. It includes tools and services for developing, testing, and deploying applications, as well as infrastructure resources such as servers, storage, and networking. In PaaS, the infrastructure and platform are managed by the cloud provider, and the user is responsible for developing and managing the applications and services that run on top of it. Examples of AWS PaaS products include Amazon Elastic Beanstalk, Amazon AppStream, and AWS CodePipeline.
- Software as a Service (SaaS): SaaS provides access to software applications that are delivered over the internet and can be used on any device with an internet connection. In SaaS, the software, infrastructure, and platform are all managed by the cloud provider, and the user is responsible for using the software. Examples of AWS SaaS products include Amazon WorkSpaces, Amazon Chime, and Amazon Connect.
It’s worth noting that these categories are not always clear-cut, and some offerings span more than one. For example, Amazon EC2 is primarily an IaaS service, but services built on top of it, such as AWS Elastic Beanstalk, layer PaaS capabilities like automated deployment and management over the same infrastructure.
2. What is the relation between the Availability Zone and Region?
In Amazon Web Services (AWS), a region is a geographical area that consists of multiple availability zones. An availability zone consists of one or more physically separate data centers within a region, and is designed to be isolated from failures in other availability zones. Each availability zone has its own power, networking, and cooling, and is connected to the other availability zones in the region through low-latency links.
The main purpose of availability zones is to provide high availability and fault tolerance for cloud applications. By distributing resources across multiple availability zones, you can build applications that are resilient to failures in individual data centers. If one availability zone experiences an outage, the other availability zones can continue to operate, ensuring that your applications remain available to users.
You can choose to launch resources in a specific availability zone, or you can use the AWS Management Console or API to specify that resources should be distributed evenly across multiple availability zones. This can help to ensure that your applications are resilient to failures and have the capacity to handle unexpected increases in traffic.
Overall, the relationship between availability zones and regions is that availability zones are components of regions, used to provide high availability and fault tolerance for cloud applications. By using multiple availability zones within a region, you can build applications that continue to operate even if one or more availability zones experience an outage.
3. What is auto-scaling?
Auto Scaling is a service that enables you to automatically scale your Amazon Elastic Compute Cloud (EC2) or Amazon Elastic Container Service (ECS) resources based on demand. With Auto Scaling, you can set rules for when to scale up or down the number of resources in your application, and the service will automatically adjust the number of resources to meet those rules.
Auto Scaling can help you to optimize the performance and cost of your applications by automatically scaling resources up or down based on demand. For example, you can use Auto Scaling to ensure that your application has enough capacity to handle increased traffic during peak periods, and to scale down resources when traffic is low to save on costs.
Auto Scaling works by creating and managing a group of EC2 instances or ECS tasks, called an Auto Scaling group. You can specify the minimum and maximum number of resources that should be in the group, and set rules for when to scale up or down based on metrics such as CPU utilization or network traffic. Auto Scaling will then automatically add or remove resources from the group to ensure that the number of resources meets your defined rules.
Overall, Auto Scaling is a useful service for managing the capacity and performance of your AWS applications, and can help you to optimize the cost of your resources by scaling them up and down based on demand.
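As a rough sketch of how this looks in practice with the AWS CLI (the group, launch template, and subnet IDs below are hypothetical), you can create an Auto Scaling group and attach a target-tracking policy that holds average CPU near 50 percent:
# Create a group that keeps between 1 and 5 instances running
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg \
  --launch-template LaunchTemplateName=my-template \
  --min-size 1 --max-size 5 --vpc-zone-identifier "subnet-0abc123,subnet-0def456"
# Scale out and in automatically to hold average CPU utilization near 50%
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg \
  --policy-name cpu-target-50 --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'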
4. Your marketing work requires you to push messages to Google, Facebook, Windows, and Apple through APIs or the AWS Management Console. Which of the following services would you use?
1. AWS CloudTrail
2. AWS Config
3. Amazon Chime
4. AWS Simple Notification Service
If you need to push messages onto Google, Facebook, Windows, and Apple as part of your marketing work, you would likely use AWS Simple Notification Service (SNS). AWS SNS is a fully managed messaging service that enables you to send messages to various devices or services through APIs or the AWS Management Console. It supports a wide range of messaging protocols and delivery methods, including SMS, email, and push notifications to mobile devices.
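As a minimal sketch (the topic ARN is hypothetical), publishing a message to an SNS topic from the AWS CLI delivers it to every endpoint subscribed to the topic, including registered mobile push platforms:
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:123456789012:marketing-push \
  --message "Flash sale starts Friday!"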
5. What is geo-targeting in CloudFront?
Geo-targeting in Amazon CloudFront is a feature that allows you to customize the content that is delivered to users based on their geographic location.
When a user requests content, CloudFront determines the user’s country from the IP address of the request and can pass this information to your origin in the CloudFront-Viewer-Country header. Your origin can then respond with content tailored to that location, and CloudFront caches and serves each variant from the edge location that is closest to the user.
Geo-targeting can be useful for a variety of purposes, such as showing localized language, pricing, or promotions to users in different parts of the world, while still benefiting from CloudFront’s edge caching.
To use geo-targeting in CloudFront, configure your distribution to forward the CloudFront-Viewer-Country header to your origin, which you can do through the CloudFront console or the CloudFront API. CloudFront also offers a separate geo restriction feature if you need to allow or block access to content on a per-country basis.
6. What are the steps involved in a CloudFormation Solution?
Here are the steps involved in using AWS CloudFormation to create a solution:
- Define the AWS resources that you want to create in a CloudFormation template. The template is a JSON or YAML document that describes the AWS resources and their properties.
- Validate the CloudFormation template to ensure that it is syntactically correct and complies with the required standards.
- Create a new CloudFormation stack or update an existing one by uploading the template and providing any required input parameters.
- Wait for the CloudFormation stack to be created or updated. CloudFormation will create or update the resources in the specified order and roll back any changes if an error occurs.
- Monitor the progress of the CloudFormation stack using the AWS Management Console, the AWS CLI, or the CloudFormation APIs.
- Once the CloudFormation stack has been successfully created or updated, you can start using the AWS resources that it created.
- If you want to make changes to the AWS resources in the stack, you can update the stack by modifying the CloudFormation template and re-running the update command. CloudFormation will update the resources in the stack according to the changes that you specify.
- When you no longer need the resources in the CloudFormation stack, you can delete the stack to release the resources.
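A minimal sketch of this workflow with the AWS CLI (the stack name is hypothetical, and the template declares a single S3 bucket):
# Write a minimal template describing one resource
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
EOF
# Validate the template syntax
aws cloudformation validate-template --template-body file://template.yaml
# Create the stack (or update it if it already exists)
aws cloudformation deploy --template-file template.yaml --stack-name demo-stack
# Tear down the stack and its resources when finished
aws cloudformation delete-stack --stack-name demo-stack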
7. How do you upgrade or downgrade a system with near-zero downtime?
There are several approaches that can be used to upgrade or downgrade a system with near-zero downtime, depending on the specific needs and constraints of the system in question. Here are a few general strategies that may be useful:
- Use a load balancer: If your system has multiple servers, you can use a load balancer to distribute incoming traffic across the servers. This allows you to take one server offline at a time for upgrading or downgrading, while continuing to serve requests from the other servers.
- Use blue-green deployment: This approach involves deploying a new version of the system alongside the existing version, then switching traffic over to the new version when it is ready. This allows you to test the new version before it goes live, and also provides a rollback mechanism if necessary.
- Use canary deployment: Similar to blue-green deployment, this approach involves deploying a new version of the system to a small subset of users, and gradually rolling it out to more users as it proves stable. Problems surface early, while only a small fraction of traffic is affected.
- Use parallel running: This approach involves running both the old and new versions of the system in parallel, and using a feature flag or some other mechanism to toggle between the two versions. The flag makes switching back to the old version nearly instantaneous if the new version misbehaves.
It is important to carefully plan and test any system upgrade or downgrade to ensure that it is successful and does not cause any downtime or other disruptions.
8. What are the tools and techniques that you can use in AWS to identify if you are paying more than you should be, and how to correct it?
There are several tools and techniques that you can use in Amazon Web Services (AWS) to identify if you are paying more than you should be, and to correct it if necessary. Here are a few options:
- AWS Cost Explorer: This is a tool that allows you to visualize and analyze your AWS costs and usage over time. It provides a range of graphs and charts that can help you identify trends and patterns in your spending, and you can use the tool to set up custom alerts and notifications to help you stay on top of your costs.
- AWS Budgets: This is a tool that allows you to set custom budget thresholds for your AWS costs, and to receive notifications when your spending approaches or exceeds those thresholds. You can use this tool to get a better understanding of your costs and to make sure that you’re not overspending.
- AWS Trusted Advisor: This is a tool that provides best practice recommendations for optimizing your AWS resources. It can help you identify opportunities to save money by identifying idle resources or underutilized services, and it can also help you optimize your resources for performance and reliability.
- AWS Cost and Usage Report: This is a detailed report that provides information about your AWS costs and usage at a granular level. You can use this report to identify specific resources or services that are contributing to your costs, and to take corrective action if necessary.
- AWS Cost Optimization Best Practices: AWS provides a range of best practices and guidelines for optimizing your costs in the cloud. You can use these best practices to make sure that you are using your resources efficiently and to identify opportunities to save money.
By using these tools and techniques, you can get a better understanding of your AWS costs and take steps to optimize your spending.
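For example, Cost Explorer data is also available programmatically; this sketch (the dates are placeholders) breaks down one month’s spend by service from the AWS CLI:
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-02-01 \
  --granularity MONTHLY --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE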
9. Is there any other alternative tool to log into the cloud environment other than the console?
Yes, there are several alternatives to the console for logging into a cloud environment:
- Command-line interfaces (CLIs): Many cloud providers offer command-line tools that allow you to interact with the cloud environment using a terminal or shell. These tools often provide additional functionality and customization options compared to the console.
- Application programming interfaces (APIs): Cloud environments often provide APIs that allow you to programmatically access and manipulate resources in the environment. This can be useful for automating tasks or integrating cloud resources with other systems.
- Third-party tools: There are also a variety of third-party tools available that provide additional functionality for interacting with cloud environments. These tools may offer features such as centralized management, automation, or visualization of resources.
Ultimately, the best tool for logging into a cloud environment will depend on your specific needs and requirements. It’s a good idea to consider the features and capabilities of each option before deciding which one to use.
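To illustrate the CLI route with AWS specifically, the AWS CLI is configured once with credentials and can then call any service:
aws configure   # prompts for access key, secret key, default region, and output format
aws s3 ls       # list your S3 buckets
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'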
10. What services can be used to create a centralized logging solution?
There are several AWS services that can be used to create a centralized logging solution in AWS. Some common options include:
- Amazon CloudWatch Logs: CloudWatch Logs is a service that enables you to monitor, store, and access your logs from AWS resources, such as Amazon EC2 instances, AWS Lambda functions, and AWS CloudTrail. CloudWatch Logs can collect logs from different sources and store them in a centralized location, making it easy to search, analyze, and visualize log data.
- Amazon Elasticsearch Service (now Amazon OpenSearch Service): Elasticsearch is a search and analytics engine that enables you to store, search, and analyze large volumes of data quickly and in near real-time. You can use it to index and search your logs, and use Kibana, a visual analysis tool, to visualize and analyze your log data.
- Amazon S3: S3 is a highly durable and scalable object storage service that can be used to store and access your logs. You can configure your logs to be delivered to an S3 bucket, where they can be accessed and analyzed using tools like Amazon Athena, a serverless query service that enables you to analyze data in S3 using SQL.
- AWS Lambda: Lambda is a serverless computing service that enables you to run code in response to events or triggers, such as changes to a file in S3 or the creation of a new log group in CloudWatch Logs. You can use Lambda to process and transform your logs in real-time, and send them to a centralized log storage location like Elasticsearch or S3.
- AWS Glue: Glue is a fully managed ETL service that makes it easy to extract, transform, and load data from various sources. You can use Glue to extract data from your logs, transform it into a desired format, and load it into a central data store like S3 or Elasticsearch.
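As a sketch of the CloudWatch Logs option (the log group, filter, and function names are hypothetical, and the Lambda function must separately grant CloudWatch Logs permission to invoke it), a subscription filter can stream every log event to a central processor:
aws logs create-log-group --log-group-name /app/web
# Stream all events from this group to a Lambda function for central processing
aws logs put-subscription-filter --log-group-name /app/web \
  --filter-name ship-to-central --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:log-shipper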
11. What are the native AWS Security logging capabilities?
Most AWS services have their own logging options, and some capabilities operate at the account level, such as AWS CloudTrail and AWS Config. Let’s take a look at two services in particular:
AWS CloudTrail
This is a service that provides a history of the AWS API calls made in every account. It lets you perform security analysis, resource change tracking, and compliance auditing of your AWS environment. It can also be configured to send notifications via AWS SNS when new log files are delivered.
AWS Config
This helps you understand the configuration changes that happen in your environment. This service provides an AWS inventory that includes configuration history, configuration change notifications, and relationships between AWS resources. It can also be configured to send information via AWS SNS when new logs are delivered.
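For example (the trail and bucket names are hypothetical), a multi-region trail that delivers API-call logs to an S3 bucket can be set up from the AWS CLI:
aws cloudtrail create-trail --name account-audit \
  --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail
aws cloudtrail start-logging --name account-audit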
12. What is a DDoS attack, and what services can minimize them?
A Distributed Denial of Service (DDoS) attack is a type of cyber attack that aims to make a website or online service unavailable by overwhelming it with traffic from multiple sources. DDoS attacks can cause significant disruptions to online services and can be difficult to mitigate, as they typically involve a large number of compromised devices or servers (known as “bots”) that are coordinated to generate traffic simultaneously.
There are several AWS services that can help minimize the impact of DDoS attacks:
- Amazon CloudFront: CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic content to users around the world. CloudFront can absorb a significant amount of traffic and can help mitigate the impact of DDoS attacks by distributing traffic across a global network of edge locations.
- Amazon Route 53: Route 53 is a domain name system (DNS) service that routes traffic to your website or online service. Route 53 can help mitigate DDoS attacks by routing traffic through an Anycast network, which distributes traffic across multiple locations and can absorb high levels of traffic.
- AWS Shield: AWS Shield is a managed DDoS protection service that helps protect your applications running on AWS against DDoS attacks. AWS Shield includes two tiers of protection: Standard, which is included with all AWS accounts at no additional cost, and Advanced, which provides additional features and support.
- Amazon Elastic Load Balancer (ELB): ELB is a load balancing service that distributes incoming traffic across multiple targets, such as EC2 instances or containers. ELB can help mitigate DDoS attacks by distributing traffic across multiple targets and automatically scaling to absorb spikes in traffic.
- Amazon VPC: Amazon Virtual Private Cloud (VPC) is a networking service that enables you to create a logically isolated section of the AWS Cloud in which you can launch AWS resources. You can use VPC to segment your resources and create a defense in depth strategy to help protect against DDoS attacks.
13. You are trying to provide a service in a particular region, but you do not see the service in that region. Why is this happening, and how do you fix it?
Not all AWS services are available in all regions. When Amazon launches a new service, it is not immediately published in every region; availability starts small and gradually expands to other regions. So, if you don’t see a specific service in your region, chances are the service hasn’t been rolled out to your region yet. If you need a service that is not available in your region, you can switch to the nearest region that provides it.
14. What are the different types of virtualization in AWS, and what are the differences between them?
Amazon Web Services (AWS) offers compute services at several different levels of virtualization and abstraction:
- Elastic Compute Cloud (EC2) instances: EC2 instances are virtual servers that can be used to host applications and workloads in the cloud. They are powered by a variety of processors and can be customized to meet different performance, memory, and storage requirements.
- Amazon Elastic Container Service (ECS): ECS is a container orchestration service that allows you to deploy and manage Docker containers on EC2 instances. It provides features such as scheduling, scaling, and monitoring to make it easier to run containerized applications in the cloud.
- Amazon Elastic Kubernetes Service (EKS): EKS is a managed service that allows you to run Kubernetes clusters in the cloud. It provides features such as automatic scaling and patching to make it easier to deploy and manage containerized applications in a Kubernetes environment.
- AWS Lambda: AWS Lambda is a serverless computing platform that allows you to run code in response to events or triggers without having to provision or manage servers. It can be used to build scalable, event-driven applications or to execute code in response to specific events such as changes to data in a database or the arrival of new data in a stream.
There are also several other virtualization-related options available in AWS, including Amazon Virtual Private Cloud (VPC), Amazon Elastic Block Store (EBS), and Amazon Elastic File System (EFS). At the hypervisor level, EC2 AMIs themselves use one of two virtualization types, hardware virtual machine (HVM) or paravirtual (PV), with HVM recommended for current-generation instance types. Each of these options provides different capabilities and is suited to different use cases.
15. Name some of the AWS services that are not region-specific
There are several Amazon Web Services (AWS) services that are not region-specific, meaning that they are not tied to a specific region and can be accessed from any region. Here are a few examples:
- AWS Identity and Access Management (IAM): IAM is a service that enables you to manage access to AWS resources. It is a global service: users, groups, roles, and policies are not tied to a region and apply to resources in any region.
- Amazon Route 53: Route 53 is a Domain Name System (DNS) service. Hosted zones and DNS records are global, and Route 53 can route traffic to endpoints in any region.
- Amazon CloudFront: CloudFront is a content delivery network (CDN) whose distributions are global and served from edge locations around the world, rather than from a single region.
- AWS Organizations: Organizations is a service for centrally managing and governing multiple AWS accounts. It operates at the account level rather than within any particular region.
Overall, these are just a few examples of AWS services that are not region-specific. Note that many commonly used services, including Amazon CloudWatch, Amazon SQS, and Amazon SNS, are region-scoped: their resources live in the region where you create them.
16. What are the differences between NAT Gateways and NAT Instances?
Amazon Web Services (AWS) NAT (Network Address Translation) gateways and NAT instances are two options for enabling outbound Internet connectivity for Amazon Virtual Private Cloud (VPC) resources. Both NAT gateways and NAT instances allow VPC resources to access the Internet, but they differ in how they are implemented and managed.
Here are some key differences between NAT gateways and NAT instances:
- Implementation: NAT gateways are managed services provided by AWS, while NAT instances are EC2 instances that are configured to perform NAT. NAT gateways are easier to set up and manage, as they are fully managed by AWS and do not require any additional configuration or maintenance. NAT instances, on the other hand, require you to launch, configure, and manage an EC2 instance, which can be more time-consuming.
- Scalability: NAT gateways are highly available and scalable, and can automatically scale up or down to meet the needs of your VPC. NAT instances, on the other hand, have a fixed capacity and cannot scale automatically. If you need to increase the capacity of a NAT instance, you will need to launch a new instance and reconfigure your VPC to use it.
- Performance: NAT gateways are optimized for high performance and can handle a large amount of traffic. NAT instances, on the other hand, may not be able to handle as much traffic, and may require additional tuning and optimization to achieve good performance.
Overall, NAT gateways and NAT instances are two options for enabling outbound Internet connectivity for VPC resources. NAT gateways are managed services that are easier to set up and manage, and are optimized for high performance and scalability. NAT instances are EC2 instances that you must launch and manage yourself, and may require additional configuration and optimization to achieve good performance. The choice between NAT gateways and NAT instances will depend on your specific needs and requirements.
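A rough sketch of the NAT gateway setup with the AWS CLI (all resource IDs here are hypothetical): allocate an Elastic IP, create the gateway in a public subnet, and point the private subnet’s default route at it:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0pub123 --allocation-id eipalloc-0abc123
# Route the private subnet's Internet-bound traffic through the NAT gateway
aws ec2 create-route --route-table-id rtb-0priv456 \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0789abc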
17. What is CloudWatch?
Amazon CloudWatch is a monitoring service offered by Amazon Web Services (AWS) that enables you to collect and track metrics, set alarms, and view logs. It is a fully managed service that provides real-time monitoring of your AWS resources, including Amazon Elastic Compute Cloud (EC2) instances, Amazon Relational Database Service (RDS) databases, and Amazon Simple Queue Service (SQS) queues.
CloudWatch can help you to monitor the performance and availability of your resources, and to identify and troubleshoot issues as they arise. It provides a variety of metrics and logs that you can use to monitor your resources, and enables you to set alarms that will trigger when certain conditions are met. For example, you can use CloudWatch to set an alarm that will notify you when the CPU utilization of an EC2 instance exceeds a certain threshold.
CloudWatch is a flexible and powerful service that can be used in a variety of scenarios, including:
- Monitoring the performance of your applications: You can use CloudWatch to collect metrics about your applications, such as request latency, error rates, and throughput, and to set alarms based on those metrics.
- Identifying and troubleshooting issues: You can use CloudWatch to view logs and trace the root cause of issues, such as errors or performance bottlenecks.
- Optimizing cost and performance: You can use CloudWatch to monitor the utilization of your resources and to scale them up or down as needed to optimize cost and performance.
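For example (the instance ID and SNS topic ARN are hypothetical), the CPU alarm described above can be created from the AWS CLI:
aws cloudwatch put-metric-alarm --alarm-name cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0abc1234567890def \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts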
18. What is an Elastic Transcoder?
Amazon Elastic Transcoder is a media transcoding service offered by Amazon Web Services (AWS). It enables you to convert audio and video files from one format to another, and to create versions of your media files in different resolutions, bitrates, and formats. (For new workloads, AWS now recommends its successor, AWS Elemental MediaConvert.)
Elastic Transcoder is a fully managed service that can handle a wide range of media formats and codecs, and can transcode your media files quickly and efficiently. It is designed to be easy to use and to integrate with other AWS services, such as Amazon Simple Storage Service (S3) and Amazon Simple Queue Service (SQS).
Some common use cases for Elastic Transcoder include:
- Creating versions of your media files in different formats and resolutions: You can use Elastic Transcoder to create multiple versions of your media files in different formats, such as MP4, WebM, or HLS, and in different resolutions, such as 1080p, 720p, or 480p. This can help you to deliver your media files to a wide range of devices and platforms.
- Optimizing the performance of your media files: You can use Elastic Transcoder to adjust the bitrate of your media files to optimize their performance. For example, you can create versions of your media files with lower bitrates for slower internet connections, or with higher bitrates for higher quality playback.
- Automating the transcoding process: You can use Elastic Transcoder in conjunction with other AWS services, such as S3 and SQS, to automate the transcoding process. For example, you can set up a pipeline that automatically transcodes your media files when they are uploaded to S3, and sends the transcoded files to a destination bucket or queue.
19. What is Amazon EC2?
EC2 is short for Elastic Compute Cloud, and it provides scalable computing capacity. Using Amazon EC2 eliminates the need to invest in hardware, leading to faster development and deployment of applications. You can use Amazon EC2 to launch as many or as few virtual servers as needed, configure security and networking, and manage storage. It can scale up or down to handle changes in requirements, reducing the need to forecast traffic. EC2 provides virtual computing environments called “instances.”
20. What Are Some of the Security Best Practices for Amazon EC2?
Amazon Elastic Compute Cloud (EC2) is a cloud computing service that enables you to launch and manage virtual machines (instances) in the cloud. Here are a few security best practices that you can follow to help ensure the security of your EC2 instances:
- Use strong passwords and enable multifactor authentication: Use strong, unique passwords for your EC2 instances and enable multifactor authentication (MFA) to add an extra layer of security.
- Enable security groups: Use security groups to control inbound and outbound traffic to your EC2 instances. Configure your security groups to allow only the traffic that is required for your applications, and block all other traffic.
- Enable network access control lists: Use network access control lists (ACLs) to control inbound and outbound traffic at the subnet level. Network ACLs can help to prevent unauthorized access to your instances and protect against network-based attacks.
- Enable Amazon GuardDuty: Amazon GuardDuty is a threat detection service that uses machine learning to identify and alert you to potential security threats. Enabling GuardDuty can help you to identify and respond to security threats in a timely manner.
- Use encryption: Use encryption to protect the data stored on your EC2 instances. You can use tools such as Amazon Elastic Block Store (EBS) encryption and Amazon Simple Storage Service (S3) encryption to encrypt your data at rest.
- Use identity and access management (IAM): Use IAM to manage and control access to your EC2 instances. Set up IAM policies that grant least-privilege access, so that each user or role can perform only the actions it actually needs.
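To make the security-group point concrete (the group ID and CIDR range are hypothetical), inbound SSH can be limited to a known network range from the AWS CLI:
# Allow SSH only from the corporate network; all other inbound traffic stays blocked
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24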
21. Can S3 Be Used with EC2 Instances, and If Yes, How?
Yes, Amazon Simple Storage Service (S3) can be used with Amazon Elastic Compute Cloud (EC2) instances. S3 is a cloud storage service that enables you to store and retrieve data from anywhere on the web, while EC2 is a cloud computing service that enables you to launch and manage virtual machines (instances) in the cloud.
There are several ways that S3 can be used with EC2 instances:
- Data storage: S3 can be used to store data that is used by EC2 instances. For example, you can use S3 to store data such as application files, configuration files, or media files that are used by your EC2 instances.
- Data backup: You can use S3 to back up the data on your EC2 instances. This can help to protect your data in case of a disaster, and can also help you to recover from data loss or corruption.
- Data transfer: You can use S3 to transfer data between EC2 instances in different regions or availability zones. This can be useful if you need to move large amounts of data between instances, or if you need to transfer data between instances that are in different parts of the world.
- AMI storage: For instance store-backed AMIs, S3 stores the bundled root device image. This can be useful if you need to launch multiple instances with the same configuration, or if you want to create a custom AMI (Amazon Machine Image) that is based on an existing instance.
Overall, S3 is a useful service that can be used in conjunction with EC2 instances to store, back up, and transfer data, as well as to store the root device volume of an EC2 instance
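For example, from an EC2 instance whose IAM role grants S3 access, files can be copied without embedding credentials (the bucket name is hypothetical):
aws s3 cp /var/log/app.log s3://my-app-bucket/logs/app.log   # upload a single file
aws s3 sync /var/www s3://my-app-bucket/backup/              # mirror a whole directory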
22. What is the difference between stopping and terminating an EC2 instance?
In Amazon Elastic Compute Cloud (EC2), there are two options for shutting down an instance: stopping and terminating. The main difference between these options is how they affect the instance and its associated resources.
Stopping an EC2 instance: When you stop an EC2 instance, the instance is shut down, and the instance’s Amazon Elastic Block Store (EBS) volumes are preserved. You will not be charged for instance usage while the instance is stopped, but you will continue to be charged for the storage of the EBS volumes and any other resources associated with the instance. You can start the instance again later by using the StartInstances API action or the AWS Management Console.
Terminating an EC2 instance: When you terminate an EC2 instance, the instance is shut down and any EBS volumes whose DeleteOnTermination flag is set (the default for the root volume) are deleted. You will not be charged for instance usage after the instance is terminated. An Elastic IP address associated with the instance is disassociated (but remains allocated, and chargeable, until you release it), while resources such as security groups exist independently of the instance and are not deleted. You cannot start the instance again after it has been terminated.
Overall, the main difference between stopping and terminating an EC2 instance is how they affect the instance and its associated resources. Stopping an instance preserves the instance and its EBS volumes, while terminating an instance deletes the instance and its EBS volumes. The choice between stopping and terminating an instance will depend on your specific needs and requirements.
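The two operations map to separate CLI commands (the instance ID is hypothetical):
aws ec2 stop-instances --instance-ids i-0abc1234567890def        # EBS volumes are preserved
aws ec2 start-instances --instance-ids i-0abc1234567890def       # resume the stopped instance
aws ec2 terminate-instances --instance-ids i-0abc1234567890def   # permanent; cannot be restarted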
23. What are the different types of EC2 instances based on their costs?
Amazon Elastic Compute Cloud (EC2) instances are available in several different pricing models, which can be grouped into the following categories:
- On-Demand: On-demand instances allow you to pay for compute capacity by the hour with no long-term commitments. This is a good option if you need flexible, short-term computing capacity or if you’re not sure how much you’ll use.
- Reserved: Reserved instances provide a discount on the hourly usage charge in exchange for a one-time upfront payment and a commitment to use the instances for a one- or three-year term. This is a good option if you have steady-state or predictable workloads and can commit to using a specific number of instances.
- Spot: Spot instances let you use spare Amazon EC2 capacity at a steep discount. Your instances run as long as capacity is available and the Spot price stays below the maximum price you are willing to pay, and they can be interrupted (with a two-minute warning) when AWS needs the capacity back. This is a good option if you have flexible, fault-tolerant workloads that can tolerate interruptions.
- Dedicated Hosts: Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. They allow you to use your own license and meet compliance requirements that may not be possible with other instance types. This is a good option if you need complete control over the physical resources that your instances are running on, or if you need to use specific licenses that are not compatible with other instance types.
The specific instance types and pricing options available will depend on the region and availability zone you choose. You can use the AWS Pricing Calculator to get an estimate of the costs for different instance types and pricing options.
24. How do you set up SSH agent forwarding so that you do not have to copy the key every time you log in?
SSH agent forwarding allows you to use your local SSH keys to authenticate to remote servers without having to copy the keys to the remote servers. This can be useful if you need to log in to multiple servers and don’t want to manage multiple copies of your keys.
To set up SSH agent forwarding, you will need to follow these steps:
- Generate an SSH key pair on your local machine if you don’t already have one. This can be done with the following command:
ssh-keygen -t rsa
- Add your private key to the SSH agent:
ssh-add ~/.ssh/id_rsa
- Enable SSH agent forwarding in your SSH client. This can typically be done by adding the following line to your ~/.ssh/config file (or by passing the -A flag to ssh for a single connection):
ForwardAgent yes
- Connect to the remote server using your SSH client, and specify your username and the IP address or hostname of the server:
ssh username@server_ip_or_hostname
- When you connect to the remote server, your local SSH keys will be forwarded to the server and used for authentication. You will not need to copy your keys to the server or enter a password to log in.
It’s important to note that SSH agent forwarding can be a security risk if the remote server is compromised, as the attacker could potentially use the forwarded keys to access other servers. As such, it’s a good idea to use SSH agent forwarding only when necessary and to be mindful of the security implications.
25. What are Solaris and AIX operating systems? Are they available with AWS?
Solaris is an operating system built primarily around the SPARC processor architecture, which is not currently supported in the public cloud.
AIX is an operating system that runs only on IBM Power processors and not on Intel hardware, which means that you cannot create AIX instances in EC2.
Since both operating systems have their limitations, they are not currently available with AWS.
26. How do you configure CloudWatch to recover an EC2 instance?
Amazon CloudWatch is a monitoring service that allows you to set alarms and take automated actions when certain thresholds are breached. You can use CloudWatch to recover an Amazon Elastic Compute Cloud (EC2) instance if it fails or becomes unavailable.
To configure CloudWatch to recover an EC2 instance, follow these steps:
- Open the CloudWatch console and navigate to the Alarms section.
- Click the “Create Alarm” button.
- Select the “EC2 Status Check Failed” metric, and choose the specific instance that you want to monitor.
- Set the “Whenever” dropdown to “>= 1”. This will trigger the alarm whenever the status check fails once or more times.
- Under the “Actions” section, select the “Recover this instance” action.
- Give the alarm a name and description, and click the “Create Alarm” button to save it.
Once the alarm is configured, CloudWatch will automatically recover the EC2 instance if it fails a status check. You can also configure the alarm to send notifications or take other actions if desired.
It’s important to note that CloudWatch can only recover an EC2 instance if the issue causing the failure can be resolved automatically. If the issue requires manual intervention, CloudWatch will not be able to recover the instance.
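The same alarm can be created from the CLI. The recover action uses the special ARN arn:aws:automate:<region>:ec2:recover, and the alarm watches the StatusCheckFailed_System metric (the instance ID below is hypothetical):
aws cloudwatch put-metric-alarm --alarm-name recover-web-1 \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0abc1234567890def \
  --statistic Maximum --period 60 --evaluation-periods 2 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover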
27. What are the common types of AMI designs?
An Amazon Machine Image (AMI) is a pre-configured virtual machine image that can be used to launch Amazon Elastic Compute Cloud (EC2) instances. There are several common types of AMI designs:
- Operating system (OS) AMIs: These AMIs contain a base operating system, such as Linux or Windows, and can be used to launch EC2 instances with a specific OS.
- Application AMIs: These AMIs contain a specific application or set of applications, such as a web server or database, and can be used to launch EC2 instances that are pre-configured with the application(s).
- Custom AMIs: These AMIs are created by users and can contain a custom configuration or set of applications. They can be created from an existing EC2 instance or from an imported image.
- Shared AMIs: These AMIs are created by other users and shared with the public or specific AWS accounts. They can be used to launch EC2 instances with the same configuration as the original AMI.
- AWS Marketplace AMIs: These AMIs are offered by third-party vendors through the AWS Marketplace and can be used to launch EC2 instances with a specific application or set of applications.
Which type of AMI you choose will depend on your specific needs and requirements. It’s a good idea to consider the features and capabilities of each type before deciding which one to use.
28. What are Key-Pairs in AWS?
In Amazon Web Services (AWS), a key pair is a combination of a private key and a public key that is used to securely SSH into an Amazon Elastic Compute Cloud (EC2) instance. The private key is stored on your local machine and is used to authenticate your connection to the EC2 instance. The public key is stored on the EC2 instance and is used to verify the authenticity of the private key.
To create a key pair, you can use the AWS Management Console or the AWS command-line interface (CLI). Once the key pair is created, you can use it to launch an EC2 instance and securely connect to it using an SSH client.
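For example (the key name is hypothetical), a key pair can be created from the CLI and the private key saved locally:
aws ec2 create-key-pair --key-name my-key \
  --query 'KeyMaterial' --output text > my-key.pem
chmod 400 my-key.pem   # restrict permissions so the SSH client will accept the key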
It’s important to protect your private key, as it is the only way to authenticate to your EC2 instance. If you lose your private key, you will not be able to access your EC2 instance and will need to create a new key pair.
Key pairs are just one of the options available for authenticating to an EC2 instance. Other options include using a password or an AWS Identity and Access Management (IAM) role.
29. What is Amazon S3?
S3 is short for Simple Storage Service, and Amazon S3 is the most widely supported storage platform available. S3 is object storage that can store and retrieve any amount of data from anywhere. It is practically unlimited as well as cost-effective because it is storage available on demand. In addition to these benefits, it offers unprecedented levels of durability and availability. Amazon S3 helps to manage data for cost optimization, access control, and compliance.
30. How can you recover/log in to an EC2 instance for which you have lost the key?
If you have lost the private key for an Amazon Elastic Compute Cloud (EC2) instance and are unable to log in, you will need to create a new key pair and associate it with the instance.
To recover an EC2 instance for which you have lost the key, follow these steps:
- Open the Amazon EC2 console and navigate to the Instances page.
- Select the instance and click the “Actions” button.
- From the dropdown menu, choose “Instance State” and then “Stop”, and wait for the instance to stop.
- Once the instance has stopped, click the “Actions” button again and select “Create Image”.
- In the “Create Image” dialog, give the image a name and description, and select the “No reboot” option.
- Click the “Create Image” button.
- Wait for the image to be created, then use it to launch a new EC2 instance.
- When launching the new instance, create a new key pair and specify it as the key pair for the instance.
- Connect to the new instance using the new key pair and an SSH client.
It’s important to note that this process will create a new EC2 instance and any data or changes made to the original instance will not be preserved. You will need to manually migrate any necessary data or configuration to the new instance.
31. What are some critical differences between AWS S3 and EBS?
Amazon Simple Storage Service (S3) and Amazon Elastic Block Store (EBS) are both storage services provided by Amazon Web Services (AWS), but they have some significant differences:
- Object storage vs. block storage: S3 is an object storage service, while EBS is a block storage service. This means that S3 stores data as individual objects with a unique identifier, while EBS stores data as blocks that can be attached to an Amazon Elastic Compute Cloud (EC2) instance and treated like a local hard drive.
- Scalability: S3 is designed for unlimited scalability and can store an unlimited amount of data. EBS is limited by the capacity of the attached EC2 instance and the number of EBS volumes that can be attached to the instance.
- Performance: EBS provides higher performance than S3, as it is designed for use with EC2 instances and can be optimized for different workloads. S3 is designed for general-purpose storage and may not provide the same level of performance as EBS.
- Cost: S3 and EBS have different pricing models. S3 is generally less expensive per gigabyte, making it the cheaper choice for large volumes of data, while EBS costs more per gigabyte but provides block-level performance.
- Use cases: S3 is well-suited for storing large amounts of data that is infrequently accessed, while EBS is more suitable for storing data that is frequently accessed and requires high performance, such as the operating system or application files of an EC2 instance.
Ultimately, the choice between S3 and EBS will depend on your specific storage needs and requirements. It’s a good idea to consider the differences between the two services and determine which one is the best fit for your use case.
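To make the block-storage side concrete (the IDs are hypothetical), an EBS volume is created in a specific Availability Zone and attached to an instance as a block device, something S3 cannot do:
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp3
aws ec2 attach-volume --volume-id vol-0abc123 \
  --instance-id i-0abc1234567890def --device /dev/xvdf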
32. How do you allow a user to gain access to a specific bucket?
To allow a user to gain access to a specific Amazon Simple Storage Service (S3) bucket, you will need to create an AWS Identity and Access Management (IAM) policy and attach it to the user or a group that the user belongs to.
Here are the steps to follow to allow a user access to an S3 bucket:
- Open the IAM console and navigate to the Users page.
- Select the user that you want to give access to the S3 bucket.
- In the “Permissions” tab, click the “Add permissions” button.
- In the “Set permissions” page, select the “Attach existing policies directly” option.
- From the list of policies, select the policy that allows access to the S3 bucket. If you don’t have a suitable policy, you can create one by clicking the “Create policy” button.
- Review the policy and click the “Add permissions” button to attach it to the user.
- Once the policy is attached to the user, they will have the permissions specified in the policy to access the S3 bucket.
It’s important to note that you will need to specify the specific bucket and the desired permissions in the IAM policy. For example, you may want to allow the user to list the contents of the bucket, download objects from the bucket, or upload objects to the bucket. You can use the AWS policy generator to help create a suitable policy.
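A minimal sketch of such a policy (the user and bucket names are hypothetical), attached inline from the CLI, granting list access on the bucket and read/write access on its objects:
aws iam put-user-policy --user-name alice --policy-name s3-reports-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {"Effect": "Allow", "Action": "s3:ListBucket",
       "Resource": "arn:aws:s3:::reports-bucket"},
      {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"],
       "Resource": "arn:aws:s3:::reports-bucket/*"}
    ]
  }'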
33. How can you monitor S3 cross-region replication to ensure consistency without actually checking the bucket?
Amazon Simple Storage Service (S3) cross-region replication is a feature that allows you to replicate objects in an S3 bucket to a different region. To monitor S3 cross-region replication and ensure consistency without actually checking the bucket, you can use Amazon CloudWatch Events and Amazon CloudWatch Alarms.
Here is a high-level overview of how this can be done:
- Enable cross-region replication for the S3 bucket: To do this, you will need to specify the destination region and any other necessary settings in the S3 bucket’s replication configuration.
- Set up CloudWatch Events to monitor S3 events: CloudWatch Events can be configured to send a notification whenever certain events occur in S3, such as when an object is created or deleted.
- Set up a CloudWatch Alarm to monitor the CloudWatch Events: The CloudWatch Alarm can be configured to trigger an action, such as sending an email or triggering an AWS Lambda function, if a specified threshold is breached.
- Configure the CloudWatch Alarm to trigger when replication events are not received: For example, you could configure the CloudWatch Alarm to trigger if no replication events are received within a certain time period. This would indicate that replication is not occurring as expected and could indicate a problem with the replication process.
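As a hedged sketch: if S3 Replication metrics (or Replication Time Control) are enabled on the replication rule, S3 publishes replication lag to CloudWatch, and an alarm can watch it directly (the bucket names, rule ID, and topic ARN below are assumptions):
aws cloudwatch put-metric-alarm --alarm-name s3-replication-lag \
  --namespace AWS/S3 --metric-name ReplicationLatency \
  --dimensions Name=SourceBucket,Value=my-source-bucket Name=DestinationBucket,Value=my-dest-bucket Name=RuleId,Value=rule-1 \
  --statistic Maximum --period 300 --evaluation-periods 1 \
  --threshold 900 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts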
34. What is SnowBall?
Snowball is a data migration service provided by Amazon Web Services (AWS). It is used to transfer large amounts of data into and out of the AWS Cloud, such as for data backups, disaster recovery, and data migration between regions or between on-premises data centers and the cloud.
Snowball comes in two main variants:
- Snowball: This is a physical device that is shipped to the customer, who loads data onto it using a special Snowball client application. The device can hold up to 50 TB of data. Once the data is transferred, the customer ships the device back to AWS, where the data is transferred into the customer’s Amazon S3 bucket or Amazon EBS volume.
- Snowball Edge: This is a variant of Snowball that includes additional computing power and storage capacity, making it possible to perform data processing tasks directly on the device before transferring the data to the cloud. Snowball Edge devices are available in 100 TB and 80 TB storage capacities.
Both Snowball and Snowball Edge are designed to be secure and easy to use, and can be used to transfer data over long distances with minimal network bandwidth requirements.
35. What are the Storage Classes available in Amazon S3?
Amazon Simple Storage Service (S3) provides a number of storage classes to help you store and manage your data in the most cost-effective way. The available storage classes are:
- Standard: This is the default storage class for new objects, and provides high durability, availability, and performance. It is suitable for a wide range of use cases, including primary storage, archival, and disaster recovery.
- Standard – Infrequent Access (Standard-IA): This storage class provides the same durability as Standard, with slightly lower availability, at a lower storage price plus a per-GB retrieval fee. It is ideal for storing data that is not accessed frequently, but requires rapid access when needed.
- One Zone – Infrequent Access (One Zone-IA): This storage class is similar to Standard-IA, but is stored in a single availability zone instead of multiple zones. It is slightly less expensive than Standard-IA, but is less durable since it is stored in a single zone and is not protected against data loss in the event of a disaster in that zone.
- Reduced Redundancy Storage (RRS): This legacy storage class provides lower durability than Standard at what was once a lower price point. It was intended for non-critical data that can be easily reproduced, such as thumbnails or transcoded media files, but AWS no longer recommends it for new workloads.
- Amazon S3 Intelligent-Tiering: This is a cost-effective storage class that automatically moves data to the most cost-effective storage tier based on usage patterns. It is suitable for storing data with unknown or changing access patterns.
- Amazon S3 Glacier: This is a secure, durable, and extremely low-cost storage class for data archival and long-term backup. It is suitable for storing data that is infrequently accessed, with retrieval times ranging from minutes to hours depending on the retrieval option.
- Amazon S3 Glacier Deep Archive: This is the lowest-cost storage class in Amazon S3, and is suitable for storing data that is rarely accessed and requires retrieval times of 12 hours or more. It is ideal for storing data that needs to be retained for compliance or regulatory purposes, or as a long-term digital archive.
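A storage class can be chosen per object at upload time (or changed later via a lifecycle rule); for example, with a hypothetical bucket name:
aws s3 cp report.csv s3://my-bucket/report.csv --storage-class STANDARD_IA
aws s3 cp old-logs.tar.gz s3://my-bucket/old-logs.tar.gz --storage-class DEEP_ARCHIVE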
36. What Is Amazon Virtual Private Cloud (VPC) and Why Is It Used?
Amazon VPC lets you provision a logically isolated section of the AWS Cloud in which you launch resources in a virtual network that you define. It is also the best way of connecting to your cloud resources from your own data center: once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address that can be accessed from your data center. That way, you can access your public cloud resources as if they were on your own private network.
37. VPC is not resolving the server through DNS. What might be the issue, and how can you fix it?
The most likely cause is that DNS resolution or DNS hostnames are disabled for the VPC. To fix this, enable both the enableDnsSupport and enableDnsHostnames attributes in the VPC settings, so that instances in the VPC receive DNS hostnames and can resolve them through the Amazon-provided DNS server.
38. How do you connect multiple sites to a VPC?
There are several ways to connect multiple sites to a Virtual Private Cloud (VPC) in Amazon Web Services (AWS). Some of the most common approaches include:
- VPN Connection: You can use an AWS Virtual Private Network (VPN) connection to establish a secure connection between your VPC and your on-premises or remote network. This allows you to access resources in your VPC over an encrypted tunnel, and can be used to connect multiple sites to the same VPC.
- Direct Connect: AWS Direct Connect is a dedicated network connection that allows you to establish a direct connection between your on-premises data centers and your VPC. This can be used to connect multiple sites to the same VPC, and provides higher bandwidth and lower latencies compared to VPN connections.
- AWS Transit Gateway: AWS Transit Gateway is a service that enables you to connect multiple VPCs and on-premises networks to a central hub. This allows you to easily manage connectivity between multiple sites and VPCs, and reduces the need for complex VPN configurations.
- AWS VPN CloudHub: If you have multiple remote sites, each with its own Site-to-Site VPN connection, AWS VPN CloudHub enables those sites to communicate with the VPC, and with one another, through the VPC’s virtual private gateway in a hub-and-spoke model. This is a simple, low-cost option when the sites do not need the dedicated bandwidth of Direct Connect.
39. Name and explain some security products and features available in VPC?
Here is a selection of security products and features:
Security groups – These act as a virtual firewall for EC2 instances, controlling inbound and outbound traffic at the instance level.
Network access control lists – These act as a firewall for subnets, controlling inbound and outbound traffic at the subnet level.
Flow logs – These capture the inbound and outbound traffic from the network interfaces in your VPC.
40. How do you monitor Amazon VPC?
There are several ways to monitor your Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). Some of the tools and services that you can use include:
- CloudWatch: CloudWatch is a monitoring service that provides metrics, logs, and alarms for various AWS resources, including resources running inside your VPC. You can use CloudWatch to monitor metrics such as NAT gateway traffic, VPN tunnel state, and the CPU and network utilization of your instances, and set alarms to be notified when certain thresholds are breached.
- VPC Flow Logs: VPC Flow Logs is a feature that allows you to capture information about the traffic flowing to and from your VPC. You can use Flow Logs to monitor network traffic patterns, troubleshoot connectivity issues, and ensure that your VPC is secure.
- Amazon Inspector: Amazon Inspector is a security assessment service that helps you identify vulnerabilities and unintended network exposure in the EC2 instances running inside your VPC. You can use Inspector to run automated security assessments and receive recommendations for improving your security posture.
- AWS Trusted Advisor: AWS Trusted Advisor is a service that provides best practice recommendations for various AWS resources, including VPCs. You can use Trusted Advisor to identify opportunities to optimize your VPC for cost and performance, and receive alerts when potential issues are detected.
- VPC Reachability Analyzer: Reachability Analyzer is a configuration analysis tool that lets you test connectivity between a source and a destination in your VPC. You can use it to verify that network paths are configured as intended and to troubleshoot connectivity issues without sending live traffic.
41. How many Subnets can you have per VPC?
By default, you can have up to 200 subnets per Amazon Virtual Private Cloud (VPC). This is a soft limit that can be raised by requesting a service quota increase.
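For example (the VPC ID is hypothetical), each subnet is carved out of the VPC’s CIDR range in a specific Availability Zone:
aws ec2 create-subnet --vpc-id vpc-0abc123 \
  --cidr-block 10.0.1.0/24 --availability-zone us-east-1a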
42. When Would You Prefer Provisioned IOPS over Standard Rds Storage?
You would choose Provisioned IOPS for I/O-intensive workloads, such as high-traffic OLTP databases, that need consistent, low-latency I/O performance. Provisioned IOPS delivers high and predictable I/O rates, but it is also expensive, so standard RDS storage is usually the better choice for smaller workloads or those with light, bursty I/O.
43. How Do Amazon Rds, Dynamodb, and Redshift Differ from Each Other?
Amazon RDS is a database management service for relational databases. It manages patching, upgrading, and data backups automatically, and it handles structured data only. DynamoDB, on the other hand, is a NoSQL database service for key-value and document data, i.e., semi-structured or unstructured data. Redshift is a data warehouse product used in data analysis.
44. What Are the Benefits of AWS’s Disaster Recovery?
Businesses use cloud computing in part to enable faster disaster recovery of critical IT systems without the cost of a second physical site. The AWS cloud supports many popular disaster recovery architectures ranging from small customer workload data center failures to environments that enable rapid failover at scale. With data centers all over the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data.
45. How can you add an existing instance to a new Auto Scaling group?
Here’s how you can add an existing instance to a new Auto Scaling group:
- Open EC2 console
- Select your instance under Instances
- Choose Actions -> Instance Settings -> Attach to Auto Scaling Group
- Select the Auto Scaling group you want to attach the instance to (or create a new one)
- Attach the instance to the group
- Edit the group’s settings if needed
- Once done, you have successfully added the instance to the new Auto Scaling group.
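The same operation is a single AWS CLI call (the instance and group names are hypothetical); note that the group’s maximum size must be large enough to accommodate the added instance:
aws autoscaling attach-instances --instance-ids i-0abc1234567890def \
  --auto-scaling-group-name my-asg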
46. What are the factors to consider while migrating to Amazon Web Services?
There are several factors that you should consider when planning a migration to Amazon Web Services (AWS). Some of the key considerations include:
- Cost: One of the primary considerations when migrating to AWS is the cost of the services you will be using. You should carefully evaluate the cost of different services and configurations to ensure that you are choosing the most cost-effective options for your needs.
- Compatibility: It is important to ensure that your applications and workloads are compatible with the services and infrastructure offered by AWS. This may require some changes to your applications or architectures, so it is important to assess the impact and feasibility of these changes upfront.
- Performance: The performance of your applications and workloads is critical, and it is important to ensure that they are running optimally in the AWS environment. You should consider factors such as network latency, compute performance, and storage performance when planning your migration.
- Security: Security is a top priority when migrating to AWS, and it is important to ensure that your data and applications are secure in the cloud. You should consider factors such as access controls, data encryption, and network security when planning your migration.
- Scalability: AWS is designed for scalability, and it is important to consider how your applications and workloads will scale in the cloud. You should consider factors such as resource allocation, auto-scaling, and capacity planning when planning your migration.
- Compliance: If your organization is subject to regulatory requirements or industry standards, it is important to ensure that your AWS environment is compliant. You should consider factors such as data sovereignty, data retention, and security controls when planning your migration.
47. What are RTO and RPO in AWS?
RTO, or Recovery Time Objective, is the maximum time your business or organization is willing to wait for a recovery to complete in the wake of an outage. RPO, or Recovery Point Objective, is the maximum amount of data loss your company is willing to accept, measured in time.
48. If you would like to transfer vast amounts of data, which is the best option among Snowball, Snowball Edge, and Snowmobile?
AWS Snowball is a data transport solution for moving high volumes of data into and out of a specified AWS region. AWS Snowball Edge adds computing functions on top of that data transport capability. Snowmobile is an exabyte-scale migration service that allows you to transfer up to 100 PB per Snowmobile, making it the best option for truly vast amounts of data.
49. What are the advantages of AWS IAM?
Amazon Web Services (AWS) Identity and Access Management (IAM) is a service that enables you to securely control access to AWS resources and services. Some of the key advantages of AWS IAM include:
- Centralized control: With IAM, you can centralize the management of access control for all of your AWS resources in a single place. This makes it easier to manage access for multiple users and groups, and ensures that access policies are consistent across your organization.
- Fine-grained permissions: IAM allows you to specify fine-grained permissions for users and resources, which enables you to control exactly what actions users can perform on specific resources. This helps to reduce the risk of unauthorized access or misuse of resources.
- Identity federation: IAM enables you to use external identity providers, such as Active Directory or SAML, to authenticate users and manage access to AWS resources. This allows you to leverage your existing identity infrastructure, and makes it easier to manage access for large numbers of users.
- Multi-factor authentication: IAM supports multi-factor authentication (MFA), which adds an extra layer of security to your accounts by requiring users to provide a second form of authentication, such as a one-time code from a mobile device or a security token.
- Auditing and compliance: IAM provides detailed audit logs that enable you to track and monitor access to AWS resources. This can help you ensure compliance with internal policies and regulatory requirements, and identify any potential security issues.
- Cost optimization: By using IAM to carefully manage access to resources, you can reduce the risk of unnecessary resource usage and overspending on AWS services. This can help you optimize your AWS costs and get the most value from your AWS investment.
50. Explain what T2 instances are?
T2 instances provide a moderate baseline level of CPU performance with the ability to burst to higher performance whenever the workload demands it.
T2 instances are low-cost, General Purpose instance types, and are typically used for workloads that do not use the CPU consistently or often.
51. Explain Connection Draining
Connection Draining is an Elastic Load Balancing feature that allows in-flight requests to complete on instances that are being decommissioned or updated.
With Connection Draining enabled, the Load Balancer stops sending new requests to a deregistering instance but lets it finish its existing requests within a set length of time. If Connection Draining is not enabled, a departing instance is removed immediately and all of its pending requests fail.
52. What is Power User Access in AWS?
An Administrator User (equivalent to the AWS resources owner) can create, modify, delete, and inspect resources, as well as grant permissions to other AWS users.
Power User Access is Administrator Access without the ability to manage users and permissions: a Power User can create, view, modify, and remove resources, but cannot grant permissions to other users.
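As a rough sketch of what such a policy can look like in JSON (the real AWS-managed PowerUserAccess policy is more detailed and also allows a few specific IAM read and service-linked-role actions), the NotAction element below allows every action except user and permission management:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": ["iam:*", "organizations:*", "account:*"],
      "Resource": "*"
    }
  ]
}
```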
AWS Questions for CloudFormation
53. How is AWS CloudFormation different from AWS Elastic Beanstalk?
Here are some differences between AWS CloudFormation and AWS Elastic Beanstalk:
AWS CloudFormation helps you provision and describe all of the infrastructure resources that are present in your cloud environment. On the other hand, AWS Elastic Beanstalk provides an environment that makes it easy to deploy and run applications in the cloud.
AWS CloudFormation supports the infrastructure needs of various types of applications, like legacy applications and existing enterprise applications. On the other hand, AWS Elastic Beanstalk is combined with developer tools to help you manage the lifecycle of your applications.
54. What are the elements of an AWS CloudFormation template?
AWS CloudFormation templates are YAML- or JSON-formatted text files that are composed of five essential elements:
- File format version
- Template parameters
- Data tables (mappings)
- Resources (the only section a template is required to have)
- Output values
A minimal example is shown below.
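For instance, here is a minimal, illustrative JSON template with one parameter, one resource, and one output (the logical names and the bucket are hypothetical):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal illustrative template",
  "Parameters": {
    "BucketName": { "Type": "String" }
  },
  "Resources": {
    "MyBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": { "BucketName": { "Ref": "BucketName" } }
    }
  },
  "Outputs": {
    "BucketArn": {
      "Value": { "Fn::GetAtt": ["MyBucket", "Arn"] }
    }
  }
}
```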
55. What happens when one of the resources in a stack cannot be created successfully?
In Amazon Web Services (AWS), a stack is a collection of AWS resources that you create and manage as a single unit using AWS CloudFormation. If one of the resources in a stack cannot be created successfully, the stack creation process is rolled back, and the resources that were created are deleted. This is known as a “rollback on failure” behavior.
AWS CloudFormation provides several mechanisms for handling failures during stack creation. For example, you can use the “AWS::CloudFormation::WaitCondition” resource to pause the stack creation process until a specified condition is met, or set a stack’s rollback configuration (a set of CloudWatch alarms to monitor during creation) so that the stack rolls back if any alarm fires. You can also disable rollback on failure, in which case the partially created resources are kept so you can troubleshoot the problem.
56. How can you automate EC2 backup using EBS?
Use the following steps to automate EC2 backups using EBS (a scripted sketch follows the list):
- Connect to AWS through the API and list the Amazon EBS volumes that are attached locally to the instance.
- Create a snapshot of each volume, and assign a retention period to each snapshot.
- Remove any snapshot that is older than its retention period.
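Here is a minimal sketch of the first two steps using boto3, assuming a hypothetical instance ID and tagging each snapshot so a cleanup job can find it later:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID; replace with your own.
instance_id = "i-0123456789abcdef0"

# List the EBS volumes attached to the instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]

# Create a snapshot of each volume, tagged for later cleanup.
for volume in volumes:
    snapshot = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"Automated backup of {volume['VolumeId']}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "automated-backup", "Value": "true"}],
        }],
    )
    print("Created snapshot", snapshot["SnapshotId"])
```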
57. What is the difference between EBS and Instance Store?
Below is a table comparing Amazon Elastic Block Store (EBS) and instance store, which are two types of storage options available in Amazon Web Services (AWS):
| | EBS | Instance Store |
|---|---|---|
| Definition | EBS is a block-level storage service for Amazon Elastic Compute Cloud (EC2) instances. | Instance store is temporary block storage that is physically attached to the host computer of an EC2 instance. |
| Performance | EBS provides high I/O performance for a wide range of workloads. | Instance store can provide very high I/O performance, but this varies with the instance type and workload. |
| Durability | EBS volumes are automatically replicated within their Availability Zone and are highly durable. | Data stored on instance store is not persisted and is lost if the underlying hardware fails. |
| Persistence | EBS volumes persist independently of the life of an instance (a root volume can optionally be deleted on termination). | Instance store is ephemeral, meaning that data is lost when the instance is stopped or terminated. |
| Pricing | EBS has a separate pricing model based on the volume type and size, as well as the volume’s provisioned I/O performance. | Instance store is included in the price of the instance. |
| Use cases | EBS is suitable for a wide range of use cases, including boot volumes, database storage, and file storage. | Instance store is suitable for use cases that need very high I/O performance on temporary data, such as buffers and caches. |
In summary, EBS is a more durable and flexible storage option, but it is also more expensive than instance store. Instance store provides very high I/O performance, but it is not persisted and is lost when the instance is stopped or terminated.
58. Can you take a backup of EFS like EBS, and if yes, how?
Yes, you can use the EFS-to-EFS backup solution to recover from unintended changes or deletions in Amazon EFS. Follow these steps:
- Sign in to the AWS Management Console
- Click the launch EFS-to-EFS-restore button
- Use the region selector in the console navigation bar to select the region
- Verify if you have chosen the right template on the Select Template page
- Assign a name to your solution stack
- Review the parameters for the template and modify them if necessary
59. How do you auto-delete old snapshots?
You can use Amazon Web Services (AWS) CloudWatch Events and AWS Lambda to automatically delete old snapshots in Amazon Elastic Block Store (EBS). Here is an outline of the steps you can follow:
- Create a CloudWatch Events rule that triggers a Lambda function on a schedule (e.g., daily, weekly, monthly).
- In the Lambda function, use the AWS SDK to query for snapshots that are older than a certain age (e.g., 30 days).
- For each snapshot that meets the age criteria, use the AWS SDK to delete the snapshot.
- Test the Lambda function to ensure that it is working as expected.
- Deploy the Lambda function and enable the CloudWatch Events rule to start automatically deleting old snapshots.
Note that you should be careful when using this approach, as deleting snapshots can potentially result in data loss. It is a good idea to create a backup of your snapshots before deleting them, or to use versioning to retain multiple versions of your snapshots.
In summary, you can use CloudWatch Events and Lambda to automatically delete old snapshots in EBS by creating a rule that triggers a Lambda function on a schedule, and using the AWS SDK to query for and delete snapshots that are older than a certain age.
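Here is a minimal sketch of such a Lambda handler, assuming the snapshots are owned by the account running the function and a 30-day retention period:

```python
import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed retention period


def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

    # Only consider snapshots owned by this account.
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if snap["StartTime"] < cutoff:
            try:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
            except ClientError:
                # Snapshots backing registered AMIs cannot be deleted; skip them.
                pass
```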
60. What are the different types of load balancers in AWS?
There are three types of load balancers that are supported by Elastic Load Balancing:
- Classic Load Balancer: This is the original load balancer in AWS, and it is suitable for simple load balancing scenarios. It supports two types of traffic: HTTP/HTTPS and TCP.
- Application Load Balancer: This is a newer type of load balancer that is designed for modern application architectures. It supports HTTP/HTTPS traffic, and enables you to use features such as host-based routing and content-based routing to route traffic to specific targets.
- Network Load Balancer: This is a high-performance load balancer that is designed for very high traffic volumes and ultra-low latency. It supports TCP and UDP traffic, and is optimized for high throughput and ultra-low latency.
In short, Elastic Load Balancing offers the Classic, Application, and Network Load Balancers; each is suited to different load balancing scenarios and supports different types of traffic.
61. What are the different uses of the various load balancers in AWS Elastic Load Balancing?
- Application Load Balancer: Used when you need flexible application management and TLS termination.
- Network Load Balancer: Used when you require extreme performance and static IPs for your applications.
- Classic Load Balancer: Used when your application is built within the EC2-Classic network.
62. What Is Identity and Access Management (IAM) and How Is It Used?
Identity and Access Management (IAM) is a service offered by Amazon Web Services (AWS) that enables you to securely control access to your AWS resources and services. IAM is used to manage the following:
- Users: IAM allows you to create and manage AWS user accounts and groups, and assign permissions to these users and groups to control their access to resources.
- Roles: IAM allows you to create and manage AWS roles, which are sets of permissions that can be assigned to AWS resources such as EC2 instances or S3 buckets. This enables you to grant access to resources without sharing your AWS credentials.
- Policies: IAM allows you to create and manage policies, which are JSON documents that specify the permissions that users and roles have to access resources. These policies can be used to grant or deny access to resources based on a variety of conditions, such as the resource type, the region in which the resource is located, or the time of day.
- Multi-factor authentication: IAM supports multi-factor authentication (MFA), which adds an extra layer of security to your accounts by requiring users to provide a second form of authentication, such as a one-time code from a mobile device or a security token.
IAM is an important component of your AWS security strategy, and is used to ensure that only authorized users have access to your resources. It also enables you to comply with regulatory requirements and industry standards, and to maintain control over your resources even as your organization grows and evolves.
63. How can you use AWS WAF in monitoring your AWS applications?
AWS WAF, or AWS Web Application Firewall, protects your web applications from common web exploits. It helps you control the traffic that reaches your applications, and lets you create custom rules that block common attack patterns. A WAF rule can behave in three ways: allow all requests except the ones you specify, block all requests except the ones you specify, or count the requests that match the properties you specify.
64. What are the different AWS IAM categories that you can control?
Using AWS IAM, you can do the following:
- Create and manage IAM users
- Create and manage IAM groups
- Manage the security credentials of the users
- Create and manage policies to grant access to AWS services and resources
65. What are the policies that you can set for your users’ passwords?
Here are some of the policies that you can set (they can also be applied programmatically, as sketched below):
- Set a minimum password length.
- Require particular character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters.
- Enforce automatic password expiration, prevent the reuse of old passwords, and require a password reset upon the next AWS sign-in.
- Require users to contact an account administrator when they have allowed their password to expire.
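Here is a minimal boto3 sketch of setting an account password policy; the specific values are illustrative, not recommendations:

```python
import boto3

iam = boto3.client("iam")

# Illustrative values; choose settings that match your own security policy.
iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,           # expire passwords after 90 days
    PasswordReusePrevention=5,   # block reuse of the last 5 passwords
    HardExpiry=False,            # users may reset expired passwords themselves
)
```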
66. What is the difference between an IAM role and an IAM user?
The two key differences between the IAM role and the IAM user are:
- An IAM role is an IAM identity that defines a set of permissions for making AWS service requests but has no long-term credentials; an IAM user has permanent, long-term credentials and is used to interact with AWS services directly.
- An IAM role is assumed by trusted entities, such as IAM users, applications, or AWS services, which receive temporary security credentials for the duration of the session; an IAM user is uniquely associated with one person or application.
67. What are the managed policies in AWS IAM?
In Amazon Web Services (AWS) Identity and Access Management (IAM), managed policies are pre-defined policies that you can use to grant permissions to your users and resources. AWS provides a number of managed policies that cover a wide range of use cases, including:
- AWS managed policies: These policies are created and maintained by AWS, and cover a wide range of common use cases. Examples include AmazonS3ReadOnlyAccess, which grants read-only access to S3 buckets, and AmazonEC2FullAccess, which grants full access to EC2 resources; AWS also provides job-function policies such as PowerUserAccess.
- Customer managed policies: These policies are created and managed by you, and can be used to grant permissions to your users and resources. You can use customer managed policies to define your own access control rules, or to tailor permissions beyond what the AWS managed policies grant.
- By contrast, inline policies are embedded directly in a single user, group, or role, rather than being managed as standalone, reusable objects.
Managed policies are a convenient way to grant permissions to your users and resources: they are pre-defined and can be easily attached to users, groups, or roles. You can use managed policies to quickly set up access control for your AWS resources, and to ensure that your access control policies are consistent and well-defined. Attaching one is a single API call, as the sketch below shows.
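For example, attaching the AWS managed AmazonS3ReadOnlyAccess policy to a hypothetical group is a single call with boto3:

```python
import boto3

iam = boto3.client("iam")

# "data-analysts" is a hypothetical group name; the ARN is the AWS managed policy.
iam.attach_group_policy(
    GroupName="data-analysts",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```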
68. Can you give an example of an IAM policy and a policy summary?
Here is an example of an AWS IAM policy that allows a user to perform all actions on all Amazon S3 (Simple Storage Service) buckets in a specific AWS account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
This policy has a single statement that allows the user to perform all S3 actions (the “s3:*” action wildcard) on all Amazon S3 buckets (the “arn:aws:s3:::*” resource wildcard) in the AWS account.
Here is a policy summary for this policy:
- Effect: Allow
- Actions: All actions on Amazon S3 buckets
- Resources: All Amazon S3 buckets in the AWS account
IAM policies are written in JSON (JavaScript Object Notation) and use a specific syntax to specify the actions, resources, and other elements of the policy.
69. How does AWS IAM help your business?
AWS Identity and Access Management (IAM) is a service that enables you to manage access to AWS resources. IAM can help your business in several ways:
- Security: IAM enables you to create and manage users and groups, and to assign permissions to them using policies. This can help you to ensure that only authorized users have access to your resources, and that they can only perform the actions that are appropriate for their role.
- Compliance: IAM enables you to enforce fine-grained access control and to audit access to your resources. This can help you to meet regulatory and compliance requirements, and to demonstrate that you are following best practices for security and access control.
- Resource management: IAM enables you to manage access to your resources across your organization. You can use IAM to create and manage users and groups, and to assign permissions to them based on their role and responsibilities. This can help you to streamline resource management and to ensure that the right people have access to the resources they need to do their job.
- Collaboration: IAM enables you to share access to your resources with other AWS accounts and with third parties, such as contractors and partners. This can help you to collaborate more easily and to share resources in a secure and controlled way.
Overall, IAM can help you to improve the security, compliance, and resource management of your AWS environment, and to collaborate more effectively with others.
70. What Is Amazon Route 53?
Amazon Route 53 is a Domain Name System (DNS) and Domain Name System Security Extensions (DNSSEC) service offered by Amazon Web Services (AWS). It is a highly available and scalable cloud-based DNS service that routes traffic to Internet applications.
Route 53 provides a number of DNS services, including:
- Domain registration: Route 53 enables you to register domain names and to manage the DNS records for those domains. This allows you to create custom domain names for your applications, and to control how traffic is routed to those applications.
- Domain name resolution: Route 53 provides DNS resolution for your domains, which allows users to access your applications using a friendly domain name rather than an IP address.
- Load balancing: Route 53 can be used to distribute traffic across multiple servers or resources, such as Amazon Elastic Compute Cloud (EC2) instances or Amazon Elastic Container Service (ECS) containers. This can help to improve the availability and performance of your applications.
- Health checking: Route 53 can monitor the health of your resources and automatically route traffic away from unhealthy resources to healthy ones. This can help to ensure that your applications remain available and responsive to users.
Route 53 is a powerful and flexible DNS service that can be used to support a wide range of applications and use cases. It is often used in conjunction with other AWS services, such as EC2 and ECS, to build and deploy scalable and highly available applications.
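As an illustration of working with Route 53 programmatically, creating or updating a DNS record is a single call with boto3; the hosted zone ID, record name, and IP address below are hypothetical:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID and record values; replace with your own.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",  # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```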
71. What Is Cloudtrail and How Do Cloudtrail and Route 53 Work Together?
Amazon CloudTrail is a service that enables you to monitor and record API calls made to your AWS accounts. CloudTrail logs API calls made to your AWS resources, including information about the caller, the resource being accessed, and the API actions being performed.
CloudTrail can be used to track changes to your AWS resources, to help you understand how your resources are being used, and to troubleshoot issues. It can also be used to monitor the activity of your AWS accounts and to detect unusual or suspicious activity.
Amazon Route 53 is a Domain Name System (DNS) and Domain Name System Security Extensions (DNSSEC) service offered by AWS. It is a highly available and scalable cloud-based DNS service that routes traffic to Internet applications.
CloudTrail and Route 53 can work together in the following ways:
- CloudTrail logs can include information about API calls made to Route 53, such as when a domain is registered or when DNS records are modified. This can help you to track changes to your domains and DNS records, and to understand how Route 53 is being used in your AWS accounts.
- You can use Route 53 health checks to monitor the health of your resources, such as EC2 instances or ECS containers, and automatically route traffic away from unhealthy targets, while CloudTrail records the API activity around those health checks and routing changes so you can audit how and when they were configured.
- You can use CloudTrail and Route 53 together to detect and respond to unusual or suspicious activity in your AWS accounts. For example, you can use CloudTrail to monitor for unexpected changes to your domains or DNS records, and then contain the issue by updating the affected Route 53 record sets to redirect traffic.
Overall, CloudTrail and Route 53 can be used together to monitor and manage the traffic and activity in your AWS accounts, and to ensure the availability and security of your applications
72. What is the difference between Latency Based Routing and Geo DNS?
Below is a table comparing Latency Based Routing and Geo DNS, which are two methods of routing traffic to optimize performance:
| | Latency Based Routing | Geo DNS |
|---|---|---|
| Definition | A feature of Amazon Route 53 that routes traffic to the AWS region providing the lowest latency for each user. | A feature of Route 53 that routes traffic based on the geographic location of your users, for example to the closest region or endpoint. |
| Performance | Provides optimal performance by routing each user to the region with the lowest measured latency. | Can provide good performance for users in the configured geographic regions, but may not provide the lowest possible latency for users outside those regions. |
| Configuration | You create latency records for resources in multiple regions; Route 53 answers each query based on measured network latency. | You define the geographic regions and the endpoints to which each region’s traffic should be routed. |
| Use cases | Suitable for use cases that require the lowest possible latency, such as real-time or interactive web applications. | Suitable for use cases that require routing traffic by location, such as serving content in different languages or complying with data sovereignty laws. |
Latency Based Routing is a more sophisticated method of routing traffic, as it determines the optimal region for each user based on real-time latency measurements. Geo DNS is a simpler method of routing traffic based on the geographic location of users, and is suitable for use cases that do not require the lowest possible latency.
73. What is the difference between a Domain and a Hosted Zone?
- Domain: A domain is a general DNS concept: a collection of data describing a self-contained administrative and technical unit, such as example.com.
- Hosted zone: A hosted zone is a Route 53 container that holds information about how you want to route traffic on the Internet for a specific domain and its subdomains.
74. How does Amazon Route 53 provide high availability and low latency?
Here’s how Amazon Route 53 provides high availability and low latency:
- Globally Distributed Servers: Amazon is a global service and consequently has DNS servers around the world. A customer making a query from any part of the world reaches a DNS server close to them, which keeps latency low.
- Dependability: Route 53 provides the high level of dependability required by critical applications.
- Optimal Locations: Route 53 uses a global anycast network, so queries are automatically answered from the optimal location.
75. How does AWS config work with AWS CloudTrail?
AWS CloudTrail records user API activity on your account and allows you to access information about the activity. Using CloudTrail, you can get full details about API actions such as the identity of the caller, time of the call, request parameters, and response elements. On the other hand, AWS Config records point-in-time configuration details for your AWS resources as Configuration Items (CIs).
You can use a CI to ascertain what an AWS resource looked like at any given point in time, whereas CloudTrail quickly answers who made the API call that modified it. For example, AWS Config can show that a security group was incorrectly configured, and CloudTrail can tell you which user made the change.
76. Can AWS Config aggregate data across different AWS accounts?
Yes, you can set up AWS Config to deliver configuration updates from different accounts to one S3 bucket, once the appropriate IAM policies are applied to the S3 bucket.
77. How are reserved instances different from on-demand DB instances?
Reserved instances and on-demand instances are functionally identical; they differ only in how they are billed.
Reserved instances are purchased as one-year or three-year reservations and, in return, offer a much lower effective hourly price than on-demand instances, which are billed hour by hour with no commitment.
78. What is a maintenance window in Amazon RDS? Will your DB instance be available during maintenance events?
The RDS maintenance window lets you decide when DB instance modifications, database engine version upgrades, and software patching have to occur. The automatic scheduling is done only for patches that are related to security and durability. By default, there is a 30-minute value assigned as the maintenance window and the DB instance will still be available during these events though you might observe a minimal effect on performance.
79. Which type of scaling would you recommend for RDS and why?
There are two types of scaling: vertical scaling and horizontal scaling. Vertical scaling lets you scale up your master database with the press of a button, and RDS offers a wide range of instance sizes to resize to. Horizontal scaling, on the other hand, means adding read replicas: read-only copies of your database that serve additional read traffic. Amazon Aurora in particular supports adding multiple low-latency read replicas.
80. What are the consistency models in DynamoDB?
There are two consistency models in DynamoDB. The first is the Eventual Consistency Model, which maximizes your read throughput but might not reflect the results of a recently completed write; all copies of the data usually reach consistency within a second. The second is the Strong Consistency Model, which always returns the most up-to-date data, reflecting all prior successful writes, at the cost of somewhat higher read latency and lower read throughput.
81. What type of query functionality does DynamoDB support?
DynamoDB supports GET/PUT operations using a user-defined primary key. It also provides flexible querying: you can query on non-primary key attributes using global secondary indexes and local secondary indexes, as in the sketch below.
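A minimal boto3 sketch, assuming a hypothetical GameScores table with a global secondary index named GameTitleIndex:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameScores")  # hypothetical table name

# Query a hypothetical global secondary index keyed on a non-primary-key attribute.
response = table.query(
    IndexName="GameTitleIndex",
    KeyConditionExpression=Key("GameTitle").eq("Meteor Blasters"),
)

for item in response["Items"]:
    print(item)
```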
82. How do you set up a system to monitor website metrics in real-time in AWS?
Amazon CloudWatch helps you to monitor the application status of various AWS services and custom events. It helps you to monitor:
- State changes in Amazon EC2
- Auto-scaling lifecycle events
- Scheduled events
- AWS API calls
- Console sign-in events
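For custom, real-time website metrics, your application can also publish its own data points, which CloudWatch dashboards and alarms can then consume. Here is a minimal boto3 sketch with a hypothetical namespace and metric:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a hypothetical custom metric; dashboards and alarms can then use it.
cloudwatch.put_metric_data(
    Namespace="MyWebsite",
    MetricData=[{
        "MetricName": "PageLoadTime",
        "Value": 0.42,  # seconds, as measured by your application
        "Unit": "Seconds",
    }],
)
```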
Short Questions & Answers
83. Suppose you are a game designer and want to develop a game with single-digit millisecond latency, which of the following database services would you use?
Amazon DynamoDB
84. If you need to perform real-time monitoring of AWS services and get actionable insights, which services would you use?
Amazon CloudWatch
85. As a web developer, you are developing an app, targeted primarily at the mobile platform. Which of the following lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily?
Amazon Cognito
86. You are a Machine Learning Engineer who is on the lookout for a solution that will discover sensitive information that your enterprise stores in AWS and then use NLP to classify the data and provide business-related insights. Which among the services would you choose?
Amazon Macie
87. You are the system administrator in your company, which is running most of its infrastructure on AWS. You are required to track your users and keep tabs on how they are being authenticated. You wish to create and manage AWS users and use permissions to allow and deny their access to AWS resources. Which of the following services suits you best?
AWS IAM
88. Which service do you use if you want to allocate various private and public IP addresses to make them communicate with the internet and other instances?
Amazon VPC
89. Which of the following is a means for accessing human researchers or consultants to help solve problems on a contractual or temporary basis?
Amazon Mechanical Turk
90. This service is used to make it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Which of the following is this AWS service?
Amazon Elastic Kubernetes Service (Amazon EKS)
91. This service lets you run code without provisioning or managing servers. Select the correct service from the below options
AWS Lambda
92. As an AWS Developer, using this pay-per-use service, you can send, store, and receive messages between software components. Which of the following is it?
Amazon Simple Queue Service
93. Which service do you use if you would like to host a real-time audio and video conferencing application on AWS, this service provides you with a secure and easy-to-use application.
Amazon Chime
94. As your company’s AWS Solutions Architect, you are in charge of designing thousands of similar individual jobs. Which of the following services best meets your requirements?
AWS Batch
95. This service provides you with cost-efficient and resizable capacity while automating time-consuming administration tasks.
Amazon Relational Database Service
AWS Interview Multiple-Choice Questions & Answers
96. Suppose you are a game designer and want to develop a game with single-digit millisecond latency, which of the following database services would you use?
A. Amazon RDS
B. Amazon Neptune
C. Amazon Snowball
D. Amazon DynamoDB
Answer D
97. As a web developer, you are developing an app, targeted especially for the mobile platform. Which of the following lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily?
A. AWS Shield
B. AWS Macie
C. AWS Inspector
D. Amazon Cognito
Answer D
98. You are a Machine Learning Engineer who is on the lookout for a solution that will discover sensitive information that your enterprise stores in AWS and then use NLP to classify the data and provide business-related insights. Which among the services would you choose?
A. AWS Firewall Manager
B. AWS IAM
C. AWS Macie
D. AWS CloudHSM
Answer C
99. Which of the following is a means for accessing human researchers or consultants to help solve problems on a contractual or temporary basis?
A. Amazon Mechanical Turk
B. Amazon Elastic Mapreduce
C. Amazon DevPay
D. Multi-Factor Authentication
Answer A
100. This cross-platform video game development engine that supports PC, Xbox, PlayStation, iOS, and Android platforms allows developers to build and host their games on Amazon’s servers.
A. Amazon GameLift
B. AWS Greengrass
C. Amazon Lumberyard
D. Amazon Sumerian
Answer C
101. You are the Project Manager of your company’s Cloud Architects team. You are required to visualize, understand and manage your AWS costs and usage over time. Which of the following services works best?
A. AWS Budgets
B. AWS Cost Explorer
C. Amazon WorkMail
D. Amazon Connect
Answer B
102. You are the chief Cloud Architect at your company. How can you automatically monitor and adjust compute resources to ensure maximum performance and efficiency of all scalable resources?
A. AWS CloudFormation
B. AWS Aurora
C. AWS Auto Scaling
D. Amazon API Gateway
Answer C
103. As a database administrator, you will employ a service that is used to set up and manage databases such as MySQL, MariaDB, and PostgreSQL. Which service are we referring to?
A. Amazon Aurora
B. AWS RDS
C. Amazon Elasticache
D. AWS Database Migration Service
Answer B
FAQs
How do I prepare for an AWS interview?
Preparation is key to doing well in any interview, and an interview for a role working with Amazon Web Services (AWS) is no exception. Here are a few tips to help you prepare:
Understand the basics of cloud computing: Make sure you have a solid understanding of the fundamentals of cloud computing, including concepts such as virtualization, scalability, and elasticity.
Familiarize yourself with AWS: Understand the different services offered by AWS, including compute services (e.g., EC2, Elastic Beanstalk), storage services (e.g., S3, EBS), and networking services (e.g., VPC, Route 53). Get hands-on experience with these services by setting up and working with them in an AWS account.
Understand AWS best practices: AWS has published a number of best practices for working with its services. Review these best practices and be prepared to discuss them in the interview.
Be prepared to discuss case studies: Interviewers will likely ask you to walk them through how you would solve a problem using AWS services. Be prepared to discuss a few case studies and explain your thought process and the AWS services you would use to solve the problem.
Understand AWS architecture: Understand what the different types of AWS architecture (e.g., single-tier, multi-tier, and microservices) are and when to use them. Also, be familiar with AWS architecture best practices and understand how to secure and scale your architecture on AWS
Be prepared to discuss your experience: Be prepared to discuss any projects you’ve worked on that involved using AWS. If you haven’t had much hands-on experience with AWS, be prepared to discuss other experiences you’ve had that are relevant to working with cloud computing or distributed systems.
Practice answering common interview questions: there are some common questions that are usually asked during AWS interviews, such as:
- Explain a situation where you had to migrate an application to the cloud
- How would you secure your data on S3?
- How would you design a highly available architecture?
- Explain a situation where you had to troubleshoot a complex issue on AWS
Practicing these kinds of questions can be very helpful and increases your chances of succeeding in the interview.
Understand the job and company requirements: Make sure you understand what the role is looking for, research the company to understand its environment and culture, and tailor your responses to align with those expectations.
By following these tips and preparing thoroughly, you’ll be in a great position to do well in your AWS interview.
What are some basic AWS interview questions?
Here are a few common interview questions that you may be asked in an interview for a role working with AWS:
1. Can you explain the difference between an Amazon Machine Image (AMI) and an Amazon Elastic Block Store (EBS) volume?
2. How would you secure sensitive data stored in Amazon S3?
3. Can you explain how Amazon Elastic Block Store (EBS) works and what its use cases are?
4. Can you describe a situation where you had to migrate an application to the cloud, and how did you approach it?
5. Can you explain the different types of Amazon Elastic Compute Cloud (EC2) instances, and when you would use them?
6. How would you design a highly available and scalable architecture on AWS?
7. How would you troubleshoot an issue with an Amazon Elastic Compute Cloud (EC2) instance?
8. Can you explain the different types of Amazon Virtual Private Cloud (VPC) and their use cases?
9. How would you secure access to your Amazon S3 bucket?
10. Can you explain the difference between Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS)?
Keep in mind that this is not an exhaustive list and the question may vary based on the specific role and company you are interviewing with.
Additionally, AWS is a large platform that is constantly developing and adding new services, so it’s always beneficial to stay up to date with recently introduced services and features and with current best practices.
Is AWS interview difficult?
The difficulty of an AWS interview can vary depending on the specific role you’re interviewing for and the company conducting the interview. Generally speaking, an interview for a role working with AWS would likely be more technical and focused on your understanding of cloud computing and your ability to work with the various AWS services.
An AWS interview may be considered difficult if you don’t have much hands-on experience with the platform or if you’re not well-versed in the fundamentals of cloud computing. However, with proper preparation and understanding of AWS services and best practices, the interview can be manageable.
It’s worth noting that an AWS interview may also assess your problem-solving and critical thinking skills, as interviewers often ask questions about how you would approach a problem or design an architecture using AWS services.
To prepare for an AWS interview, it’s crucial to have a solid understanding of cloud computing and the different AWS services, and to be prepared to discuss your experience working with them. Additionally, practice common interview questions and case studies; this should help you feel more confident and prepared for the interview.
What are the 4 foundational services in AWS?
AWS provides a wide range of services to support the needs of different types of customers. However, there are four services that are considered the “foundational services” on which many other AWS services are built:
1. Amazon Elastic Compute Cloud (EC2): This is a web service that provides resizable compute capacity in the cloud. It enables you to launch virtual machines (VMs) with a variety of operating systems and configurations, and to scale up or down as needed.
2. Amazon Simple Storage Service (S3): This is a web service that provides object storage through a simple web service interface. It enables you to store, retrieve, and manage data in the cloud, and to scale your storage as needed.
3. Amazon Virtual Private Cloud (VPC): This is a web service that enables you to launch AWS resources, such as Amazon EC2 instances, in a logically isolated virtual network that you’ve defined.
4. Amazon Elastic Block Store (EBS): This is a web service that provides block-level storage for use with Amazon EC2 instances. It is similar to a hard disk drive (HDD) or solid-state drive (SSD) in a physical server, and enables you to persist data even after the instance is terminated.