Thursday 15 September 2022

AWS DynamoDB table using AWS CLI

1)Create an EC2 instance and download its key pair so you can SSH into it.

 ssh -i ec2key.pem ec2-user@<instance-public-ip>   (Make sure the key file has the right permissions; if not, fix them with: chmod 400 ec2key.pem)

2)Once logged in, configure the AWS CLI by running the command below (the AWS CLI comes pre-installed on Amazon Linux instances).

    aws configure

    It prompts for the following:

    Access Key ID:

    Secret Access Key:

    Default region name: us-east-1

    Output format: json

3)Create a DynamoDB table

   a) After running aws configure, create a DynamoDB table using the following command:

aws dynamodb create-table --table-name ProductCatalog \
    --attribute-definitions AttributeName=Id,AttributeType=N \
    --key-schema AttributeName=Id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
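For reference, the CLI flags above map one-to-one onto the CreateTable request payload. The sketch below (Python, written here just for illustration) spells out that mapping as a plain dictionary:

```python
# Sketch: the create-table CLI flags expressed as the equivalent
# CreateTable request payload (same table name and capacity as above).
create_table_request = {
    "TableName": "ProductCatalog",
    "AttributeDefinitions": [
        {"AttributeName": "Id", "AttributeType": "N"},  # N = number
    ],
    "KeySchema": [
        {"AttributeName": "Id", "KeyType": "HASH"},  # partition key
    ],
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5,
    },
}
```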

    b) This is the command to populate the table:

**** (make sure items.json is located in your working directory) ****

aws dynamodb batch-write-item --request-items file://items.json

where items.json contains:

{
    "ProductCatalog": [
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "201"},
                    "ProductCategory": {"S": "Bicycle"},
                    "Description": {"S": "Womens Road Bike"},
                    "BicycleType": {"S": "Road"},
                    "Brand": {"S": "Raleigh"},
                    "Price": {"N": "399"},
                    "Color": {"S": "Red"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "403"},
                    "ProductCategory": {"S": "Helmet"},
                    "Description": {"S": "Womens Cycling Helmet"},
                    "Size": {"S": "Small"},
                    "Price": {"N": "99"},
                    "Color": {"S": "Black"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "411"},
                    "ProductCategory": {"S": "Book"},
                    "Description": {"S": "The Read Aloud Cloud"},
                    "Author": {"S": "Forrest Brazeal"},
                    "Price": {"N": "19.99"},
                    "Format": {"S": "Hardback"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "563"},
                    "ProductCategory": {"S": "Helmet"},
                    "Description": {"S": "Mens Cycling Helmet"},
                    "Size": {"S": "Small"},
                    "Price": {"N": "75"},
                    "Color": {"S": "Blue"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "543"},
                    "ProductCategory": {"S": "Helmet"},
                    "Description": {"S": "Womens Cycling Helmet"},
                    "Size": {"S": "Medium"},
                    "Price": {"N": "199"},
                    "Color": {"S": "Red"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "493"},
                    "ProductCategory": {"S": "Helmet"},
                    "Description": {"S": "Childs Cycling Helmet"},
                    "Size": {"S": "Small"},
                    "Price": {"N": "99"},
                    "Color": {"S": "Black"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "347"},
                    "ProductCategory": {"S": "Helmet"},
                    "Description": {"S": "Womens Cycling Helmet"},
                    "Size": {"S": "Small"},
                    "Price": {"N": "79"},
                    "Color": {"S": "Blue"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "467"},
                    "ProductCategory": {"S": "Bicycle"},
                    "Description": {"S": "Mens Road Bike"},
                    "BicycleType": {"S": "Road"},
                    "Brand": {"S": "Raleigh"},
                    "Price": {"N": "250"},
                    "Color": {"S": "Blue"}
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "Id": {"N": "566"},
                    "ProductCategory": {"S": "Bicycle"},
                    "Description": {"S": "Mens Mountain Bike"},
                    "BicycleType": {"S": "Mountain"},
                    "Brand": {"S": "Raleigh"},
                    "Price": {"N": "599"},
                    "Color": {"S": "Black"}
                }
            }
        }
    ]
}


Response:

{
    "UnprocessedItems": {}
}
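BatchWriteItem accepts at most 25 put/delete requests per call, so larger files must be split across several calls. A minimal local sanity check for an items.json payload could look like this (validate_batch is a hypothetical helper written here, not part of the AWS CLI):

```python
import json

MAX_BATCH_ITEMS = 25  # BatchWriteItem limit per request

def validate_batch(request_items):
    """Return a list of problems found in a batch-write-item payload."""
    problems = []
    for table, requests in request_items.items():
        if len(requests) > MAX_BATCH_ITEMS:
            problems.append(f"{table}: {len(requests)} requests exceed the {MAX_BATCH_ITEMS}-request limit")
        for i, req in enumerate(requests):
            if not ("PutRequest" in req or "DeleteRequest" in req):
                problems.append(f"{table}[{i}]: needs a PutRequest or DeleteRequest")
    return problems

# A well-formed two-item payload produces no problems:
sample = json.loads('{"ProductCatalog": [{"PutRequest": {"Item": {"Id": {"N": "201"}}}}, {"PutRequest": {"Item": {"Id": {"N": "403"}}}}]}')
print(validate_batch(sample))  # []
```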


c) This is the command to query DynamoDB from the EC2 command line. Make sure the region is correct: you should be working in us-east-1.


  aws dynamodb get-item --table-name ProductCatalog --region us-east-1 --key '{"Id": {"N": "403"}}'
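get-item returns the item in DynamoDB's typed JSON, with every value wrapped in a type key such as N or S. A minimal deserializer sketch (covering only the types used in this table) turns that into plain Python values:

```python
def from_dynamodb(attr_value):
    """Convert a DynamoDB attribute value such as {"N": "403"} into a
    plain Python value. Sketch: handles only N, S, BOOL, L, and M."""
    (dtype, value), = attr_value.items()
    if dtype == "N":
        return float(value) if "." in value else int(value)
    if dtype == "S":
        return value
    if dtype == "BOOL":
        return value
    if dtype == "L":
        return [from_dynamodb(v) for v in value]
    if dtype == "M":
        return {k: from_dynamodb(v) for k, v in value.items()}
    raise ValueError(f"unsupported type: {dtype}")

item = {"Id": {"N": "403"}, "Description": {"S": "Womens Cycling Helmet"}, "Price": {"N": "99"}}
plain = {k: from_dynamodb(v) for k, v in item.items()}
print(plain)  # {'Id': 403, 'Description': 'Womens Cycling Helmet', 'Price': 99}
```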


Tuesday 13 September 2022

MCQ for AWS Developer Associate Exam



1)Which is the best way to enable S3 read-access for an EC2 instance?
a)Create an IAM role with read-access to S3 and assign the role to the EC2 instance
b)Create a new IAM group and grant read access to S3. Store the group's credentials locally on the EC2 instance and configure your application to supply the credentials with each API request.
c)Create a new IAM role and grant read-access to S3. Store the role's credentials locally on the EC2 instance and configure your application to supply the credentials with each API request
d)Configure a bucket policy which grants read-access based on the EC2 instance name

Answer: a
Reason: As a security best practice, AWS recommends the use of roles for applications that run on Amazon EC2 instances. IAM roles allow applications to securely make API requests from instances, without requiring you to manage the security credentials that the applications use.

2)In AWS, what is IAM used for?
Choose 3
a)Secure VPN access to AWS
b)Creating and managing users and groups
c)Assigning permissions to allow and deny access to AWS resources
d)Managing access to AWS services

Answer: b,c,d

Reason:
b) Correct: IAM supports multiple methods to create and manage IAM users and IAM groups.
c) Correct: Using policies, you can specify several layers of permission granularity.
d) Correct: You can use AWS IAM to securely control individual and group access to your AWS resources.


3)Which IAM entity can you use to delegate access to trusted entities such as IAM users, applications, or AWS services such as EC2?
a)IAM Group
b)IAM Web Identity Federation
c)IAM User
d)IAM Role

Answer: d

Reason: You can use IAM roles to delegate access to IAM users managed within your account, to IAM users under a different AWS account, to a web service offered by AWS such as Amazon Elastic Compute Cloud (Amazon EC2), or to an external user authenticated by an external identity provider (IdP) service that is compatible with SAML 2.0 or OpenID Connect, or a custom-built identity broker. Reference: IAM Roles.


4)What is an IAM Policy?
a)A CSV file which contains a user's Access Key and Secret Access Key
b)The policy which determines how your AWS bill will be paid
c)A JSON document which defines one or more permissions
d)A file containing a user's private SSH key

Answer: c
Reason: An IAM policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.


5)True or False? AWS recommends that EC2 instances have credentials stored on them so that the instances can access other resources (such as S3 buckets).
a)True
b)False

Answer: b

Reason: AWS recommends IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.


6)Which of the following is NOT a feature of IAM?
a)Fine-grained access control to AWS resources
b)Identity federation for delegated access to the AWS Management Console or AWS APIs
c)Allows you to set up biometric authentication, so that no passwords are required
d)Centralized control of your AWS account

Answer : c
IAM doesn't have a feature to handle biometric authentication.


7)You are the IT manager at a furniture retailer, and the company is considering moving its web application to AWS. It currently colocates its servers in a co-location facility, and the contract for that facility is now coming to an end. Management is comfortable signing a 3-year contract and wants the cheapest web servers possible while still maintaining availability. Traffic is very steady and predictable. Which EC2 pricing model would you recommend to maintain availability at the lowest available cost?
a)On-demand.
b)Spot Instances.
c)Reserved Instances.
d)Dedicated Instances.

Answer : c

On-Demand Instances let you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments. [On-Demand runs isolated instances from multiple customers on shared hardware: instances of different sizes run on the same EC2 host, and each consumes its defined allocation of that host's resources.]

A Reserved Instance (RI) is an EC2 offering that provides you with a significant discount on EC2 usage when you commit to a one-year or three-year term.


8)You work for a media production company that streams popular TV shows to millions of users. They are migrating their web application from an in house solution to AWS. They will have a fleet of over 10,000 web servers to meet the demand and will need a reliable layer 4 load balancing solution capable of handling millions of requests per second. What AWS load balancing solution would best suit their needs?
a)AWS Direct Connect.
b)Application Load Balancer.
c)Network Load Balancer.
d)Elastic Load Balancer.

Answer: c
Reason: Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies.


9)You are a developer for a genomics firm that is moving its infrastructure to AWS. The environment consists of a three-tier web application (a web tier, an application tier, and a relational database tier), plus a separate fleet of virtual machines used to access large HPC clusters on the fly. The lab researchers run multiple projects simultaneously and will need to launch and decommission thousands of nodes on demand, reducing the time required to complete genomic sequencing from weeks to days. To stay competitive, they need to do this at as low a cost as possible, with no long-term contracts. The HPC clusters can run any time, day or night, and their workloads store information in S3, so the instances can be terminated at any time without any effect on the data. What is the most COST EFFECTIVE EC2 pricing model for their requirements?
a)On-demand Instances. [Good for short term, no discount]
b)Dedicated Instances. 
c)Reserved Instances.
d)Spot Instances.  [Cheapest option; AWS tracks spare EC2 capacity and lets customers use it at a discount of up to 90%.]

Answer:  d
Reason: The workloads are fault-tolerant (data is stored in S3 and instances can be terminated at any time without data loss) and no long-term contracts are required, so Spot Instances, which offer discounts of up to 90% over On-Demand, are the most cost-effective choice.


10)You have a three-tier web application with a web server tier, application tier, and database tier. The application is spread across multiple availability zones for redundancy and is in an Auto Scaling group with a minimum size of two and a maximum size of ten. The application relies on connecting to an RDS Multi-AZ database. When new instances are launched, they download a connection string file that is saved in an encrypted S3 bucket using a bootstrap script. During a routine scaling event, you notice that your new web servers are failing their health checks and are not coming into service. You investigate and discover that the web server's S3 read-only role has no policies attached to it. What combination of steps should you take to remediate this problem while maintaining the principle of least privilege?
Choose 2
a)Create a snapshot of the EBS volume and then restart the instance.
b)Attach the S3 – Administrator policy.
c)Leave the healthy instances as they are and allow new instances to come into service after fixing the policy issue.
d)Copy the role to a new AMI.
e)Attach the S3 – read-only policy to the role.
f)Create a new role giving Lambda permission to execute.

Answer:  c and e
Reason: 
New instances can download a connection string, provided that the read-only policy is attached to the role. Instances will not download the connection string file without the S3 policy since it is required to allow the bootstrapping process to complete successfully.

The read-only policy attached to the role will solve the permission issue and is in line with the principle of least privilege


11)You have an EC2 instance in a single availability zone connected to an RDS instance. The EC2 instance needs to communicate with S3 to download some important configuration files from it. You try the command aws s3 cp s3://yourbucket /var/www/html, but you receive an error message. You log in to Identity and Access Management (IAM) and discover there is no role created to allow EC2 to communicate with S3. You create the role and attach it to the existing EC2 instance. How fast will the changes take to propagate?
a)It depends on the region and availability zone.
b)The same duration as CloudWatch detailed monitoring – 1 minute.
c)Almost immediately.
d)The same duration as CloudWatch standard monitoring – 5 minutes.


Answer:  c
Reason: You can change the permissions on the IAM role associated with a running instance, and the updated permissions take effect almost immediately.


12)Which of the following services can be used to securely store confidential information like credentials and license codes so that they can be accessed by EC2 instances?
a)Systems Manager Parameter Store
b)KMS
c)DynamoDB
d)IAM

Answer:  a
Reason:AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values.

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

13)You see a "timed out" error when using the AWS CLI to list all the files in an S3 bucket containing thousands of files. What could be the reason for this?
a)You don't have the correct permission to run the command.
b)Your network connection is too slow.
c)Too many results are being returned which is causing the command to time out.
d)You have not installed the AWS CLI correctly.

Answer:  c
Reason:Using the AWS CLI to list all the files in an S3 bucket containing thousands of files can cause your API call to exceed the maximum allowed time for the AWS CLI, and generate a "timed out" error. To avoid this, you can use the --page-size option to specify that the AWS CLI request a smaller number of items from each call to the AWS service.
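The effect of --page-size is to break one logical listing into several smaller service calls. The helper below (hypothetical, written here for illustration, not part of the CLI) mimics that client-side pagination:

```python
def paginate(keys, page_size):
    """Yield successive pages of at most page_size keys, the way the CLI
    issues several smaller list calls instead of one huge one."""
    for start in range(0, len(keys), page_size):
        yield keys[start:start + page_size]

keys = [f"file-{i}.txt" for i in range(2500)]
pages = list(paginate(keys, 1000))
print(len(pages), [len(p) for p in pages])  # 3 [1000, 1000, 500]
```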



14)You run the internal intranet for a corporate bank. The intranet consists of a number of web servers and a single relational database running Microsoft SQL Server. Peak demand occurs at 9am every weekday morning when users first log in to the intranet. Users can only log in from the company's internal network; for security reasons it is not possible to access the intranet from any location other than the office building. Management is considering moving this environment to AWS, where users will access the intranet via a software VPN. You have been asked to evaluate a migration to AWS and to identify the best EC2 billing model for the company's intranet. You must keep costs low, be able to scale at particular times of day, and maintain availability of the intranet throughout office hours. Management does not want to be locked into any contracts in case they decide to go back to hosting internally. What EC2 billing model should you recommend?
a)Spot Instances.
b)Dedicated Instances.
c)Reserved Instances.
d)On-demand.

Ans: d
Reason:
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and test & development workloads. 

The correct answer is On-demand instances - as they best satisfy the requirements of: low cost, availability during office hours and no lock in contracts. Dedicated instances are more costly, Reserved instances are a long term (1 to 3 year) commitment, and spot instances may terminate at any time so do not meet the availability requirements.


15)In order to enable encryption at rest using EC2 and Elastic Block Store, you must ____.
a)Configure encryption using the appropriate operating system's file system
b)Configure encryption using X.509 certificates
c)Mount the EBS volume into S3 and then encrypt the bucket using a bucket policy.
d)Configure encryption when creating the EBS volume

Ans: d
Reason:

When you create a new, empty EBS volume, you can encrypt it by enabling encryption for the specific volume creation operation.


16)You work for a government contractor who supply services that are critical to national security. Because of this your corporate IT policy states that no multi-tenant virtualization is authorized within the company. Despite this, they are interested in moving to AWS, but they cannot violate corporate IT policy. Which EC2 billing model would you recommend that they use to achieve this?
a)On-demand.
b)Dedicated Instances.
c)Reserved Instances.
d)Spot Instances.

Ans: b
Reason: Dedicated Instances run on their own dedicated hardware, belonging solely to that customer, and do not share resources with other customers.


17)You have a very popular blog site, which has recently had a surge in traffic. You want to implement an ElastiCache solution to help take the load off the production database and you want to keep it as simple as possible. You will need to scale your cache horizontally and object caching will be your primary goal. Which ElastiCache solution will best suit your needs?
a)ArangoDB
b)Memcached
c)Couchbase
d)Redis

Answer: b
Reason:
The Memcached engine supports partitioning your data across multiple nodes. Because of this, Memcached clusters scale horizontally easily. For this scenario we do not require advanced data structure support, only object caching and horizontal scaling - so Redis is incorrect. Couchbase and ArangoDB are not supported by ElastiCache, so these are incorrect.


18)Which of the following is a suitable use case for Provisioned IOPS SSD io2 Block Express EBS volumes?
a)Boot volumes for general applications
b)Large mission-critical applications that need SAN-level performance
c)Storage for non-critical workloads that are not latency sensitive
d)Cold data requiring few scans per day and applications that need the lowest cost.

Answer: b
Reason: Provisioned IOPS SSD io2 Block Express provides high performance, sub-millisecond latency SAN performance in the cloud. It is suitable for the largest, most critical, high-performance applications like SAP HANA, Oracle, Microsoft SQL Server, and IBM DB2. Each volume can support up to 64 TiB and 256,000 IOPS per volume.

19)A new CIO joins your company and implements a new company policy that all EC2 EBS backed instances must have encryption at rest. What is the quickest and easiest way to apply this policy to your existing EC2 EBS backed instances?
a)Create an encrypted snapshot of the EC2 volume using the encrypt-on-the-fly option. Create an AMI of the copied snapshot and then redeploy the EC2 instance using the encrypted AMI. Delete the old EC2 instance.
b)Create a snapshot of the EC2 volume. Then create a copy of the snapshot, checking the box to enable encryption. Create an AMI of the copied snapshot and then redeploy the EC2 instance using the encrypted AMI. Delete the old EC2 instance.
c)Create an encrypted AMI of the EC2 volume using Windows BitLocker.
d)In the AWS console, click on the EC2 instances, click actions and click encrypt EBS volumes.

Answer: b
Reason: Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt it by creating a snapshot, copying that snapshot with encryption enabled, and then creating a new volume or AMI from the encrypted copy.


20)You have a WordPress site hosted on EC2 with a MySQL database hosted on RDS. The majority of your traffic is read traffic. There is only write traffic when you create a new blog. One of your blogs has gone viral and your WordPress site is struggling to cope. You check your CloudWatch metrics and notice your RDS instance is at 100% CPU utilization. What two steps should you take to reduce the CPU utilization?
Choose 2
a)Create an ElastiCache cluster and use this to cache your most frequently read blog posts.
b)Enable Multi-AZ on your RDS instances and point multiple EC2 instances to the new Multi-AZ instances, thereby spreading the load.
c)Create multiple RDS read replicas and point multiple EC2 instances to these read replicas, thereby spreading the load.
d)Migrate from an Elastic Load Balancer to a Network Load Balancer so you can sustain more connections.

Answer:  a ,c
Reason:

a) Correct: Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases.
b) Multi-AZ would help with high availability, but wouldn't improve read performance or reduce the RDS CPU utilization.
c) Correct: Amazon RDS Read Replicas make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.


21)Which of the following EBS volume types gives you SAN performance in the cloud and is suitable for the largest, most critical, high-performance applications?
a)Provisioned IOPS SSD io2 Block Express
b)General Purpose SSD (gp3)
c)Throughput Optimized HDD (st1)
d)Provisioned IOPS SSD (io2)

Answer: a
Reason: Provisioned IOPS SSD io2 Block Express provides high performance, sub-millisecond latency SAN performance in the cloud. It is suitable for the largest, most critical, high-performance applications like Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics. Each volume can support up to 64 TiB and 256,000 IOPS per volume.


22)You work for an online gaming store which has a global worldwide leader board for players of the game. You need to implement a caching system for your leader board that has multiple availability zones in order to prevent an outage. Which ElastiCache solution should you use?
a)Redis
b)ArangoDB
c)Memcached
d)Couchbase

Answer: a
Reason:

Amazon ElastiCache for Redis supports both Redis cluster and non-cluster modes and provides high availability via support for automatic failover by detecting primary node failures and promoting a replica to be primary with minimal impact. It allows for read availability for your application by supporting read replicas (across availability zones), to enable the reads to be served when the primary is busy with the increased workload.


23)You work for a web analytics firm that has recently migrated its application to AWS. The application sits behind an Elastic Load Balancer and monitors user traffic to the website. You have noticed that in the application logs you are no longer seeing your users' public IP addresses; instead, you are seeing the private IP address of the Elastic Load Balancer. This data is critical for your business and you need to rectify the issue immediately. What should you do?
a)Install a CloudWatch logs agent on the EC2 instances behind the Elastic Load Balancer to monitor the public IPv4 addresses and then stream this data to AWS Neptune.
b)Migrate the application to AWS Lambda instead of EC2 and put the Lambda function behind a Network Load Balancer.
c)Update the application to log the x-forwarded-for header to get your users' public IPv4 addresses.
d)Migrate the application in front of a Network Load Balancer and then reverse proxy traffic to your RDS instance.

Answer: c
Reason:

Your access logs capture the IP address of your load balancer because the load balancer establishes the connection to your instances. You must perform additional configuration to capture the IP addresses of clients in your access logs. For Application Load Balancers and Classic Load Balancers with HTTP/HTTPS listeners, you must use X-Forwarded-For headers to capture client IP addresses. Then, you must print those client IP addresses in your access logs. Reference: How do I capture client IP addresses in my ELB access logs?
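Reading the client address out of X-Forwarded-For can be sketched as below (an illustration, not a library API; it assumes the left-most entry is the original client and that the load balancer in front is trusted):

```python
def client_ip(headers, peer_addr):
    """Return the original client IP: the left-most X-Forwarded-For entry
    when the header is present, otherwise the direct peer address."""
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    return peer_addr

# Behind a load balancer, the peer is the LB's private address,
# but the header still carries the real client:
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.1.12"}, "10.0.1.12"))  # 203.0.113.7
print(client_ip({}, "10.0.1.12"))  # 10.0.1.12
```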


24)Which of the following are valid types of Elastic Load Balancers?

Choose 3
a)Classic Load Balancer.
b)Virtual Load Balancer.
c)Application Load Balancer.
d)Network Load Balancer.

Answer: a,c,d
Reason: Elastic Load Balancing offers three types of load balancers: Application Load Balancer, Network Load Balancer, and Classic Load Balancer.

25)The minimum file size allowed on S3 is 1 byte.
a)True
b)False

Ans: b

Reason: Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. Reference: How much data can I store in Amazon S3?


26)What is the maximum file size that can be stored on S3?
a)4TB
b)2TB
c)1TB
d)5TB
Answer: d
Reason: Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.


27)You are hosting a website in an Amazon S3 bucket. Which feature defines a way for client web applications that are loaded in one domain to interact with resources in a different domain?
a)Bucket ACL
b)IAM Role
c)Bucket Policy
d)CORS

Ans: d
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. Reference: Configuring and using cross-origin resource sharing (CORS).


28)You would like to migrate your website to AWS and use CloudFront to provide the best performance. Your users will need to complete a form on the website in order to subscribe to a mailing list and comment on blog posts. Which of the following allowed HTTP methods should you configure in your CloudFront distribution settings?
a)GET, HEAD, OPTIONS, POST
b)GET, HEAD, OPTIONS
c)GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
d)GET, HEAD

Ans: c
Reason: This combination of HTTP methods will enable your users to interact with the website and send, modify, insert, and delete data.


29)Which of the following options allows users to have secure access to private files located in S3?
Choose 3
a)CloudFront Signed URLs
b)CloudFront Signed Cookies
c)CloudFront Origin Access Identity
d)Public S3 buckets

Ans: a,b,c
Reason: There are three options in the question that can be used to secure access to files stored in S3, so all three are correct. Signed URLs and signed cookies are different ways to ensure that users attempting to access files in an S3 bucket are authorized: one method generates URLs and the other generates special cookies, but both require an application and a policy to generate and control them. An Origin Access Identity, on the other hand, is a virtual user identity that gives the CloudFront distribution permission to fetch a private object from an S3 bucket. Public S3 buckets should never be used unless the bucket is hosting a public website, so that option is incorrect.

30)Which storage class is suitable for long-term archiving of data and supports millisecond retrieval times?
a)Glacier Deep Archive
b)S3 Standard-Infrequent Access
c)Glacier Flexible Retrieval
d)Glacier Instant Retrieval

Ans: d
Reason:
S3 Standard-Infrequent Access is designed for storing long-term, infrequently accessed critical data (e.g., backups, data stores for disaster recovery files). However, it is not recommended for archiving data.
Glacier Instant Retrieval is designed for long-lived data, accessed approximately once per quarter with millisecond retrieval time. 


31)You would like to configure your S3 bucket to deny put object requests that do not use server-side encryption. Which bucket policy can you use to deny permissions to upload objects, unless the request includes server-side encryption?


      a)     {
                "Version": "2012-10-17",
                "Id": "PutObjPolicy",
                "Statement": [
                    {
                        "Sid": "DenyUnEncryptedObjectUploads",
                        "Effect": "Deny",
                        "Principal": "*",
                        "Action": "s3:PutObject",
                        "Resource": "arn:aws:s3:::bucket/*",
                        "Condition": {
                            "Null": {
                                "s3:x-amz-server-side-encryption": "true"
                            }
                        }
                    }
                ]
            }

        
      b)
        {
            "Version": "2012-10-17",
            "Id": "SSLPolicy",
            "Statement": [
                {
                    "Sid": "AllowSSLRequestsOnly",
                    "Effect": "Deny",
                    "Principal": "*",
                    "Action": "s3:*",
                    "Resource": [
                        "arn:aws:s3:::bucket/*"
                    ],
                    "Condition": {
                        "Bool": {
                        "aws:SecureTransport": "true"
                        }
                    }
                }
            ]
        }
        

      c)
        {
            "Version": "2012-10-17",
            "Id": "SSLPolicy",
            "Statement": [
                {
                    "Sid": "AllowSSLRequestsOnly",
                    "Effect": "Deny",
                    "Principal": "*",
                    "Action": "s3:*",
                    "Resource": [
                        "arn:aws:s3:::bucket/*"
                    ],
                    "Condition": {
                        "Bool": {
                        "aws:SecureTransport": "false"
                        }
                    }
                }
            ]
        }
     

     d)
        {
            "Version": "2012-10-17",
            "Id": "PutObjPolicy",
            "Statement": [
                {
                    "Sid": "DenyUnEncryptedObjectUploads",
                    "Effect": "Deny",
                    "Principal": "*",
                    "Action": "s3:PutObject",
                    "Resource": "arn:aws:s3:::bucket/*",
                    "Condition": {
                        "Null": {
                            "s3:x-amz-server-side-encryption": "false"
                        }
                    }
                }
            ]
        }
        

Answer: a
Reason: The "Null" condition operator tests whether a key is present in the request. With "s3:x-amz-server-side-encryption": "true", the condition matches when the request does NOT include the server-side-encryption header. Combined with "Effect": "Deny" and "Action": "s3:PutObject", the policy therefore denies any put-object request that does not use server-side encryption. AWS Documentation: How to Prevent Uploads of Unencrypted Objects to Amazon S3.
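The behaviour of that Null condition can be mirrored locally. The function below is a hypothetical illustration, not an AWS API; it models only the header check the policy performs:

```python
def deny_put(request_headers):
    """Mirror the policy: "Null": {"s3:x-amz-server-side-encryption": "true"}
    matches when the header is absent, and the Deny effect then applies."""
    header_is_absent = "x-amz-server-side-encryption" not in request_headers
    return header_is_absent  # True means the PutObject request is denied

print(deny_put({}))                                          # True  (no SSE header: denied)
print(deny_put({"x-amz-server-side-encryption": "AES256"}))  # False (encrypted upload allowed)
```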


32)You are using S3 in ap-northeast-1 to host a static website in a bucket called "acloudguru". What will the website endpoint URL be?
a)http://acloudguru.s3-website-ap-northeast-1.amazonaws.com
b)http://acloudguru.s3-website-ap-southeast-1.amazonaws.com
c)https://s3-ap-northeast-1.amazonaws.com/acloudguru/
d)http://www.acloudguru.s3-website-ap-northeast-1.amazonaws.com


Ans: a

Reason:

Depending on your Region, your Amazon S3 website endpoint follows one of these two formats:

s3-website dash (-) Region: http://bucket-name.s3-website-Region.amazonaws.com
s3-website dot (.) Region: http://bucket-name.s3-website.Region.amazonaws.com
The Asia Pacific (Tokyo) region ap-northeast-1 uses the website endpoint s3-website-ap-northeast-1.amazonaws.com.

Hence, the correct URL is http://acloudguru.s3-website-ap-northeast-1.amazonaws.com.

Reference: Amazon S3 website endpoints documentation.
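The two endpoint formats can be expressed as a small helper (a sketch written here for illustration; which format applies is determined by the Region, passed in below as a flag):

```python
def website_endpoint(bucket, region, dash_format=True):
    """Build the S3 static-website endpoint URL. Older Regions such as
    ap-northeast-1 use the dash format; others use the dot format."""
    sep = "-" if dash_format else "."
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"

print(website_endpoint("acloudguru", "ap-northeast-1"))
# http://acloudguru.s3-website-ap-northeast-1.amazonaws.com
```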


33)Which storage class is suitable for long-term archiving of data that occasionally needs to be accessed within a few hours or minutes?
a)S3 Intelligent-Tiering
b)S3 Glacier
c)S3 Glacier Deep Archive
d)S3 Standard

Ans: b

Reason:

Glacier Deep Archive is designed for archiving rarely accessed data, with a default retrieval time of 12 hours (e.g., financial records kept for regulatory purposes).
S3 Glacier is designed for long-term data archiving that occasionally needs to be accessed within a few hours or minutes. It supports retrieval options ranging from 1 minute to 12 hours.
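For reference, an object can be uploaded directly into an archive storage class with the CLI. The bucket and file names below are placeholders, so the block only composes and prints the command rather than running it:

```shell
# Compose an upload that lands the object straight in the Glacier storage class.
# (Placeholders: archive.zip, my-bucket. Run the echoed command with real names.)
CMD="aws s3 cp archive.zip s3://my-bucket/archive.zip --storage-class GLACIER"
echo "$CMD"

# For Glacier Deep Archive, use: --storage-class DEEP_ARCHIVE
```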



34)You are hosting a static website in an S3 bucket that uses Javascript to reference assets in another S3 bucket. For some reason, these assets are not displaying when users browse to the site. What could be the problem?
a)You cannot use one S3 bucket to reference another S3 bucket.
b)You need to open port 80 on the appropriate security group in which the S3 bucket is located.
c)You haven't enabled Cross-origin Resource Sharing (CORS) on the bucket where the assets are stored.
d)Amazon S3 does not support Javascript.

Ans: c

Reason:

Browsers block Javascript running on one origin from loading resources from a different origin unless that origin allows it. Enabling Cross-origin Resource Sharing (CORS) on the bucket where the assets are stored lets the website's origin request them.
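A sketch of enabling CORS on the asset bucket. The origin https://example-site.com and bucket name assets-bucket are hypothetical; the live put-bucket-cors call is commented out, and the block only writes and validates the configuration file:

```shell
# CORS configuration allowing the website's origin to GET assets from this bucket.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example-site.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

python3 -m json.tool cors.json > /dev/null && echo "valid JSON"

# Apply it to the asset bucket (requires credentials and a real bucket):
# aws s3api put-bucket-cors --bucket assets-bucket --cors-configuration file://cors.json
```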




35)True or False? An Amazon S3 object owner can optionally share objects with others by creating a presigned URL.
a)False
b)True


Ans: b

Reason:

All Amazon S3 objects are private by default; only the object owner has permission to access them. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects. Sharing an object with a presigned URL.



36)What is the largest size file you can transfer to S3 using a single PUT operation?
a)5TB
b)1GB
c)100MB
d)5GB

Ans: d

Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
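These two limits combine with a third (a multipart upload allows at most 10,000 parts) to imply a minimum practical part size for the largest objects. A quick back-of-the-envelope check, using 5 TiB for the 5-terabyte maximum:

```shell
# Max object size: 5 TiB; max parts per multipart upload: 10,000.
MAX_OBJECT_BYTES=$((5 * 1024 * 1024 * 1024 * 1024))
MAX_PARTS=10000

# Minimum bytes per part (ceiling division), then convert to MiB (ceiling).
MIN_PART_BYTES=$(( (MAX_OBJECT_BYTES + MAX_PARTS - 1) / MAX_PARTS ))
MIN_PART_MIB=$(( (MIN_PART_BYTES + 1048575) / 1048576 ))
echo "Minimum part size for a full 5 TiB object: ${MIN_PART_MIB} MiB"
```

So uploading a maximum-size object requires parts of roughly 525 MiB or more.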


37)When you first create an S3 bucket, this bucket is publicly accessible by default.
a)True
b)False

Ans: b

Reason:

New S3 buckets are private by default; all public access is blocked unless the owner explicitly changes the bucket's permissions.

Monday 12 September 2022

Amazon SQS

SQS Delay Queues: used to postpone the delivery of messages to consumers, from 0 seconds up to a maximum of 15 minutes. Messages in a delay queue remain invisible for the delay duration, so any consumer polling the queue will not see them until the delay expires.

Per-message delays are not supported for FIFO queues; only a queue-level delay is allowed, likely because a per-message delay would break the ordering guarantee.

Difference between Visibility Timeout and Delay Queues

1)For delay queues, a message is hidden when it is first added to the queue, whereas for visibility timeouts a message is hidden only after it is consumed from the queue.

2)The visibility timeout mainly deals with problematic messages: if the consumer fails to process and delete a message, it automatically becomes visible on the queue again once the timeout expires. Messages that repeatedly fail this way can then be moved to a Dead Letter Queue.

Use case of SQS Delay Queue: when we want to delay the processing of messages, e.g. when there is a rate limit on the consumer side and we want some buffer so that messages are processed once free resources are available.

Takeaways:

1)The visibility timeout can range from 0 seconds to 12 hours.

2)A message in an SQS queue can have a maximum payload size of 256 KB.

3)To send larger payloads (e.g. video or other large data), the SQS Extended Client Library can be used. It combines S3 and SQS: the actual data is stored in S3, and the SQS message carries a link to the S3 object.

4)The visibility timeout can be set per queue or per message.

5)A Dead Letter Queue is a good candidate for handling failed messages or processing-failure scenarios.

6)A delay queue has a DelaySeconds attribute, which can range from 0 seconds to 15 minutes (900 seconds).
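A sketch of the relevant CLI calls. The queue name my-delay-queue and the $QUEUE_URL variable are placeholders, so the block only composes and prints the commands rather than calling AWS:

```shell
# Create a queue whose messages are all delayed by 60 seconds (queue-level delay).
CMD="aws sqs create-queue --queue-name my-delay-queue --attributes DelaySeconds=60"
echo "$CMD"

# On a standard (non-FIFO) queue, an individual message can override the delay:
MSG_CMD='aws sqs send-message --queue-url $QUEUE_URL --message-body hello --delay-seconds 120'
echo "$MSG_CMD"
```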



SQS Dead Letter Queue: used to handle problematic messages. It relies on a redrive policy, in which we define the source queue and a maxReceiveCount. When a message's receive count exceeds maxReceiveCount without the message being deleted, the message is moved to the Dead Letter Queue. A separate consumer can then inspect it, send a notification, or run diagnostics.
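A sketch of attaching a redrive policy to a source queue. The DLQ ARN and $SOURCE_QUEUE_URL are placeholders, and the live set-queue-attributes call is commented out; note that RedrivePolicy is passed as a JSON string inside the attributes map:

```shell
# Redrive policy: after 5 failed receives, move the message to the DLQ.
cat > redrive.json <<'EOF'
{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\",\"maxReceiveCount\":\"5\"}"
}
EOF

python3 -m json.tool redrive.json > /dev/null && echo "valid JSON"

# Attach it to the source queue (requires credentials and real queue URLs):
# aws sqs set-queue-attributes --queue-url $SOURCE_QUEUE_URL --attributes file://redrive.json
```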

The enqueue timestamp is the time the message was originally placed on the source queue, and it is preserved when the message is moved to the Dead Letter Queue. This means time already spent in the source queue counts against the DLQ's retention period, leaving the message effectively less time before expiry. So be cautious about message expiry: the DLQ's retention period should typically be set longer than the source queue's.
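A worked example of why this matters, with hypothetical durations:

```shell
# A message sits 3 days in the source queue before being moved to the DLQ.
# Because its enqueue timestamp is preserved, only the remainder of the DLQ's
# retention period is left for reprocessing.
SOURCE_AGE_DAYS=3        # time already spent in the source queue
DLQ_RETENTION_DAYS=4     # DLQ message retention period

REMAINING=$(( DLQ_RETENTION_DAYS - SOURCE_AGE_DAYS ))
echo "Time left in the DLQ before expiry: ${REMAINING} day(s)"
```

Hence setting the DLQ retention well above the source queue's retention gives failed messages a realistic window for diagnosis.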