AWS operates a shared security model, meaning they commit to looking after part of the environment while you must look after the rest. As a generalisation, AWS look after all of the parts of the environment that they can physically touch.
AWS are responsible for the physical security in their own facilities. This includes controlling the movements of individuals, restricting access to only those people who absolutely require it and keeping exact AWS data centre locations a closely guarded secret.
They’re responsible for the physical security of the underlying hardware and the host operating system of EC2 instances and of any non-managed database instances. They are also responsible for the network security across their estate (all availability zones, edge locations and regions).
They deliver a number of managed services, as discussed earlier in this book. This includes RDS, where you are unable to access the underlying operating system, so AWS are also responsible for the security around these services.
Finally, AWS are responsible for the virtualization infrastructure and the related security.
Now that we know what AWS look after, we can focus on the parts that we’re responsible for.
We are responsible for managing the users that are able to access AWS resources, through IAM. The first level of security is always user management. We should work to the principle of least privilege, meaning that users should only ever have the access they require and no more. We can track everything that is carried out in the AWS environment by enabling CloudTrail and monitoring the logs it outputs.
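The principle of least privilege is easiest to see in an IAM policy document. Here is a minimal sketch of a policy that grants read access to a single, hypothetical S3 bucket and nothing else (the bucket name and statement ID are illustrative, not from the text):

```python
import json

# Least-privilege policy: read-only access to one specific S3 bucket.
# The bucket name "example-reports" is hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsBucketOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",      # the bucket itself (for ListBucket)
                "arn:aws:s3:::example-reports/*",    # the objects within it (for GetObject)
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```

A user attached to this policy can list and read that one bucket but cannot write to it, delete from it, or touch any other service — exactly "the access they require and no more".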
Using IAM, we should provision roles for our EC2 instances rather than passing API keys directly to the instance; this adds an extra layer of security across our environment.
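What makes an IAM role usable by EC2 is its trust policy. A sketch of the standard trust relationship that allows the EC2 service to assume a role (so instances receive short-lived credentials automatically, instead of embedded API keys):

```python
import json

# Trust policy allowing the EC2 service to assume this role. Instances
# launched with the role obtain temporary credentials via instance
# metadata, so no long-lived access keys ever touch the instance.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

trust_json = json.dumps(trust_policy)
```

The permissions the instance actually gets are then attached to the role as separate policies, which can be changed at any time without redeploying the instance.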
We must also enable multi-factor authentication (MFA) for all users of AWS. This is not just for login: MFA can also be required for sensitive actions, such as the termination of EC2 instances.
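Requiring MFA for a sensitive action is expressed as a policy condition. A hedged sketch, using the `aws:MultiFactorAuthPresent` condition key to permit instance termination only when the caller authenticated with MFA:

```python
# Policy allowing EC2 instance termination ONLY when the request was made
# by a principal who authenticated with MFA.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            # Condition key checked by AWS on every request; false (or absent)
            # means the Allow does not apply and the call is denied.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```

Without an MFA-authenticated session, the Allow statement simply never matches, so the termination request falls through to the default deny.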
As AWS users, we are responsible for looking after all customer data. This includes managing data in transit, data at rest and all of our data stores. This can include the application of SSL certificates and data encryption (S3, Glacier, Redshift, EBS and RDS databases). Remember: if your RDS database is encrypted, your read replicas and snapshots will also be encrypted.
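For S3, encryption at rest can be enforced as a default on the bucket, so every new object is encrypted without the uploader having to ask. A sketch of the configuration structure (as passed to the S3 `put_bucket_encryption` API; the KMS key alias here is hypothetical):

```python
# Default encryption configuration for an S3 bucket: every object written
# to the bucket is encrypted server-side with a KMS key unless the request
# specifies otherwise. The key alias is a placeholder.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",                    # SSE-KMS; "AES256" would select SSE-S3
                "KMSMasterKeyID": "alias/example-data-key",   # hypothetical key alias
            }
        }
    ]
}
```

Choosing `aws:kms` over `AES256` means each encryption event is also logged against the key in CloudTrail, which helps with the auditing responsibilities discussed above.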
While AWS will manage the host operating system, it is your responsibility to manage the installation of security patches and updates on the guest operating system.
Further to this, it is your responsibility to manage the configuration of security groups, subnets, and network access control lists within your VPC.
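Security group configuration boils down to a set of ingress (and egress) rules. A sketch of the rule structure used by the EC2 `authorize_security_group_ingress` call, allowing HTTPS from anywhere but restricting SSH to a hypothetical office address range:

```python
# Ingress rules in the IpPermissions shape used by EC2:
# HTTPS open to the world, SSH restricted to one (hypothetical) office CIDR.
ingress_rules = [
    {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    },
    {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "SSH from office only"}],
    },
]
```

Note that security groups are stateful (return traffic is allowed automatically), whereas network ACLs are stateless and need explicit rules in both directions.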
You can further enhance security through the use of a dedicated connection between your on-premise environment and AWS by utilizing AWS Direct Connect.
We can monitor our environment and changes to it by using AWS Config. Essentially, this service takes a snapshot of your entire environment. You can then compare this against previous snapshots to identify changes in your estate.
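The snapshot comparison AWS Config performs can be illustrated with a small, purely local sketch: given two point-in-time views of your resources as dictionaries, report what was added, removed or changed (AWS Config does this against its own recorded history; this toy version just shows the idea):

```python
def diff_snapshots(previous, current):
    """Compare two point-in-time snapshots of resource configurations.
    Illustrative only: AWS Config records and compares real resource
    configurations for you."""
    added = {k: current[k] for k in current.keys() - previous.keys()}
    removed = {k: previous[k] for k in previous.keys() - current.keys()}
    changed = {
        k: (previous[k], current[k])
        for k in previous.keys() & current.keys()
        if previous[k] != current[k]
    }
    return added, removed, changed

# Hypothetical before/after views of two security groups.
before = {"sg-123": {"port": 22, "cidr": "203.0.113.0/24"}}
after = {
    "sg-123": {"port": 22, "cidr": "0.0.0.0/0"},  # SSH opened to the world
    "sg-456": {"port": 443},                       # new security group appeared
}
added, removed, changed = diff_snapshots(before, after)
```

A diff like this is exactly the kind of change you would want flagged: here it reveals both a newly created security group and an SSH rule quietly widened to the whole internet.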
Finally, we can utilize the AWS Trusted Advisor service, available in full with the higher-tier support plans, which inspects your environment and flags security issues for you, enabling you to plug holes.
A DDoS attack in your own environment can be a huge headache, and you mustn’t expect that to change in AWS. To effectively mitigate the risk and impact of DDoS attacks, you should follow the same practices as you would on-premise. This will include the configuration of firewalls, web application firewalls and traffic-shaping / rate-limiting applications.
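The rate-limiting piece is typically built on a token bucket: requests spend tokens, tokens refill at a fixed rate, and anything beyond the sustained rate plus a small burst allowance is dropped. A minimal sketch (the rate and burst figures are arbitrary examples):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind used by traffic-shaping tools.
    Requests beyond the sustained rate (plus a small burst) are rejected."""

    def __init__(self, rate, burst):
        self.rate = rate              # tokens replenished per second
        self.capacity = burst         # maximum tokens the bucket can hold
        self.tokens = burst           # start full, allowing an initial burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A flood of 20 near-instant requests against a 10/sec limit with burst of 5:
bucket = TokenBucket(rate=10, burst=5)
results = [bucket.allow() for _ in range(20)]
```

The first few requests (the burst) succeed; the rest of the flood is shed, which is precisely the behaviour you want in front of an origin under attack.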
AWS enables us to soak up some of the load from a DDoS attack by utilizing CloudFront. As we discussed earlier in this book, CloudFront provides edge locations that cache static content. The idea here is that when a DDoS attack is launched against your environment, much of the traffic flood will hit cached copies of your content rather than the origin server.
AWS also provide us with an additional level of security, managed at their network level: ingress filtering on all traffic coming into their network, which can assist with DDoS mitigation.
You should note that AWS must grant you permission before you carry out any port scanning of your resources in AWS.
CloudHSM is a dedicated hardware security module (HSM) used to securely generate, store and manage cryptographic keys for data encryption, to levels accepted by government organizations.
CloudHSM can be deployed in a cluster of up to 32 individual HSMs, spread across multiple availability zones. Keys are automatically synchronised and load-balanced between each node in the cluster.
A CloudHSM cluster must be part of a VPC in order to benefit from the additional layer of isolation and security. Within the VPC, you can configure a client on your EC2 instances that allows applications to use the HSM cluster over a secure, authenticated network connection.
That said, the application doesn’t have to reside in the same VPC, but it must have network connectivity to all HSMs in the cluster; this can be achieved through VPC peering, VPN connectivity or AWS Direct Connect. In some use cases, it is possible to sync keys between your AWS HSMs and on-premise HSMs.
CloudHSM integrates with Oracle Database, SQL Server, Apache and NGINX with relative ease, due to existing compatibility.
You should use CloudHSM instead of AWS KMS if you need your cryptographic keys under your exclusive control. This is because CloudHSM is a single-tenanted platform, while KMS is multi-tenanted.
CloudHSM is validated to FIPS 140-2 Level 3.
Key Management Service (KMS)
KMS is a highly available key storage service which enables you to easily create, use, protect, manage and audit your encryption keys.
From a management perspective, KMS enables you to temporarily disable keys, delete old keys and audit the use of the keys via CloudTrail. You can create new encryption keys through the service or you can import your existing encryption keys.
You can define which IAM users and roles can manage keys, and which can use them to encrypt or decrypt data.
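In practice, KMS master keys are rarely used to encrypt data directly; instead they wrap per-object data keys (envelope encryption — the flow behind KMS's `GenerateDataKey` operation). A toy, stdlib-only sketch of the idea; the XOR stream cipher here is purely illustrative and NOT secure, where a real system would use the KMS-issued data key with a cipher such as AES-256-GCM:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (SHA-256 in counter mode). Illustration only --
    never use this in place of a real authenticated cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Envelope encryption: a master key (which never leaves KMS) wraps a
# per-object data key; only the wrapped key is stored with the data.
master_key = os.urandom(32)   # stands in for a KMS customer master key
data_key = os.urandom(32)     # what GenerateDataKey would return in plaintext
ciphertext = keystream_xor(data_key, b"customer record")
wrapped_key = keystream_xor(master_key, data_key)  # stored alongside ciphertext

# Decryption: unwrap the data key with the master key, then decrypt the data.
recovered_key = keystream_xor(master_key, wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
```

This structure is why disabling or deleting a master key in KMS is so powerful: without it, every wrapped data key it ever protected becomes unrecoverable.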
KMS offers PCI DSS compliant encryption and utilizes 256-bit keys.
Note: the KMS service limits you to 1,000 master keys per account, per region, and those master keys cannot be exported for use in on-premise applications.