At RedBear, we assess a lot of AWS environments. As you might expect, from a security perspective, we have seen pretty much everything. From super locked down to raw and open, there's a full spectrum of security maturity out there. Here are the top 5 most common security mistakes we see in AWS environments that you should avoid.
Default use of admin level credentials
We've all done it. You need to get started. Quickly. You use the admin level credentials (an AWS user or role with the IAM policy AdministratorAccess). You'll come back later and set up proper access roles. Except life and other priorities get in the way.
It's the most common security mistake in AWS that we see. In some cases it even means using the AWS account root user! Don't do this!
Spend the time to understand what access requirements you have. Create appropriate policies to be used by IAM groups and roles. If you use IAM access keys, make sure that they are refreshed regularly. We recently came across a 7 year old key with admin level access in an AWS account. Don't try to break this record! Any admin level user should be tightly controlled and should enforce MFA. In fact, while you are at it, even though you've stopped using the AWS root account user (right?), enforce MFA for the root user as well. Create an alert on any use of that user.
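One way to enforce MFA is with a deny policy attached to your admin group. The sketch below is based on the pattern AWS documents for this: deny everything except the IAM actions a user needs to set up their own MFA device, whenever no MFA is present. The exact list of excluded actions you need may vary with your setup.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptMFASetupIfNoMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:ChangePassword",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

Note the use of `BoolIfExists` rather than `Bool` – it ensures the deny also applies to requests where the MFA context key is missing entirely, not just set to false.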
In fact, you should generally avoid the use of long term credentials.
Use of persistent credentials
Everyone starts with IAM users. IAM users provide persistent access. These users and associated keys are valid until they are disabled or deleted. It’s not uncommon to find AWS credentials stored on an EC2 instance. This is a risky approach and not recommended.
In comparison, roles provide short lived credentials. Should someone get a hold of the credentials, they will generally only be valid for 60 minutes.
Roles are recommended for all access – humans, service accounts and AWS resources such as EC2 instances. For humans, they can be implemented through AWS IAM Identity Center, which simplifies access across all your AWS accounts. With EC2 instances, ensure that you use instance profiles (and enforce IMDSv2!) to define any access required by that instance. As an example, maybe the instance copies some files into S3. In that case, create a role that allows s3:PutObject on the explicit bucket only and assign the role to the instance.
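The policy attached to that instance role could look something like the sketch below (the bucket name is a placeholder for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadsToOneBucketOnly",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}
```

Note the `/*` on the resource ARN – s3:PutObject operates on objects, so the policy grants write access to objects within the bucket and nothing else. No list, no read, no delete, and no other buckets.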
No logging enabled (and no one watching)
AWS has a whole bunch of sophisticated security services. Unless you have enabled an AWS Control Tower solution, most of these won't be turned on in a raw AWS account. This means that not only will you be unaware of what's going on at the infrastructure level, but you also won't have any audit data for forensics should something happen. In our AWS Cloud Operations security model, that puts you at level 0!
The first step is to turn on some services. We recommend getting started with the following.
- Amazon GuardDuty – threat detection service that monitors your AWS account for malicious activity. Make sure you export its findings to S3 or forward them via EventBridge.
- AWS CloudTrail – even though 90 days of event history is available by default, ensure you set up a Trail to deliver logs to S3. Adding monitoring of S3 data events is also recommended.
- VPC Flow Logs – captures flow information on IP traffic in your VPCs, logging to CloudWatch or S3.
- AWS Security Hub – provides a view of your security status covering not just behavioural alerts but also misconfiguration notification and recommendations based around benchmarks such as CIS.
As a minimum, we would then recommend setting up proactive alerting through CloudWatch for GuardDuty notifications.
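As a sketch of what that alerting can look like: GuardDuty publishes findings to EventBridge, so an event pattern like the one below can route them to an SNS topic or chat webhook. The severity threshold of 7 (High findings and above) is just an example – tune it to your own noise tolerance.

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}
```

Attach this pattern to an EventBridge rule with your notification target and you'll hear about high severity findings within minutes rather than discovering them in a console you never open.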
Everything with public IP and SSH/RDP access
Before VPCs, everything in Cloud had a public IP address. Those dark days are gone. Yet, it is all too common for the default configuration to assign a public IP to every instance. We even see it for DBs, whether EC2 hosted or running on RDS. This massively increases your risk profile. Even worse is when this is combined with an overly permissive security group. Often, these allow access to key administration ports – think SSH, RDP or SQL – from the entire Internet.
If you are doing that today, at least restrict that access to an allow list of known, trusted IPs. Ideally, don't expose this access at all.
You can avoid this by using edge services to hide your instances, such as a load balancer like ALB, or CloudFront. If you do need admin access to machines, AWS Systems Manager Session Manager provides shell access through the AWS console or API, and can port-forward RDP – all at zero cost and without exposing admin ports to anything!
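For Session Manager to work, the instance role needs the AWS managed policy AmazonSSMManagedInstanceCore attached. On the human side, you can then scope who may start sessions and to which instances. The sketch below is one way to do that, restricting sessions to instances carrying a particular tag – the region, account ID and tag values are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StartSessionOnTaggedInstancesOnly",
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:ap-southeast-2:123456789012:instance/*",
      "Condition": {
        "StringEquals": { "ssm:resourceTag/Team": "ops" }
      }
    }
  ]
}
```

Every session is also logged in CloudTrail, and can optionally be recorded to S3 or CloudWatch Logs – a big auditing win over a bastion host with shared SSH keys.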
No billing alerts
What’s cost got to do with security? Well, quite a lot if you have unexplained resources bitcoin mining in your environment. If your bill suddenly starts rising without explanation, maybe access to your environment has just been compromised!
Setting up budgets and billing alerts can help you discover the breach a lot faster than when you receive your monthly bill. If you are using AWS Organizations and consolidated billing (recommended), then you can set this up once for all your accounts. It will then notify you if your bill is forecast to exceed your budget. It's very handy and very powerful.
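As a sketch, the `aws budgets create-budget` CLI takes two JSON documents: the budget itself and the notification rule. The amount, email address and budget name below are placeholders for your own values.

```json
{
  "BudgetName": "monthly-cost-guardrail",
  "BudgetLimit": { "Amount": "500", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
```

And the matching notification, which fires when forecast spend crosses 100% of the limit:

```json
[
  {
    "Notification": {
      "NotificationType": "FORECASTED",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 100,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "billing-alerts@example.com" }
    ]
  }
]
```

The FORECASTED notification type is the one that matters for early warning – it alerts on where your spend is heading, not just where it already is.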
Bonus! – resources running in unexpected regions
I know this is a top 5 most common security mistakes in AWS environments, but there’s always space for a little more!
Following on from setting billing alerts, one of the common tricks is for unexpected resources to be spun up in regions that you don't normally use and therefore are not proactively watching. The EC2 console has a global view, and it's worth running an inventory of your AWS accounts so you're notified of any changes – third-party tools such as Hava are great for this.
Another trick, if you are using AWS Organizations, is to use service control policies (SCPs). SCPs are a great feature that let you set boundaries at the organization level to restrict certain actions for all users in member accounts. This even includes the root user of an AWS account! Restricting activity to expected regions is a great use case for SCPs. It prevents accidental or malicious spinning up of resources in unexpected regions. Not only does that mean you know where your resources are running, but if you have compliance overlays relating to data location, you can sleep at night over that as well!
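A region-restriction SCP along the lines of the AWS documented example is sketched below. Global services (IAM, CloudFront, Route 53 and so on) operate out of us-east-1, so they need to be excluded from the deny or you'll break them – the exclusion list and the allowed regions here are examples to adapt, not a definitive set.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "sts:*",
        "cloudfront:*",
        "route53:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["ap-southeast-2", "us-east-1"]
        }
      }
    }
  ]
}
```

Remember that SCPs never grant access – they only set the outer boundary. Users still need IAM permissions within it, which makes this a safe, low-friction guardrail to roll out across an organization.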