[[prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session]]
=== AWS Bedrock Invocations without Guardrails Detected by a Single User Over a Session

Identifies multiple AWS Bedrock invocations without guardrails, within a one-minute time window, by the same user in the same account over a session. Multiple consecutive invocations without guardrails imply that a user may be intentionally attempting to bypass security controls by not routing requests through the desired guardrail configuration, in order to access sensitive information or exploit a vulnerability in the system.

*Rule type*: esql

*Rule indices*: None

*Severity*: medium

*Risk score*: 47

*Runs every*: 10m

*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <<rule-schedule, `Additional look-back time`>>)

*Maximum alerts per execution*: 100

*References*:

* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html
* https://atlas.mitre.org/techniques/AML.T0051
* https://atlas.mitre.org/techniques/AML.T0054
* https://www.elastic.co/security-labs/elastic-advances-llm-security

*Tags*:

* Domain: LLM
* Data Source: AWS Bedrock
* Data Source: AWS S3
* Resources: Investigation Guide
* Use Case: Policy Violation
* Mitre Atlas: T0051
* Mitre Atlas: T0054

*Version*: 1

*Rule authors*:

* Elastic

*Rule license*: Elastic License v2


==== Investigation guide



*Triage and analysis*



*Investigating Amazon Bedrock Invocations without Guardrails Detected by a Single User Over a Session*


Using Amazon Bedrock Guardrails during model invocation is critical for ensuring the safe, reliable, and ethical use of AI models.
Guardrails help manage risks associated with AI usage and ensure the output aligns with desired policies and standards.


*Possible investigation steps*


- Identify the user account that made multiple model invocations without the desired guardrail configuration over a session, and determine whether it should be performing this kind of action.
- Investigate user activity that might indicate a potential brute-force attack.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours (see the query sketch after this list).
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
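
As a starting point for the prompt and response review above, the following is a minimal ES|QL sketch. It assumes the same `logs-aws_bedrock.invocation-*` data stream and fields used by the rule query below; the user ID is a placeholder to replace with the value surfaced by the alert.

[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
// limit the review to the last 24 hours of activity
| where @timestamp > now() - 24 hours
// replace the placeholder with the user.id surfaced by the alert
| where user.id == "<implicated-user-id>"
| keep @timestamp, user.id, gen_ai.guardrail_id
| sort @timestamp desc
----------------------------------

Add the prompt and response content fields exposed by your AWS Bedrock integration to the `keep` clause to review the actual conversation content.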


*False positive analysis*


- Verify that the user account that caused multiple policy violations over a session is not testing new model deployments or updated compliance policies in Amazon Bedrock Guardrails.


*Response and remediation*


- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
- Identify the account role in the cloud environment.
- Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services.
- Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Amazon Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).


==== Setup



*Setup*


This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html
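
As an optional sanity check before relying on this rule (not part of the rule itself), you can confirm that guardrail identifiers are being captured in the invocation logs. The following is a minimal ES|QL sketch assuming the same `logs-aws_bedrock.invocation-*` data stream and `gen_ai.guardrail_id` field used by the rule query below.

[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
// split invocations by whether a guardrail was attached
| eval guardrail_status = case(gen_ai.guardrail_id is null, "no_guardrail", "guardrail_applied")
| stats invocations = count() by guardrail_status
----------------------------------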


==== Rule query


[source, js]
----------------------------------
from logs-aws_bedrock.invocation-*
// create time window buckets of 1 minute
| eval time_window = date_trunc(1 minute, @timestamp)
| where gen_ai.guardrail_id is NULL
| keep @timestamp, time_window, gen_ai.guardrail_id, user.id
| stats model_invocation_without_guardrails = count() by user.id
| where model_invocation_without_guardrails > 5
| sort model_invocation_without_guardrails desc
----------------------------------
[[prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root]]
=== AWS IAM Login Profile Added for Root

Detects when an AWS IAM login profile is added to a root user account and is self-assigned. Adversaries with temporary access to the root account may add a login profile to the root user account to maintain access even if the original access key is rotated or disabled.

*Rule type*: esql

*Rule indices*: None

*Severity*: high

*Risk score*: 73

*Runs every*: 5m

*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <<rule-schedule, `Additional look-back time`>>)

*Maximum alerts per execution*: 100

*References*: None

*Tags*:

* Domain: Cloud
* Data Source: AWS
* Data Source: Amazon Web Services
* Data Source: AWS IAM
* Use Case: Identity and Access Audit
* Tactic: Persistence
* Resources: Investigation Guide

*Version*: 1

*Rule authors*:

* Elastic

*Rule license*: Elastic License v2


==== Investigation guide



*Investigating AWS IAM Login Profile Added for Root*


This rule detects when a login profile is added to the AWS root account. Adding a login profile to the root account, especially if self-assigned, is highly suspicious as it might indicate an adversary trying to establish persistence in the environment.


*Possible Investigation Steps*


- **Identify the Source and Context of the Action**:
- Examine the `source.address` field to identify the IP address from which the request originated.
- Check the geographic location (`source.address`) to determine if the access is from an expected or unexpected region.
- Look at the `user_agent.original` field to identify the tool or browser used for this action.
- For example, a user agent like `Mozilla/5.0` might indicate interactive access, whereas `aws-cli` or SDKs suggest scripted activity.

- **Confirm Root User and Request Details**:
- Validate the root user's identity through `aws.cloudtrail.user_identity.arn` and ensure this activity aligns with legitimate administrative actions.
- Review `aws.cloudtrail.user_identity.access_key_id` to identify if the action was performed using temporary or permanent credentials. This access key could be used to pivot into other actions.

- **Analyze the Login Profile Creation**:
- Review the `aws.cloudtrail.request_parameters` and `aws.cloudtrail.response_elements` fields for details of the created login profile.
- For example, confirm the `userName` of the profile and whether `passwordResetRequired` is set to `true`.
- Compare the `@timestamp` of this event with other recent actions by the root account to identify potential privilege escalation or abuse.

- **Correlate with Other Events** (see the ES|QL sketch after this list):
- Investigate for related IAM activities, such as:
- `CreateAccessKey` or `AttachUserPolicy` events targeting the root account.
- Unusual data access, privilege escalation, or management console logins.
- Check for any anomalies involving the same `source.address` or `aws.cloudtrail.user_identity.access_key_id` in the environment.

- **Evaluate Policy and Permissions**:
- Verify the current security policies for the root account:
- Ensure password policies enforce complexity and rotation requirements.
- Check if MFA is enforced on the root account.
- Assess the broader IAM configuration for deviations from least privilege principles.
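
The following is a minimal ES|QL sketch for the correlation step above. It assumes the same `logs-aws.cloudtrail*` data and field names used by the rule query below; narrow the time range and adjust the action list as needed for your environment.

[source, js]
----------------------------------
from logs-aws.cloudtrail*
| where event.dataset == "aws.cloudtrail"
  and event.provider == "iam.amazonaws.com"
  // focus on root-identity activity
  and aws.cloudtrail.user_identity.type == "Root"
  // persistence-related IAM actions worth correlating with the alert
  and event.action in ("CreateLoginProfile", "UpdateLoginProfile", "CreateAccessKey", "AttachUserPolicy")
| keep @timestamp, event.action, event.outcome, aws.cloudtrail.user_identity.arn, aws.cloudtrail.user_identity.access_key_id, source.address, user_agent.original
| sort @timestamp desc
----------------------------------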


*False Positive Analysis*


- **Routine Administrative Tasks**: Adding a login profile might be a legitimate action during certain administrative processes. Verify with the relevant AWS administrators if this event aligns with routine account maintenance or emergency recovery scenarios.

- **Automation**: If the action is part of an approved automation process (e.g., account recovery workflows), consider excluding these activities from alerting using specific user agents, IP addresses, or session attributes.


*Response and Remediation*


- **Immediate Access Review**:
- Disable the newly created login profile (`aws iam delete-login-profile`) if it is determined to be unauthorized.
- Rotate or disable the credentials associated with the root account to prevent further abuse.

- **Enhance Monitoring and Alerts** (a baseline root-activity query sketch follows this list):
- Enable real-time monitoring and alerting for IAM actions involving the root account.
- Increase the logging verbosity for root account activities.

- **Review and Update Security Policies**:
- Enforce MFA for all administrative actions, including root account usage.
- Restrict programmatic access to the root account by disabling access keys unless absolutely necessary.

- **Conduct Post-Incident Analysis**:
- Investigate how the credentials for the root account were compromised or misused.
- Strengthen the security posture by implementing account-specific guardrails and continuous monitoring.
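
For the monitoring guidance above, the following ES|QL sketch gives a rough hourly baseline of root-account API activity so spikes stand out. It assumes the same `logs-aws.cloudtrail*` data and field names used by the rule query below.

[source, js]
----------------------------------
from logs-aws.cloudtrail*
| where event.dataset == "aws.cloudtrail"
  and aws.cloudtrail.user_identity.type == "Root"
// bucket root activity per hour to spot unusual spikes
| eval hour = date_trunc(1 hour, @timestamp)
| stats root_api_calls = count() by hour, event.provider
| sort hour desc
----------------------------------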


*Additional Resources*


- AWS documentation on https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateLoginProfile.html[Login Profile Management].


==== Rule query


[source, js]
----------------------------------
from logs-aws.cloudtrail* metadata _id, _version, _index
| where
// filter for CloudTrail logs from IAM
event.dataset == "aws.cloudtrail"
and event.provider == "iam.amazonaws.com"
// filter for successful CreateLoginProfile API call
and event.action == "CreateLoginProfile"
and event.outcome == "success"
// filter for Root member account
and aws.cloudtrail.user_identity.type == "Root"
// filter for an access key existing which sources from AssumeRoot
and aws.cloudtrail.user_identity.access_key_id IS NOT NULL
// filter on the request parameters not including UserName which assumes self-assignment
and NOT TO_LOWER(aws.cloudtrail.request_parameters) LIKE "*username*"
| keep
@timestamp,
aws.cloudtrail.request_parameters,
aws.cloudtrail.response_elements,
aws.cloudtrail.user_identity.type,
aws.cloudtrail.user_identity.arn,
aws.cloudtrail.user_identity.access_key_id,
cloud.account.id,
event.action,
source.address
----------------------------------

*Framework*: MITRE ATT&CK^TM^

* Tactic:
** Name: Persistence
** ID: TA0003
** Reference URL: https://attack.mitre.org/tactics/TA0003/
* Technique:
** Name: Valid Accounts
** ID: T1078
** Reference URL: https://attack.mitre.org/techniques/T1078/
* Sub-technique:
** Name: Cloud Accounts
** ID: T1078.004
** Reference URL: https://attack.mitre.org/techniques/T1078/004/
* Technique:
** Name: Account Manipulation
** ID: T1098
** Reference URL: https://attack.mitre.org/techniques/T1098/