For machine learning engineers and data scientists, the document can be used in the following ways:
- Understand the vulnerabilities that can occur in ML models.
- Implement prevention strategies to mitigate the risks associated with each vulnerability listed in the document.
- Use the sample attack scenarios to create tests that verify the resilience of models (see the sketch after this list).
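As a minimal sketch of that last bullet, the test below checks that a classifier's predictions stay stable under small random input perturbations. Everything here is an assumption for illustration: the stand-in model, the synthetic data, the noise scale, and the stability threshold would all be replaced by your own model and the attack scenarios from the document.

```python
# Hypothetical resilience test: small random perturbations should not
# flip many predictions. Model, data, and thresholds are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def test_prediction_stability_under_noise(noise_scale=0.05, min_stability=0.90):
    # Stand-in for the model under test.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Crude stand-in for an adversarial perturbation.
    rng = np.random.default_rng(0)
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)

    # Fraction of predictions that survive the perturbation.
    stability = float(np.mean(model.predict(X) == model.predict(X_noisy)))
    assert stability >= min_stability, f"stability {stability:.2f} < {min_stability}"
```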
For data engineers, the document can be used in the following ways:
- Implement prevention strategies to ensure data integrity and security (a sketch follows this list).
- Use the risk factors to assess the security of data pipelines and storage.
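One concrete form the first bullet can take is verifying dataset integrity before data enters the pipeline. The sketch below compares file digests against a known-good manifest; the file name and the truncated digest are hypothetical placeholders.

```python
# Minimal integrity gate: refuse to use a dataset file whose SHA-256
# digest does not match a known-good manifest.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = {
    "data/train.csv": "0f3a...",  # placeholder digest, not a real value
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(manifest: dict) -> None:
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            raise RuntimeError(f"integrity check failed for {name}")
```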
For MLOps engineers, the document can be used in the following ways:
- Understand the vulnerabilities to ensure secure deployment of ML models.
- Implement prevention strategies in MLOps pipelines to mitigate risks.
- Monitor and maintain the security of ML systems in operation (see the sketch after this list).
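As one possible shape for the monitoring bullet, the sketch below flags drift between a training-time label distribution and the labels a deployed model is currently producing; a sustained shift can be an early signal of data or model tampering. The threshold and the use of total variation distance are illustrative choices, not recommendations from the document.

```python
# Illustrative drift monitor: compare the live prediction-label
# distribution against a training-time baseline.
import numpy as np

def label_distribution(labels, n_classes):
    # Labels are assumed to be non-negative integer class indices.
    counts = np.bincount(np.asarray(labels), minlength=n_classes)
    return counts / counts.sum()

def drift_alert(baseline_dist, live_labels, threshold=0.15):
    live_dist = label_distribution(live_labels, len(baseline_dist))
    # Total variation distance between the two distributions.
    tvd = 0.5 * float(np.abs(np.asarray(baseline_dist) - live_dist).sum())
    return tvd > threshold
```

For example, `drift_alert([0.5, 0.5], [0, 0, 0, 1])` returns `True`, because the live labels are skewed 75/25 against a 50/50 baseline.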
For software developers building ML applications, the document can be used in the following ways:
- Understand the vulnerabilities in order to write secure code for ML applications.
- Implement prevention strategies in the development process to reduce security risks (a sketch follows this list).
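A classic example of the secure-coding bullet is model deserialisation: a bare `pickle.load` on an untrusted model file can execute arbitrary code. The sketch below uses the restricted-unpickler pattern from the Python documentation; the allow-list contents are a hypothetical example and would need to match what your models actually serialise.

```python
# Restricted unpickler: only explicitly allow-listed classes may be
# deserialised. The allow-list below is a hypothetical example.
import io
import pickle

ALLOWED = {
    ("numpy", "ndarray"),
    ("numpy.core.multiarray", "_reconstruct"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked during unpickling: {module}.{name}")

def safe_load(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Formats that do not execute code on load (for example, safetensors for model weights) avoid the problem entirely.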
For penetration testers and security analysts, the document can be used in the following ways:
- Design and perform penetration tests on ML systems (see the example probe after this list).
- Use the prevention strategies to recommend security improvements.
- Use the risk factors and threat agents to perform threat modelling of ML systems.
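As one example of the first bullet, the hypothetical probe below checks whether an inference endpoint throttles the high-volume querying that model theft and extraction attacks rely on. The URL, payload shape, and request count are assumptions, and a probe like this should only ever run against systems you are authorised to test.

```python
# Hypothetical pentest probe: does the endpoint ever answer with
# HTTP 429 (Too Many Requests) under a burst of queries?
import requests

def probe_rate_limiting(url="https://ml.example.com/predict", n=200):
    throttled = 0
    for _ in range(n):
        resp = requests.post(url, json={"features": [0.0] * 20}, timeout=5)
        if resp.status_code == 429:
            throttled += 1
    return throttled > 0  # False suggests extraction-style querying is unchecked
```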
For security leaders and policy makers, the document can be used in the following ways:
- Use it as a source for developing comprehensive security policies and strategies for securing ML systems.
- Guide the organisation's security practices and policies for the secure use of ML.
- Use the risk factors to assess the organisation's overall security posture with respect to ML systems.
This work overlaps with other projects run by the OWASP Foundation and with work done by other organisations. It may not be suitable for your needs, especially if:
- you are looking for a security reference for Large Language Models (then check out the OWASP Top 10 for LLM)
- you are working in areas such as the ethics of AI or the sustainability of AI
- you are looking for a risk assessment framework or a complete threat model for AI/ML systems (then check, for example, the AI RMF by NIST)
- you are looking for real vulnerabilities in AI/ML systems (check our RELATED document for more details)
This document may still be helpful in those cases, but if you are trying to solve one of the tasks above, the documents mentioned are worth a look first.