If you're running in production, you should set these securely.
However, if you just want to experiment, set the following values.

These are all Django settings, defined in `stixify/settings.py`:
- `DJANGO_SECRET`: `insecure_django_secret`
- `DJANGO_DEBUG`: `True`
- `DJANGO_ALLOWED_HOSTS`: BLANK
- `DJANGO_CORS_ALLOW_ALL_ORIGINS`: `True`
- `DJANGO_CORS_ALLOWED_ORIGINS`: LEAVE EMPTY
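For local experimentation, these defaults can go into the `.env` file as-is (a sketch of the insecure values above; set them properly for production):

```env
# Insecure defaults for local experimentation only
DJANGO_SECRET=insecure_django_secret
DJANGO_DEBUG=True
DJANGO_ALLOWED_HOSTS=
DJANGO_CORS_ALLOW_ALL_ORIGINS=True
DJANGO_CORS_ALLOWED_ORIGINS=
```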
These are all Django settings, defined in `stixify/settings.py`:
- `POSTGRES_HOST`: `pgdb`
- `POSTGRES_PORT`: BLANK
- `POSTGRES_DB`: `postgres`
- `POSTGRES_USER`: `postgres`
- `POSTGRES_PASSWORD`: `postgres`
- `CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP`: `1`
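The matching `.env` lines, assuming a Postgres container reachable as `pgdb` (as the default host implies):

```env
# Database and Celery settings (pgdb assumes a Postgres container/service with that name)
POSTGRES_HOST=pgdb
POSTGRES_PORT=
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP=1
```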
These define how the API behaves.
- `MAX_PAGE_SIZE`: `50` - the maximum number of results the API will ever return before pagination
- `DEFAULT_PAGE_SIZE`: `50` - the default page size of results returned by the API
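A minimal `.env` sketch with these documented defaults:

```env
# MAX_PAGE_SIZE caps results per page; DEFAULT_PAGE_SIZE is used when no page size is requested
MAX_PAGE_SIZE=50
DEFAULT_PAGE_SIZE=50
```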
Note, this code will not install an ArangoDB instance.
If you're new to ArangoDB, you can install the community edition quickly by following the instructions here.
The script will automatically create a database called `stixify_database` when the container is spun up (if it does not exist).
All extractions will be added to the following collections in the database:

- `stixify_edge_collection` (relationships)
- `stixify_vertex_collection` (extractions)
The ArangoDB settings you need to configure are:
- `ARANGODB_HOST_URL`: `'http://host.docker.internal:8529'` - if you are running ArangoDB locally, be sure to set `ARANGODB_HOST_URL='http://host.docker.internal:8529'` in the `.env` file, otherwise you will run into networking errors
- `ARANGODB_USERNAME`: `root` - change this if needed
- `ARANGODB_PASSWORD`: use the password of the `ARANGODB_USERNAME` user
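A sketch of the ArangoDB block in `.env`, assuming ArangoDB runs on the host machine with the default `root` user (the password is a placeholder to replace with your own):

```env
# ArangoDB runs on the host while Stixify runs in Docker, hence host.docker.internal rather than localhost
ARANGODB_HOST_URL='http://host.docker.internal:8529'
ARANGODB_USERNAME=root
ARANGODB_PASSWORD=YOUR_ARANGODB_ROOT_PASSWORD
```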
- `INPUT_TOKEN_LIMIT`: `15000` - (REQUIRED IF USING AI MODES) ensure the input/output token count meets requirements and is supported by the model selected. Files with more tokens than this limit will not be processed.
- `TEMPERATURE`: `0.0` - the temperature value ranges from 0 to 2, with lower values indicating greater determinism and higher values indicating more randomness in responses
- `OPENAI_API_KEY`: YOUR_API_KEY - (REQUIRED IF USING OPENAI MODELS IN AI MODES) get it from https://platform.openai.com/api-keys
- `ANTHROPIC_API_KEY`: YOUR_API_KEY - (REQUIRED IF USING ANTHROPIC MODELS IN AI MODES) get it from https://console.anthropic.com/settings/keys
- `GOOGLE_API_KEY`: YOUR_API_KEY - (REQUIRED IF USING GOOGLE GEMINI MODELS IN AI MODES) get it from the Google Cloud Platform (making sure the Gemini API is enabled for the project)
- `BIN_LIST_API_KEY`: BLANK - needed for enriching credit card extractions. You can get an API key here: https://rapidapi.com/trade-expanding-llc-trade-expanding-llc-default/api/bin-ip-checker
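If you use the AI modes, the block in `.env` might look like this sketch; the keys are placeholders, and you only need the key(s) for the provider(s) you select:

```env
# AI extraction settings (placeholder keys -- substitute your own)
INPUT_TOKEN_LIMIT=15000
TEMPERATURE=0.0
OPENAI_API_KEY=YOUR_API_KEY
ANTHROPIC_API_KEY=YOUR_API_KEY
GOOGLE_API_KEY=YOUR_API_KEY
BIN_LIST_API_KEY=
```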
Stixify requires CTI Butler to look up ATT&CK, CAPEC, CWE, ATLAS, and locations in blogs.

- `CTIBUTLER_HOST`: `'http://host.docker.internal:8006'` - if you are running CTI Butler locally, be sure to set `CTIBUTLER_HOST='http://host.docker.internal:8006'` in the `.env` file, otherwise you will run into networking errors
Stixify requires Vulmatch to look up CVEs and CPEs in blogs.

- `VULMATCH_HOST`: `'http://host.docker.internal:8005'` - if you are running Vulmatch locally, be sure to set `VULMATCH_HOST='http://host.docker.internal:8005'` in the `.env` file, otherwise you will run into networking errors
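With both services running locally, the corresponding `.env` lines use the documented `host.docker.internal` values:

```env
# CTI Butler and Vulmatch run on the host, so Stixify's containers reach them via host.docker.internal
CTIBUTLER_HOST='http://host.docker.internal:8006'
VULMATCH_HOST='http://host.docker.internal:8005'
```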
- `GOOGLE_VISION_API_KEY`: YOUR_API_KEY - used by file2txt to extract text from images. Instructions to create an API key are here.
- `MARKER_API_KEY`: YOUR_API_KEY - used by file2txt to convert files into markdown. Instructions to create an API key are here.
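Both keys are set directly in `.env` (placeholders shown):

```env
# file2txt keys (placeholders -- substitute your own)
GOOGLE_VISION_API_KEY=YOUR_API_KEY
MARKER_API_KEY=YOUR_API_KEY
```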
You can choose to store static assets on Cloudflare R2. The default is local storage.
- `USE_S3_STORAGE`: `0` - set to `1` to enable
- `R2_ENDPOINT_URL`: BLANK - will be something like `https://ID.r2.cloudflarestorage.com`
- `R2_BUCKET_NAME`: BLANK - the bucket name you want to use
- `R2_ACCESS_KEY`: BLANK - generated when creating an R2 API token. Make sure it has read+write access to the `R2_BUCKET_NAME` specified
- `R2_SECRET_KEY`: BLANK - generated when creating an R2 API token
- `R2_CUSTOM_DOMAIN`: BLANK - optional when using R2, but if you don't set your bucket to public, your images will hit 403s because they will be served from the raw endpoint (e.g. https://ID.r2.cloudflarestorage.com/BUCKET/IMAGE/PATH.jpg), which will be inaccessible. The easiest way to do this is to enable the R2.dev subdomain for the bucket. It looks like `pub-ID.r2.dev`. Do not include the `https://` part.
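A sketch of an R2-enabled `.env` block; every value below is a placeholder for your own account, bucket, and token details:

```env
# Cloudflare R2 storage (placeholders -- substitute your own account details)
USE_S3_STORAGE=1
R2_ENDPOINT_URL=https://ID.r2.cloudflarestorage.com
R2_BUCKET_NAME=my-stixify-assets
R2_ACCESS_KEY=YOUR_R2_ACCESS_KEY
R2_SECRET_KEY=YOUR_R2_SECRET_KEY
R2_CUSTOM_DOMAIN=pub-ID.r2.dev
```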