This asynchronous web crawler is designed for reconnaissance tasks. It crawls a specified URL up to a defined depth and extracts useful information such as:
- Email addresses
- Internal and external links
- JavaScript files
- Images
- Document URLs (e.g., PDF, DOC, XLS)
- Comments and potential sensitive data
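As a rough illustration of the kind of extraction listed above, the sketch below pulls emails, internal/external links, and document URLs out of raw HTML with regular expressions. This is a minimal sketch, not the tool's actual implementation: the function name `extract_artifacts`, the regexes, and the extension list are all assumptions for demonstration.

```python
import re

# Hypothetical helper illustrating the kind of extraction the crawler performs.
# These patterns and names are assumptions, not spider.py's real internals.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
LINK_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)
DOC_EXTS = (".pdf", ".doc", ".docx", ".xls", ".xlsx")

def extract_artifacts(html: str, base_domain: str) -> dict:
    """Pull emails, links, and document URLs out of a page's HTML."""
    emails = set(EMAIL_RE.findall(html))
    links = LINK_RE.findall(html)
    # Relative paths and links containing the base domain count as internal.
    internal = [l for l in links if l.startswith("/") or base_domain in l]
    external = [l for l in links if l.startswith("http") and base_domain not in l]
    documents = [l for l in links if l.lower().endswith(DOC_EXTS)]
    return {"emails": emails, "internal": internal,
            "external": external, "documents": documents}

html = ('<a href="/about">About</a> '
        '<a href="https://other.org/report.pdf">report</a> '
        'contact: admin@exemple.com')
print(extract_artifacts(html, "exemple.com"))
```

A real crawler would fetch pages concurrently (e.g. with `aiohttp`) and feed each response body through a routine like this one before following the discovered internal links to the next depth level.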
Run the tool with:

```shell
python spider.py -u https://exemple.com -d [depth]
```