
Google Crawler

This API parses and organizes search results from Google. In the backend, the API uses Pyppeteer running in a Docker container to parse the Google search results.
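The exact response schema isn't documented here, but as a rough sketch of what "parse and organize" could mean, here is a hypothetical grouping of parsed results by domain. All field names and the `organize` helper below are illustrative assumptions, not the API's actual output:

```python
from urllib.parse import urlparse

# Hypothetical sample of parsed results -- this schema is illustrative,
# not the API's actual response format.
results = [
    {"title": "Example Domain", "url": "https://example.com/", "snippet": "..."},
    {"title": "Example, again", "url": "https://example.com/about", "snippet": "..."},
]

def organize(parsed):
    """Group parsed search results by domain (a sketch of the 'organize' step)."""
    by_domain = {}
    for entry in parsed:
        domain = urlparse(entry["url"]).netloc
        by_domain.setdefault(domain, []).append(entry)
    return by_domain

grouped = organize(results)
print(len(grouped["example.com"]))  # -> 2
```

Check the Swagger UI below for the real endpoint and response shapes.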

Here's the Swagger UI for the API endpoint.

A live demo runs on my server, with a limit of 6 requests per minute.

Live Demo -> https://crawler.0x30c4/v1/docs
Live Demo Branch


Built With.

  • Docker - Platform and software deployment.
  • FastAPI - Backend framework.
  • Redis - Caching database.
  • Pyppeteer - Headless Chrome/Chromium automation library (unofficial port of Puppeteer).

Prerequisites.

  • Docker - Platform and software deployment.
  • make - As the build system.

Production Version.

How to run the production version.

To run the production version, first install Docker and make on your system, then clone the repo.

You can change the environment variables in env/.env.prod
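A minimal sketch of what env/.env.prod might contain; the variable names below are guesses for illustration only, so check the actual file in the repo for the real keys:

```
# Hypothetical keys -- see env/.env.prod in the repo for the real ones
REDIS_HOST=redis
RATE_LIMIT_PER_MINUTE=6
```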

Change your user to root.

# Clone the repo
$ git clone https://github.com/0x30c4/google-crawler.git
$ cd google-crawler

# Run this command to build the production image.
$ make build-prod

# Run this command to run the production version.
$ make run

# To stop it.
$ make stop-prod

Development Version.

How to run the development version. To run the development version, first install [Docker](https://www.docker.com) and [make](https://tldp.org/HOWTO/Software-Building-HOWTO-3.html) on your system, then clone the repo.

You can change the environment variables in env/.env.dev

Change your user to root.

# Clone the repo
$ git clone https://github.com/0x30c4/google-crawler.git
$ cd google-crawler

# Run this command to build the development image.
$ make build-dev

# Run this command to run the development version.
$ make run-dev

# To stop it.
$ make stop-dev