Platform | Number of Instances | Reading Time |
---|---|---|
Play with Docker (testing the web service) | 1 | 10 min |
- Create an account on Docker Hub
- Open the PWD platform in your browser (for testing purposes)
- Alternatively, use an AWS EC2 instance (running Ubuntu) or any other Linux-based instance
- On PWD, click Add New Instance on the left side of the screen to bring up an Alpine OS instance on the right side
In this article, I'm going to show you a better way to deploy your production web services: running multiple production containers using Docker. If you work with Docker day to day, you will come across Docker Compose. Docker really is a magical tool! We are gifted with tools in this modern era, and we should utilize them to deliver services seamlessly.
In the old approach, all of these pieces are installed on a single VPS:
1. Application Server (Node.js, Java, or Python)
2. Proxy Server (Apache, Nginx)
3. Cache Server (Redis, Memcached)
4. Database Server (MySQL, PostgreSQL, MongoDB, etc.)
The old approach is no longer preferred because automation has taken over and everyone is using CI/CD deployment. With containers, we can also capture a snapshot of a given environment, reducing the risk of deploying services into the wrong set of conditions.
In a microservices architecture, we break tightly coupled logic apart and deploy the pieces separately. In the diagram above, this means every application server is more independent and talks to the others via HTTP or RPC. But it doesn't mean you need to buy X number of VPS instances to run the services.
Containers provide a nice way to simulate and isolate features within the same machine or server. It's the era of containerization. If you wrote a service and are planning to deploy it on AWS EC2 or any cloud VPS, don't deploy your stuff as a single big chunk. Instead, run that distributed code in containers. We are going to see how to containerize our deployment using Docker and Docker Compose.
Let's see some practical examples.
1. We need an AWS account (http://aws.amazon.com/).
2. Choose EC2 from the Amazon Web Services Console.
3. On the Choose an Amazon Machine Image (AMI) menu in the AWS Console, click the Select button for a 64-bit Ubuntu image (e.g., Ubuntu Server 14.04 LTS).
4. For testing, we can use the default (possibly free) t2.micro instance (see AWS for more info on pricing).
5. Click the Next: Configure Instance Details button at the bottom right.
6. On the Configure Instance Details step, expand the Advanced Details section.
7. Under User data, select As text.
8. Enter #include https://get.docker.com into the instance User data field. CloudInit is part of the Ubuntu image we chose; it will bootstrap Docker by running the shell script located at this URL.
9. We may need to set up our Security Group to allow SSH. By default, all incoming ports to our new instance are blocked by the AWS Security Group, so we might just get timeouts when we try to connect.
10. Create a new key pair (we will use it to SSH into the instance).
11. After a few more standard choices, where the defaults are probably OK, our AWS Ubuntu instance with Docker should be running!
12. Installing with get.docker.com (as above) will create a service named lxc-docker. It will also set up a docker group, and we may want to add the ubuntu user to it so that we don't have to use sudo for every Docker command; see the sketch after this list.
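A minimal sketch of that last step, run on the new instance (the docker group is created by the install script; you will need to log out and back in for the group change to take effect):

```bash
# Add the default ubuntu user to the docker group so sudo isn't required
sudo usermod -aG docker ubuntu

# After re-logging in, verify that Docker responds without sudo
docker info
```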
Connect to the Ubuntu instance using SSH.
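A typical connection command, assuming the key pair created earlier was saved as mykey.pem (both the file name and the host below are placeholders for your own values):

```bash
ssh -i mykey.pem ubuntu@<your-ec2-public-dns>
```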
Clone the Repository:
```bash
git clone https://github.com/sangam14/web_services.git
```
Change directory to web_services (the directory created by the clone) as shown below:

```bash
cd web_services
```
Bringing up the app using Docker Compose:

```bash
docker-compose up
```
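Compose builds both images and starts the containers in the foreground. Optionally, you can run everything in the background and check on it instead; these are standard Compose commands, not specific to this repo:

```bash
docker-compose up -d   # start the services in detached mode
docker-compose ps      # list the running services
docker-compose logs -f # follow the combined container logs
```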
If you are running on PWD, click the port number that appears at the top of the screen to open the health check page. You can also run the health check with curl:
```bash
$ curl http://localhost/api/v1/healthcheck
"2018-11-01T03:26:07.605Z"
```
As you can see, we are creating a simple Express service with a health check endpoint:
https://github.com/sangam14/web_services/blob/master/app/server.js
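For reference, here is a minimal sketch of such a service (the actual server.js in the repository may differ in detail; port 8080 matches what the Compose file exposes below):

```js
// Minimal Express service with a health check endpoint (sketch)
const express = require('express');
const app = express();

// Return the current timestamp, matching the curl output shown above
app.get('/api/v1/healthcheck', (req, res) => {
  res.json(new Date().toISOString());
});

app.listen(8080, () => console.log('app listening on port 8080'));
```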
Check the Nginx configuration file:
https://github.com/sangam14/web_services/blob/master/nginx/default.conf
```nginx
upstream service {
    server app:8080;
}
```
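The rest of the configuration then just proxies incoming traffic to that upstream. A sketch of what the remainder of such a default.conf typically looks like (see the repository file above for the exact version used here):

```nginx
server {
    listen 80;

    location / {
        # Forward every request to the app container via the upstream above
        proxy_pass http://service;
    }
}
```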
nginx and app are both bridged via mynetwork, so one can access the other by its service name; DNS is already taken care of by Docker. If this privilege were not available, we would need to hard-code an IP in the Nginx configuration file or assign a static IP from the subnet in the docker-compose.yaml file. This is a wonderful thing about Docker networking.
```yaml
version: "2"
services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    networks:
      - mynetwork
  app:
    build: ./app
    networks:
      - mynetwork
    expose:
      - 8080
networks:
  mynetwork:
    driver: bridge
```
By default, all the containers we create fall under the same internal IP range (subnet). Docker networking allows us to create custom networks with additional properties like automatic DNS resolution.
In the above YAML file, we are creating a network called mynetwork. The services (containers) app and nginx lie in the same subnet and can communicate with each other without the web service container being exposed to the outside world. In this way, we have a single entry point to our web service: the Nginx service. If anyone tries to access the app service directly, they cannot, because it is hidden. This actually secures our application.
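You can see both properties from the host. The network name below assumes Compose derived the project name webservices from the directory; check docker network ls if yours differs:

```bash
# Inspect the user-defined bridge network and the containers attached to it
docker network inspect webservices_mynetwork

# Port 80 is published, so the host reaches the app only through Nginx
curl http://localhost/api/v1/healthcheck

# Port 8080 is only exposed inside mynetwork, so this fails from the host
curl http://localhost:8080/api/v1/healthcheck
```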
Sangam Biradar - smbiradar14@gmail.com - www.codexplus.in