
This project contains two main components:

C3.ServiceFabric.HttpCommunication

An HTTP implementation of ICommunicationClient (part of the Service Fabric SDK). It resolves service endpoints and contains the retry logic. Please look at HTTP Communication for details.
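To give a rough idea of the concept (this is a sketch, not the actual API of this library), an ICommunicationClient for HTTP can simply pair an HttpClient with the endpoint that was resolved via the naming service. The class and member names below are assumptions:

```csharp
using System;
using System.Fabric;
using System.Net.Http;
using Microsoft.ServiceFabric.Services.Communication.Client;

// Sketch only: a communication client that wraps an HttpClient and a resolved endpoint.
public class HttpCommunicationClient : ICommunicationClient
{
    public HttpCommunicationClient(HttpClient httpClient, Uri baseAddress)
    {
        HttpClient = httpClient;
        BaseAddress = baseAddress;
    }

    public HttpClient HttpClient { get; }

    // Base address of the resolved service endpoint, e.g. "http://10.0.0.4:8080/".
    public Uri BaseAddress { get; }

    // Members required by ICommunicationClient; the communication client factory sets these.
    public ResolvedServicePartition ResolvedServicePartition { get; set; }
    public string ListenerName { get; set; }
    public ResolvedServiceEndpoint Endpoint { get; set; }
}
```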

C3.ServiceFabric.HttpServiceGateway

The actual gateway, implemented as an ASP.NET Core middleware. Please look at HTTP Gateway for details.
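As a feel for how such a middleware is wired up, here is a hypothetical Startup.Configure fragment. The namespace, extension method and options type shown (RunHttpServiceGateway, HttpServiceGatewayOptions) and the service URI are assumptions; see HTTP Gateway for the actual API:

```csharp
using System;
using Microsoft.AspNetCore.Builder;
// using C3.ServiceFabric.HttpServiceGateway;  // assumed namespace of the gateway middleware

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Hypothetical wiring: forward everything under "/service" to one Service Fabric service.
        app.Map("/service", branch =>
        {
            branch.RunHttpServiceGateway(new HttpServiceGatewayOptions
            {
                ServiceName = new Uri("fabric:/MyApp/MyService") // placeholder service name
            });
        });
    }
}
```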

Background

Azure Service Fabric is a cluster platform for hosting service-oriented applications. It contains a feature-rich orchestration platform which allows you to configure how many nodes your services should run on. If a node goes down or if Service Fabric has to reconfigure the service placements, your services are moved to other nodes automatically. Due to this dynamic nature, the cluster also contains a "naming service" which gives you the current address of a service.
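For example, code running inside the cluster can ask the naming service for the current address of a service through the SDK's ServicePartitionResolver (the application and service names below are placeholders):

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public static class AddressLookup
{
    public static async Task<string> ResolveAsync()
    {
        ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();

        // Ask the naming service where the service is currently placed.
        ResolvedServicePartition partition = await resolver.ResolveAsync(
            new Uri("fabric:/MyApp/MyStatelessService"),   // placeholder service URI
            ServicePartitionKey.Singleton,
            CancellationToken.None);

        // Returns the address of one resolved endpoint.
        return partition.GetEndpoint().Address;
    }
}
```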

Calling this naming service is fine if your code runs inside the cluster as well. If you are using the default communication stacks or WCF for your services, you can even use the built-in classes from the SDK (e.g. ServiceProxy) which make this process transparent.
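A minimal sketch of that transparency, assuming a remoting-enabled service; the interface ICounterService and the fabric: URI are hypothetical:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical remoting contract implemented by a service in the cluster.
public interface ICounterService : IService
{
    Task<long> GetCountAsync();
}

public static class Caller
{
    public static Task<long> GetCountAsync()
    {
        // ServiceProxy resolves the current address via the naming service
        // and re-resolves it transparently when the service moves to another node.
        ICounterService proxy = ServiceProxy.Create<ICounterService>(
            new Uri("fabric:/MyApp/CounterService")); // placeholder service URI
        return proxy.GetCountAsync();
    }
}
```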

However, if you want to access your services from a computer which is not part of the cluster, you usually communicate with the cluster through a load balancer. If you set up a Service Fabric cluster in Microsoft Azure, this is automatically configured for you. Since the load balancer is a mechanism outside of the cluster, it is not aware of the Service Fabric placement settings and can't know where to forward a call if the target service is only placed on one or a few of the cluster nodes.

For this reason, services which should be accessible through a load balancer have to be placed on every node (InstanceCount="-1"), and if you have multiple services which should be routed through the load balancer, each one needs its own public port. Both approaches have disadvantages and are often not desired.
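For reference, the instance count is typically set in the DefaultServices section of the ApplicationManifest.xml; the service and type names below are placeholders:

```xml
<!-- Placeholder names; InstanceCount="-1" means "one instance on every node". -->
<Service Name="MyGateway">
  <StatelessService ServiceTypeName="MyGatewayType" InstanceCount="-1">
    <SingletonPartition />
  </StatelessService>
</Service>
```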

To solve this problem, you need an additional service which is placed on every node and acts as a gateway to your actual services. A gateway service has many advantages:

  • You only need to set up one port on your load balancer (e.g. 80 or 443 if you use HTTP)
  • You can restrict access to your cluster resources on the network level
  • The callers don't need to know anything about the cluster
  • The gateway can translate protocols (your internal services may use different communication protocols like WCF or the built-in TCP protocol)
  • You can implement cross-cutting concerns like logging and security at the entry point of your cluster.
  • ...