
Distributed version #179

Open
ndrean opened this issue Oct 28, 2023 · 7 comments


ndrean commented Oct 28, 2023

Today I made a distributed version of this, since your link is dead. You run each app on a different port, cluster the nodes, and everything works. I plan to test the deployment on Fly.io. Any interest in publishing this as a branch?

Screenshot 2023-10-28 at 21 11 36

https://github.com/ndrean/elixir-hiring-project/tree/updated-version

They (Fly.io) say it is a 2h job. Really? For example, the proposed code counts some events twice, so it is wrong. And it is not totally straightforward to sync all the views when a user leaves... It took me the whole day to make it work, just in time for the rugby!
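For reference, a minimal sketch of what "run each app on a different port and cluster them" looks like locally. The node names, cookie, topic and PubSub name are illustrative, not the exact ones in the branch; it only assumes the app already uses Phoenix.PubSub and that the dev endpoint reads its port from the PORT env var.

```elixir
# Two instances of the app, started in two terminals (illustrative;
# assumes dev.exs reads the port from System.get_env("PORT")):
#   PORT=4000 iex --sname a --cookie counter -S mix phx.server
#   PORT=4001 iex --sname b --cookie counter -S mix phx.server

# On node :a, connect to node :b (replace my-host with your machine's short hostname):
Node.connect(:"b@my-host")
Node.list()
#=> [:"b@my-host"]

# Phoenix.PubSub is cluster-aware: a broadcast on one node reaches
# subscribers on every connected node, so both LiveViews stay in sync.
Phoenix.PubSub.subscribe(Counter.PubSub, "clicks")                 # run on node :b
Phoenix.PubSub.broadcast(Counter.PubSub, "clicks", {:inc, "cdg"})  # run on node :a
```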


ndrean commented Oct 29, 2023

The app runs SQLite on a single node, on a VM in Paris, France. First, some latency tests:

Since I am located in France at the moment, latency is 30ms.
Screenshot 2023-10-29 at 14 11 13

I used a VPN to relocate to Japan: latency increased to 500ms.
Screenshot 2023-10-29 at 14 10 44

And if I relocate to the US, latency is 150ms.
Screenshot 2023-10-29 at 14 13 25


ndrean commented Oct 29, 2023

To distribute this, you use the Fly.io DNS discovery library and replicate/sync the embedded SQLite database in some way. The database currently lives on a volume of the VM running my single node in one region.
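For context, a minimal sketch of the discovery side, assuming the dns_cluster library that recent Phoenix generators ship with (the same child spec appears in my settings further down). Fly publishes every machine of the app under the <app>.internal DNS name; DNSCluster polls that name and connects the nodes. Module names are illustrative.

```elixir
defmodule Counter.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # On Fly, DNS_CLUSTER_QUERY is typically "my-app.internal". DNSCluster
      # periodically resolves that name and calls Node.connect/1 on every
      # address it finds; node names come from RELEASE_NODE (app@fly-6pn-ip).
      {DNSCluster, query: System.get_env("DNS_CLUSTER_QUERY") || :ignore},
      {Phoenix.PubSub, name: Counter.PubSub},
      CounterWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Counter.Supervisor)
  end
end
```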


ndrean commented Oct 29, 2023

I failed to deploy a distributed version with a distributed SQLite db, so I opened an issue on Fly.io about how to distribute the SQLite db. Writing to the file system works, and I understand you should use LiteFS, a kind of proxy that uses a local volume which is replicated/synced with other locations. Normally... However, I don't really believe Fly will look into this: my ambition is probably too high given their availability and the low attractiveness of this issue.


ndrean commented Oct 30, 2023

Getting closer, but not there yet: a distributed Phoenix + SQLite + LiteFS on Fly.io.

The LiteFS lease: "LiteFS only allows a single node to be the primary at any given time. The primary node is the only one that can write data to the database. The other nodes are called replicas and they provide a read-only copy."

Screenshot 2023-10-31 at 10 36 13 Screenshot 2023-10-31 at 10 43 45

Lots of moving parts, so plenty of reasons to fail. It works, probably. I am not sure it really works though: Presence is ok, but the click count is not. When connecting to a remote node, the "local" click count is rendered correctly, but the click counts of the other nodes are reset. I need to click to refresh the correct click count, which is in fact not lost.

I have 2 nodes, one in CDG, one in NRT. Two sessions are online.

I update the NRT connection: I get the correct click count for NRT, but not for CDG:

Screenshot 2023-10-31 at 11 15 14

I update the CDG connection: I get the correct click count for CDG, but not for NRT:

Screenshot 2023-10-31 at 11 15 28

I click on NRT, and everything is updated:

Screenshot 2023-10-31 at 11 15 44

🤔

All I am doing is (a sketch follows the list):

  • saving each click to the db (a simple addition in the GenServer, namely a record REGION + CLICK_COUNT) and broadcasting it,
  • reading the db when a socket mounts to populate the DOM.

The rendering happens when the socket assigns change, of course.
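A hedged sketch of that flow, not the exact code in the branch (the real app goes through a GenServer); it assumes an Ecto Click schema with :region and :count fields, and illustrative module and topic names:

```elixir
defmodule Counter.Clicks do
  @moduledoc """
  Sketch of the flow above: persist a REGION + CLICK_COUNT record,
  broadcast the change, and let every LiveView read the db on mount.
  """
  alias Counter.{Repo, Click}

  @topic "clicks"

  # Called from the LiveView event handler when a user clicks.
  def bump(region) do
    count =
      case Repo.get_by(Click, region: region) do
        nil -> Repo.insert!(%Click{region: region, count: 1}).count
        click -> Repo.update!(Ecto.Changeset.change(click, count: click.count + 1)).count
      end

    # PubSub is cluster-aware, so every connected node receives the update.
    Phoenix.PubSub.broadcast(Counter.PubSub, @topic, {:click, region, count})
    count
  end

  # Called from mount/3 to populate the DOM with all regions' counts.
  def all, do: Click |> Repo.all() |> Map.new(&{&1.region, &1.count})

  def subscribe, do: Phoenix.PubSub.subscribe(Counter.PubSub, @topic)
end
```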

My settings:

```sh
# stop when you have to deploy
fly launch
# remove release_command from fly.toml
fly consul attach
fly deploy
```

```sh
# env.sh.eex
ip=$(grep fly-local-6pn /etc/hosts | cut -f 1)
export RELEASE_DISTRIBUTION="name"
export RELEASE_NODE=$FLY_APP_NAME@$ip
```
```dockerfile
# Dockerfile - Debian based
{RUNNER}
RUN apt-get install ca-certificates fuse3...
COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
COPY litefs.yml /etc/litefs.yml
COPY --from=builder --chown=nobody:root /app/_build/${MIX_ENV}/rel/liveview_counter ./
# USER nobody
ENV ECTO_IPV6 true
ENV ERL_AFLAGS "-proto_dist inet6_tcp"
ENTRYPOINT litefs mount
```
```elixir
def start(_type, _args) do
  MyApp.Release.migrate()

  children = [
    {DNSCluster, query: System.get_env("DNS_CLUSTER_QUERY") || :ignore},
    ...
  ]
```

```elixir
# config/runtime.exs
# config :my_app, dns_cluster_query: System.get_env("DNS_CLUSTER_QUERY")
```
```yaml
# litefs.yml
fuse:
  dir: "/mnt/mydata"

data:
  dir: "/mnt/litefs"

proxy:
  addr: ":8081"
  target: "localhost:8080"
  db: "my_app_prod.db"
  passthrough:
    - "*.ico"
    - "*.png"

exec:
  - cmd: "/app/bin/server -addr :8080 -dsn /mnt/mydata"

lease:
  type: "consul"
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true

  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/${FLY_APP_NAME}"
```
```toml
# fly.toml
primary_region = "cdg"
# [deploy]
# release_command = "/app/bin/migrate"

[mounts]
source = "mydata"
destination = "/mnt"
processes = ["app"]

[env]
PHX_HOST = "my-app.fly.dev"
DNS_CLUSTER_QUERY = "my-app.internal"
DATABASE_PATH = "/mnt/mydata/my_app_prod.db"
PORT = "8080"
```

ndrean commented Oct 31, 2023

Alternative: Litestream as a background process.

a podcast

blog and companion repo

Screenshot 2023-10-31 at 14 06 06
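For the record, one way to run Litestream "as a background process" from the app itself is to supervise the binary behind a port, so replication starts and stops with the release. This is only a sketch under assumptions: the litestream binary and an /etc/litestream.yml config exist in the image, and the module name is illustrative (the companion repo may do it differently, e.g. via a wrapper library).

```elixir
defmodule Counter.Litestream do
  @moduledoc """
  Sketch: run `litestream replicate` as a long-lived OS process owned by
  the BEAM, so it is started and stopped with the application.
  """
  use GenServer
  require Logger

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    bin = System.find_executable("litestream") || raise "litestream binary not found"

    port =
      Port.open({:spawn_executable, bin}, [
        :binary,
        :exit_status,
        args: ["replicate", "-config", "/etc/litestream.yml"]
      ])

    {:ok, port}
  end

  @impl true
  def handle_info({port, {:data, line}}, port) do
    # Forward Litestream's output to the application logs.
    Logger.info("[litestream] " <> String.trim(line))
    {:noreply, port}
  end

  def handle_info({port, {:exit_status, status}}, port) do
    # Crash so the supervisor restarts replication if the process dies.
    {:stop, {:litestream_exited, status}, port}
  end
end
```

It would sit in the supervision tree right after the Repo, before the Endpoint.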


ndrean commented Nov 4, 2023

I ended up removing LiteFS and using :erpc.call. Every write (and the reads when a node starts) to the SQLite db is made on one node, the "primary" node. This works because Fly assigns a FLY_REGION to each node, so I just set an env var, say PRIMARY_REGION=cdg, to designate the node that starts the db, and all the other nodes find it once clustered. Then Litestream runs in the background to back up the db.
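A hedged sketch of that routing (module and helper names are illustrative, the branch may differ): a node considers itself primary when its FLY_REGION matches PRIMARY_REGION, and every replica forwards writes to it with :erpc.call/5.

```elixir
defmodule Counter.PrimaryWriter do
  @moduledoc """
  Sketch: run db work on the single "primary" node. Assumes every node has
  the FLY_REGION and PRIMARY_REGION env vars set.
  """

  # True only on the node allowed to open and write the SQLite db.
  def primary?, do: System.get_env("FLY_REGION") == System.get_env("PRIMARY_REGION")

  # Run `apply(mod, fun, args)` locally on the primary, remotely otherwise.
  def on_primary(mod, fun, args) do
    if primary?() do
      apply(mod, fun, args)
    else
      case Enum.find(Node.list(), &primary_node?/1) do
        nil -> {:error, :primary_not_reachable}
        node -> :erpc.call(node, mod, fun, args, 5_000)
      end
    end
  end

  # Ask a remote node whether it is the primary (a cheap env lookup).
  defp primary_node?(node) do
    :erpc.call(node, System, :get_env, ["FLY_REGION"], 5_000) ==
      System.get_env("PRIMARY_REGION")
  end
end

# Usage, on any node: writes and the initial read on mount go through it.
# Counter.PrimaryWriter.on_primary(Counter.Clicks, :bump, ["nrt"])
```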


ndrean commented Nov 11, 2023

There is also the "local-first" paradigm, mostly aimed at mobile-first apps. You use your local embedded SQLite database, and to keep every connected client in sync you add a layer that syncs with a Postgres database. This is where ElectricSQL comes into play.

I understand this is really for mobile-first, and more precisely offline-capable apps. This means no LiveView but an SPA/PWA client.

https://news.ycombinator.com/item?id=37584049

Screenshot 2023-11-11 at 21 39 42

This video is a bit tough, though...
Screenshot 2023-11-11 at 21 43 51
