Reject duplicate users in a Global Scale deployment #78
Comments
While this might be true, this cannot be the default behavior. We have to assume that some setups can have the same accountId on two different instances.
Can you please elaborate? How would the Lookup Server know whether to redirect
by specifying
One of the key points of Global Scale, if I understood correctly, is to give users the experience of a plain instance, so that they do not expect to enter their domain in the login box. In your view, should all users enter the Global Scale node on which they are defined in the login box?
Global Scale can be used in different ways. I do not see an issue with having namesake accounts on different instances; the issue is having one user with multiple accounts on different instances. On a side note, this will require some work on the server side to not create the account based on the decision of the LUS?
In a Global Scale deployment, duplicate users should be rejected.
For example, if we have two nodes gs-node01 and gs-node02, we shouldn't make it possible to have both user1@gs-node01 and user1@gs-node02 in the users table, as this situation leads to inconsistent behavior: to which node is the user redirected when they log into the instance through the Global Site Selector node? Whether a deployment is a Global Scale deployment can be determined from the GLOBAL_SCALE boolean flag.
Note that this check should be done before creating the user locally in the Global Scale Node to avoid creating another inconsistent situation in which the user is created in the Global Scale Node but not in the Lookup Server. See this issue as an example of this scenario.
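Continuing the same sketch, the ordering described in the note could look like the following; lookup_server and local_node are hypothetical stand-ins for the real components, and the only point is that the local account is created after the Lookup Server has accepted the registration.

```python
class DuplicateUserError(Exception):
    """Raised when the Lookup Server already has this account on another node."""


def create_global_scale_user(lookup_server, local_node, account: str, node: str) -> None:
    # 1. Register with the Lookup Server first; duplicates are rejected at this step.
    lookup_server.register(account, node)      # raises DuplicateUserError on conflict
    # 2. Create the account on the Global Scale node only after registration
    #    succeeded, so a user can never exist locally without a Lookup Server entry.
    local_node.create_account(account)
```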