feat: better way of interacting with databases #72
Conversation
SO FREAKING COOL!
Looks great!
I love the simpler UX!
Auto-saving
Instead of auto-saving changes back to Deep Origin, I think we should give users a method to easily save changes back to Deep Origin. To fit with the typical DataFrame UX, we could add a to_deeporigin method.
- I think auto-save would make it too easy for users to unintentionally overwrite their data.
- I think auto-save would be surprising to users.
- By making auto-saving the default, it would be a little harder for users to get a data frame that they can transiently manipulate. Those users would have to take an extra step to opt out, whereas with an explicit method, the extra step falls on users who want to sync their changes back to Deep Origin.
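To illustrate, here is a minimal sketch of the explicit-save UX described above. The to_deeporigin call signature is an assumption for illustration, not the final API:

```python
import pandas as pd

# Plain pandas manipulation stays transient: nothing is written back
# to Deep Origin automatically.
df = pd.DataFrame({"compound": ["cmpd-1", "cmpd-2"], "activity": [1, 3]})
df["activity"] = df["activity"] * 2  # local edit only

# Explicit, opt-in sync back to Deep Origin (hypothetical method,
# mirroring pandas conventions like to_csv / to_sql):
# df.to_deeporigin()
```

This mirrors how pandas already separates in-memory edits from persistence methods such as to_csv and to_sql.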
Generalizing to enable additional use cases
By making to_deeporigin more explicit, we could also build on this idea further to enable additional use cases. See below.
- Use to_deeporigin to create a new database
  - User creates a DataFrame
  - User calls to_deeporigin to sync the DataFrame to Deep Origin. The user must supply arguments for the database ID, name, and row ID prefix. to_deeporigin then attaches this metadata to the DataFrame.
- Use to_deeporigin to create new rows
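A hypothetical sketch of the create-a-new-database flow: the argument names (database_id, name, row_id_prefix) follow the description above, but the exact signature is an assumption:

```python
import pandas as pd

# User creates an ordinary DataFrame...
df = pd.DataFrame({"name": ["compound-1", "compound-2"], "mw": [46.07, 45.09]})

# ...then syncs it to Deep Origin as a *new* database. Hypothetical call;
# to_deeporigin would attach this metadata to the DataFrame afterwards,
# so later calls could sync without re-supplying the arguments.
# df.to_deeporigin(
#     database_id="my-database",
#     name="My database",
#     row_id_prefix="cmpd",
# )
```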
Documentation
I think we should expand the documentation for this a little to outline how various scenarios are handled (or not). I don't think we need to handle all of the use cases right now. Rather, we could simply make it clear what is and isn't presently supported so users aren't surprised.
- Creating new databases
- Creating new columns
- Creating new rows
- Editing the names of databases
- Editing the columns
- Deleting columns
- Deleting rows
Left several minor suggestions for clarifying the documentation.
problem description
a typical user flow is
right now, to do so, we have to make API calls where we laboriously pass the database ID, column IDs, etc.
proposed solution -- ux
1. user creates a df using pandas-like syntax
2. user modifies some data
the dataframe warns user that there are unsynced changes
3. user uses pandas-like to_deeporigin df method to write changes back to DO
4. user views df again
5. user turns on auto_sync to live dangerously
6. user makes some changes
technical implementation
we create a new class that subclasses a pandas dataframe, and write special methods to talk to the data hub
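As a sketch of that subclassing approach (the attribute names and sync logic here are illustrative assumptions, not the actual implementation):

```python
import pandas as pd

class DeepOriginDataFrame(pd.DataFrame):
    """Sketch: a DataFrame subclass that tracks Deep Origin metadata."""

    # pandas propagates attributes listed in _metadata through operations
    _metadata = ["deeporigin_database_id", "_unsynced"]

    @property
    def _constructor(self):
        # ensure pandas operations return this subclass, not pd.DataFrame
        return DeepOriginDataFrame

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._unsynced = True  # flag local edits not yet written back

    def to_deeporigin(self):
        # placeholder: a real implementation would call the Deep Origin API
        print(f"Syncing changes to database {self.deeporigin_database_id}")
        self._unsynced = False

df = DeepOriginDataFrame({"x": [1, 2]})
df.deeporigin_database_id = "my-database"
df["x"] = [3, 4]       # marks the frame as unsynced
df.to_deeporigin()     # explicit sync clears the flag
```

Listing the custom attributes in _metadata and overriding _constructor is the standard pandas pattern for making metadata survive slicing and other operations.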
changes
support for intelligently creating new columns (won't do in this PR)
changes post review
to_deeporigin will print a message to console