(Replying to PARENT post)
I also only perform migrations as part of app init, not separately.
(Replying to PARENT post)
Unless, of course, the view has magic to use either column name based on what's available in the underlying table, but I didn't see that on `createView` either.
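To be clear, the kind of magic I mean would be something like the sketch below, which I did not find in `createView` (table and column names are made up):

    CREATE VIEW users_compat AS
    SELECT id,
           COALESCE(email_v2, email) AS email  -- tolerate either column being populated
    FROM users;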
(Replying to PARENT post)
That being said, I've always found that doing migrations within the app is safer, because rollbacks are tied to the code. For example, to change a constraint, we add a new column with the new constraint, then change the code to read both columns: take the new value if it exists, otherwise fall back to the old one, perform the operation, and write the result to both columns.

We run that for a while, and in parallel do a background migration of the existing data, throttled so it doesn't overload the database. At some point the new column is fully populated, and we can deploy a version of the code that only reads the new column. Then you check your stats to make sure there are no reads from the old column, and you can drop it whenever you want, which is a very fast operation. You get your space back on the next vacuum (protip: make sure you have a solid vacuuming schedule!).
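Concretely, the pattern looks something like this; the table, the column names, and the constraint itself are all hypothetical:

    -- Expand: add the new column carrying the new constraint.
    -- Existing rows are NULL here, which passes the CHECK.
    ALTER TABLE users ADD COLUMN email_v2 text CHECK (email_v2 LIKE '%@%');

    -- The app now writes both columns and prefers the new one on read:
    --   SELECT COALESCE(email_v2, email) AS email FROM users WHERE id = $1;
    --   UPDATE users SET email = $1, email_v2 = $1 WHERE id = $2;

    -- Contract: once stats show no reads of the old column, drop it.
    -- The drop itself is fast; vacuum reclaims the space later.
    ALTER TABLE users DROP COLUMN email;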
What are some use cases where you'd want to migrate the schema without also migrating the code at the same time?
(Replying to PARENT post)
> Page not found
> We're sorry, but the page you requested could not be found. This could be because the page has been moved or deleted. We're tracking these errors so we can fix them in the future.
(Replying to PARENT post)
You can't rename a table without taking a lock. How exactly do you swap the original table out for a view that points back to it? The docs don't go into detail on how exactly this is done; I'll check the code later.
It looks like the tool maintains two copies of the table, but how exactly this copying happens isn't explained either. A potential issue is that you need enough spare disk space and I/O capacity to support it.
The copy-table-plus-trigger approach might not work for databases of significant size. For example, I have seen instances with >50k qps against a table where it is not possible to run pg_repack: it never catches up, and it never manages to take the lock it needs to switch to the new table. You can simulate this with overlapping long-running queries.
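For reference, the generic shape of that approach is roughly the following; the names are made up and this is not pgroll's actual implementation:

    -- Shadow copy of the table, including indexes and constraints
    CREATE TABLE users_new (LIKE users INCLUDING ALL);

    -- Trigger keeps the shadow copy in sync with ongoing writes
    CREATE FUNCTION users_sync() RETURNS trigger AS $$
    BEGIN
      INSERT INTO users_new VALUES (NEW.*)
      ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email;
      RETURN NEW;
    END $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_sync_trg AFTER INSERT OR UPDATE ON users
      FOR EACH ROW EXECUTE FUNCTION users_sync();

    -- ...then copy existing rows in batches, and finally take a brief
    -- ACCESS EXCLUSIVE lock to swap the names, which is exactly the step
    -- that never wins under sustained load like the >50k qps case above.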
(Replying to PARENT post)
I'd also be a bit nervous that any of these tricks may themselves cause (perhaps temporary) performance problems.
(Replying to PARENT post)
The only way to do it without downtime is to make the app forward compatible before making DB changes, and make it backward compatible before undoing the DB changes.
There will always be a short period where the app version and DB version will be out of sync.
(Replying to PARENT post)
Also, how do you handle backfilling columns? How do you make sure you haven't missed any data before dropping the old column?
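The only pattern I know for that is a batched backfill plus a final verification query, roughly like this (names hypothetical), but I'd like to hear how others verify it:

    -- Repeat until it updates zero rows:
    UPDATE users
    SET email_v2 = email
    WHERE id IN (
      SELECT id FROM users
      WHERE email_v2 IS NULL AND email IS NOT NULL
      LIMIT 1000
    );

    -- Sanity check before the drop; should return 0:
    SELECT count(*) FROM users WHERE email_v2 IS DISTINCT FROM email;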
(Replying to PARENT post)
Would love to hear your thoughts!
(Replying to PARENT post)
I've been using dbmate, which uses plain SQL and works really well.
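For anyone who hasn't tried it: each migration is a single timestamped SQL file with up and down sections; the DDL below is just a made-up example:

    -- migrate:up
    ALTER TABLE users ADD COLUMN email_v2 text;

    -- migrate:down
    ALTER TABLE users DROP COLUMN email_v2;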
(Replying to PARENT post)
Presumably (I didn't see this mentioned in the docs yet) the trick is that you redeploy the app with a new connection string referencing the new schema (as in [3]), while an old deployment can keep referencing the old schema with its connection string.
Hopefully I got that right.
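If I have that right, the pin would look something like this; the versioned schema name here is made up:

    -- In the connection string, via libpq's "options" parameter (as in [3]):
    --   postgres://app@db/mydb?options=-csearch_path%3Dpublic_02_add_email
    -- or equivalently per session:
    SET search_path TO public_02_add_email;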
[0] https://www.postgresql.org/docs/current/ddl-schemas.html
[1] https://github.com/xataio/pgroll/blob/main/pkg/roll/execute....
[3] https://stackoverflow.com/a/67525360