While working with microservices backed by a Postgres database, a database migration task came up, so I created this post to collect some important notes.
Reference-1: A Discussion on Reddit
Good Comments
Comment-1
We have used this approach in production for the last 10 years, and it works just fine. Our migrations are run from the server code … during server startup it simply checks whether all migrations have already been applied, and runs the missing ones. The price we pay is that the server is down until the migration finishes, but typically they take no longer than a couple of seconds, sometimes minutes.
A common pattern is to:
- put migrations in SQL files with unique names
- have a metadata table in your DB to record the migrations that have been run
- write a script (say make migrate) to run any outstanding migrations and record them in the DB (this can be very simple)
I definitely would not work with DB dumps or diffs thereof. Migrations often need to change data, not just schemas. I also wouldn't bother with explicit rollbacks.
That way you always know which migrations have been run, can check changes into version control, can run them out of order, and can rerun them easily when testing. Here is an example:
We’re using https://github.com/golang-migrate/migrate for DB migrations.
Comment-2
There are a few pieces of advice on how to prepare migration sources: https://github.com/golang-migrate/migrate/blob/master/MIGRATIONS.md
GitHub reference:
https://github.com/fragmenta/fragmenta/blob/master/migrate.go
Comment-3
Database migrations are written in Go. Use as CLI or import a library.
https://github.com/golang-migrate/migrate
Comment-4
I recently started to use IntelliJ IDEA, which will generate the migration SQL for me.
Comment-5
It's really simple, but we just keep our ALTERs and whatnot in numbered SQL files and run them sequentially at release.
We’ve been doing it this way for years and it hasn’t burned us yet.
Comment-6
We use https://github.com/jackc/tern at work. It's very simple: it works either as a binary or integrated into your server, takes just SQL in a series of numbered files plus a very simple config for the DB info, and tern will run only the files that need to be run to bring your DB up to date.
Comment-7
Those of us developing and scaling large databases have adopted a philosophy over the years: "Never modify, always add with a default value set, and don't FK."
This means:
- Never modify a column once it has been added to production.
- Make sure your new column has a default value set, so existing tables with billions of rows automatically get a value without you having to insert billions of new values.
- Don't use FKs (foreign keys).
- Oh, and don't use an ORM if you can avoid it – if not closely inspected with debug statements, your new ORM query might touch something like 10 tables for a single ID lookup.
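The "always add with a default" rule can be illustrated with a short migration fragment (table and column names are hypothetical):

```sql
-- Instead of modifying an existing column, add a new one with a default.
-- In PostgreSQL 11+ a constant default like this is stored as metadata,
-- so a billion-row table is not rewritten and every row gets the value
-- "for free".
ALTER TABLE orders
    ADD COLUMN archived boolean NOT NULL DEFAULT false;
```

On PostgreSQL versions before 11, adding a NOT NULL column with a default did force a full table rewrite, which is one reason this rule became folklore on large databases.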
Comment-8
I like github.com/BurntSushi/migration – I never see it mentioned, and it's deliciously simple. Migrations run automatically when you open a connection. If that model doesn't work for you, the code is super simple and you could just use it as a base for your own library.
Comment-9
We use https://github.com/golang-migrate/migrate and have a cmd/migrate tool for running migrations.