Database migrations are crucial for evolving application schemas without disrupting existing data or user experience. Integrating migrations into Go CLI applications ensures that the database structure is automatically created and updated as the app develops, eliminating the need for manual SQL execution by end users. The lesson demonstrates how to use the powerful golang-migrate package both from the command line and directly in Go code, highlighting how to write up and down migration scripts, manage migration errors, and ensure migrations are idempotent. It also addresses distribution concerns by employing Go’s embed package with the IOFS driver, allowing migration files to be bundled into the application binary for seamless deployment. These practices streamline database setup and maintenance, providing a robust and user-friendly foundation as your CLI tool’s features grow.
Now that we've managed to take a look at how we can integrate with a database, be it SQLite or Postgres, when it comes to our CLI applications, the next thing we're going to look at in this lesson is how to perform database migrations. But what are database migrations?
Well, basically, here I have an application that does the exact same thing as my CMS posts list command, where I'm pulling all of the blog posts out of the SQLite database and writing them out to the console using the tab writer. By the way, one thing to note is that I'm actually using the modernc.org/sqlite driver in this case, instead of the mattn CGO version. So far, I've not had any problems with it, and it's a lot easier to run given that it doesn't use CGO under the hood. So if you're using SQLite, you may want to swap your driver to this one instead.
In any case, the code for this is available on GitHub, and it's doing pretty much what we've seen already: it opens up a SQLite database and performs a simple query to select the id, title, slug, and content from the posts table. If I go ahead and run this code with the SQLite database I already created, you can see it works as follows. However, if I didn't have a SQLite database available and I run it, you can see my code now crashes. This is because I don't have a posts table, which is to be expected.
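For reference, here's a minimal sketch of what that list code looks like, assuming the modernc.org/sqlite driver (which registers itself under the driver name "sqlite") and a local db.sqlite file; the actual code on GitHub may differ slightly:

package main

import (
    "database/sql"
    "fmt"
    "log"
    "os"
    "text/tabwriter"

    _ "modernc.org/sqlite" // pure-Go SQLite driver, no CGO required
)

func main() {
    db, err := sql.Open("sqlite", "./db.sqlite")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    rows, err := db.Query("SELECT id, title, slug, content FROM posts")
    if err != nil {
        log.Fatal(err) // fails here if the posts table doesn't exist yet
    }
    defer rows.Close()

    w := tabwriter.NewWriter(os.Stdout, 0, 4, 2, ' ', 0)
    fmt.Fprintln(w, "ID\tTITLE\tSLUG\tCONTENT")
    for rows.Next() {
        var id int
        var title, slug, content string
        if err := rows.Scan(&id, &title, &slug, &content); err != nil {
            log.Fatal(err)
        }
        fmt.Fprintf(w, "%d\t%s\t%s\t%s\n", id, title, slug, content)
    }
    w.Flush()
}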
The Problem with the Current Approach
Whilst this is expected behavior, it does present a bit of an issue when it comes to distributing CLI applications to users, as there needs to be some sort of mechanism in order to be able to define the data model or define the database schema before you can actually use the code. You could ask your users to go ahead and actually create the database schema. So they could go ahead and run the following create table query in order to actually create the posts table. But this is a bit of a bad user experience, and there's also the risk that users might do something wrong.
Now, another approach is to go ahead and actually run the creation of this table when your application starts up for the first time. So you could go ahead and do something like db.Exec. By the way, I'm not using the context variants just to reduce the amount of code on screen, but I would use those in a production setting. We could go ahead and execute the create table query here, and we could even use CREATE TABLE IF NOT EXISTS, which is a perfectly viable way of doing it. However, whilst this would work, it doesn't account for how to handle reconciliation of your database schema as your code starts to develop.
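As a rough illustration, a hypothetical helper for that startup approach might look something like this (the setupSchema name is mine, and this isn't the approach we'll end up using):

// Naive approach: create the schema every time the application starts.
// Fine for brand-new databases, but it gives you no way to evolve the
// schema for existing users later on.
func setupSchema(db *sql.DB) error {
    _, err := db.Exec(`CREATE TABLE IF NOT EXISTS posts (
        id INTEGER PRIMARY KEY,
        title VARCHAR NOT NULL
    )`)
    return err
}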
To explain what I mean, let's say on the first release of our application, we have a simple table that just has, say, an id and a title, similar to what we looked at in the last lesson. However, as we develop, we want to be able to add other columns to this table. Let's say we then add in a slug field and a content field later on. Whilst just modifying this query works for new users, for existing users we can't simply create the same table again. Instead, we need to be able to iteratively change their database schema without losing or affecting any of the data they may already have written.
What Are Database Migrations?
When it comes to software development, this is done through the use of migrations, which allow you to specify various SQL files that can be applied iteratively in order to bring your database schema up to your desired state. Fortunately, Go has a really awesome library for performing database migrations, known as the golang-migrate package. This package is written in Go and can be used either as a CLI or imported as a library. And we'll take a look at both of these in this video.
Installing the Golang Migrate CLI
To see how database migrations work, let's go ahead and install the golang-migrate CLI to get started. To do so, you can follow the CLI documentation, which gives instructions on how to install the package depending on your operating system. For this lesson, we want to make sure to use the go install command, as we need SQLite support compiled into the tool. SQLite can sometimes be a challenge to distribute, so the project actually uses a build tag to control whether or not SQLite support is included when you install it using the go install command.
So to install it, there's a command listed in the description below, but we just want to go ahead and use the following go install command. However, for the tags, we just want to specify the SQLite tag. This will then go about installing the migrate package with the SQLite feature enabled.
go install -tags sqlite3 github.com/golang-migrate/migrate/v4/cmd/migrate@latest
Creating Migration Files
Once it's been installed, we can go about actually creating our database migration files. In my case, I like to create these database migrations inside of a directory called database/migrations in the root of my project. This works well with another tool that we're going to look at in the next lesson called SQLC. So for the moment, let's go ahead and create the database/migrations directory as follows, using mkdir -p so that the parent directory is created as well.
mkdir -p database/migrations
Okay, so now we have a database/migrations directory. Let's go about creating our SQL migrations in it. To do so, we can use the migrate command, which has a number of different subcommands. In order to create SQL migrations, we're going to use the create subcommand, which requires a number of different flags.
The -ext flag specifies the extension we want these files to have; in our case, we'll set sql. The -dir flag sets the directory where you want these migrations to live. The -seq flag generates sequential up and down migrations, and -digits controls how many digits that sequence number has. Then there are -format and -tz, which we're not going to use; these specify a Go time format string and the time zone to use if you're creating timestamp-based migrations. We're going to use the sequential format instead.
So let's go ahead and actually call the migrate create command:
migrate create -ext sql -dir ./database/migrations -seq -digits 5 initial
And we're going to go ahead and set the extension to be SQL. We'll go ahead and set the sequential flag. So it's going to create them with sequential numbering. And for the digits, we'll set this to be five. Now let's go ahead and actually create the migration name. In this case, I'm going to go ahead and call it initial. So it's just going to be the initial database migrations.
Now, if I go ahead and run this, you can see it's created two files: 00001_initial.up.sql, which is the five-digit sequential number followed by initial, the name that we defined, and the matching 00001_initial.down.sql. These two files are where we define our database migrations, with the up file being applied when we migrate up, so creating tables, adding new features, and so on, and the down file being used for rollbacks, so removing a table that you created, and so on.
Defining Migration Content
With our migration files created, let's head on over into the database/migrations directory and open it up as follows. Here I'm inside 00001_initial.up.sql, which is where we put any SQL queries we want applied when migrating up. In this case, we just want to go ahead and create the posts table that we saw already, which we can do using CREATE TABLE posts and then specifying the fields we want inside.
In this case, I want to go ahead and specify an id, which is an integer and a primary key, and a title, which is going to be a varchar and not null. Then the other two fields are a slug, which is another varchar, also not null, and a content field, which in this case is going to be text, again not null.
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    title VARCHAR NOT NULL,
    slug VARCHAR NOT NULL,
    content TEXT NOT NULL
);
With that, we now have our database migration defined. We can also go ahead and set the down database migration here as well, which in this case is just going to be to drop the posts table that we just created.
DROP TABLE posts;
Running Migrations with CLI
Okay, with the migrations defined, let's head back on over to the root directory. We can now go about actually applying these database migrations in order to bring our database into a good state. To do so, we can use the migrate command yet again from the CLI, although the better approach is going to be to run this from inside our code. For the meantime, let's take a look at how to apply them from the command line.
In order to do so, we need to specify a couple of flags. The first is the source flag, which is the location of the migration files. This is backed by a source driver, and golang-migrate actually provides a number of different source drivers. If we take a look at the documentation, you can see all of the migration sources it supports: the file system, io/fs (which we'll take a look at shortly), go-bindata, pkger, buckets, GitLab, AWS S3, and so on; you can store these pretty much anywhere. In our case, we're storing them on the file system, so we just want to use the -path option, which is a shorthand for the file source.
Then we want to go ahead and actually specify the database we want to run those migrations against. Those are the only two flags we need for the moment. So let's go ahead and use them:
migrate -path ./database/migrations -database sqlite3://db.sqlite up
We're going to use migrate and then specify the path, which in this case is the ./database/migrations directory. Then for the database, we use the sqlite3 scheme pointing at our db.sqlite file. And all we want to do is run the up command.
As you can see, it's now been applied, and we have the db.sqlite file, which if I go ahead and open up using the sqlite3 command and run the .tables command, you can see we now have two tables inside. The first is the posts table that we just created. If I check the schema for it, you can see it's the CREATE TABLE posts statement, exactly the same thing that we defined inside of our database migration file.
The other table that's been created is schema_migrations. This is used by the golang-migrate package to track the state of our migrations, that is, how far the tool has got through our migration files. As you can see, it has a version column, which is a uint64, and a column called dirty. If we go ahead and select everything from schema_migrations, the values coming back are 1, which is the version, and 0, which is the boolean dirty, meaning false in this case. We'll talk about dirty in a little bit, but for the meantime, this table is just used to track where our schema migrations actually are.
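If you want to poke at this yourself, a quick sqlite3 session along these lines shows both tables and the tracking row (your output may differ slightly):

sqlite3 db.sqlite
sqlite> .tables
posts              schema_migrations
sqlite> SELECT * FROM schema_migrations;
1|0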
So, okay, now that we've run our migrations, if I go ahead and run this code, you can see it's now working as expected. We don't have any data in the database yet, but you can see that the posts table exists and our code has worked as expected; it just produced zero rows.
Adding More Migrations
So let's go ahead and add in another migration to actually insert data into this database. Normally, you wouldn't do this unless you want to seed some pre-configured data, say for admin-style tables, in which case you would add database migrations to insert it. So we can go ahead and call migrate again:
migrate create -ext sql -dir ./database/migrations -seq -digits 5 add_base_data
We'll call this one add_base_data, and otherwise keep the exact same command. This time, you can see it's gone and created a new pair of migration files at version two. Very cool.
So let's navigate over to the up file. We're not going to fill in the down one in this case, but there you would just delete the rows you inserted. In the up file we're going to do INSERT INTO posts with the title, slug, and content columns, followed by the values. Let's set a title of 'My First Post', a slug of 'my-first-post-2025' (I'm not too sure how blog slugs usually work), and for the content: 'This is my first post. I hope you enjoy it and it's the start of something big', let's say.
INSERT INTO posts (title, slug, content) VALUES
('My First Post', 'my-first-post-2025', 'This is my first post. I hope you enjoy it and it''s the start of something big');
Okay, I'm going to add this trailing comma here. This will cause an error, but it will allow us to look at the dirty state when running a database migration.
Handling Migration Errors and Dirty State
Okay, with the new migration file created, let's go ahead and run our database migrations up again, which we can do with the same migrate up command that we saw before. However, as you can see, this time I get an error: an SQL logic error, a syntax error near the trailing comma. Okay, so let's just fix it and try to run this again. However, upon doing so, you'll see that we're left in a dirty database version 2, so we need to fix it and force the version. This is why you really want to test your migrations before distributing your code, as you don't want your users to be left in a dirty state.
In our case, we had a bug. We tried to run the migration. We knew that we had a problem. So now we can go ahead and actually fix the problem, which we already did, remove the trailing comma, and then we need to force the actual version that we're currently in. Because this is our second database migration, and we've already performed a migration, we can force the version to be number one:
migrate -path ./database/migrations -database sqlite3://db.sqlite force 1
And then we can just go ahead and migrate up again, and this time it applies. In some cases, however, you may not be able to force the version. Let's say it's the very first migration that you ran, and it failed for some reason; then there is no previous version to force back to. In that case, you can go ahead and drop the database schema instead:
migrate -path ./database/migrations -database sqlite3://db.sqlite drop
The drop subcommand gives you a confirmation prompt; go ahead and confirm it, and it drops the entire database. Then we can just run up again, and both of our migration files will be applied, which is very cool.
Okay, now if we go ahead and run this, we should see some actual data coming back. I think I have a little bug here; let me go and fix that in my code. Oh yeah, I do, so let's change this to be title as follows. Now if we run that again with the bug fixed, you can see the slug coming through, my-first-post-2025, along with the title and the content as well, all printed out using the tabwriter package, which again is fantastic, and I recommend using it for your own CMS tool.
In any case, as you can see, we've now been able to run our database migrations, and we have our tables set up and the actual data written to them as well. Although in this case, I'm only inserting data to show how you can have multiple migrations and apply them all with a single up.
Integrating Migrations into Your Application
However, whilst being able to run database migrations this way is great, typically you won't want your users to have to run the up command via the migrate CLI. That would be a bit of a pain, because they don't want to install the migrate tool just to use your code. Instead, we want to be able to perform database migrations when our application starts up. Fortunately, the golang-migrate package provides us the ability to do this, as you can use it inside of your Go project, and they give some nice examples on how to do so. So let's take a look at how we can do it in our own code.
To begin, we'll go ahead and create a new function. We could call it migrate, but that would conflict with the package name, so let's call it performMigrations instead. It'll take the database path as a string, and it'll return an error in case something goes wrong. We'll have it return nil for the meantime.
func performMigrations(dbPath string) error {
    return nil
}
Now let's go ahead and create a constant for our database file. So const dbFile, I'll call it, and in this case it's going to be ./db.sqlite. Then we can go ahead and make use of this dbFile constant, and we can call the performMigrations function beforehand, passing in dbFile. We may as well do a quick error check too, so that if something did go wrong, we don't try to pull data out of a database that isn't set up, because that would not be good; we'll just log.Fatal the error.
const dbFile = "./db.sqlite"
func main() {
    if err := performMigrations(dbFile); err != nil {
        log.Fatal(err)
    }

    // Rest of application...
}
Okay, so we've added in the code to be able to perform migrations or call a perform migrations function. Let's take a look at actually how we can perform those database migrations themselves. Well, as you can see, it's actually pretty simple. We just need to import the migrate package and then import the database driver and the source driver. So let's go ahead and copy this and we'll just make some quick modifications to it.
In our case, we can go ahead and paste these imports in, except I'm not going to use the Postgres driver; I'm going to use SQLite. And for the source, I'm going to use the file source, which lives under source/file.
import (
    "github.com/golang-migrate/migrate/v4"
    _ "github.com/golang-migrate/migrate/v4/database/sqlite3"
    _ "github.com/golang-migrate/migrate/v4/source/file"
)
Okay, perfect. Now to actually use the migrate package, we begin by creating a migration instance by calling the migrate.New constructor. Here we need to pass in the source, which is going to be the file source pointed at our database/migrations directory. This does cause a bit of an issue when it comes to distributing our application, but we'll talk about that after we've taken a look at how to migrate in code. I'm going to split this across two lines. We then want to pass in the database connection URL, which in this case just uses the sqlite3 scheme, and I'm going to concatenate the two strings together as follows.
func performMigrations(dbPath string) error {
    m, err := migrate.New(
        "file://./database/migrations",
        "sqlite3://"+dbPath,
    )
    if err != nil {
        return fmt.Errorf("failed to create migrator: %w", err)
    }
    defer m.Close()

    return nil
}
Okay, as you can see, I'm getting an error. This is because I haven't actually added the migrate package to the module. We could do this using the go get command that we saw already, but a quick little hack is to just use go mod tidy, which will scan all of the dependencies in your code and add them to your go.mod file. It's a really good way of pulling everything in quickly without needing to remember each import.
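go mod tidy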
Let's go ahead and check for the error from migrate.New: if err is not nil, we'll wrap it with fmt.Errorf as "failed to create migrator", since that's usually caused by the migrations path not existing, and return it. Then we can go ahead and call m.Up, which returns an error itself, so let's capture that with err = m.Up(). We want an error check here as well, but we don't want to just do if err != nil, because Up returns an error when there are no changes to apply, which is sometimes a bit of a nuisance. In fact, I'll show you what that error looks like before we fix it. For the meantime, let's just wrap and return it as "failed to run up migrations", so we can see where it's coming from.
func performMigrations(dbPath string) error {
    m, err := migrate.New(
        "file://./database/migrations",
        "sqlite3://"+dbPath,
    )
    if err != nil {
        return fmt.Errorf("failed to create migrator: %w", err)
    }
    defer m.Close()

    err = m.Up()
    if err != nil {
        return fmt.Errorf("failed to run up migrations: %w", err)
    }

    return nil
}
Okay, that's all there is to adding migrations to our code. So now if I go ahead and remove db.sqlite and run this using the go run command, you can see it works as expected. We didn't have to get our user, our user in this case being ourselves, to set up a database table using the SQLite CLI. Instead, it just did it all for us using our database migration code.
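In other words, something along these lines (assuming the commands are run from the project root so the relative paths line up):

rm db.sqlite
go run .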
However, as I mentioned, we currently have a bug. If I try to run this again, you can see we get the error "failed to run up migrations: no change". This is because there are no changes to be made, since our database has already been migrated up. In our case, we don't actually want to error when there is no change; that just means we're already in the correct state. So we can quickly check for this by using the errors.Is function, passing in the error and the migrate.ErrNoChange sentinel error.
err = m.Up()
if err != nil && !errors.Is(err, migrate.ErrNoChange) {
    return fmt.Errorf("failed to run up migrations: %w", err)
}
Now if I go ahead and run this, it should just work as it did before, and this time without causing any issues. Just be careful with the condition: it's err != nil AND NOT errors.Is(err, migrate.ErrNoChange). Now when I run this, it works as it did before. As you can see, it does.
The Distribution Problem
So far, we've taken a look at how we can perform migrations in our code so that our users don't have to worry about ensuring that they have a database set up in the correct way. And also by adding in multiple migrations, you can develop or evolve your database schema as your feature set of your CLI application evolves as well.
However, we still have a bit of an issue when it comes to distribution. For example, let's go ahead and create a new directory called binary, and then build this code, outputting the resulting binary into that directory as follows:
mkdir binary
go build -o ./binary/cms .
Now if I head on over to the binary directory, you can see I have my cms binary inside. But if I go ahead and run it, we get a failure: failed to create migrator, failed to open source, ./database/migrations, no such file or directory. This is because our migration code is trying to load the database migrations from disk at runtime, which it needs in order to apply them. Unfortunately, that leaves us in a situation where, in order to distribute this application, we would also have to distribute the migration files alongside it. This is yet again a bad user experience. It's slightly better than before, where the user would have had to perform the migration themselves, but having to bundle files with your binary is still not desirable.
Solution: Embedded Migrations
Fortunately, Go provides a solution to be able to solve this. One that we actually took a look at in the last module, I believe, which is the embed package. By using the embed package, we can actually embed our database migrations into our code. And the migrate package itself provides a source driver called IOFS, which allows you to read from an IO file system. They also actually have some documentation on how you can do this, which we'll take a look at on the following page. I'll leave a link to this in the description down below.
As you can see, it's for Go 1.16 onwards, which is when the embed package was added to the Go standard library. And this is how it works: using the go:embed directive that we saw before with an embed.FS, then creating a new iofs source driver and passing it in. So that's what the example looks like. Let's go ahead and use it within our actual code.
To do so, we first need to define an embedded file system, which means we need to import the embed package as follows. Then we declare a variable for it; we could call it fs, and files is another good name, but I'm going to call it fsys. We want this to be of type embed.FS, as follows. Now we can use the go:embed directive that we saw before, passing in the database/migrations directory, and we'll wildcard this to *.sql as well, so basically embedding all of the SQL files inside of the database/migrations directory.
import "embed"
//go:embed database/migrations/*.sql
var fsys embed.FS
Now we can actually go ahead and make use of this inside of our performMigrations code. Let's define a new source: we'll assign source and err from calling iofs.New, which as you can see is imported from github.com/golang-migrate/migrate/v4/source/iofs. This constructor takes a file system and a path. In this case, the file system is our fsys variable, and the path is where our database migrations live, which is database/migrations. This will then load all of the SQL files inside of that directory and allow us to apply them as database migrations.
import (
    "github.com/golang-migrate/migrate/v4/source/iofs"
)

func performMigrations(dbPath string) error {
    source, err := iofs.New(fsys, "database/migrations")
    if err != nil {
        return fmt.Errorf("failed to create source: %w", err)
    }
    // ... rest of function
}
Let's do a quick if err != nil check and return the error; we've seen this before. Okay, with our source defined, the next thing to do is to make use of it, which we can do using the NewWithSourceInstance constructor function. This function takes a source name, which will be iofs, then the actual instance of the source driver itself, followed by the database URL. I do wish they would change this to be a generic function, but I'm not sure if they can actually do that.
In any case, let's go ahead and specify iofs, which is the source driver we want. Then we can replace our file://./database/migrations URL with the actual source driver instance itself. And that's it.
func performMigrations(dbPath string) error {
    source, err := iofs.New(fsys, "database/migrations")
    if err != nil {
        return fmt.Errorf("failed to create source: %w", err)
    }

    m, err := migrate.NewWithSourceInstance(
        "iofs",
        source,
        "sqlite3://"+dbPath,
    )
    if err != nil {
        return fmt.Errorf("failed to create migrator: %w", err)
    }
    defer m.Close()

    err = m.Up()
    if err != nil && !errors.Is(err, migrate.ErrNoChange) {
        return fmt.Errorf("failed to run up migrations: %w", err)
    }

    return nil
}
If we now go ahead and run this inside of our CMS directory, we can see that everything works as it did before. However, we can also now go ahead and build it and output the binary into a different directory, which simulates distributing it to a different user. Now, if we go ahead and run the cms binary, it works as we would expect without us needing to distribute the migration files alongside the code. That's because they've been embedded into the binary itself, which means we could distribute this application binary to other users and they can make use of the SQLite database as well.
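So the full check looks roughly like this (paths assumed to match the build step from earlier):

go build -o ./binary/cms .
cd binary && ./cms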
Conclusion and Homework
With that, we've taken a look at how we can use the fantastic golang migrate package in order to be able to have database migrations so that we can get our database schema and tables into the state we expect them to be as our application evolves and develops.
As for the homework, well, this is to integrate this database migration code into your own CMS system. Again, I have a link to my CMS system in the description down below so you can use that as a reference or you can use this code as a reference as well, which basically does the exact same thing as the list command when it comes to my own CMS implementation.
One thing to note is that the database migration package or the migrate package does work for many different drivers as well. So you can handle different source drivers if you want to be able to read from GitHub or AWS S3 or something similar. And you can also define different database drivers as well. So it supports all of these different database drivers, PGX, etc, etc, whichever one makes sense to you.
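As an illustration, if your own CMS targets Postgres rather than SQLite, the swap is roughly a different database driver import and connection URL; this is just a sketch, with the connection string as a placeholder you'd replace with your own:

import (
    "errors"
    "fmt"

    "github.com/golang-migrate/migrate/v4"
    _ "github.com/golang-migrate/migrate/v4/database/postgres" // Postgres database driver
    "github.com/golang-migrate/migrate/v4/source/iofs"
)

// Same embedded migrations as before, just a different database URL scheme.
func performPostgresMigrations(databaseURL string) error {
    source, err := iofs.New(fsys, "database/migrations")
    if err != nil {
        return fmt.Errorf("failed to create source: %w", err)
    }

    m, err := migrate.NewWithSourceInstance("iofs", source, databaseURL)
    if err != nil {
        return fmt.Errorf("failed to create migrator: %w", err)
    }
    defer m.Close()

    if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
        return fmt.Errorf("failed to run up migrations: %w", err)
    }
    return nil
}

// Called with something like:
// performPostgresMigrations("postgres://user:pass@localhost:5432/cms?sslmode=disable")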
In any case, you'll want to set up your database schema for whatever data model you have defined for your own CMS application. If you're following along with me, then it's going to be very similar to the one I created in this application. I would also recommend making sure to create your migrations inside of the database/migrations directory, because we're going to use the database directory in the next lesson when we take a look at another database tool that is available for Go, SQLC. Once you've managed to complete this homework, you're ready to move on to the next lesson.