4 Tips for Working with Sqlc in Go
This post shares 4 tips for working effectively with sqlc in Go: extending the generated code, handling bulk updates with parameterized queries, using CASE and NULL for conditional filters, and isolating query types behind a store layer to reduce type pollution. These strategies help keep your code clean and scalable when using sqlc.
Key Takeaways
Perform bulk updates with safe, parameterized SQL queries to prevent injection.
Use nullable values and CASE statements for flexible conditional queries.
Leverage sqlc macros such as `sqlc.narg` for safe and maintainable SQL query generation.
What is sqlc?
If you’re unfamiliar with sqlc, it’s a command-line application that generates fully type-safe, idiomatic Go code from SQL. Here’s how it works, as explained on their website:
- You write SQL queries.
- You run sqlc to generate Go code that provides type-safe interfaces to those queries.
- You write application code that calls the methods sqlc generated.
If you haven’t explored sqlc before, I highly recommend checking it out, especially before diving into this blog post.
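For a quick taste, a minimal query file might look like this (the `recipes` table and query name are made up for illustration; the `-- name:` annotation is sqlc's standard syntax):

```sql
-- name: GetRecipe :one
SELECT * FROM recipes
WHERE id = $1;
```

From this, sqlc generates a `Recipe` model and a type-safe `GetRecipe` method on its `Queries` struct, so your application code never concatenates SQL by hand.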
The Problem With Generated Queries
As awesome as sqlc is, I believe it falls short in a few areas:
Limitations on conditional queries
sqlc doesn’t support generating dynamic queries based on user input. This makes it challenging to create queries for endpoints that allow searching through a database with various filters, such as categories, tags, `LIKE` searches, etc. There’s no straightforward way to model this kind of flexibility.
Limited bulk action support
While sqlc supports some bulk actions, like multiple inserts, it lacks robust options for bulk updates with conditional logic. For example, if you need to reorder a set of elements, you have to update each one individually rather than as part of a single query.
Generated code isn’t ideal
Some of the generated code for interfaces and structs could be improved. This includes better support for creating transactions, managing configurations, and handling the setup of the database more seamlessly.
Luckily, we can mitigate most of these issues with a few small modifications!
1. Extending the Generated Code with QueriesExt
When you generate your Go code from SQL using the default settings, you’ll end up with a struct and interface like this:
```go
type DBTX interface {
	Exec(context.Context, string, ...interface{}) (pgconn.CommandTag, error)
	Query(context.Context, string, ...interface{}) (pgx.Rows, error)
	QueryRow(context.Context, string, ...interface{}) pgx.Row
}

func New(db DBTX) *Queries {
	return &Queries{db: db}
}

type Queries struct {
	db DBTX
}

func (q *Queries) WithTx(tx pgx.Tx) *Queries {
	return &Queries{
		db: tx,
	}
}
```
Starting out, this setup is great. You have a basic interface that allows you to swap out a transaction and query type interchangeably. This makes composing and executing collections of queries straightforward.
However, a downside is that you need to manage both a `Queries` object and the underlying `pgx` connection instead of encapsulating that behavior within the `Queries` object. Fortunately, thanks to Go’s support for embedding structs, this is an easy problem to solve.
We’ll create a new type, `QueriesExt`, that embeds the `Queries` struct, adds functionality, wraps the `pgxpool.Pool` type, and provides methods for easily managing transactions.
Note that we wrap a `pgxpool.Pool` instead of a `pgx.Conn`: the `pgx.Conn` type is not safe for concurrent use and shouldn’t be used across goroutines, such as in a web application.

```go
// QueriesExt is an extension of the generated Queries struct which
// also depends directly on the internal sql connection and allows
// for easier transaction handling and some basic utility methods for working
// with the database.
type QueriesExt struct {
	*Queries
	conn *pgxpool.Pool
}

// Close closes the underlying connection pool.
func (qe *QueriesExt) Close(ctx context.Context) error {
	qe.conn.Close()
	return nil
}

// WithTx runs the given function in a transaction. If the function returns
// an error the transaction is rolled back, otherwise it is committed.
func (qe *QueriesExt) WithTx(ctx context.Context, fn func(*QueriesExt) error) error {
	tx, err := qe.conn.BeginTx(ctx, pgx.TxOptions{})
	if err != nil {
		return err
	}

	qext := &QueriesExt{qe.Queries.WithTx(tx), qe.conn}
	if err := fn(qext); err != nil {
		_ = tx.Rollback(ctx)
		return err
	}

	return tx.Commit(ctx)
}
```
This example adds a convenient `WithTx` method that runs a function within a transaction. If the function returns an error, the transaction is rolled back automatically. This allows the caller to focus on their business logic while the setup and cleanup are handled in a central, reusable way.
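As a quick sketch of the call site (the `CreateUser` and `CreateUserSettings` methods here are hypothetical generated queries, not from this post):

```go
err := db.WithTx(ctx, func(q *QueriesExt) error {
	// Both writes share one transaction; returning an error rolls both back.
	user, err := q.CreateUser(ctx, CreateUserParams{Name: "gopher"})
	if err != nil {
		return err
	}
	return q.CreateUserSettings(ctx, user.ID)
})
```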
Constructor for QueriesExt
Creating your own `QueriesExt` also allows you to write a constructor function to handle common database connection logic.
```go
type Config struct {
	// ... postgres connection config
}

func NewExt(
	ctx context.Context,
	logger zerolog.Logger,
	config Config,
) (*QueriesExt, error) {
	var conn *pgxpool.Pool
	var err error

	var (
		retries = 5
		wait    = 1
	)

	for {
		conn, err = pgxpool.New(ctx, config.ConnStrPgx())
		if err == nil {
			err = conn.Ping(ctx)
			if err == nil {
				break
			}
			// Close the pool we just created before retrying so it isn't leaked.
			conn.Close()
		}

		if retries == 0 {
			return nil, err
		}

		retries--
		logger.Warn().
			Err(err).
			Int("retries", retries).
			Msg("failed to ping database, retrying...")

		time.Sleep(time.Duration(wait) * time.Second)
		wait *= 2
	}

	return &QueriesExt{
		Queries: New(conn),
		conn:    conn,
	}, nil
}
```
In this example, the `NewExt` function accepts a common configuration structure for PostgreSQL and handles the initial connection setup. It includes a retry mechanism with exponential backoff to ensure that PostgreSQL is available, which is particularly useful in environments with multiple application entry points. This approach provides a standardized way to manage connections to the database.
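At application startup, connection handling then reduces to a few lines (a sketch; `cfg` construction is omitted):

```go
db, err := NewExt(ctx, logger, cfg)
if err != nil {
	logger.Fatal().Err(err).Msg("database unavailable after retries")
}
defer db.Close(ctx)
```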
2. How to do Bulk Updates
I’m going to talk specifically about bulk updates here because that’s what I’ve found I need to implement the most. In general, the principle remains the same: extract the parts of your query that need parameterization into a `querybuilder` package and use the builders to keep the majority of your SQL query readable.
Query Builder
```go
package querybuilder

import (
	"errors"
	"slices"
	"strconv"
	"strings"
)

type BulkUpdateTableParams struct {
	TableName    string   // TableName is the name of the generated values table
	TableColumns []string // TableColumns lists the columns of the generated values table

	// Values specifies the values that will be inserted into the table. When BulkUpdateTable
	// is called these are used to construct the table as a parameterized query to avoid any
	// possibility of sql injection.
	Values [][]any

	// Casts is an optional modifier that can be used to cast an input into a specific sql type.
	// If casts are provided they MUST have one entry per column in Values, otherwise
	// BulkUpdateTable will fail.
	Casts []string

	// Params holds parameters that already exist in the outer query. Placeholder numbering
	// continues after them, and they are prepended to the returned argument list.
	Params []any
}

// BulkUpdateTable generates a SQL query string and parameters for performing a bulk update in a
// database using a temporary table of values. This function provides SQL injection safety by
// constructing parameterized queries, allowing bulk data insertion while optionally casting columns.
//
// Example Table:
//
//	(VALUES ($1, $2), ($3, $4)) AS v("id", "new_order")
//
// Usage:
//
//	valuesTable, args, _ := querybuilder.BulkUpdateTable(...)
//	query := `
//	UPDATE
//	    table_name
//	SET
//	    "order" = v.new_order
//	FROM
//	    ` + valuesTable + `
//	WHERE
//	    ...
//	`
func BulkUpdateTable(b BulkUpdateTableParams) (string, []any, error) {
	if b.TableName == "" || len(b.TableColumns) == 0 {
		return "", nil, errors.New("table name and columns must be set")
	}

	if len(b.Values) == 0 {
		return "", nil, errors.New("values must be provided")
	}

	if len(b.Casts) > 0 && len(b.Casts) != len(b.Values[0]) {
		return "", nil, errors.New("number of casts must match number of columns in values")
	}

	params := make([]any, 0, len(b.Values)*len(b.Values[0]))
	paramIndex := len(b.Params) + 1

	query := &strings.Builder{}
	query.WriteString("(VALUES ")

	for rowIdx, row := range b.Values {
		if rowIdx > 0 {
			query.WriteString(", ")
		}

		query.WriteString("(")
		for colIdx, val := range row {
			if colIdx > 0 {
				query.WriteString(", ")
			}

			// Build the parameter placeholder manually, applying casting if needed.
			query.WriteString("$" + strconv.Itoa(paramIndex))
			if len(b.Casts) > 0 {
				query.WriteString("::" + b.Casts[colIdx])
			}

			params = append(params, val)
			paramIndex++
		}
		query.WriteString(")")
	}

	query.WriteString(")")

	// Add the table alias and column names.
	query.WriteString(" AS ")
	query.WriteString(b.TableName)
	query.WriteString("(")
	for i, col := range b.TableColumns {
		query.WriteString(`"`)
		query.WriteString(col)
		query.WriteString(`"`)
		if len(b.TableColumns) != i+1 {
			query.WriteString(", ")
		}
	}
	query.WriteString(")")

	return query.String(), slices.Concat(b.Params, params), nil
}
```
This function generates a SQL query for performing bulk updates using a temporary table of values. It ensures SQL injection safety by constructing parameterized queries, allowing bulk data insertion while optionally casting columns.
Breakdown of the Function
Validate Inputs
Ensure that the struct has matching slice lengths and any required parameters are set.
Pre-Allocation
Pre-allocate the array of parameters for the query. The `paramIndex` accounts for predefined parameters in the array, ensuring consistent parameter indexing.
Build the Query
Use a string builder to construct the `VALUES` table with placeholders (`$1`, `$2`, etc.) for parameters. This approach avoids SQL injection and ensures performance.
At the end of this process, you get something like:
```sql
(VALUES ($1, $2), ($3, $4)) AS v("id", "new_order")
```
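To see the builder in isolation, here is a direct call with two rows and casts enabled (`idA` and `idB` stand in for `uuid.UUID` values; the commented output follows from the code above):

```go
valuesTable, args, err := querybuilder.BulkUpdateTable(querybuilder.BulkUpdateTableParams{
	TableName:    "v",
	TableColumns: []string{"id", "new_order"},
	Values:       [][]any{{idA, int16(1)}, {idB, int16(2)}},
	Casts:        []string{"uuid", "int2"},
})
// valuesTable => (VALUES ($1::uuid, $2::int2), ($3::uuid, $4::int2)) AS v("id", "new_order")
// args        => [idA, 1, idB, 2]
// err         => nil
```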
Example Usage in Recipinned
Here’s how this is applied in Recipinned to manage meal plan templates.
```go
type MealplanTemplateEntriesBulkUpdateEntry struct {
	ID    uuid.UUID
	Order int16
}

type MealplanTemplateEntriesBulkUpdateParams struct {
	TemplateID uuid.UUID
	Entries    []MealplanTemplateEntriesBulkUpdateEntry
}

// MealplanTemplateEntriesBulkUpdate is a non-sqlc generated function that accepts parameters
// to bulk update entry values based on their ID and TemplateID. This method currently only
// supports bulk update of the order field, but may support others in the future.
func (q *Queries) MealplanTemplateEntriesBulkUpdate(ctx context.Context, arg MealplanTemplateEntriesBulkUpdateParams) error {
	args := []any{
		arg.TemplateID,
	}

	rows := make([][]any, len(arg.Entries))
	for i, nv := range arg.Entries {
		rows[i] = []any{nv.ID, nv.Order}
	}

	valuesTable, args, err := querybuilder.BulkUpdateTable(querybuilder.BulkUpdateTableParams{
		TableName:    "v",
		TableColumns: []string{"id", "new_order"},
		Values:       rows,
		Casts:        []string{"uuid", "int2"},
		Params:       args,
	})
	if err != nil {
		return err
	}

	query := `
		UPDATE
			mealplan_template_entries
		SET
			"order" = v.new_order
		FROM
			` + valuesTable + `
		WHERE
			mealplan_template_entries.id = v.id AND
			mealplan_template_entries.mealplan_template_id = $1`

	_, err = q.db.Exec(ctx, query, args...)
	return err
}
```
Explanation
Define the Data Structure
Create a struct for the entries being updated, specifying their IDs and new values.
Prepare the Query
Use the `querybuilder.BulkUpdateTable` function to generate the `VALUES` table dynamically.
Execute the Query
Combine the generated table with your main SQL update statement and execute it using your database driver.
This approach makes bulk updates efficient, safe, and maintainable, keeping your SQL queries both performant and readable.
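A call site then stays small (the template and entry IDs below are placeholders):

```go
err := q.MealplanTemplateEntriesBulkUpdate(ctx, MealplanTemplateEntriesBulkUpdateParams{
	TemplateID: templateID,
	Entries: []MealplanTemplateEntriesBulkUpdateEntry{
		{ID: firstEntryID, Order: 2},
		{ID: secondEntryID, Order: 1},
	},
})
```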
3. Conditional Filter Queries
For conditional queries, you have two main options:
- Using a query builder.
- Using `CASE` and/or `NULL` values within SQL.
If you’re interested in query builders, there are many Go libraries that implement them. For this post, I’ll focus on SQL-based solutions. Often, these are sufficient to solve your problem without introducing an additional library. Let’s explore two approaches.
Using Null Values
You can construct your `WHERE` clause with optional or `NULL` values by first checking if the value is `NULL` before applying your criteria. In Go, you’ll need to handle this properly by doing the following:
- Use a macro from sqlc, such as `sqlc.narg`, to generate nullable values.
- Cast the value to the appropriate type (e.g., `::TEXT`) if necessary. Depending on the specifics of your query, sqlc may or may not infer the type automatically when using macros.
```sql
SELECT
    *
FROM
    recipes
WHERE
    sqlc.narg(domain)::TEXT IS NULL OR domain = sqlc.narg(domain)::TEXT
```
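On the Go side, sqlc maps `sqlc.narg` to a nullable parameter type. The sketch below assumes pgx/v5, where a nullable `TEXT` typically becomes `pgtype.Text`, and a hypothetical generated query named `RecipesByDomain`:

```go
func recipesByOptionalDomain(ctx context.Context, q *Queries, domain string) ([]Recipe, error) {
	// An invalid pgtype.Text is sent as NULL, so the IS NULL branch
	// of the WHERE clause matches and the filter is effectively off.
	arg := pgtype.Text{}
	if domain != "" {
		arg = pgtype.Text{String: domain, Valid: true}
	}
	return q.RecipesByDomain(ctx, arg)
}
```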
Using Case Statements
You can also use a `CASE` statement for conditional filtering. While this method offers flexibility, the `NULL`-value approach is often simpler and more direct. Here’s an example for comparison:
```sql
SELECT *
FROM recipes
WHERE CASE
    WHEN sqlc.narg(domain)::TEXT IS NOT NULL THEN domain = sqlc.narg(domain)::TEXT
    ELSE 1 = 1
END;
```
Although the `CASE` approach works, I haven’t found a compelling reason to use it over the simpler `NULL`-value solution. Still, it’s good to be aware of the option when you encounter edge cases or more complex conditions.
4. Managing Type Pollution in the Queries Package
A common challenge when working with database queries is managing type proliferation. This issue often arises from two sources:
- Storing all queries in a single package.
- sqlc not sharing models between queries, especially with more complex queries and joins.
Addressing Query Organization
One approach to reduce clutter is splitting queries into domain-specific packages. For example:
- `recipequeries`
- `userqueries`
- `adminqueries`
This separation creates more manageable boundaries between domains. However, it can still result in duplicate types across packages.
Isolating Queries with a Store/Repository Layer
My preferred solution is to keep all queries in a single package but isolate their usage within a store or repository layer. This strategy confines the `queries` dependency and its types to a specific part of the application. For example:
- Instead of calling `queries.GetUserDetails` directly in the API layer, call `store.GetUserDetails`.
- The API layer works with a `store.UserDetails` type, not a `queries.UserDetails`.
This isolation provides several benefits:
- Control over serialization: The returned objects can be structured to meet application needs rather than reflecting database schema directly.
- Flexibility: As the application evolves, the representation of data in the database can diverge from its representation in the application.
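A minimal sketch of such a store wrapper, with illustrative field names and an assumed `queries` import path, might look like:

```go
package store

import (
	"context"

	"example.com/app/internal/queries" // hypothetical module path
)

// UserDetails is the application-facing shape; it is free to diverge
// from the database schema as the application evolves.
type UserDetails struct {
	ID    string
	Name  string
	Email string
}

type Store struct {
	q *queries.QueriesExt
}

// GetUserDetails wraps the generated query so callers never touch
// queries.UserDetails directly.
func (s *Store) GetUserDetails(ctx context.Context, id string) (UserDetails, error) {
	row, err := s.q.GetUserDetails(ctx, id)
	if err != nil {
		return UserDetails{}, err
	}
	return UserDetails{ID: row.ID, Name: row.Name, Email: row.Email}, nil
}
```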
Handling Type Mapping
One downside of this structure is the need to map `queries.TypeA` to `store.TypeA`. While this mapping can sometimes be 1-to-1, other times it requires adapting the data. To streamline this process, I use a pattern for writing generic mappers in Go, which simplifies converting models between layers; I cover that pattern in a separate post.
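As a flavor of what such a generic mapper can look like (my own sketch here, not that post’s exact code):

```go
// MapSlice converts a slice of database models into store models
// using the provided per-element mapping function.
func MapSlice[T, U any](in []T, fn func(T) U) []U {
	out := make([]U, len(in))
	for i, v := range in {
		out[i] = fn(v)
	}
	return out
}

// Usage: details := MapSlice(rows, newUserDetails)
```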