Rust for NodeJS developers (V) - SQLx Integration

Captain's log, stardate d623.y41/AB

Welcome back to Part 5 of our "Rust for Node.js Developers" series! In our previous article, we set up the SQLx CLI and created our first database migration. Now it's time to dive into integrating SQLx with our Rocket application to build a robust, type-safe API.

In this article, we'll explore how to structure a Rust application with proper separation of concerns, leveraging SQLx's compile-time query validation to ensure our database operations are both safe and efficient.

Project dependencies and configuration

First, let's enhance our project with the necessary SQLx features. SQLx is incredibly flexible, so we need to specify which features we want to use:

/Cargo.toml

[dependencies.sqlx]
version = "0.8.6"
features = [
    "runtime-tokio",    # Async runtime integration
    "tls-native-tls",   # TLS support for secure connections
    "postgres",         # PostgreSQL database driver
    "macros",          # Compile-time query checking
    "migrate",         # Database migration support
    "uuid",            # UUID support for our primary keys
    "time",            # Date/time handling
    "json",            # JSON data type support
]

[dependencies.uuid]
version = "1.18.0"
features = ["serde"]

Environment configuration

We'll configure our database connection through environment variables. This keeps our configuration flexible across different environments:

/.env

DATABASE_URL=postgresql://root:password@localhost:5432/rocket
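One caveat worth knowing: SQLx's macros read this .env file themselves at compile time, but at runtime std::env::var only sees variables that are actually exported in the process environment (for example by the Docker setup from Part III). If you want the application itself to load .env on startup, a crate like dotenvy can do it. This is not part of the article's setup, just a minimal sketch assuming dotenvy is added to Cargo.toml:

// Hypothetical helper: call this at the top of main() before reading DATABASE_URL.
fn load_dotenv() {
    // Ignore the error if there is no .env file (e.g. in production)
    dotenvy::dotenv().ok();
}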

Understanding Rust attributes: derive and serde

Before we dive into building our models, let's take a moment to understand two important Rust features you'll see throughout this code: attributes and specifically the derive attribute with serde.

The derive attribute

In Rust, attributes are metadata applied to code elements. The #[derive] attribute is particularly powerful - it automatically generates trait implementations for your types. Think of it as similar to decorators in TypeScript or Python, but these work at compile time.

#[derive(Serialize, Deserialize)]
pub struct User {
    pub id: Uuid,
    pub username: String,
}

In this example, derive automatically implements the Serialize and Deserialize traits for our User struct. Without derive, we'd need to write this boilerplate code manually!

The serde library

serde (short for serialization/deserialization) is Rust's most popular framework for converting data structures to and from formats like JSON, YAML, or TOML. When you derive Serialize and Deserialize, you're adding two capabilities: Serialize lets a value be converted into one of those formats, and Deserialize lets a value be parsed back out of them.
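To make that concrete, here's a minimal standalone sketch of both directions, assuming serde (with the derive feature) and serde_json as dependencies. The Rocket code later in this article uses Rocket's re-export of serde instead, but the behavior is the same:

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct User {
    username: String,
    email: String,
}

fn main() {
    let user = User {
        username: "ferris".to_string(),
        email: "ferris@example.com".to_string(),
    };

    // Serialize: struct -> JSON string
    let json = serde_json::to_string(&user).unwrap();
    println!("{json}"); // {"username":"ferris","email":"ferris@example.com"}

    // Deserialize: JSON string -> struct
    let parsed: User = serde_json::from_str(&json).unwrap();
    println!("{}", parsed.username);
}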

The #[serde(...)] attribute

Beyond just deriving, serde provides additional attributes to customize serialization behavior:

#[derive(Serialize, Deserialize)]
#[serde(crate = "self::serde", rename_all = "lowercase")]
pub enum Status {
    Ok,
    Ko,
}

Here's what these options do: crate = "self::serde" tells the derive macros which path to use for serde; since serde isn't a direct dependency in our Cargo.toml, we point them at Rocket's re-export (imported as use rocket::serde::{self, ...} in the models below). rename_all = "lowercase" changes how the variants are written in JSON, so Ok and Ko are serialized as "ok" and "ko".
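As a quick sanity check, this is what the lowercase renaming looks like in a minimal standalone sketch (again using plain serde and serde_json rather than Rocket's re-export):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(rename_all = "lowercase")]
enum Status {
    Ok,
    Ko,
}

fn main() {
    // Variants serialize as their lowercased names...
    assert_eq!(serde_json::to_string(&Status::Ok).unwrap(), "\"ok\"");

    // ...and parse back the same way
    let status: Status = serde_json::from_str("\"ko\"").unwrap();
    assert_eq!(status, Status::Ko);
}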

For Node.js developers: This is similar to using libraries like class-transformer in TypeScript, with the key difference that in Rust the (de)serialization code is generated at compile time, so there's no runtime reflection involved.

Building our data models layer

Creating consistent API responses

Before diving into user-specific models, let's establish a consistent response structure for our API. This helps maintain predictable responses across all endpoints:

/src/models/status.rs

use rocket::serde::{self, Deserialize, Serialize};

#[derive(Serialize, Deserialize, PartialEq, Eq)]
#[serde(crate = "self::serde", rename_all = "lowercase")]
pub enum Status {
    Ok,
    Ko,
}

#[derive(Serialize, Deserialize)]
#[serde(crate = "self::serde")]
pub struct StatusResponse {
    pub status: Status,
    pub message: Option<String>,
}

User domain models

Now let's create our user models. Notice how we separate the main entity from request DTOs - this is a common pattern that helps maintain clean boundaries between what we store and what we accept from clients:

/src/models/user.rs

use rocket::serde::{self, Deserialize, Serialize};
use sqlx::types::Uuid;

// Main user entity - matches our database schema
#[derive(Serialize, Deserialize)]
#[serde(crate = "self::serde")]
pub struct User {
    pub id: Uuid,
    pub username: String,
    pub email: String,
}

// DTO for creating new users (no ID needed)
#[derive(Deserialize)]
#[serde(crate = "self::serde")]
pub struct CreateUserRequest {
    pub username: String,
    pub email: String,
}

// DTO for partial updates (all fields optional)
#[derive(Deserialize)]
#[serde(crate = "self::serde")]
pub struct ModifyUserRequest {
    pub username: Option<String>,
    pub email: Option<String>,
}

Organizing our models

Let's keep our models organized with a proper module structure:

/src/models/mod.rs


mod status;
mod user;

pub use status::{Status, StatusResponse};
pub use user::{CreateUserRequest, ModifyUserRequest, User};

Building the database services layer

Now comes the heart of our application - the service layer that handles all database operations. This layer abstracts away the complexity of SQL queries and provides a clean interface for our routes.

User service implementation

Our UsersService encapsulates all user-related database operations. Notice how we use SQLx's compile-time checked queries: typos in our SQL, or column and type mismatches, are caught at compile time rather than at runtime. Keep in mind that for these macros to work, the compiler needs either DATABASE_URL pointing at a reachable database at build time, or query metadata prepared ahead of time with cargo sqlx prepare.

/src/db/services/users_service.rs

use sqlx::{postgres::PgQueryResult, Error, PgPool};
use uuid::Uuid;

use crate::models::{CreateUserRequest, ModifyUserRequest, User};

pub struct UsersService {
    pool: &'static PgPool,
}
impl UsersService {
    pub fn new(pool: &'static PgPool) -> Self {
        Self { pool }
    }

    pub async fn find_all(&self) -> Result<Vec<User>, Error> {
        sqlx::query_as!(User, "SELECT * FROM users")
            .fetch_all(self.pool)
            .await
    }

    pub async fn create(&self, user: CreateUserRequest) -> Result<User, Error> {
        sqlx::query_as!(
            User,
            "INSERT INTO users (username, email) VALUES ($1, $2) RETURNING *",
            user.username,
            user.email
        )
        .fetch_one(self.pool)
        .await
    }

    pub async fn remove(&self, user_id: Uuid) -> Result<PgQueryResult, Error> {
        sqlx::query!("DELETE FROM users WHERE id = $1", user_id)
            .execute(self.pool)
            .await
    }

    pub async fn modify(&self, user_id: Uuid, user: ModifyUserRequest) -> Result<User, Error> {
        // COALESCE keeps the stored value whenever a field is omitted (None) in the request,
        // so a partial update never overwrites a column with NULL
        sqlx::query_as!(
            User,
            "UPDATE users SET username = COALESCE($1, username), email = COALESCE($2, email) WHERE id = $3 RETURNING *",
            user.username,
            user.email,
            user_id
        )
        .fetch_one(self.pool)
        .await
    }
}

Services module organization

/src/db/services/mod.rs


mod users_service;

pub use users_service::UsersService;

Creating our database architecture layer

Database service aggregator

As our application grows, we'll have multiple services for different domains. This aggregator pattern keeps everything organized and provides a single point of access:

/src/db/db_services.rs


use super::services::UsersService;
use sqlx::PgPool;

pub struct DBServices {
    pub users: UsersService,
    // Future services will be added here (posts, comments, etc.)
}
impl DBServices {
    pub fn new(pool: &'static PgPool) -> Self {
        DBServices {
            users: UsersService::new(pool),
        }
    }
}

Database fairing implementation

Rocket's Fairing system allows us to hook into the application lifecycle. This fairing ensures our database services are properly initialized and available throughout our application:

/src/db/db_fairing.rs

use super::DB;
use rocket::{
    fairing::{Fairing, Info, Kind, Result},
    Build, Rocket,
};
use sqlx::PgPool;

pub struct DBFairing {
    pub pool: &'static PgPool,
}
#[rocket::async_trait]
impl Fairing for DBFairing {
    fn info(&self) -> Info {
        Info {
            name: "DB module",
            kind: Kind::Ignite,
        }
    }

    async fn on_ignite(&self, rocket: Rocket<Build>) -> Result {
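        // Box::leak gives the DB services the 'static lifetime that Rocket's managed state requires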
        let static_db: &'static DB = Box::leak(Box::new(DB::new(self.pool)));

        Ok(rocket.manage(static_db))
    }
}

Database module organization

/src/db/mod.rs

mod db_fairing;
mod db_services;
mod services;

pub use db_fairing::DBFairing;
pub use db_services::DBServices as DB;

Building our HTTP routes layer

Finally, let's connect our database services to HTTP endpoints. Notice how clean and focused each route handler is - all the complexity is handled by our service layer!

/src/routes/users

use rocket::{delete, get, patch, post, routes, serde::json::Json, Route, State};
use uuid::Uuid;

use crate::{
    db::DB,
    models::{CreateUserRequest, ModifyUserRequest, Status, StatusResponse, User},
};

use super::user_friends::user_friends_routes;

pub fn users_routes() -> Vec<Route> {
    [
        routes![get_users, post_user, delete_user, modify_user],
        user_friends_routes(),
    ]
    .concat()
}

// GET /users - List all users
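// The fairing managed a &'static DB, so that's the type we request from Rocket's State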
#[get("/")]
async fn get_users(db: &State<&'static DB>) -> Json<Vec<User>> {
    let users = db.users.find_all().await.expect("Failed to fetch users");

    Json(users)
}

// POST /users - Create a new user
#[post("/", data = "<user>")]
async fn post_user(db: &State<&'static DB>, user: Json<CreateUserRequest>) -> Json<User> {
    let user_data = user.into_inner();
    let created_user = db
        .users
        .create(user_data)
        .await
        .expect("Failed to create user");

    Json(created_user)
}

// DELETE /users/<id> - Remove a user
#[delete("/<user_id_as_string>")]
async fn delete_user(db: &State<&'static DB>, user_id_as_string: String) -> Json<StatusResponse> {
    let user_id = Uuid::try_parse(&user_id_as_string).unwrap_or(Uuid::nil());
    if user_id == Uuid::nil() {
        return Json(StatusResponse {
            status: Status::Ko,
            message: Some("Invalid uuid".to_string()),
        });
    }

    let delete_result = db
        .users
        .remove(user_id)
        .await
        .expect("Failed to delete user");

    if delete_result.rows_affected() == 1 {
        Json(StatusResponse {
            status: Status::Ok,
            message: None,
        })
    } else {
        Json(StatusResponse {
            status: Status::Ko,
            message: None,
        })
    }
}

// PATCH /users/<id> - Update user information
#[patch("/<user_id_as_string>", data = "<user>")]
async fn modify_user(
    db: &State<&'static DB>,
    user_id_as_string: String,
    user: Json<ModifyUserRequest>,
) -> Result<Json<User>, Json<StatusResponse>> {
    let user_id = Uuid::try_parse(&user_id_as_string).unwrap_or(Uuid::nil());
    if user_id == Uuid::nil() {
        return Err(Json(StatusResponse {
            status: Status::Ko,
            message: Some("Invalid uuid".to_string()),
        }));
    }

    let user_data = user.into_inner();
    let modified_user = db
        .users
        .modify(user_id, user_data)
        .await
        .expect("Failed to modify user");

    Ok(Json(modified_user))
}

Tying it all together

Library entry point

Our library exports the main server builder function, keeping the architecture modular and testable:

/src/lib.rs


mod db;
pub mod models;
mod routes;

use db::DBFairing;
use rocket::{build, Build, Rocket};
use routes::RoutesFairing;
use sqlx::PgPool;

pub fn build_server(pool: PgPool) -> Rocket<Build> {
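    // Leak the pool so it lives for the entire program ('static), as DBFairing expects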
    let static_pool = Box::leak(Box::new(pool));

    build()
        .attach(RoutesFairing {})
        .attach(DBFairing { pool: static_pool })
}
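Because build_server() takes the pool as a parameter, the whole application can be spun up in an integration test. Here's a minimal sketch (as a hypothetical /tests/users.rs), assuming a test database is reachable through DATABASE_URL and that RoutesFairing mounts users_routes() under /users:

use rocket::{http::Status as HttpStatus, local::asynchronous::Client};
use rust_for_nodejs_developers::build_server;
use sqlx::postgres::PgPoolOptions;

#[rocket::async_test]
async fn get_users_returns_ok() {
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let pool = PgPoolOptions::new()
        .max_connections(1)
        .connect(&database_url)
        .await
        .expect("failed to connect to the test database");

    // Build the Rocket instance without binding to a port and drive it with a local client
    let client = Client::tracked(build_server(pool))
        .await
        .expect("valid rocket instance");

    let response = client.get("/users").dispatch().await;
    assert_eq!(response.status(), HttpStatus::Ok);
}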

Application entry point

The main function handles database connection setup and server launch. Notice how we configure connection pooling for better performance:

/src/main.rs

use std::env;

use rocket::{launch, Build, Rocket};
use rust_for_nodejs_developers::build_server;
use sqlx::postgres::PgPoolOptions;

#[launch]
async fn rocket() -> Rocket<Build> {
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Configure connection pool for optimal performance
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect(&database_url)
        .await
        .unwrap();

    build_server(pool)
}

What we've accomplished

Congratulations! You've just built a production-ready, type-safe API with Rust and SQLx. Here's what makes this architecture special: SQL queries are validated at compile time, models are serialized and deserialized with serde without boilerplate, responsibilities are cleanly separated into models, services, and routes, and the connection pool is initialized once and shared across the application through a Rocket fairing.

Coming up next

In the next article, we'll take our type-safe API to the next level by adding GraphQL capabilities using async-graphql.
