Author: Bill

  • [Troubleshooting] Error: could not find system library ‘openssl’ required by the ‘openssl-sys’ crate

    When using the reqwest crate and building Docker images with the Dockerfile mentioned in this article, you will run into OpenSSL issues due to missing packages.

    Error: could not find system library 'openssl' required by the 'openssl-sys' crate

    It turns out that the line below is the culprit:

    FROM docker.io/rust:1-slim-bookworm AS build
    
    #############################################

    rust:1-slim-bookworm is missing many of the packages needed to build and run the application. The official Rust Docker image documentation suggests avoiding the slim image unless you are sure it contains everything you need.
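
    As an alternative sketch (untested here), you could also keep the slim image and install the missing build packages yourself at the top of the build stage; note the runtime stage would still need the OpenSSL runtime library and CA certificates:

```dockerfile
FROM docker.io/rust:1-slim-bookworm AS build

## openssl-sys needs pkg-config and the OpenSSL headers at compile time
RUN apt-get update && apt-get install -y pkg-config libssl-dev
```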

    Simple but ugly solution

    Well, the simplest approach is a minimal Dockerfile that will always work:

    FROM rust:bookworm
    
    COPY . .
    
    RUN cargo build --release
    
    EXPOSE 8080
    
    CMD ./target/release/your_package

    However, it generates a pretty large image that can take gigabytes of storage.

    The better one

    We can still use a multi-stage build to generate an optimal image with a little tweak (using the default Rust image, rust:bookworm):

    FROM docker.io/rust:bookworm AS build
    
    ## cargo package name: customize here or provide via --build-arg
    ARG pkg=hello-world
    
    WORKDIR /build
    
    COPY . .
    
    RUN --mount=type=cache,target=/build/target \
        --mount=type=cache,target=/usr/local/cargo/registry \
        --mount=type=cache,target=/usr/local/cargo/git \
        set -eux; \
        cargo build --release; \
        objcopy --compress-debug-sections target/release/$pkg ./main
    
    ################################################################################
    
    FROM docker.io/debian:bookworm-slim
    
    WORKDIR /app
    
    ## copy the main binary
    ## add more files below if needed
    COPY --from=build /build/main ./
    
    EXPOSE 8080
    
    CMD ./main

    Now it builds fine, but you will get a runtime error when using reqwest:

    ./main: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory

    Furthermore, even after installing the correct packages (vendored or automatic), you will still see an error when sending HTTPS requests:

    hyper_util::client::legacy::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 167772294, library: "SSL routines", function: "tls_post_process_server_certificate", reason: "certificate verify failed", file: "ssl/statem/statem_clnt.c", line: 2092 }]))) }, X509VerifyResult { code: 20, error: "unable to get local issuer certificate" }))

    To solve these issues, I wrote a separate article with a detailed explanation and solutions.

  • [Troubleshooting] libssl.so.3 and certificate Error Running Rust/Reqwest Under Debian Image

    If you see the errors below while running a Rust app in Docker, this article is for you:

    error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory
    hyper_util::client::legacy::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 167772294, library: "SSL routines", function: "tls_post_process_server_certificate", reason: "certificate verify failed", file: "ssl/statem/statem_clnt.c", line: 2092 }]))) }, X509VerifyResult { code: 20, error: "unable to get local issuer certificate" }))

    Anatomy

    The reqwest documentation mentions its requirements when running under a Linux OS, where you need OpenSSL installed.

    To supply OpenSSL, the official documentation provides 2 ways:

    Vendored OpenSSL

    Add the dependency below to Cargo.toml:

    [dependencies]
    openssl = { version = "0.10", features = ["vendored"] }

    Automatic OpenSSL

    Add the line below to the Dockerfile, in the stage that runs the app:

    RUN apt-get update && apt-get install -y pkg-config libssl-dev

    Either approach works fine and you can run the app successfully. However, when sending HTTPS requests, you will get the error below:

    unable to get local issuer certificate

    A Debian image issue

    This thread gives a good idea of what happens here. Essentially, the official Debian image does not have the ca-certificates package installed. To solve the issue, simply install the package in the Dockerfile:

    RUN apt-get update && apt-get install -y pkg-config libssl-dev ca-certificates
    # or below if using vendored OpenSSL
    # RUN apt-get update && apt-get install -y ca-certificates

    Final Dockerfile

    To build Rust + actix-web + reqwest, below is what works for me, starting with Cargo.toml:

    [package]
    name = "hello-world"
    version = "0.1.0"
    edition = "2021"
    
    [dependencies]
    actix-web = "4"
    reqwest = "0.12"
    serde = { version = "1.0", features = ["derive"] }
    openssl = { version = "0.10", features = ["vendored"] }

    And the Dockerfile:

    FROM rust:bookworm AS build
    
    ## cargo package name: customize here or provide via --build-arg
    ARG pkg=hello-world
    
    WORKDIR /build
    
    COPY . .
    
    RUN --mount=type=cache,target=/build/target \
        --mount=type=cache,target=/usr/local/cargo/registry \
        --mount=type=cache,target=/usr/local/cargo/git \
        set -eux; \
        cargo build --release; \
        objcopy --compress-debug-sections target/release/$pkg ./main
    
    ################################################################################
    
    FROM docker.io/debian:bookworm-slim
    
    WORKDIR /app
    
    ## copy the main binary
    COPY --from=build /build/main ./
    
    EXPOSE 8080
    
    RUN apt-get update && apt-get install -y ca-certificates
    
    CMD ./main

  • [Troubleshooting] Error during `gcloud app deploy` for GAE app: “Failed to create cloud build: invalid bucket”

    When I was trying to run an app on Google App Engine, I followed the Hello World example step by step, and, no surprise, it didn’t work. You will see the error below:

    ERROR: (gcloud.app.deploy) Error Response: [13] Failed to create cloud build: com.google.net.rpc3.client.RpcClientException: <eye3 title='/ArgoAdminNoCloudAudit.CreateBuild, FAILED_PRECONDITION'/> APPLICATION_ERROR;google.devtools.cloudbuild.v1/ArgoAdminNoCloudAudit.CreateBuild;invalid bucket "staging.avid-shape-445101-a4.appspot.com"; service account avid-shape-445101-a4@appspot.gserviceaccount.com does not have access to the bucket;AppErrorCode=9;StartTimeMs=1734486112961;unknown;ResFormat=uncompressed;ServerTimeSec=0.921583338;LogBytes=256;Non-FailFast;EndUserCredsRequested;EffSecLevel=privacy_and_integrity;ReqFormat=uncompressed;ReqID=4b101eaa0d045332;GlobalID=0;Server=[2002:a05:6670:1585:b0:a5e:1b4f:746d]:4001.

    The same error has been posted in many places:

    • https://www.googlecloudcommunity.com/gc/Serverless/Failed-to-create-cloud-build-no-access-to-bucket/m-p/758310
    • https://stackoverflow.com/questions/78742739/error-during-gcloud-app-deploy-for-gae-app-failed-to-create-cloud-build-inv
    • https://www.googlecloudcommunity.com/gc/Serverless/Error-during-gcloud-app-deploy-for-GAE-app-quot-Failed-to-create/m-p/778778
    • etc.

    Based on this doc, starting May 3, 2024, iam.automaticIamGrantsForDefaultServiceAccounts is disabled by default, which caused all of these issues.

    Thanks to this answer, I had a strong lead to the solution, although there were still some extra steps (possibly due to undocumented recent changes in Google Cloud).

    Step by step solution

    Give access to storage bucket

    There are 2 ways of doing it.

    Using Console UI

    1. Go to https://console.cloud.google.com/storage/browser, select staging.PROJECT_ID.appspot.com and go to Permissions tab.
    2. Click on GRANT ACCESS button.
    3. Enter PROJECT_ID@appspot.gserviceaccount.com as New principals.
    4. Enter Storage Admin as new Role.
    5. Save setting.

    Using CLI

    Simply use the command below:

    gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:PROJECT_ID@appspot.gserviceaccount.com" --role="roles/storage.admin"

    Now let’s deploy again:

    gcloud app deploy

    No surprise, you will see the error below:

    ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build f490b199-1e2e-4fbb-89c7-e0caae57f7bc status: FAILURE
    An unexpected error occurred. Refer to build logs: https://console.cloud.google.com/cloud-build/builds;region=us-central1/f490b199-1e2e-4fbb-89c7-e0caae57f7bc?project=519995341679
    Full build logs: https://console.cloud.google.com/cloud-build/builds;region=us-central1/f490b199-1e2e-4fbb-89c7-e0caae57f7bc?project=519995341679

    It’s much better now; at least we have a log. Let’s go to the link provided.

    You will very likely see a warning and empty logs.

    Grant access to writing logs

    Run the command below to add the role as suggested:

    gcloud projects add-iam-policy-binding avid-shape-445101-a4 --member="serviceAccount:avid-shape-445101-a4@appspot.gserviceaccount.com" --role="roles/logging.logWriter"

    Now let’s run the deploy again (just to see the actual build logs):

    gcloud app deploy

    You will see the same error, but the build log now shows exactly why the build failed:

    ERROR: failed to create image cache: accessing cache image "us.gcr.io/avid-shape-445101-a4/app-engine-tmp/build-cache/default/ttl-7d:latest": connect to repo store "us.gcr.io/avid-shape-445101-a4/app-engine-tmp/build-cache/default/ttl-7d:latest": GET https://us.gcr.io/v2/token?scope=repository%3Aavid-shape-445101-a4%2Fapp-engine-tmp%2Fbuild-cache%2Fdefault%2Fttl-7d%3Apull&service=us.gcr.io: DENIED: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/avid-shape-445101-a4/locations/us/repositories/us.gcr.io" (or it may not exist)

    Again, it’s a permission issue.

    Add necessary permissions

    After adding the permission shown in the build logs, you will find that more permissions are needed. In the end I figured out that the roles below are necessary in addition to the ones above:

    • roles/artifactregistry.createOnPushWriter
    • roles/storage.objectAdmin
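
    For convenience, all of the roles from the steps above can be granted with one short loop (the project ID below is the example one from this article; replace it with yours):

```shell
PROJECT_ID=avid-shape-445101-a4
SA="serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com"

# Grant every role this article ended up needing for `gcloud app deploy`.
for role in roles/storage.admin \
            roles/logging.logWriter \
            roles/artifactregistry.createOnPushWriter \
            roles/storage.objectAdmin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="$SA" --role="$role"
done
```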

    Now run gcloud app deploy, and you should be able to deploy the app successfully.

  • [AdBlock Anatomy] Run and Debug AdBlock Locally

    This article is the starter of an anatomy series dissecting the AdBlock/AdBlockPlus/uBlockOrigin source code to learn how they work. First, let’s start by running and debugging AdBlock locally.

    Note that this article is for the eyeo-owned browser extensions AdBlock and AdBlockPlus.

    Source code

    Luckily, the AdBlock main source entry is an NX monorepo, where we can easily see the dependency graph within the repo.

    However, many core features are not within the monorepo; here is a list of the important ones:

    • @eyeo/webext-ad-filtering-solution: most important core adblocking features shared across extensions.
    • @eyeo/snippets: eyeo-developed special content scripts for element hiding and behavioral intercepting. This repo reflects everything from the eyeo official developer document.
    • Filter list: eyeo-developed filter lists, mostly leveraging @eyeo/snippets for enhanced blocking.
    • @adblockinc/rules: an npm package for retrieving, storing and providing filter lists from various sources for use in AdBlock and Adblock Plus.

    Installation

    Now let’s download the main source code and install packages:

    git clone https://gitlab.com/eyeo/extensions/extensions.git
    cd extensions
    npm install

    If you are using an Apple silicon chip, you will often see node-gyp errors like the one below:

    > npm ERR! gyp ERR! stack Error: `gyp` failed with exit code: 1

    The extensions repo provides some potential fixes, but the commands below worked for me:

    python3 -m pip install setuptools
    brew install pkg-config cairo pango libpng jpeg giflib librsvg

    Build extension

    Simply run below to build both AdBlock and AdBlockPlus extension:

    npm run build

    Add extensions to Chrome

    • Go to chrome://extensions/.
    • At the top right, turn on Developer mode.
    • Click Load unpacked.
    • Find and select the app or extension folder.
      • e.g. for Chrome AdBlockPlus with Manifest V2, it is located inside /extensions/host/adblockplus/dist/devenv/chrome-mv2

    Now you should be able to see the extension being added and enabled.

    Debugging @eyeo/webext-ad-filtering-solution

    In a lot of cases, the feature you are looking for lives in the webext-ad-filtering-solution package. This package can be loaded as a standalone extension.

    Installation

    git clone https://gitlab.com/eyeo/adblockplus/abc/webext-ad-filtering-solution.git
    cd webext-ad-filtering-solution
    npm install

    Build and run in watch mode

    You can simply do npm run build, but I found it convenient to use watch mode:

    npx webpack --watch

    The built extensions can be found under the dist folder:

    • performance-mv2 for Manifest V2
    • performance-mv3 for Manifest V3

    Simply add the desired extension to Chrome in developer mode for debugging.

    Debugging background script

    Let’s go to sdk/background/index.js and add some logging:

    ...
    export async function start(addonInfo) {
      console.log("start extension");
      ...
    }
    ...

    Then go to chrome://extensions/ and click on Inspect views -> background page (or Service worker for the Manifest V3 version) under eyeo's WebExtension Ad-Filtering Solution Test Extension.

    You should be able to see the start extension log under the Console tab. Try reloading the extension if you don’t see the log.

    Debugging content script

    Let’s go to sdk/content/index.js and add some logging:

    ...
    async function initContentFeatures() {
      console.log("initContentFeatures");
      ...
    }
    ...

    Open a new tab, go to any page, then open the console; you should be able to see the initContentFeatures log.

  • Containerize Actix Web App Using Docker In A Cleaner Way

    I recently came across the task of trying different Rust web frameworks, and Actix Web is one of the most popular. Its documentation is clean and easy to bootstrap from; however, when I was trying to deploy the app using Docker, I could not find a clean Dockerfile example containing minimal dependencies.

    Luckily, when I was trying Rocket, I found it has pretty clean documentation about containerizing the app. After tweaking it a bit, it just worked for the Actix Web app as well.

    Personally, I feel more confident following official documents for long-term maintainability. Even though the resulting image is not as small as the ~60MB in this guide, a ~145MB image is acceptable here considering other factors.

    Initiate Actix Web Sample App

    Create a new Rust app:

    cargo new hello-world
    cd hello-world

    Now add Actix Web dependency into Cargo.toml file:

    [dependencies]
    actix-web = "4"

    Then replace the contents of src/main.rs with the following:

    use actix_web::{get, post, web, App, HttpResponse, HttpServer, Responder};
    
    #[get("/")]
    async fn hello() -> impl Responder {
        HttpResponse::Ok().body("Hello world!")
    }
    
    #[post("/echo")]
    async fn echo(req_body: String) -> impl Responder {
        HttpResponse::Ok().body(req_body)
    }
    
    async fn manual_hello() -> impl Responder {
        HttpResponse::Ok().body("Hey there!")
    }
    
    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| {
            App::new()
                .service(hello)
                .service(echo)
                .route("/hey", web::get().to(manual_hello))
        })
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
    }

    Compile and run the program:

    cargo run

    Test app using http://0.0.0.0:8080
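
    A quick smoke test of the three routes (assuming the app is running locally on port 8080):

```shell
# GET / is handled by `hello`
curl -s http://0.0.0.0:8080/
# POST /echo returns the request body
curl -s -X POST -d 'ping' http://0.0.0.0:8080/echo
# GET /hey is routed manually to `manual_hello`
curl -s http://0.0.0.0:8080/hey
```

    The three responses should be Hello world!, ping, and Hey there! respectively.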

    Add Dockerfile and build Docker image

    The original Dockerfile from the Rocket documentation has some Rocket-specific settings; simply removing those env variables just works:

    FROM docker.io/rust:1-slim-bookworm AS build
    
    ## cargo package name: customize here or provide via --build-arg
    ARG pkg=hello-world
    
    WORKDIR /build
    
    COPY . .
    
    RUN --mount=type=cache,target=/build/target \
        --mount=type=cache,target=/usr/local/cargo/registry \
        --mount=type=cache,target=/usr/local/cargo/git \
        set -eux; \
        cargo build --release; \
        objcopy --compress-debug-sections target/release/$pkg ./main
    
    ################################################################################
    
    FROM docker.io/debian:bookworm-slim
    
    WORKDIR /app
    
    ## copy the main binary
    ## add more files below if needed
    COPY --from=build /build/main ./
    
    EXPOSE 8080
    
    CMD ./main

    Note that the pkg ARG needs to match the package name in Cargo.toml.
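
    For example, if your package in Cargo.toml were named my-service (a hypothetical name), the default can be overridden at build time instead of editing the Dockerfile:

```shell
# Override the pkg ARG without touching the Dockerfile
docker build --build-arg pkg=my-service -t my-service-image .
```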

    Now let’s build the image:

    docker build -t hello-world-image .

    Check the image size with docker images; it should be around 145MB.

    Finally, run the image:

    docker run -d -p 0.0.0.0:8080:8080 hello-world-image

    Test the app using http://0.0.0.0:8080

  • Develop and Deploy Rust Rocket App Using Docker Compose (Traefik/Docker Hub)

    This article will talk about how to use Docker for quick Rocket web app deployment. This architecture can be scaled with a more complicated CI/CD framework and Kubernetes clusters for large-scale applications and zero-downtime deployment.

    Prerequisite

    • Linux server
    • Local dev machine with Rust installed
    • Docker with Docker Compose
    • Follow this guide for setting up Traefik.
    • Domain name

    Getting started

    Update rust

    Install rustup by following the instructions on its website. Once rustup is installed, ensure the latest toolchain is installed by running the command:

    rustup default stable

    Initiate Rocket sample app

    cargo new hello-rocket --bin
    cd hello-rocket

    Now, add Rocket as a dependency in your Cargo.toml:

    [dependencies]
    rocket = "0.5.1"

    Modify src/main.rs so that it contains the code for the Rocket Hello, world! program, reproduced below:

    #[macro_use] extern crate rocket;
    
    #[get("/")]
    fn index() -> &'static str {
        "Hello, world!"
    }
    
    #[launch]
    fn rocket() -> _ {
        rocket::build().mount("/", routes![index])
    }

    Finally, we can run the command below to test our first app:

    cargo run
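
    Once it compiles, a quick check from another terminal (Rocket listens on 127.0.0.1:8000 by default in debug mode):

```shell
# The index route from the sample above
curl http://127.0.0.1:8000/
```

    This should print Hello, world!.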

    Build docker image and upload

    Add Dockerfile

    There are many tutorials out there but it is always a good practice to follow official documents: https://rocket.rs/guide/v0.5/deploying/#containerization

    Note that, in order to test the Docker image locally, EXPOSE is needed:

    FROM docker.io/rust:1-slim-bookworm AS build
    
    ## cargo package name: customize here or provide via --build-arg
    ARG pkg=hello-rocket
    
    WORKDIR /build
    
    COPY . .
    
    RUN --mount=type=cache,target=/build/target \
        --mount=type=cache,target=/usr/local/cargo/registry \
        --mount=type=cache,target=/usr/local/cargo/git \
        set -eux; \
        cargo build --release; \
        objcopy --compress-debug-sections target/release/$pkg ./main
    
    ################################################################################
    
    FROM docker.io/debian:bookworm-slim
    
    WORKDIR /app
    
    ## copy the main binary
    COPY --from=build /build/main ./
    
    ## copy runtime assets which may or may not exist
    COPY --from=build /build/Rocket.tom[l] ./
    COPY --from=build /build/stati[c] ./static
    COPY --from=build /build/template[s] ./templates
    
    ## ensure the container listens globally on port 8000
    ENV ROCKET_ADDRESS=0.0.0.0
    ENV ROCKET_PORT=8000
    
    ## uncomment below to test in local
    ## EXPOSE 8000
    
    CMD ./main

    Make sure pkg is set to the same value as the package name in Cargo.toml.

    Build Docker image

    docker build -t your_username/my-rocket-image .

    Upload Docker image to Docker Hub

    First make sure you have a Docker Hub account and then log in to Docker:

    docker login

    Then upload the Docker image to Docker Hub:

    docker push your_username/my-rocket-image

    Deploy docker image to cloud instance with Docker Compose and Traefik

    Assume you have already followed this guide and the Traefik reverse proxy is up and running on your server.

    Run uploaded Rocket app Docker image

    First let’s add a folder:

    mkdir ~/rocket-docker && cd ~/rocket-docker

    Then add docker-compose.yml:

    vi docker-compose.yml

    networks:
      traefik:
        external: true
     
    services:
      app:
        image: your_username/my-rocket-image:latest
        networks:
          - traefik
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.rocket-docker.rule=Host(`your-domain.com`)"
          - "traefik.http.routers.rocket-docker.service=rocket-docker"
          - "traefik.http.routers.rocket-docker.entrypoints=websecure"
          - "traefik.http.services.rocket-docker.loadbalancer.server.port=8000"
          - "traefik.http.services.rocket-docker.loadbalancer.passhostheader=true"
          - "traefik.http.routers.rocket-docker.tls=true"
          - "traefik.http.routers.rocket-docker.tls.certresolver=letsencrypt"
          - "traefik.http.routers.rocket-docker.middlewares=compresstraefik"
          - "traefik.http.middlewares.compresstraefik.compress=true"
          - "traefik.docker.network=traefik"
        restart: unless-stopped

    Make sure to update your-domain.com

    Run the command below to start the service:

    docker compose up -d

    Verify by visiting www.your-domain.com to see if everything works properly.

    Develop and update service with latest change

    Try changing something in your local Rocket app, and run the upload script again:

    docker build -t your_username/my-rocket-image . && docker push your_username/my-rocket-image

    Then go to your cloud instance and run the commands below:

    docker pull your_username/my-rocket-image:latest && docker compose -f ~/rocket-docker/docker-compose.yml up -d

    Lastly, verify the change by visiting www.your-domain.com.

  • Setup Traefik Reverse Proxy and Add Routing Using Docker Compose

    Reverse proxy is essential for any service being accessed publicly. Traefik is a popular reverse proxy and load balancer designed for microservices and containerized applications. Please make sure Docker is installed with Docker Compose.

    Setup Traefik with built-in dashboard app

    Let’s first create a folder for the Traefik reverse proxy.

    mkdir ~/traefik && cd ~/traefik

    Next we need to create a network for Traefik to communicate with other containers; it is declared as external in the compose files.

    docker network create traefik

    Now, let’s create docker-compose.yml:

    vi docker-compose.yml

    networks:
      traefik:
        external: true
    
    volumes:
      traefik-certificates:
    
    services:
      traefik:
        image: "traefik:latest"
        command:
          - "--log.level=DEBUG"
          - "--accesslog=true"
          - "--api.dashboard=true"
          - "--api.insecure=true"
          - "--ping=true"
          - "--ping.entrypoint=ping"
          - "--entryPoints.ping.address=:8082"
          - "--entryPoints.web.address=:80"
          - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
          - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
          - "--entryPoints.websecure.address=:443"
          - "--providers.docker=true"
          - "--providers.docker.endpoint=unix:///var/run/docker.sock"
          - "--providers.docker.exposedByDefault=false"
          - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
          # For requesting dev cert (if prod cert has issue during development)
          # - "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
          - "--certificatesresolvers.letsencrypt.acme.email=admin@bill-min.com"
          - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - traefik-certificates:/etc/traefik/acme
        networks:
          - traefik
        ports:
          - "80:80"
          - "443:443"
        healthcheck:
          test: ["CMD", "wget", "http://localhost:8082/ping","--spider"]
          interval: 10s
          timeout: 5s
          retries: 3
          start_period: 5s
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.dashboard.rule=Host(`traefik.your-domain.com`)"
          - "traefik.http.routers.dashboard.service=api@internal"
          - "traefik.http.routers.dashboard.entrypoints=websecure"
          - "traefik.http.routers.dashboard.tls=true"
          - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
          - "traefik.http.routers.dashboard.middlewares=authtraefik"
          - "traefik.http.middlewares.authtraefik.basicauth.users=your_username:{SHA}your_hash"
        restart: unless-stopped

    The config above also adds the Traefik dashboard and HTTPS with Let’s Encrypt.

    Replace your-domain with your actual domain name and set up the correct DNS records.

    Replace your_username and {SHA}your_hash with values generated from https://hostingcanada.org/htpasswd-generator/.
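
    If you prefer generating the hash locally instead of using the online generator, the {SHA} format accepted by the basicauth middleware is just a base64-encoded SHA-1 digest. A sketch (username and password here are placeholders):

```shell
PASSWORD='secret'   # placeholder: use your real password
HASH="{SHA}$(printf '%s' "$PASSWORD" | openssl dgst -binary -sha1 | openssl base64)"
# Paste this value into the basicauth.users label
echo "your_username:$HASH"
```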

    Finally, let’s start the Traefik container.

    docker compose up -d

    Verify the container is running successfully:

    docker ps

    You can also check Traefik dashboard by visiting https://traefik.your-domain.com. (you will need to enter the username and password used when generating htpasswd)

    Add routing to other containerized apps

    After setting up Traefik, adding routes is simple. In most cases, there are 2 things involved:

    • Communicating with Traefik through a network that can be accessed externally, which is traefik from the setup above.
    • Running the container from the Docker image behind the Traefik load balancer at an exposed port.

    Traefik configures routing based on these labels; below is an example of running a Next.js app through Docker Compose.

    networks:
      traefik:
        external: true
    
    services:
      app:
        image: your_username/nextjs-docker:latest
        ports:
          - 3000:3000
        networks:
          - traefik
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.nextjs.rule=Host(`www.your-domain.com`)"
          - "traefik.http.routers.nextjs.service=nextjs"
          - "traefik.http.routers.nextjs.entrypoints=websecure"
          - "traefik.http.services.nextjs.loadbalancer.server.port=3000"
          - "traefik.http.services.nextjs.loadbalancer.passhostheader=true"
          - "traefik.http.routers.nextjs.tls=true"
          - "traefik.http.routers.nextjs.tls.certresolver=letsencrypt"
          - "traefik.http.routers.nextjs.middlewares=compresstraefik"
          - "traefik.http.middlewares.compresstraefik.compress=true"
          - "traefik.docker.network=traefik"
        restart: unless-stopped

    For running different apps, the settings are mostly the same except for the router and service names.

  • How Do Ad Blocking Browser Extensions Work?

    Parts of the content below were adapted from answers generated with Perplexity AI.

    Ad blocking browser extensions work by intercepting and filtering web content before it’s rendered in your browser. Here’s a detailed explanation of how they function:

    Filter lists and rules

    Ad blockers rely on filter lists, which are collections of predefined rules that determine what content should be blocked or hidden on web pages. These lists contain patterns and rules that match known ad servers, tracking scripts, and other unwanted elements.
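
    The core idea can be sketched in a few lines of shell: a filter list is just a set of patterns, and a request URL that matches any pattern gets blocked (the list and URLs below are made-up examples; real lists like EasyList use a much richer syntax):

```shell
# A tiny, fake filter list: one pattern per line.
cat > filters.txt <<'EOF'
doubleclick.net
/ads/banner
tracker.example
EOF

# Block a URL if it contains any pattern as a fixed substring.
should_block() {
  printf '%s\n' "$1" | grep -qF -f filters.txt
}

should_block "https://ad.doubleclick.net/pixel" && echo BLOCK || echo ALLOW   # → BLOCK
should_block "https://example.com/article"      && echo BLOCK || echo ALLOW   # → ALLOW
```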

    Content interception process

    When you visit a website, the ad blocker extension activates before the page is fully loaded. It performs the following steps:

    • HTTPS Request Blocking: The extension listens to outgoing HTTPS requests from your browser. It compares these requests against its filter lists and blocks any that match known ad platforms or tracking services.
    • URL Filtering: As the page loads, the ad blocker checks URLs of various elements against its filter lists. If a match is found, the content from that URL is blocked.
    • Content Filtering: The extension analyzes the HTML, CSS, and JavaScript of the page, looking for patterns that indicate ads or unwanted content.
    • CSS Injection: Ad blockers may inject custom CSS rules to hide elements that couldn’t be blocked at the network level.
    • JavaScript Injection: Some ad blockers inject their own JavaScript code to counteract advertising scripts and prevent them from functioning.

    What happens behind the scenes?

    Ad blocking browser extensions utilize both content scripts and background scripts to effectively block ads and unwanted content. Let’s delve deeper into how these scripts work together.

    Background scripts

    Background scripts run continuously in the extension’s background page, separate from any particular web page. They play a crucial role in ad blocking:

    • Filter List Management: Background scripts are responsible for downloading, parsing, and updating filter lists. These lists contain rules for blocking ads and are typically updated periodically to stay current with new ad servers and patterns.
    • Request Interception: Background scripts use the browser’s webRequest API to intercept and analyze network requests before they are sent. This allows the ad blocker to block requests to known ad servers or tracking domains at the network level, preventing the ads from loading in the first place.
    • Communication Hub: The background script acts as a central communication point, receiving messages from content scripts and popup interfaces, and coordinating the extension’s overall behavior.
    • Rule Matching: When a request is intercepted, the background script quickly checks it against the loaded filter lists to determine if it should be blocked.

    Content scripts

    Content scripts are injected into web pages and can manipulate the DOM (Document Object Model) directly. They are essential for handling ad blocking tasks that can’t be accomplished through network request blocking alone:

    • Element Hiding: Content scripts can inject CSS rules or modify the page’s existing CSS to hide ad elements that have already loaded. This is useful for ads that are served from the same domain as the main content and can’t be blocked at the network level.
    • DOM Scanning: These scripts can scan the page’s DOM structure to identify and remove ad-related elements based on specific patterns or rules.
    • Script Injection: Content scripts can inject additional JavaScript into the page to neutralize ad-related scripts or prevent them from executing.
    • Cosmetic Filtering: They apply cosmetic filters to remove empty spaces left by blocked ads, improving the page’s appearance.
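
    As a concrete illustration, element hiding is usually driven by cosmetic filter rules written in Adblock Plus filter syntax, which the content script turns into injected CSS (the domain and selectors below are made up):

```
! hide the element with id="ad-banner", on example.com only
example.com###ad-banner
! hide elements with class "sponsored-post" on all sites
##.sponsored-post
```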

    Interaction between background and content scripts

    The background and content scripts work together to provide comprehensive ad blocking:

    • Message Passing: Content scripts communicate with the background script via message passing, sending information about the current page and receiving instructions on what to block or modify.
    • Dynamic Rule Application: The background script can send updated blocking rules to content scripts in real-time, allowing for dynamic ad blocking that adapts to changes on the page.
    • Performance Optimization: By dividing tasks between background and content scripts, ad blockers can optimize performance. Network-level blocking happens in the background script, while page-specific modifications occur in the content script.

    This combination of background and content scripts allows ad blocking extensions to provide a comprehensive and efficient ad-blocking experience, handling both network-level blocking and page-specific content manipulation.

  • Develop and Deploy Next.js App Using Docker Compose (Traefik/Docker Hub)

    Before Docker, deploying a Node app usually required uploading the latest files with git or a file transfer protocol and then re-running the app in the cloud. With Docker, we can build a Docker image from local development and upload it to Docker Hub. With Docker Compose set up in the cloud, we can easily fetch the latest image and re-run the container. This architecture can be scaled with a more complicated CI/CD framework and Kubernetes clusters for large-scale applications and zero-downtime deployment.

    Install Docker Desktop

    Simply download and install Docker Desktop from: https://www.docker.com/products/docker-desktop/

    Register account (if not already)

    A Docker Hub account is required in order to upload images.

    Log in to Docker CLI

    Run the command below to log in to Docker so that the docker push command can be used.

    docker login

    Initiate sample Next.js app

    The official Next.js docs include a sample app with Docker:

    npx create-next-app --example with-docker nextjs-docker

    After going through the prompts, a Dockerfile should already be inside the project folder.

    Add build & upload script inside package.json

    The official Next.js docs have instructions for building and running Next.js with Docker; however, let's make it more convenient. Add the script below to package.json:

    {
      ...
      "scripts": {
        "upload:docker": "docker build -t your_username/nextjs-docker . && docker push your_username/nextjs-docker"
      }
      ...
    }

    Replace your_username with your Docker Hub username and nextjs-docker with your preferred app name.

    Build docker image and upload

    Simply run the command below:

    npm run upload:docker

    This will build the Docker image and upload it to Docker Hub with the latest tag.

    GitHub and GitHub Actions

    If GitHub is used as source control, the step above can be integrated with GitHub Actions and triggered on commit or PR merge.
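
    A minimal workflow for this could look like the sketch below. The secrets names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN), branch name, and image name are assumptions to adjust for your repository:

    ```yaml
    # .github/workflows/docker-publish.yml -- a sketch; adjust names for your repo
    name: Build and push Docker image
    on:
      push:
        branches: [main]
    jobs:
      build-and-push:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: docker/login-action@v3
            with:
              username: ${{ secrets.DOCKERHUB_USERNAME }}
              password: ${{ secrets.DOCKERHUB_TOKEN }}
          - uses: docker/build-push-action@v6
            with:
              context: .
              push: true
              tags: your_username/nextjs-docker:latest
    ```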

    Deploy docker image to cloud instance with Docker Compose and Traefik

    Prerequisites

    • Linux server
    • Docker Engine installed with Docker Compose plugin

    Set up Traefik (skip if already done)

    Let's make a folder for Traefik:

    mkdir ~/traefik && cd ~/traefik

    Next, we need to create a network for Traefik to communicate with other containers; it will be referenced as an external network in the Compose files.

    docker network create traefik

    Now, let’s create docker-compose.yml

    vi docker-compose.yml

    networks:
      traefik:
        external: true
    
    volumes:
      traefik-certificates:
    
    services:
      traefik:
        image: "traefik:latest"
        command:
          - "--log.level=DEBUG"
          - "--accesslog=true"
          - "--api.dashboard=true"
          - "--api.insecure=true"
          - "--ping=true"
          - "--ping.entrypoint=ping"
          - "--entryPoints.ping.address=:8082"
          - "--entryPoints.web.address=:80"
          - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
          - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
          - "--entryPoints.websecure.address=:443"
          - "--providers.docker=true"
          - "--providers.docker.endpoint=unix:///var/run/docker.sock"
          - "--providers.docker.exposedByDefault=false"
          - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
          # For requesting a staging cert (if the prod cert has issues during development)
          # - "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
          - "--certificatesresolvers.letsencrypt.acme.email=admin@bill-min.com"
          - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - traefik-certificates:/etc/traefik/acme
        networks:
          - traefik
        ports:
          - "80:80"
          - "443:443"
        healthcheck:
          test: ["CMD", "wget", "http://localhost:8082/ping","--spider"]
          interval: 10s
          timeout: 5s
          retries: 3
          start_period: 5s
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.dashboard.rule=Host(`traefik.your-domain.com`)"
          - "traefik.http.routers.dashboard.service=api@internal"
          - "traefik.http.routers.dashboard.entrypoints=websecure"
          - "traefik.http.routers.dashboard.tls=true"
          - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
          - "traefik.http.routers.dashboard.middlewares=authtraefik"
          - "traefik.http.middlewares.authtraefik.basicauth.users=your_username:{SHA}your_hash"
        restart: unless-stopped

    The config above also enables the Traefik dashboard and HTTPS with Let's Encrypt.

    Replace your-domain with your actual domain name and set up the correct DNS records.

    Replace your_username and {SHA}your_hash with values generated from https://hostingcanada.org/htpasswd-generator/

    Finally, let's start the Traefik container.

    docker compose up -d

    Verify that the container is running successfully:

    docker ps

    You can also check the Traefik dashboard by visiting https://traefik.your-domain.com (you will need to enter the username and password used when generating the htpasswd).

    Run the uploaded Next.js Docker image

    First, let's create a folder:

    mkdir ~/nextjs-docker && cd ~/nextjs-docker

    Then add docker-compose.yml:

    networks:
      traefik:
        external: true
    
    services:
      app:
        image: your_username/nextjs-docker:latest
        ports:
          - 3000:3000
        networks:
          - traefik
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.nextjs.rule=Host(`www.your-domain.com`)"
          - "traefik.http.routers.nextjs.service=nextjs"
          - "traefik.http.routers.nextjs.entrypoints=websecure"
          - "traefik.http.services.nextjs.loadbalancer.server.port=3000"
          - "traefik.http.services.nextjs.loadbalancer.passhostheader=true"
          - "traefik.http.routers.nextjs.tls=true"
          - "traefik.http.routers.nextjs.tls.certresolver=letsencrypt"
          - "traefik.http.routers.nextjs.middlewares=compresstraefik"
          - "traefik.http.middlewares.compresstraefik.compress=true"
          - "traefik.docker.network=traefik"
        restart: unless-stopped

    Note that the Next.js app needs to run inside the same network as Traefik. Also replace your-domain with your real domain.

    Run the command below to start the service:

    docker compose up -d

    Verify by visiting www.your-domain.com to see if everything works properly.

    Develop and update the service with the latest changes

    Try changing something in your local Next.js app, then run the upload script again (or use a GitHub Action):

    npm run upload:docker

    Then go to your cloud instance and run the following:

    docker pull your_username/nextjs-docker:latest && docker compose -f ~/nextjs-docker/docker-compose.yml up -d

    Lastly, verify the change by visiting www.your-domain.com.

  • Pull and Run Latest Docker Image With One Line Using Docker Compose

    If the Docker image with the latest tag has been updated, we can use the one-line script below to pull the updated version and recreate the container.

    docker pull docker-image:latest && docker compose -f ~/apps/docker-compose.yml up -d

    After that, we can use the command below to clean up dangling images.

    docker image prune