When using the reqwest crate and building Docker images with the Dockerfile mentioned in this article, you will run into OpenSSL issues due to missing packages:

```
Error: could not find system library 'openssl' required by the 'openssl-sys' crate
```
It turns out that the line below is the culprit:
```dockerfile
FROM docker.io/rust:1-slim-bookworm AS build
```
rust:1-slim-bookworm is missing many packages that the application needs. The official Rust Docker image documentation suggests using the slim variant only when you are sure it contains everything you need.
Simple but ugly solution
Well, the simplest approach is a minimal Dockerfile that always works:
```dockerfile
FROM rust:bookworm
COPY . .
RUN cargo build --release
EXPOSE 8080
CMD ./target/release/your_package
```
However, it will generate a pretty large image which can take gigabytes of storage.
The better one
We can still use a multi-stage build to produce an optimized image with a small tweak: use the default Rust image, rust:bookworm, in the build stage.
```dockerfile
FROM docker.io/rust:bookworm AS build

## cargo package name: customize here or provide via --build-arg
ARG pkg=hello-world

WORKDIR /build
COPY . .
RUN --mount=type=cache,target=/build/target \
    --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    set -eux; \
    cargo build --release; \
    objcopy --compress-debug-sections target/release/$pkg ./main

################################################################################

FROM docker.io/debian:bookworm-slim
WORKDIR /app

## copy the main binary
## add more files below if needed
COPY --from=build /build/main ./

EXPOSE 8080
CMD ./main
```
Now it builds fine, but you will get a runtime error when running reqwest:
```
./main: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory
```
The runtime image is missing OpenSSL. You can either compile OpenSSL statically into the binary (the vendored feature of the openssl crate) or install the packages below in the Dockerfile, at the stage that runs the app:

```dockerfile
RUN apt-get update && apt-get install -y pkg-config libssl-dev
```
Either approach works, and the app now runs successfully. However, when sending HTTPS requests, you will get the error below:

```
unable to get local issuer certificate
```
A Debian image issue
This thread gives a good idea of what happens here. Essentially, the official Debian image does not ship with the ca-certificates package installed. To solve the issue, simply install the package in the Dockerfile:
```dockerfile
RUN apt-get update && apt-get install -y pkg-config libssl-dev ca-certificates
## or below if using vendored OpenSSL
## RUN apt-get update && apt-get install -y ca-certificates
```
Final Dockerfile
To build Rust + Actix Web + reqwest, below is what works for me:
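A sketch assembled from the snippets above — the multi-stage build on rust:bookworm with the OpenSSL and CA-certificate packages installed in the runtime stage (the package name hello-world is the example used earlier; substitute your own):

```dockerfile
FROM docker.io/rust:bookworm AS build

## cargo package name: customize here or provide via --build-arg
ARG pkg=hello-world

WORKDIR /build
COPY . .
RUN --mount=type=cache,target=/build/target \
    --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    set -eux; \
    cargo build --release; \
    objcopy --compress-debug-sections target/release/$pkg ./main

################################################################################

FROM docker.io/debian:bookworm-slim
WORKDIR /app

## runtime deps for reqwest: OpenSSL and CA certificates
RUN apt-get update && apt-get install -y pkg-config libssl-dev ca-certificates

## copy the main binary
COPY --from=build /build/main ./

EXPOSE 8080
CMD ./main
```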
When I was trying to run an app on Google App Engine, I followed the Hello World example step by step, and, no surprise, it didn’t work. You will see the error below:
```
ERROR: (gcloud.app.deploy) Error Response: [13] Failed to create cloud build: com.google.net.rpc3.client.RpcClientException: <eye3 title='/ArgoAdminNoCloudAudit.CreateBuild, FAILED_PRECONDITION'/> APPLICATION_ERROR;google.devtools.cloudbuild.v1/ArgoAdminNoCloudAudit.CreateBuild;invalid bucket "staging.avid-shape-445101-a4.appspot.com"; service account avid-shape-445101-a4@appspot.gserviceaccount.com does not have access to the bucket;AppErrorCode=9;StartTimeMs=1734486112961;unknown;ResFormat=uncompressed;ServerTimeSec=0.921583338;LogBytes=256;Non-FailFast;EndUserCredsRequested;EffSecLevel=privacy_and_integrity;ReqFormat=uncompressed;ReqID=4b101eaa0d045332;GlobalID=0;Server=[2002:a05:6670:1585:b0:a5e:1b4f:746d]:4001.
```
Thanks to this answer, I got a strong lead toward the solution, although some extra steps were still required (possibly due to recent undocumented changes in Google Cloud).
Now let’s run deploy again (just to see the actual build logs):

```shell
gcloud app deploy
```
You will see the same error, but the build log now shows exactly why the build failed:
```
ERROR: failed to create image cache: accessing cache image "us.gcr.io/avid-shape-445101-a4/app-engine-tmp/build-cache/default/ttl-7d:latest": connect to repo store "us.gcr.io/avid-shape-445101-a4/app-engine-tmp/build-cache/default/ttl-7d:latest": GET https://us.gcr.io/v2/token?scope=repository%3Aavid-shape-445101-a4%2Fapp-engine-tmp%2Fbuild-cache%2Fdefault%2Fttl-7d%3Apull&service=us.gcr.io: DENIED: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/avid-shape-445101-a4/locations/us/repositories/us.gcr.io" (or it may not exist)
```
Again, it's a permission issue.
Add necessary permissions
After adding the permission shown in the build logs, more missing permissions will surface. In the end, I figured out that the roles below are necessary in addition to the ones above:
roles/artifactregistry.createOnPushWriter
roles/storage.objectAdmin
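Granting these roles can be done with gcloud — a sketch assuming the default App Engine service account from the error message above (the project ID and member values are placeholders; replace them with your own):

```shell
PROJECT_ID="your-project-id"
MEMBER="serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com"

## grant each role surfaced in the build logs
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" --role="roles/artifactregistry.createOnPushWriter"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" --role="roles/storage.objectAdmin"
```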
Now run gcloud app deploy, and you should be able to deploy the app successfully.
This article is the start of an anatomy series dissecting the AdBlock/Adblock Plus/uBlock Origin source code to learn how these extensions work. First, let’s get AdBlock running and debuggable locally.

Note that this article covers the eyeo-owned browser extensions AdBlock and Adblock Plus.
Source code
Luckily, the main AdBlock source entry is an Nx monorepo, where we can easily see the dependency graph within the repo.

However, many core features are not within the monorepo; here is a list of the important ones:

@eyeo/snippets: special content scripts developed by eyeo for element hiding and behavioral intercepting. This repo reflects everything from eyeo's official developer documentation.

Filter lists: eyeo-developed filter lists, mostly leveraging @eyeo/snippets for enhanced blocking.
@adblockinc/rules: an npm package for retrieving, storing and providing filter lists from various sources for use in AdBlock and Adblock Plus.
Installation
Now let’s download the main source code and install packages:
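Something along these lines (the clone URL is elided here — use the repository linked above; the directory name is an assumption):

```shell
git clone <repo-url>   # URL of the AdBlock monorepo linked above
cd extensions          # assumed checkout directory — adjust to the actual name
npm install
```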
Simply run the command below to build both the AdBlock and Adblock Plus extensions:

```shell
npm run build
```
Add extensions to Chrome
Go to chrome://extensions/.
At the top right, turn on Developer mode.
Click Load unpacked.
Find and select the app or extension folder.
e.g. for Chrome AdBlockPlus with Manifest V2, it is located inside /extensions/host/adblockplus/dist/devenv/chrome-mv2
Now you should be able to see the extension being added and enabled.
Debugging @eyeo/webext-ad-filtering-solution
In many cases, the feature you are looking for lives in the webext-ad-filtering-solution package. This package can be loaded as a standalone extension.

Once it is loaded, go to chrome://extensions/ and click Inspect views -> background page (service worker for the Manifest V3 version) under eyeo's WebExtension Ad-Filtering Solution Test Extension.

You should see the extension's startup log under the Console tab. Try reloading the extension if the log does not appear.
Debugging content script
Let’s go to sdk/content/index.js and add some logging (for example, a console.log at the top of the file), then rebuild and reload the extension to see it fire on page load.
I recently came across a task to try out different Rust web frameworks, and Actix Web is one of the most popular. Its documentation is clean and easy to bootstrap from; however, when I tried to deploy the app using Docker, I could not find a clean Dockerfile example with minimal dependencies.

Luckily, Rocket has pretty clean documentation on containerizing an app. After a bit of tweaking, it worked for the Actix Web app as well.

Personally, I feel more confident following official documentation for long-term maintainability. Even though the resulting image is not as small as the ~60MB achieved in this guide, a ~145MB image is acceptable here considering other factors.
Initiate Actix Web Sample App
Create a new Rust app:
```shell
cargo new hello-world
cd hello-world
```
Now add Actix Web dependency into Cargo.toml file:
```toml
[dependencies]
actix-web = "4"
```
Then replace the contents of src/main.rs with the following:
```rust
use actix_web::{get, post, web, App, HttpResponse, HttpServer, Responder};

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[post("/echo")]
async fn echo(req_body: String) -> impl Responder {
    HttpResponse::Ok().body(req_body)
}

async fn manual_hello() -> impl Responder {
    HttpResponse::Ok().body("Hey there!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(hello)
            .service(echo)
            .route("/hey", web::get().to(manual_hello))
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}
```
Compile and run the program:
```shell
cargo run
```
Test the app at http://0.0.0.0:8080.
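With the server running, a quick smoke test from another terminal (the expected responses follow from the handlers above):

```shell
curl http://0.0.0.0:8080/                           # Hello world!
curl -X POST -d 'hello' http://0.0.0.0:8080/echo    # hello
curl http://0.0.0.0:8080/hey                        # Hey there!
```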
Add Dockerfile and build Docker image
The original Dockerfile from the Rocket documentation has some Rocket-specific settings; simply removing those env variables makes it work:
```dockerfile
FROM docker.io/rust:1-slim-bookworm AS build

## cargo package name: customize here or provide via --build-arg
ARG pkg=hello-world

WORKDIR /build
COPY . .
RUN --mount=type=cache,target=/build/target \
    --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    set -eux; \
    cargo build --release; \
    objcopy --compress-debug-sections target/release/$pkg ./main

################################################################################

FROM docker.io/debian:bookworm-slim
WORKDIR /app

## copy the main binary
## add more files below if needed
COPY --from=build /build/main ./

EXPOSE 8080
CMD ./main
```
Note that the pkg ARG needs to match the package name in Cargo.toml.
This article covers how to use Docker for quick Rocket web app deployment. The same architecture can be scaled out with a more sophisticated CI/CD pipeline and Kubernetes clusters for large-scale applications and zero-downtime deployment.
Install rustup by following the instructions on its website. Once it is installed, ensure the latest toolchain is active by running:

```shell
rustup default stable
```
Initiate Rocket sample app
```shell
cargo new hello-rocket --bin
cd hello-rocket
```
Now, add Rocket as a dependency in your Cargo.toml:
```toml
[dependencies]
rocket = "0.5.1"
```
Modify src/main.rs so that it contains the code for the Rocket Hello, world! program, reproduced below:
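For reference, the Hello, world! program from the Rocket getting-started guide (as of Rocket 0.5) looks like this:

```rust
#[macro_use] extern crate rocket;

// Respond to GET / with a static string.
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

// Build the Rocket instance and mount the route at the root path.
#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index])
}
```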
Note that, to test the Docker image locally, EXPOSE is needed (it is left commented out near the end of the Dockerfile below):
```dockerfile
FROM docker.io/rust:1-slim-bookworm AS build

## cargo package name: customize here or provide via --build-arg
ARG pkg=hello-rocket

WORKDIR /build
COPY . .
RUN --mount=type=cache,target=/build/target \
    --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    set -eux; \
    cargo build --release; \
    objcopy --compress-debug-sections target/release/$pkg ./main

################################################################################

FROM docker.io/debian:bookworm-slim
WORKDIR /app

## copy the main binary
COPY --from=build /build/main ./

## copy runtime assets which may or may not exist
COPY --from=build /build/Rocket.tom[l] ./
COPY --from=build /build/stati[c] ./static
COPY --from=build /build/template[s] ./templates

## ensure the container listens globally on port 8000
ENV ROCKET_ADDRESS=0.0.0.0
ENV ROCKET_PORT=8000

## uncomment below to test in local
## EXPOSE 8000
CMD ./main
```
Make sure pkg is set to the same value as the package name in Cargo.toml.
Build Docker image
```shell
docker build -t your_username/my-rocket-image .
```
Upload Docker image to Docker Hub
First make sure you have a Docker Hub account, then log in to Docker:

```shell
docker login
```
Then upload Docker image to Docker Hub:
```shell
docker push your_username/my-rocket-image
```
Deploy docker image to cloud instance with Docker Compose and Traefik
This assumes you have already followed this guide and the Traefik reverse proxy is up and running on your server.

A reverse proxy is essential for any publicly accessed service. Traefik is a popular reverse proxy and load balancer designed for microservices and containerized applications. Please make sure Docker is installed along with Docker Compose.
Set up Traefik with its built-in dashboard
Let’s first create a folder for Traefik reverse proxy.
```shell
mkdir ~/traefik && cd ~/traefik
```
Next we need to create a network for Traefik to communicate with other containers; it is declared as external so other Compose stacks can attach to it.
```shell
docker network create traefik
```
Now, let’s create docker-compose.yml
```shell
vi docker-compose.yml
```
```yaml
networks:
  traefik:
    external: true

volumes:
  traefik-certificates:

services:
  traefik:
    image: "traefik:latest"
    command:
      - "--log.level=DEBUG"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--ping=true"
      - "--ping.entrypoint=ping"
      - "--entryPoints.ping.address=:8082"
      - "--entryPoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entryPoints.websecure.address=:443"
      - "--providers.docker=true"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.exposedByDefault=false"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      # For requesting dev cert (if prod cert has issue during development)
      # - "--certificatesresolvers.myhttpchallenge.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.letsencrypt.acme.email=admin@bill-min.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - traefik-certificates:/etc/traefik/acme
    networks:
      - traefik
    ports:
      - "80:80"
      - "443:443"
    healthcheck:
      test: ["CMD", "wget", "http://localhost:8082/ping", "--spider"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.your-domain.com`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.entrypoints=websecure"
      - "traefik.http.routers.dashboard.tls=true"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.dashboard.middlewares=authtraefik"
      - "traefik.http.middlewares.authtraefik.basicauth.users=your_username:{SHA}your_hash"
    restart: unless-stopped
```
The config above also enables the Traefik dashboard and HTTPS via Let's Encrypt.

Replace your-domain with your actual domain name and set up the correct DNS records.
You can also check Traefik dashboard by visiting https://traefik.your-domain.com. (you will need to enter the username and password used when generating htpasswd)
Add routing to other containerized apps
After setting up Traefik, adding routes is simple. In most cases, two things are involved:

Communicating with Traefik through the externally declared network, which is traefik from the setup above.

Running the container from the Docker image behind the Traefik load balancer on an exposed port.

Traefik configures routing automatically based on container labels; below is an example of running a Next.js app through Docker Compose.
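A hedged sketch of such a Compose file — the image name, router name, subdomain, and port 3000 are assumptions; adjust them to your app. The labels mirror the dashboard labels from the Traefik setup above:

```yaml
services:
  nextjs:
    image: your_username/nextjs-docker:latest
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextjs.rule=Host(`app.your-domain.com`)"
      - "traefik.http.routers.nextjs.entrypoints=websecure"
      - "traefik.http.routers.nextjs.tls=true"
      - "traefik.http.routers.nextjs.tls.certresolver=letsencrypt"
      # port the app listens on inside the container
      - "traefik.http.services.nextjs.loadbalancer.server.port=3000"
    restart: unless-stopped

networks:
  traefik:
    external: true
```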
Parts of the content below were adapted from answers generated with Perplexity AI.
Ad blocking browser extensions work by intercepting and filtering web content before it’s rendered in your browser. Here’s a detailed explanation of how they function:
Filter lists and rules
Ad blockers rely on filter lists, which are collections of predefined rules that determine what content should be blocked or hidden on web pages. These lists contain patterns and rules that match known ad servers, tracking scripts, and other unwanted elements.
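For instance, in the filter syntax used by EasyList-style lists, a network rule blocks requests by URL pattern while a cosmetic rule hides elements by CSS selector (the domains and selectors below are made up for illustration):

```
! Block any request to this ad server (network filter)
||ads.example.com^
! Hide elements matching this selector on all sites (cosmetic filter)
##.ad-banner
! Hide an element on one specific site only
example.org###sponsored-box
```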
Content interception process
When you visit a website, the ad blocker extension activates before the page is fully loaded. It performs the following steps:
HTTP(S) Request Blocking: The extension listens to outgoing HTTP(S) requests from your browser. It compares these requests against its filter lists and blocks any that match known ad platforms or tracking services.
URL Filtering: As the page loads, the ad blocker checks URLs of various elements against its filter lists. If a match is found, the content from that URL is blocked.
Content Filtering: The extension analyzes the HTML, CSS, and JavaScript of the page, looking for patterns that indicate ads or unwanted content.
CSS Injection: Ad blockers may inject custom CSS rules to hide elements that couldn’t be blocked at the network level.
JavaScript Injection: Some ad blockers inject their own JavaScript code to counteract advertising scripts and prevent them from functioning.
What happens behind the scenes?
Ad blocking browser extensions utilize both content scripts and background scripts to effectively block ads and unwanted content. Let’s delve deeper into how these scripts work together.
Background scripts
Background scripts run continuously in the extension’s background page, separate from any particular web page. They play a crucial role in ad blocking:
Filter List Management: Background scripts are responsible for downloading, parsing, and updating filter lists. These lists contain rules for blocking ads and are typically updated periodically to stay current with new ad servers and patterns.
Request Interception: Background scripts use the browser’s webRequest API to intercept and analyze network requests before they are sent. This allows the ad blocker to block requests to known ad servers or tracking domains at the network level, preventing the ads from loading in the first place.
Communication Hub: The background script acts as a central communication point, receiving messages from content scripts and popup interfaces, and coordinating the extension’s overall behavior.
Rule Matching: When a request is intercepted, the background script quickly checks it against the loaded filter lists to determine if it should be blocked.
Content scripts
Content scripts are injected into web pages and can manipulate the DOM (Document Object Model) directly. They are essential for handling ad blocking tasks that can’t be accomplished through network request blocking alone:
Element Hiding: Content scripts can inject CSS rules or modify the page’s existing CSS to hide ad elements that have already loaded. This is useful for ads that are served from the same domain as the main content and can’t be blocked at the network level.
DOM Scanning: These scripts can scan the page’s DOM structure to identify and remove ad-related elements based on specific patterns or rules.
Script Injection: Content scripts can inject additional JavaScript into the page to neutralize ad-related scripts or prevent them from executing.
Cosmetic Filtering: They apply cosmetic filters to remove empty spaces left by blocked ads, improving the page’s appearance.
Interaction between background and content scripts
The background and content scripts work together to provide comprehensive ad blocking:
Message Passing: Content scripts communicate with the background script via message passing, sending information about the current page and receiving instructions on what to block or modify.
Dynamic Rule Application: The background script can send updated blocking rules to content scripts in real-time, allowing for dynamic ad blocking that adapts to changes on the page.
Performance Optimization: By dividing tasks between background and content scripts, ad blockers can optimize performance. Network-level blocking happens in the background script, while page-specific modifications occur in the content script.
This combination of background and content scripts allows ad blocking extensions to provide a comprehensive and efficient ad-blocking experience, handling both network-level blocking and page-specific content manipulation.
Before Docker, deploying a Node app usually required uploading the latest files via git or a file transfer protocol and then re-running the app in the cloud. With Docker, we can build an image locally and push it to Docker Hub; with Docker Compose set up in the cloud, we can easily pull the latest image and re-run the container. This architecture can be scaled with a more sophisticated CI/CD pipeline and Kubernetes clusters for large-scale applications and zero-downtime deployment.

After going through the prompts, a Dockerfile should already be inside the project folder.
Add build&upload script inside package.json
The official Next.js docs have instructions for building and running Next.js with Docker; however, let's do it more directly. Add the script below to package.json:
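A minimal sketch of such a script (the upload:docker script name and the your_username/nextjs-docker image tag are assumptions, not from the official docs):

```json
{
  "scripts": {
    "upload:docker": "docker build -t your_username/nextjs-docker:latest . && docker push your_username/nextjs-docker:latest"
  }
}
```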
Replace nextjs-docker with your preferred app’s name.
Build docker image and upload
Simply run below command:
```shell
npm run upload:docker
```
This will build the Docker image and upload it to Docker Hub with the latest tag.
GitHub and GitHub Actions

If GitHub is used for source control, the step above can be integrated with GitHub Actions and triggered on commit or PR merge.
Deploy docker image to cloud instance with Docker Compose and Traefik
Prerequisites
Linux server
Docker Engine installed with Docker Compose plugin
Set up Traefik (skip if already done)

Let’s make a folder for Traefik:
```shell
mkdir ~/traefik && cd ~/traefik
```
Next we need to create a network for Traefik to communicate with other containers; it is declared as external so other Compose stacks can attach to it.
```shell
docker network create traefik
```
Now, let’s create docker-compose.yml
```shell
vi docker-compose.yml
```
```yaml
networks:
  traefik:
    external: true

volumes:
  traefik-certificates:

services:
  traefik:
    image: "traefik:latest"
    command:
      - "--log.level=DEBUG"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--ping=true"
      - "--ping.entrypoint=ping"
      - "--entryPoints.ping.address=:8082"
      - "--entryPoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entryPoints.websecure.address=:443"
      - "--providers.docker=true"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.exposedByDefault=false"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      # For requesting dev cert (if prod cert has issue during development)
      # - "--certificatesresolvers.myhttpchallenge.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.letsencrypt.acme.email=admin@bill-min.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - traefik-certificates:/etc/traefik/acme
    networks:
      - traefik
    ports:
      - "80:80"
      - "443:443"
    healthcheck:
      test: ["CMD", "wget", "http://localhost:8082/ping", "--spider"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.your-domain.com`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.entrypoints=websecure"
      - "traefik.http.routers.dashboard.tls=true"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.dashboard.middlewares=authtraefik"
      - "traefik.http.middlewares.authtraefik.basicauth.users=your_username:{SHA}your_hash"
    restart: unless-stopped
```
The config above also enables the Traefik dashboard and HTTPS via Let's Encrypt.

Replace your-domain with your actual domain name and set up the correct DNS records.
You can also check Traefik dashboard by visiting https://traefik.your-domain.com. (you will need to enter the username and password used when generating htpasswd)