June 26, 2025
I was looking for a way to support dot notation in the parameters of my query string in my HTTP endpoint. For example, /search?user.name=Bruno&car.name=Honda+Civic. I wanted req.query to be parsed in a structured way so that I could do
const {
user: { name: userName } = {},
car: { name: carName } = {},
} = req.query;
It turns out you can set a custom query parser in your Express app. Additionally, qs, a query string parser already used in Express, supports dot notation. Here’s how to implement it:
const app = express();
// Use custom query parser
app.set('query parser', (str: string) => qs.parse(str, { allowDots: true }));
Here’s a simple demo implementation:
import express, { Request, Response } from 'express';
import * as qs from 'qs';
const app = express();
// Use custom query parser
app.set('query parser', (str: string) => qs.parse(str, { allowDots: true }));
interface SearchQuery {
user?: {
name?: string;
};
car?: {
name?: string;
};
}
app.get('/search', (req: Request<{}, {}, {}, SearchQuery>, res: Response) => {
const {
user: { name: userName } = {},
car: { name: carName } = {},
} = req.query;
if (userName) {
console.log('User name:', userName);
}
if (carName) {
console.log('Car name:', carName);
}
res.json({ userName, carName });
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
});
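To see what allowDots does without Express, here’s a hand-rolled sketch of dot-notation parsing using only Node’s built-in URLSearchParams. The function name and logic are illustrative, not qs’s actual implementation:

```typescript
// Illustrative sketch of dot-notation parsing (not qs's actual code):
// split each key on '.' and nest the value into an object tree.
function parseDots(query: string): Record<string, any> {
  const result: Record<string, any> = {};
  for (const [key, value] of new URLSearchParams(query)) {
    const parts = key.split('.');
    let node: any = result;
    for (const part of parts.slice(0, -1)) {
      node = node[part] ??= {}; // create intermediate objects as needed
    }
    node[parts[parts.length - 1]] = value;
  }
  return result;
}

// parseDots('user.name=Bruno&car.name=Honda+Civic')
// → { user: { name: 'Bruno' }, car: { name: 'Honda Civic' } }
```

URLSearchParams already handles percent-decoding and treats + as a space, so only the nesting logic is left to implement.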
#express #nodejs #typescript #query-string
I had a failing test when Axios made a request to one of my API endpoints. The test reported a failure to connect to the server. That was weird because I was sure the server was up and running. It turned out the endpoint I was testing implemented an HTTP redirect to an unavailable location. Axios was following the redirect and trying to connect to that location, which is why I was seeing a failure to connect to the server.
To avoid that, we can configure Axios not to follow redirects:
import axios from 'axios';

test('should redirect to correct location', async () => {
const response = await axios.get('http://example.com', {
maxRedirects: 0,
validateStatus: null
});
expect(response.status).toBeGreaterThanOrEqual(300);
expect(response.status).toBeLessThan(400);
expect(response.headers['location']).toBe('https://expected.com/target');
});
June 24, 2025
I’m not very familiar with TypeScript patterns for handling errors. I was wondering what would be an interesting way to define domain-specific error types so that I can be very specific about what happened during a function’s execution.
Consider a createUser function that creates a user. We can create a generic Rust-like Result type:
type Result<T, E> =
| { success: true; value: T }
| { success: false; error: E };
Then define domain-specific error types as a discriminated union:
type CreateUserError =
| { type: 'EmailAlreadyExists'; email: string }
| { type: 'InvalidEmailFormat'; email: string }
| { type: 'WeakPassword'; reason: string };
Our createUser function becomes:
function createUser(email: string, password: string): Result<User, CreateUserError> {
if (!isValidEmail(email)) {
return { success: false, error: { type: 'InvalidEmailFormat', email } };
}
if (!isStrongPassword(password)) {
return { success: false, error: { type: 'WeakPassword', reason: 'Too short' } };
}
if (emailExists(email)) {
return { success: false, error: { type: 'EmailAlreadyExists', email } };
}
const user = new User(email, password);
return { success: true, value: user };
}
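A nice property of this pattern is that the caller can handle every error case exhaustively, and TypeScript’s narrowing makes the compiler flag any case you forget. A sketch (the types are repeated from above so the snippet is self-contained; describeError is a made-up helper):

```typescript
// Repeated from above so the snippet is self-contained.
type CreateUserError =
  | { type: 'EmailAlreadyExists'; email: string }
  | { type: 'InvalidEmailFormat'; email: string }
  | { type: 'WeakPassword'; reason: string };

// Exhaustive handling: the `never` check turns a forgotten case
// into a compile-time error instead of a silent fallthrough.
function describeError(error: CreateUserError): string {
  switch (error.type) {
    case 'EmailAlreadyExists':
      return `A user with email ${error.email} already exists`;
    case 'InvalidEmailFormat':
      return `'${error.email}' is not a valid email address`;
    case 'WeakPassword':
      return `Password rejected: ${error.reason}`;
    default: {
      const unreachable: never = error; // compile error if a case is missing
      return unreachable;
    }
  }
}
```

If a new variant is later added to CreateUserError, the `never` assignment stops compiling until the switch handles it.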
June 18, 2025
Today I learned about codespell, a CLI tool for checking and fixing misspellings. I can check whether any of my blog posts contain misspellings, and fix them, with
codespell -f -w _posts
June 17, 2025
Lock-Free Rust: How to Build a Rollercoaster While It’s on Fire. In this article, a lock-free array is built in Rust using atomics and memory-ordering control. It’s a useful reminder that lock-free algorithms are not easy to build: you have to understand memory-ordering semantics and how to apply them.
#rust #lock-free #atomics #memory-ordering
Why do locks typically perform worse than atomics?
The main reason is that locks can rely on syscalls like futex to put threads to sleep when there’s contention, which introduces overhead such as context switches. In contrast, atomic operations are low-level CPU instructions executed entirely in user space, avoiding these costly transitions. Additionally, locks tend to serialize access to larger critical sections, while atomics enable more fine-grained concurrency, reducing contention and improving performance in many scenarios.
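JavaScript exposes a small slice of this through SharedArrayBuffer and Atomics. As an illustration (not a benchmark), a lock-free counter increment is a single atomic read-modify-write instruction rather than a lock/unlock pair:

```typescript
// A shared 32-bit counter that worker threads could increment
// without a lock: Atomics.add is a single atomic read-modify-write.
const shared = new Int32Array(new SharedArrayBuffer(4));

function increment(): number {
  // Returns the value *before* the addition, like fetch_add
  return Atomics.add(shared, 0, 1);
}

increment();
increment();
console.log(Atomics.load(shared, 0)); // → 2
```

Here it runs on one thread, but the same shared buffer could be posted to worker threads, and the increments would stay race-free without any mutex.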
June 16, 2025
Atomics And Concurrency. This article explains the importance of memory ordering when writing concurrent programs using atomics. Essentially, data races can occur because compilers and CPUs may reorder instructions. As a result, threads operating on shared data might observe operations in an unintended order.
Some programming languages, such as C++ and Rust, give you finer control over the memory model by exposing detailed options through their atomics APIs. In C++, for example, the available memory orderings include:
- Relaxed: no ordering guarantees
- Release–Acquire: enforces ordering on paired operations on specific atomic variables
- Sequentially consistent: imposes a single global order on all such operations
Other languages, like Go, don’t provide this level of control. Instead, Go implements a sequentially consistent memory model under the hood.
Russ Cox does a great job explaining hardware memory models, how different programming languages expose memory-model control, and Go’s memory model in the following articles:
June 13, 2025
Embeddings are underrated. Blog post on how underrated embeddings are for technical writers.
I’m still not very familiar with the world of embeddings, so it was nice to see the concepts laid out. Essentially, an embedding is a way of semantically representing text as a multidimensional vector of floats, making it easier to compare similarity across texts.
Word embeddings were introduced in the foundational Word2Vec paper, and they are also how large language models represent words and capture semantic relationships, although in a more complex and advanced way.
The Illustrated Word2vec illustrates the inner workings of Word2Vec.
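The similarity comparison usually boils down to cosine similarity between the vectors. A minimal sketch (the three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1].
// 1 means the vectors point in the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors: "cat" and "kitten" point in similar directions, "car" doesn't.
const cat = [0.9, 0.1, 0.05];
const kitten = [0.85, 0.15, 0.1];
const car = [0.05, 0.9, 0.8];
console.log(cosineSimilarity(cat, kitten) > cosineSimilarity(cat, car)); // → true
```

Semantic search over text is then just: embed the query, embed the documents, and rank documents by cosine similarity to the query vector.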
#embeddings #ml #nlp #word2vec
Systems Correctness Practices at Amazon Web Services. Article on the portfolio of formal methods used across AWS.
Our experience at AWS with TLA+ revealed two significant advantages of applying formal methods in practice. First, we could identify and eliminate subtle bugs early in development—bugs that would have eluded traditional approaches such as testing. Second, we gained the deep understanding and confidence needed to implement aggressive performance optimizations while maintaining systems correctness.
Here’s a list of techniques they use:
- P programming language to model and specify distributed systems. It was used, for example, on migrating Simple Storage Service (S3) from eventual to strong read-after-write consistency.
- Dafny programming language to prove that the Cedar authorization policy language implementation satisfies a variety of security properties
- A tool called Kani was used by the Firecracker team to prove key properties of security boundaries
- Fault Injection Service that injects simulated faults, from API errors to I/O pauses and failed instances
- Also property-based testing, deterministic simulation, and continuous fuzzing or random test-input generation
June 11, 2025
I read How Compiler Explorer Works in 2025 and a lightweight process isolation tool called nsjail caught my eye.
June 5, 2025
Interesting tweet that resonates a lot with how I feel about the use of AI for coding. I can type faster, but not sure if I can deliver faster.
June 2, 2025
Switching away from OOP | Casey Muratori. Casey Muratori always has strong takes against OOP. I thought it was worth making a note about this one:
The lie is if something is object oriented it will be easier for someone else to integrate, because it’s all encapsulated. The truth is the opposite. The more walled off something is the harder it is for someone to integrate because there’s nothing they can do with it. The only things they can do are things you’ve already thought of and provided an interface for and anything you forgot, they’re powerless. They have to wait for an update.
#oop #programming-paradigms #casey-muratori
How to Build an Agent. I went through this tutorial today. It is very good for grasping the basics of how a coding agent works.
I really like how he presents what an agent is:
An LLM with access to tools, giving it the ability to modify something outside the context window. An LLM with access to tools? What’s a tool? The basic idea is this: you send a prompt to the model that says it should reply in a certain way if it wants to use “a tool”. Then you, as the receiver of that message, “use the tool” by executing it and replying with the result. That’s it. Everything else we’ll see is just abstraction on top of it.
May 30, 2025
Thoughts on thinking. Nice blog post on how the use of AI makes the author feel about his relationship to writing and understanding.
Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
Amp Is Now Available. Here Is How I Use It.. A blog post from Thorsten Ball, that works at Amp, describing his use of Amp. I kind of like compiling this kind of of “how I use LLMs” articles. There’s always something new you learn that you can use to refine your coding experience. Here’s a couple of examples that caught my eyes:
Code Review
Run `git diff` to see the code someone else wrote. Review it thoroughly and give me a report
Code search
Find the code that ensures unauthenticated users can view the /how-to-build-an-agent page too
Interact with that database
Update my user account (email starts with thorsten) to have unlimited invites
May 29, 2025
The Biggest “Lie” in AI? LLM doesn’t think step-by-step. Interesting video making the point that the process by which a model arrives at a mathematical answer is not necessarily the process the model describes when asked how it got there. In other words, the verbalization of the reasoning is not necessarily how the model reasons, and the verbalization might not even be key to the reasoning.
What I found odd about the video is that it implies this is the reason LLMs don’t think like humans do. However, I’d say humans can also think without verbalizing, and, actually, verbalizing the thought process can even be difficult in some cases.
Today I learned that Cline is able to open the browser and manually test your web app. I found that amazing. Here’s a demo from Cline’s founder Saoud Rizwan. Seems to be using Puppeteer under the hood.
#cline #browser-testing #puppeteer #ai-tools
Nova. Interesting JavaScript engine written in Rust using data-oriented design and Entity-Component-System architecture.
#javascript #rust #ecs #data-oriented-design
Why Cline Doesn’t Index Your Codebase (And Why That’s a Good Thing). An interesting blog post by Cline on why they don’t use a RAG-based approach, which is common in similar products such as Cursor, to handle large codebases. In essence, their rationale boils down to:
- they don’t think a RAG-based approach offers better codebase search results
- it’s a pain to keep the index up-to-date
- security
They say though that it may make sense for a product charging $20/month.
May 25, 2025
Today, I worked on a small example of how to compile a Rust program targeting a RISC-V architecture. Essentially, you add the correct target
rustup target add riscv64gc-unknown-linux-gnu
then, in .cargo/config.toml, configure the linker to use the appropriate GNU GCC cross-linker, set the runner to QEMU, and statically link the C libraries:
[target.riscv64gc-unknown-linux-gnu]
linker = "riscv64-linux-gnu-gcc"
rustflags = ["-C", "target-feature=+crt-static"]
runner = "qemu-riscv64"
You can run the program with
cargo run --target riscv64gc-unknown-linux-gnu
#rust #riscv #cross-compilation
UUIDv7 Comes to PostgreSQL 18. A blog post from Nile that discusses the new UUID version that will come with the next PostgreSQL release.
Essentially, in regard to the use of UUIDs in databases, there are three common concerns: sortability, index locality, and size. The new version addresses sorting and index locality by using the Unix epoch timestamp in milliseconds as the most significant 48 bits, keeping 74 of the remaining bits for random values (the other 6 encode the version and variant).
By calling uuidv7(), a new UUIDv7 can be generated with the timestamp set to the current time. An optional interval can be passed to generate a value for a different time:
select uuidv7(INTERVAL '1 day');
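To make the layout concrete, here’s a toy TypeScript generator following the same idea. This is a sketch of the bit layout, not PostgreSQL’s implementation: the 48-bit millisecond timestamp prefix is what makes later values sort lexicographically later.

```typescript
import { randomBytes } from 'crypto';

// Toy UUIDv7 generator (illustrative, not PostgreSQL's implementation):
// bytes 0-5 hold the big-endian Unix ms timestamp; the rest is random
// except for the version and variant bits.
function uuidv7(timestampMs: number = Date.now()): string {
  const bytes = randomBytes(16);
  bytes.writeUIntBE(timestampMs, 0, 6); // 48-bit timestamp prefix
  bytes[6] = (bytes[6] & 0x0f) | 0x70;  // version nibble = 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80;  // RFC variant bits
  const hex = bytes.toString('hex');
  return [
    hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16),
    hex.slice(16, 20), hex.slice(20),
  ].join('-');
}

// Later timestamps sort lexicographically later — the index-locality win.
console.log(uuidv7(1000) < uuidv7(2000)); // → true
```

Because newly generated values share a monotonically increasing prefix, inserts land on the rightmost B-tree pages instead of scattering across the index the way fully random UUIDv4 values do.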
May 24, 2025
So, I have a Lemur Pro 13 notebook running Pop!_OS. Since I first started using it, I noticed the fan noise gets very loud quite frequently. It took me a while to figure out the cause, but I finally discovered the reason: it was running in maximum performance mode.
The CPU performance is managed by a component of the operating system called the governor, which controls how the CPU frequency is adjusted based on system load.
In Pop!_OS, there are two available governors: performance and powersave. You can check which ones are available with:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
performance powersave
You can check the current governor for each CPU core by running:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
To change the governor to powersave for all CPUs, run:
echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
#pop-os #linux #cpu-governor #performance
You Can Learn RISC-V Assembly in 10 Minutes | Getting Started RISC-V Assembly on Linux Tutorial. I watched this video today to get a sense of how to program something simple in RISC-V assembly. It turned out to be pretty simple. The video writes a small Hello World! program; I went a bit further and tried a program that prints the numbers 0 through 9.
With the GNU toolchain for RISC-V, you can easily assemble and link your program
riscv64-linux-gnu-as hello.s -o hello.o
riscv64-linux-gnu-gcc -o hello hello.o -nostdlib -static
and with qemu you can run it
qemu-riscv64 ./hello
Here’s what I ended up with
.section .data
char_buffer:
.byte 0 # Reserve one byte for ASCII character output
.section .text
.global _start
_start:
# -------------------------------
# Initialize loop control
# t0 = counter (0 to 9)
# t1 = limit (10)
# -------------------------------
li t0, 0 # counter = 0
li t1, 10 # limit = 10
# Load address of char_buffer into t2
la t2, char_buffer
loop:
# -------------------------------
# Print current digit as ASCII
# -------------------------------
li a7, 64 # syscall: write
li a0, 1 # fd: stdout
addi t3, t0, 48 # convert digit to ASCII ('0' + t0)
sb t3, 0(t2) # store character into buffer
mv a1, t2 # buffer address
li a2, 1 # length = 1 byte
ecall # make syscall to write digit
# -------------------------------
# Print newline character
# -------------------------------
li a7, 64 # syscall: write
li a0, 1 # fd: stdout
li t3, 10 # ASCII for newline '\n'
sb t3, 0(t2) # store newline into buffer
mv a1, t2 # buffer address
li a2, 1 # length = 1 byte
ecall # make syscall to write newline
# -------------------------------
# Loop control
# -------------------------------
addi t0, t0, 1 # increment counter
bne t0, t1, loop # continue if t0 != t1
# -------------------------------
# Exit program
# -------------------------------
li a7, 93 # syscall: exit
li a0, 0 # exit code 0
ecall
May 16, 2025
Some more notes on Amp. I bought five dollars’ worth of credits, and two prompts consumed 75% of it. The problem is that it does a lot more than you ask for, consuming lots of credits. Also, there’s no way to bring your own key.
The tweet that came out of it:
Gave @AmpCode a spin. Burned through my free credits fast, so I bought more. Two prompts later… five bucks gone 😅
May 15, 2025
EarlyRiders is a bitcoin-denominated investment fund.
At Early Riders we raise our fund in Bitcoin, maintain our capital in Bitcoin, require our portfolio companies to maintain Bitcoin reserves, and return capital to our limited partners in Bitcoin. Our goal is to return more Bitcoin to our limited partners than they invested in the fund.
The fund’s core philosophy is that if entrepreneurs are looking through the lens of an asset that appreciates over time, and everything is denominated according to that asset, they’ll need to spend the money with a high level of discernment and scrutiny.
I found that very interesting. I’ve always felt that the ease of raising large amounts of money made misallocating capital in startups a non-event.
#bitcoin #investment-fund #earlyriders
LLMs Get Lost In Multi-Turn Conversation (via). In this paper, large-scale simulation experiments are performed, and performance degradation is found in multi-turn LLM settings compared to single-turn settings. From the abstract:
Analysis of 200,000+ simulated conversations decomposes the performance degradation into two components: a minor loss in aptitude and a significant increase in unreliability. We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.
The main explanations for this effect could be:
- premature and incorrect assumptions early in the conversation
- over-relying on previous incorrect responses, compounding the error
- overly adjusting responses to the first and last turn, forgetting middle turns
- overly verbose responses, muddling the context, and confusing next turns
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms (via). Google presents AlphaEvolve, an evolutionary coding agent that combines Gemini models, automated evaluators, and an evolutionary framework to design and discover advanced algorithms.
A more technical explanation can be found in the paper AlphaEvolve: A coding agent for scientific and algorithmic discovery.
#alphaevolve #google #ai-agents #gemini
I had this thought
There’s a difference between AI writing a percentage of someone’s code and AI making them more productive. That person remains the author - they still need to understand and verify the code. AI might do most of the writing, but productivity may stay the same.
This clarifies what the “vibe” part of “vibe coding” means. The amount you’re vibing is inversely proportional to the amount you’re understanding and verifying.
#ai #productivity #vibe-coding
Today I tried out Amp. It’s an AI coding agent delivered as a VS Code extension. It felt a bit less intrusive than Cline, although somewhat slower. Also, I don’t understand the web product proposition, where you can have a team and people competing on AI usage.
May 14, 2025
What the heck is npx, which is occasionally used in JavaScript projects? It’s a CLI tool that comes with Node.js and allows you to run Node.js packages without installing them globally. For example,
npx -y whats-the-weather paris
The current weather in Paris is 'few clouds' with a temperature of 19°C.
May 13, 2025
First time working on a project with pnpm. pnpm is a JavaScript package manager written in TypeScript. It is faster than npm and yarn, and it uses a content-addressable filesystem with hard links to avoid duplication and save disk space.
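The content-addressable idea can be sketched in a few lines. This is a toy illustration of the concept, not pnpm’s actual store layout: each unique file content is written to the store once, keyed by its hash, and every consumer gets a hard link to that single copy.

```typescript
import { createHash } from 'crypto';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Toy content-addressable store (illustrative, not pnpm's actual layout).
const storeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cas-'));

// Write content to the store once, keyed by its sha256 hash.
function addToStore(content: string): string {
  const hash = createHash('sha256').update(content).digest('hex');
  const storePath = path.join(storeDir, hash);
  if (!fs.existsSync(storePath)) fs.writeFileSync(storePath, content);
  return storePath;
}

// Consumers get a hard link to the stored copy: no extra disk space used.
function linkFromStore(content: string, dest: string): void {
  fs.linkSync(addToStore(content), dest);
}

// Two "projects" depending on the same file share one copy on disk.
const sameContent = 'console.log("left-pad")';
linkFromStore(sameContent, path.join(storeDir, 'project-a.js'));
linkFromStore(sameContent, path.join(storeDir, 'project-b.js'));
console.log(fs.statSync(path.join(storeDir, 'project-a.js')).nlink); // → 3
```

The link count is 3 because the store entry and both project files are names for the same inode; installing the same dependency into many projects costs essentially nothing beyond the first copy.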