Thursday, August 31, 2023

Google Maps Testing New Apple Maps-Inspired Map Style


For the past three years, Google’s cartography has largely remained in this difficult-to-scan state—that is, until now. That’s because as of late August 2023, Google appears to be testing a new Apple Maps-inspired map style.

Unfortunately, Google’s new, in-testing map style is even worse than its old one. Here in Chicago, for instance, notice how much harder it is to read and scan the map:




Writing a bare-metal RISC-V application in D

2023/02/08

Categories: d osdev riscv

This post will show you how to use D to write a bare-metal “Hello world” program that targets the RISC-V QEMU simulator. In a future blog post (now available) we’ll build on this to target actual hardware: the VisionFive 2 SBC. See blog-code for the final code from this post. For a more complex example, see Multiplix, an operating system I am developing that runs on the VisionFive 2 (and Raspberry Pis).

Why D?

Recently I’ve been writing bare-metal code in C, and I’ve become a bit frustrated with the lack of features that C provides. I went looking for a good replacement and revisited D (a language I used for a project a few years ago). It turns out D has introduced a mode called betterC, which essentially disables all language features that require the D runtime. This makes D roughly as easy to use for bare-metal programming as C. You don’t get all of D’s features, but you get enough to cover everything I want (in fact, for systems programming I prefer the betterC subset of D over full D). BetterC is exactly what it sounds like: it retains the feel of C while improving on it. Going forward I think I’ll be using it in every situation where I would otherwise have used C (even non-bare-metal ones).

Here are the positives about D I value most:

  • A decent import system (no more header files and #include).
  • Automatic bounds checking, and bounded strings and arrays.
  • Methods in structs.
  • Compile-time code evaluation (run D code at compile-time!).
  • Powerful templating and generics.
  • Iterators.
  • Default support for thread-local storage.
  • Scope guards and RAII.
  • Some memory safety protections with @safe.
  • A fairly comprehensive and readable online specification.
  • An active Discord channel with people who answer my questions in minutes.
  • Both an LLVM-based compiler (LDC) and a GNU compiler (GDC), which is officially part of the GCC project.
    • And these compilers both export roughly the same flags and intrinsics as Clang and GCC respectively.

These features, combined with the lack of a runtime and the C-like feel of the language (making it easy to port previous code), make it a no-brainer for me to have D as the base choice for any project where I would otherwise use C.
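
To give a flavor of how a few of these features combine, here’s a small betterC-compatible sketch of my own (the names are invented for illustration): a templated struct with methods, compile-time evaluation, a bounds-checked fixed-size array, and a scope guard, all without the D runtime:

module sample;

// A fixed-capacity ring buffer: a templated struct with methods.
struct Ring(T, size_t n) {
    T[n] buf;     // bounded array; indexing is bounds-checked
    size_t head;

    void push(T val) {
        buf[head % n] = val;
        head++;
    }
}

enum capacity = 1 << 4; // evaluated at compile time

extern (C) void demo() {
    Ring!(int, capacity) r;
    scope (exit) r.head = 0; // scope guard: runs when demo() returns
    r.push(42);
}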

Now that I’ve told you about my reasons for choosing D, let’s try using it to write a bare-metal application that targets RISC-V. If you want to follow along, the first step is to download the toolchain (the following tools should work on Linux or MacOS). You’ll need three different components:

  1. LDC 1.30 (the LLVM-based D compiler). Can be downloaded from GitHub. Make sure to use version 1.30.
  2. A riscv64-unknown-elf GNU toolchain. Can be downloaded from SiFive’s Freedom Tools repository.
  3. The QEMU RISC-V simulator: qemu-system-riscv64. Can be downloaded from SiFive’s Freedom Tools repository, or also usually available as part of your system’s QEMU package.

We’ll be using LDC since it ships with the ability to target riscv64. I have used GDC for bare-metal development as well, but it requires building a toolchain from source since nobody ships pre-built riscv64-unknown-elf-gdc binaries. We’ll use the GNU toolchain for assembling, linking, and for other tools like objcopy and objdump, and QEMU for simulating the hardware.

With these installed you should be able to run:

$ ldc2 --version
LDC - the LLVM D compiler (1.30.0):
...

$ riscv64-unknown-elf-ld
riscv64-unknown-elf-ld: no input files

$ qemu-system-riscv64 -h
...

CPU entrypoint

We’re writing bare-metal code, so there’s no operating system, no console, no files – nothing. The CPU just starts executing instructions at a pre-specified address after performing some initial setup. We’ll figure out what that address is later when we set up the linkerscript. For now we can just define the _start symbol as our entrypoint, and assume the linker will place the code at this label at the CPU entrypoint.

A D function requires a valid stack pointer, so before we can execute any D code we need to load the stack pointer register sp with a valid address.

Let’s make a file called start.s and put the following in it:

.section ".text.boot"

.globl _start
_start:
    la sp, _stack_start
    call dstart
_hlt:
    j _hlt

For now let’s assume _stack_start is a symbol with the address of a valid stack, and in the linkerscript we’ll set this up properly. After loading sp, we call a D function called dstart, defined in the next part.

D entrypoint

Now we can define our dstart function in dstart.d. For now we’ll just cause an infinite loop.

module dstart;

extern (C) void dstart() {
    while (1) {}
}

Linkerscript

Before we can compile this program we need a bit of linkerscript to tell the linker how our code should be laid out. We’ll need to specify the address where the text section should start (the entry address), and reserve space for all the data sections (.rodata, .data, .bss), and the stack.

Entry address

Today we’ll be targeting the QEMU virt RISC-V machine, so we have to figure out what its entrypoint is.

We can ask QEMU for a list of all devices in the virt machine by telling it to dump its device tree:

$ qemu-system-riscv64 -machine virt,dumpdtb=virt.dtb
$ dtc virt.dtb > virt.dts

In virt.dts you’ll find the following entry:

memory@80000000 {
    device_type = "memory";
    reg = <0x00 0x80000000 0x00 0x8000000>;
};

This means that RAM starts at address 0x80000000 (everything below is special memory or inaccessible). The CPU entrypoint for the virt machine is the first instruction in RAM, stored at 0x80000000.

In the linkerscript, we need to tell the linker that it should place the _start function at 0x80000000. We do this by telling it to put the .text.boot section first in the .text section, located at 0x80000000. Then we include the rest of the .text sections, followed by read-only data, writable data, and the BSS.

In link.ld:

ENTRY(_start)

SECTIONS
{
    .text 0x80000000 : {
        KEEP(*(.text.boot))  
        *(.text*) 
    }
    .rodata : {
        . = ALIGN(8);
        *(.rodata*)
        *(.srodata*)
        . = ALIGN(8);
    }
    .data : { 
        . = ALIGN(8);
        *(.sdata*)
        *(.data*)
        . = ALIGN(8);
    } 
    .bss : {
        . = ALIGN(8);
        _bss_start = .;
        *(.sbss*)
        *(.bss*)
        *(COMMON)
        . = ALIGN(8);
        _bss_end = .;
    }

    .kstack : {
        . = ALIGN(16);
        . += 4K;
        _stack_start = .;
    }

    /DISCARD/ : { *(.comment .note .eh_frame) }
}

What is the BSS?

The BSS is a region of memory that the compiler assumes is initialized to all zeroes. Usually the static data for a program is directly copied into the ELF executable – if you have a string "hello world" in your program, those exact bytes will live somewhere in the binary (in the read-only data section). However, a lot of static data is initialized to zero, so instead of putting those zero bytes directly into the ELF file, the linker lets us save space by making a special section (the BSS) that must be initialized to all zeroes at runtime, but won’t actually contain that data in the ELF file itself. So even if you have a giant 1MB array of zeroes, your ELF binary will be small because that section will be expanded into RAM only when the application starts. Usually the OS sets up the BSS before it launches a program, but we’re running bare-metal, so we have to do that manually in the dstart function (in the next section). To make this initialization possible, we define the _bss_start and _bss_end symbols in the linkerscript. These are symbols whose addresses will be the start and end of the BSS section respectively.
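
For example (an illustrative declaration, not part of this post’s code), a large zeroed global costs almost nothing on disk:

// Placed in .bss: the ELF file records only that 1 MiB of zeroes
// must exist here at runtime; the bytes themselves are not stored.
__gshared ubyte[1024 * 1024] buffer;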

Reserving space for the stack

We also reserve one page (4 KiB) for the .kstack section and place the _stack_start symbol at the end of it (remember: the stack grows down). The stack must be 16-byte aligned.

Compile!

Now we have everything we need to compile a basic bare-metal program.

$ ldc2 -Oz -betterC -mtriple=riscv64-unknown-elf -mattr=+m,+a,+c --code-model=medium -c dstart.d
$ riscv64-unknown-elf-as -mno-relax -march=rv64imac start.s -c -o start.o
$ riscv64-unknown-elf-ld -Tlink.ld start.o dstart.o -o prog.elf

Let’s look at some of these flags:

  • Oz: optimize aggressively for size.
  • betterC: enable betterC mode (disable the built-in D runtime).
  • mtriple=riscv64-unknown-elf: build for the riscv64 bare-metal ELF target.
  • mattr=+m,+a,+c: enable the following RISC-V extensions: m (multiply/divide), a (atomics), and c (compressed instructions).
  • code-model=medium: code models in RISC-V control how pointers to far away locations are constructed. The medium code model (also called medany) allows us to address any symbol located within 2 GiB of the current address, and is recommended for 64-bit programs. See the SiFive post for more information.
  • mno-relax: disables linker relaxation in the assembler (it is already disabled by default in LDC). Linker relaxation is a RISC-V-specific optimization that allows the linker to make use of the gp (global pointer) register. I explain it in more detail in the linker relaxation section.

It’s going to get tedious to type out these commands repeatedly, so let’s create a Makefile (or a Knitfile if you’re cool):

SRC=$(wildcard *.d)
OBJ=$(SRC:.d=.o)

all: prog.bin

%.o: %.d
      ldc2 -Oz -betterC -mtriple=riscv64-unknown-elf -mattr=+m,+a,+c --code-model=medium --makedeps=$*.dep $< -c -of $@
%.o: %.s
      riscv64-unknown-elf-as -mno-relax -march=rv64imac $< -c -o $@
prog.elf: start.o $(OBJ)
      riscv64-unknown-elf-ld -Tlink.ld $^ -o $@
%.bin: %.elf
      riscv64-unknown-elf-objcopy $< -O binary $@
%.list: %.elf
      riscv64-unknown-elf-objdump -D $< > $@
run: prog.bin
      qemu-system-riscv64 -nographic -bios none -machine virt -kernel prog.bin
clean:
      rm -f *.bin *.list *.o *.elf *.dep

-include *.dep

and compile with

$ make prog.bin

The prog.bin file is a raw dump of our program. At this point it clocks in at a whopping 22 bytes.

To see the disassembled program, run

$ make prog.list
...
$ cat prog.list
prog.elf:     file format elf64-littleriscv

Disassembly of section .text:

0000000080000000 <_start>:
    80000000: 00001117                auipc   sp,0x1
    80000004: 02010113                addi    sp,sp,32 # 80001020 <_stack_start>
    80000008: 00000097                auipc   ra,0x0
    8000000c: 00c080e7                jalr    12(ra) # 80000014 <dstart>

0000000080000010 <_hlt>:
    80000010: a001                    j       80000010 <_hlt>
    ...

0000000080000014 <dstart>:
    80000014: a001                    j       80000014 <dstart>

Looks like our _start function is being linked properly at 0x80000000 and has the expected assembly!

If you try to run with

$ make run
qemu-system-riscv64 -nographic -bios none -machine virt -kernel prog.bin

it will just enter an infinite loop (press Ctrl-A Ctrl-X to quit QEMU). We still have a bit more work to do before we get output.

More setup: initializing the BSS

Now let’s modify dstart to initialize the BSS. We need to declare some extern variables so that the linker symbols _bss_start and _bss_end are available to our D code. Then we can just loop from _bss_start to _bss_end and assign all the bytes in that range to zero. Once complete, our BSS is initialized and we can run arbitrary D code (using globals that may be initialized to zero).

extern (C) {
    extern __gshared uint _bss_start, _bss_end;

    void dstart() {
        uint* bss = &_bss_start;
        uint* bss_end = &_bss_end;
        while (bss < bss_end) {
            *bss++ = 0;
        }

        import main;
        kmain();
    }
}

And in main.d we have our bare-metal main entrypoint:

module main;

void kmain() {}

Creating a minimal D runtime

Several D language features are unavailable because of our lack of runtime. For example, types such as string and size_t are undefined, and we can’t use assertions (we’ll get to those later). The first step to creating a minimal runtime is to create an object.d file. The D compiler will search for this special file and import it automatically everywhere. So we can create definitions for types like string and size_t here. Here is the minimal definition I like to use, which also defines ptrdiff_t, noreturn, and uintptr.

module object;

alias string = immutable(char)[];
alias size_t = typeof(int.sizeof);
alias ptrdiff_t = typeof(cast(void*) 0 - cast(void*) 0);

alias noreturn = typeof(*null);

static if ((void*).sizeof == 8) {
    alias uintptr = ulong;
} else static if ((void*).sizeof == 4) {
    alias uintptr = uint;
} else {
    static assert(0, "pointer size must be 4 or 8 bytes");
}

Writing to the UART device

Most systems have a UART device. Generally how this works is that you write a byte to a special place in memory, and that byte will be transmitted using the UART protocol over some pins on the board. In order to read the bytes with your host computer you need a UART to USB adapter plugged into your host, and then you can read from the corresponding device file (usually /dev/ttyUSB0) on the host computer. Today we’ll just be simulating our bare-metal code in QEMU, so you don’t need to have a special adapter. QEMU will emulate a UART device and print out the bytes written to its transmit register.

Enabling volatile loads/stores

When writing to device memory it is important to ensure that the compiler does not remove our loads/stores. For example, if a device is located at 0x10000000, we might write directly to that address by casting the integer to a pointer. To the compiler, it just looks like we are writing to random addresses, which might be undefined behavior or result in dead code (e.g., if we never read the value back, the compiler may determine that it can eliminate the write). We need to inform the compiler that these reads/writes of device memory must be preserved and cannot be optimized out. D uses the volatileStore and volatileLoad intrinsics for this.

We can define these in our object.d:

pragma(LDC_intrinsic, "ldc.bitop.vld") ubyte volatileLoad(ubyte* ptr);
pragma(LDC_intrinsic, "ldc.bitop.vld") ushort volatileLoad(ushort* ptr);
pragma(LDC_intrinsic, "ldc.bitop.vld") uint volatileLoad(uint* ptr);
pragma(LDC_intrinsic, "ldc.bitop.vld") ulong volatileLoad(ulong* ptr);
pragma(LDC_intrinsic, "ldc.bitop.vst") void volatileStore(ubyte* ptr, ubyte value);
pragma(LDC_intrinsic, "ldc.bitop.vst") void volatileStore(ushort* ptr, ushort value);
pragma(LDC_intrinsic, "ldc.bitop.vst") void volatileStore(uint* ptr, uint value);
pragma(LDC_intrinsic, "ldc.bitop.vst") void volatileStore(ulong* ptr, ulong value);

Controlling the UART

With that set up, let’s figure out where QEMU’s UART device is located in memory so we can write to it.

The QEMU virt machine defines a number of virtual devices, one of which is a UART device. Looking through the QEMU device tree again in virt.dts, you’ll see the following:

uart@10000000 {
    interrupts = <0x0a>;
    interrupt-parent = <0x03>;
    clock-frequency = <0x384000>;
    reg = <0x00 0x10000000 0x00 0x100>;
    compatible = "ns16550a";
};

This says that a ns16550a UART device exists at address 0x10000000.

On real hardware the UART would need to be properly initialized by writing some memory-mapped configuration registers (for setting up the baud rate and other options). However the QEMU device does not require initialization. It emulates an ns16550a device, and writing to its transmit register is enough to cause a byte to be written over the UART (which appears on the console when simulating with QEMU). The transmit register for the ns16550a is the first mapped register, so it is located at 0x10000000.

In uart.d:

module uart;

struct Ns16550a(ubyte* base) {
    static void tx(ubyte b) {
        volatileStore(base, b);
    }
}

alias Uart = Ns16550a!(cast(ubyte*) 0x10000000);

Now in kmain, we can test the UART.

module main;

import uart;

void kmain() {
    Uart.tx('h');
    Uart.tx('i');
    Uart.tx('\n');
}
Compile and run:

$ make prog.bin
$ qemu-system-riscv64 -nographic -bios none -machine virt -kernel prog.bin
hi

Press Ctrl-A Ctrl-X to quit QEMU (the program will enter an infinite loop after returning from kmain).

Making a simple print function

Now we can just wrap the Uart.tx function up with a println function and we’ll have a bare-metal Hello world! in no time.

In object.d:

import uart;

void printElem(char c) {
    Uart.tx(c);
}

void printElem(string s) {
    foreach (c; s) {
        printElem(c);
    }
}

void print(Args...)(Args args) {
    foreach (arg; args) {
        printElem(arg);
    }
}

void println(Args...)(Args args) {
    print(args, '\n');
}

And in main.d:

void kmain() {
    println("Hello world!");
}
Compile and run:

$ make prog.bin
$ qemu-system-riscv64 -nographic -bios none -machine virt -kernel prog.bin
Hello world!

There you have it, (simulated) bare-metal hello world!

Some of the initialization we’ve done hasn’t been strictly necessary (we didn’t end up using any variables in the BSS), but it should set you up properly for writing more complex bare-metal programs. The next sections discuss some further steps.

Bonus content

Adding support for assertions and bounds-checking

If you try to use a D assert expression, you might notice that the linking step fails:

riscv64-unknown-elf-ld: dstart.o: in function `_D6dstart5kmainFZv':
dstart.d:(.text+0x3c): undefined reference to `__assert'

It is looking for an __assert function, so let’s create one in the object.d file:

size_t strlen(const(char)* s) {
    size_t n;
    for (n = 0; *s != '\0'; ++s) {
        ++n;
    }
    return n;
}

extern (C) noreturn __assert(const(char)* msg, const(char)* file, int line) {
    // convert a char pointer into a bounded string with the [0 .. length] syntax
    string smsg = cast(string) msg[0 .. strlen(msg)];
    string sfile = cast(string) file[0 .. strlen(file)];
    println("fatal error: ", sfile, ": ", smsg);
    while (1) {}
}

Now you can use assert statements!

D also supports bounds-checking, and internally the compiler will also call __assert when a bounds check fails. This means we also have working bounds checks now.

Try this in main.d:

void kmain() {
    char[10] array;
    int x = 12;
    println(array[x]);
}

Running it gives

fatal error: main.d: array index out of bounds

Bounds-checked arrays!

This code doesn’t print the line number because that requires converting an int to a string – something left as an exercise to the reader.
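
If you want a head start on that exercise, here is one possible approach (a sketch of my own, not code from the post): an integer overload of printElem that formats digits into a stack buffer.

// Hypothetical printElem overload for unsigned integers: write digits
// in reverse into a stack buffer, then print the resulting slice.
void printElem(ulong n) {
    char[20] buf = void;  // 2^64 - 1 has at most 20 decimal digits
    size_t i = buf.length;
    do {
        buf[--i] = cast(char)('0' + n % 10);
        n /= 10;
    } while (n != 0);
    printElem(cast(string) buf[i .. $]);
}

With something like this in place, __assert could pass cast(ulong) line to println (negative values would need their own handling).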

Enabling linker relaxation

Linker relaxation is an optimization in the RISC-V toolchain that allows globals to be accessed through the global pointer (stored in the gp register). This value is a pointer to somewhere in the data section, which allows instructions to load globals by directly offsetting from gp, instead of constructing the address of the global from scratch (which may require multiple instructions on RISC-V).

To enable linker relaxation we have to do three things:

  1. Modify the linkerscript so that it defines a symbol for the global pointer.
  2. Load the gp register with this value in the _start function.
  3. Enable linker relaxation in our compiler.

To modify the linkerscript we just add the following at the beginning of the .rodata section definition:

__global_pointer$ = . + 0x800;

This sets up the __global_pointer$ symbol (a special symbol that the linker assumes is stored in gp) to point 0x800 bytes into the data segment (RISC-V instructions can load/store values offset up to 0x800 bytes from the gp register in either direction in one instruction). This allows offsets from gp to cover most/all of static data.

Next we add to _start:

.option push
.option norelax
la gp, __global_pointer$
.option pop

We need to temporarily enable the norelax option, otherwise the assembler will optimize this to mv gp, gp.

Finally, we can remove the -mno-relax flag from the riscv64-unknown-elf-as invocation, and add -mattr=+m,+a,+c,+relax to the ldc2 invocation to enable linker relaxation in the compiler.

Removing unused functions

If you take a look at the disassembly of the program (make prog.list), you might notice there are definitions for functions that are never called. This is because those functions have been inlined, but the definitions were not removed. Functions/globals in D are always exported in the object file, even if they are marked private (I’m not really sure why). Luckily modern linkers can be pretty smart and it’s easy to have the linker remove these unused functions. Pass --function-sections and --data-sections to LDC to have it put each function/global in its own section (still within .text, .data etc.). Now if you pass the --gc-sections flag to the linker, it will remove any unreferenced sections (hence removing any unused functions/globals). With these flags I got the final “hello world” binary down to 160 bytes.
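
Concretely, the earlier compile and link commands might become something like this (same flags as described above):

$ ldc2 -Oz -betterC -mtriple=riscv64-unknown-elf -mattr=+m,+a,+c --code-model=medium --function-sections --data-sections -c dstart.d
$ riscv64-unknown-elf-ld -Tlink.ld --gc-sections start.o dstart.o -o prog.elf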

This is a basic form of optimization performed by the linker. There are more advanced forms of link-time optimization (LTO), which I won’t discuss in much detail. If you pass -flto=thin or -flto=full to LDC, the object files that it generates will be LLVM bitcode. Then you will need to invoke the linker with the LLVMgold linker plugin (or use LLD) so that it can read these files. With this method, the linker will apply full compiler optimizations across object files.

Thread-local storage and globals

Globals are thread-local by default in D. That means if you declare a global as int x; then whenever you access x, the compiler will do so through the system’s thread pointer (on RISC-V this is stored in the tp register). That means if you use a thread-local variable, you had better make sure tp points to a block of memory where x is located, and if you have multiple threads each thread’s tp should point to a distinct thread-local block (each thread will have its own private copy of x). I won’t explain in detail how to set that up here, but briefly, you’ll need to initialize the .tdata and .tbss sections for each thread in dstart, and load tp with a pointer to the current thread’s local .tdata.

To make a global shared across all threads, you need to mark it as immutable or shared. A variable marked as shared imposes some limits, and basically forces you to mark everything it touches as shared. You can still read/write it without checks, but at least you should be able to easily know if you are accessing a shared variable (and manually verify you have the appropriate synchronization). In a future version of D it is likely that directly accessing a shared variable will be disallowed, except through atomic intrinsics. If you have a lock to protect the variable, then you will need to cast away the shared qualifier manually, which isn’t perfect but forces the programmer to acknowledge the possible unsafety of accessing the shared global. You can always use the __gshared attribute as an escape hatch, which makes the global shared but does not make any changes to the type (no limitations). A global marked as __gshared is equivalent to a C global.
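
In code, the distinctions above look roughly like this:

int counter;             // thread-local by default: accessed through tp
shared int hits;         // one copy for all threads; 'shared' propagates
                         // through everything the variable touches
__gshared int flags;     // one copy for all threads, no type changes or
                         // checks (equivalent to a C global)
immutable int limit = 4; // freely shared, because it can never change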

I hope this provided a simple introduction to D for bare-metal programming, and that you might consider using D instead of C in some future project as a result. This post has only covered running in a simulated environment. In a future post I’ll show how to write bare-metal code for the VisionFive 2, a recently released RISC-V SBC produced by StarFive. Stay tuned! (now available)

If you want to see a larger example, I am developing an operating system called Multiplix in D. It has support for RISC-V and AArch64, and targets the VisionFive, VisionFive 2, Raspberry Pi 3, and Raspberry Pi 4 (and likely more boards in the future). Check it out! It is currently heavily in-progress, but I plan to make a post about it when it is further along.

The code from this post is available in my blog-code repository.




The Scourge of Airport Noise

August 30, 2023

EARLIER THIS MONTH I was in Berlin, boarding a plane at the new Berlin-Brandenburg Airport. Walking through the lobbies and concourses, something felt different. I couldn’t quite place it. The airport, only three years old, is spacious, clean, and well laid out. But it was more than that. It was something else.

Then it hit me. It was quiet! From the time we walked through the front doors, until the moment we arrived at the gate, not a single public address announcement played. Not one. The speakers were silent.

Every airport in the world should follow this model. Some, to be fair, are quieter than others: Copenhagen and Amsterdam, for example, keep announcements to a minimum. But on the whole, airports are some of the noisiest public spaces we have, and the loudspeaker is mainly to blame.

Sure, terminals are packed with wailing babies, chattering TVs, and airport architecture seemingly designed to amplify, rather than quash, the collective racket of hundreds of people. But it’s those public address announcements that are the most aggravating culprit. Ninety percent of them are useless in the first place, and they’re often delivered at a volume severe enough to shatter windows. And with all the various microphones and speakers targeting different sections of a terminal, it’s not uncommon to hear two or three announcements blaring at the same time.

The result of this, whether you sense it directly or not, is stress. And if there’s one thing the air travel experience needs less of, it’s stress.

Berlin-Brandenburg Airport.     Photo by the author.

The needlessness and redundancy of most announcements would be hilarious if it weren’t so annoying. And those few of any value are presented in such a tautological tangle as to be almost incomprehensible. Why say in ten words what you can say in a hundred?

At JFK, for instance, there’s an announcement that loops around every five minutes or so. It declares: “All areas of the terminal have been designated as smoke-free.” I’ll begin by asking if there’s anyone alive who’d be daft enough to assume they’re permitted to smoke in a terminal, an accommodation you’ll no longer find even at an airport in rural Pakistan. But listen, also, to the language. JFK is the ultimate melting pot, and I have a healthy suspicion that, to someone with limited (or no) English skills, a phrase like “designated as smoke-free” has about as much meaning as a bird call.

Then we have the security announcements. Did you know that my hometown airport, Boston-Logan, is home to a program called “SAFE,” or “Security Awareness For Everyone”? I know this because I’m told about it over and over again while sitting at the gate. “If you see something, say something.” Important advice there.

We also have the one that goes, “If a stranger approaches you about carrying a foreign object…” A what? I picture a toaster with wires coming out of it. “Would you mind taking this to Frankfurt for me?”

Meanwhile, “TSA has limited the items that may be carried through the security checkpoint,” we’re told at Los Angeles International. “Passengers are advised to contact their air carrier.” The pointlessness of this counsel needs no elaboration. Of the millions of travelers who’ve been subjected to this recording, I suspect the total number who’ve moved to action and “contacted their air carrier,” stands exactly at zero. To further fray our nerves and damage our hearing, it plays after you’ve gone through security.

Indeed, the overseers of LAX have created what might be the noisiest airport in America. Among the racket is an absurd series of PAs that play outside, on the sidewalks, where the concrete overpasses increase the decibel level exponentially. Anyone waiting for a hotel shuttle or the rental car bus is subject to a mind-melting cacophony of unintelligible blather.

And although Americans have a deep cultural affinity for infantilization and condescension — as if every citizen is too stupid to get on an airplane, or to even ride an escalator, without a loudly barked set of instructions — we aren’t the only offenders. If you’ve ever been to the domestic terminal in Medellin, Colombia, or to Mexico City’s terminal 2, you’ll know what I mean. Bring a good pair of headphones.

Schiphol Airport, Amsterdam.     Photo by the author.

Ironically, the actual loudest things at an airport — airplanes themselves — are almost never heard, buffered behind walls of glass and concrete. And it’s not until stepping aboard your plane that you can finally savor some silence.

Or that’s the idea, anyway. Alas, the airplane cabin has contracted this same scourge. Nowadays, the entire boarding process, followed by the first several minutes after takeoff, consists of nothing but announcements: safety videos that never end, ignored directives on how to stow your luggage, and those manifesto-length promotional speeches that last from the time the landing gear retracts until thirty thousand feet, sometimes in multiple languages. On a flight recently I counted thirteen separate PAs during the boarding process alone, from either the gate agent or a cabin attendant.

Here’s the thing: nobody is paying attention. All these PAs do is create noise and leave people frazzled.

On one airline, a pre-recorded briefing plays during descent, telling people to buckle up, stow their tables, shut their laptops and such. The recording ends, and a second later a flight attendant comes on and repeats the entire thing.

Bad enough, but the redundancy award goes to those announcements letting us know that “Flight attendants will now be coming through the aisles to [insert task here].” Seriously, we don’t need a heads-up on what you’re about to do any more than we need to know what color underwear you’re wearing. Simply do it.

All of this sonic pollution does not make passengers more attentive, more satisfied, or keep them better informed. What it does is make an already nerve-wracking experience that much more uncomfortable.

Berlin, we turn our weary ears to you.

 

Upper photo courtesy of Unsplash.





The Making of EP Thompson

When they told Frank Thompson they would shoot him he told them he was proud. I’m ready to die for democracy, he told them, as they led him out to the barren hills above Sofia. I’m proud to die in the fight against fascism, he told them, sixteen hundred miles from home. I give you the salute of freedom, said Major Frank Thompson, 23 years old. He raised his right fist. When Edward Palmer Thompson asked what happened to his brother, people told him this: he died so well that grown men wept.

Frank Thompson died a hero. His brother spent his life wondering why. As more details emerged, more seemed missing. Frank was a liaison between the British army and Bulgarian anti-fascist partisans. Their mission, which led to his capture and execution in 1944, was badly planned, poorly supported, sent out in “conditions of almost impossible difficulty”. None of the family’s questions received any answer from the state. Frank’s brother had to do his own research. In the process he became the most influential British historian of the past half-century: his Making of the English Working Class created an entire field. But before he wrote history, EP Thompson made it.

The Thompson brothers were different. Frank was frail; Edward was rugged. Frank was fluent in ten languages; Edward, the “duffer of the family”, knew only one. But they shared a home “quick with ideas and poetry and international visitors”: their father, a Methodist missionary who returned from India an anti-imperialist apostate, was personal friends with Jawaharlal Nehru, the Indian nationalist leader. They shared a love of poetry: a desire to one day be poets themselves. And growing up in the 1930s, poverty at home and fascism abroad, they shared a dream of a better world.

That dream led both brothers into the Communist Party as the Spanish Republic fell in 1939. And it inspired both, when war came, to enlist: Frank in the mysterious world of special operations, Edward as a lieutenant commanding tanks. Fighting his way through Italy, Lieutenant Thompson read a poem by Frank’s favourite poet, William Blake: “I will not cease from Mental Fight,/Nor shall my sword sleep in my hand,/Till we have built Jerusalem,/In England’s green and pleasant land”. Frank would often write to Edward about the world ordinary people would build after the war: there’s a spirit in Europe, he said, that’s never been seen before.

“Milton” by William Blake

And after the war, for a few years, that appeared prophetic: when Edward wrote his first book in 1947, the story of his brother’s life, he chose those words for the title. New governments across Europe promised something like the dream the brothers shared: societies where democracy and socialism and freedom were interchangeable words. Even in Britain things were changing: Labour marched into office in 1945 singing “The Red Flag” and it seemed a new England was closer than it had ever been.

After his death in 1993, a colleague said of Edward that he remained “a man of the Forties” all his life, defined to the end by that decade’s sufferings and hopes and disappointments. Which is perhaps to say that, when Frank Thompson died on 10 June 1944, he didn’t cease to act. What else is a historian but someone who can’t let go of the past?

A decade on from the end of the war, and it seemed like the past was all EP Thompson had. The 1945 government “sank with all hands in full view of the electorate”; the idealism of the 1940s soured to apathy or curdled to cynicism, the peace Thompson fought for withering in the shadow of the hydrogen bomb. In 1956, a Hungarian workers’ revolt was crushed under the treads of Soviet tanks. It was one more insult, Thompson told his party, to his brother’s memory. He resigned.


All through the next year, Thompson found himself reading William Blake. The poem “London” was a critique, Thompson wrote that year, of the system Blake saw being born two centuries before: the society founded on “the acquisitive ethic, which divides man from man, leads him into mental and moral captivity, destroys the sources of joy, and brings, as its reward, death”. It was that society, Thompson wrote, that endures today: capitalism. Blake had a different name for it. We’re living, Blake said, in the kingdom of the beast.

Blake was, Thompson thought, a frustrated revolutionary, retreating into a mystic “inner kingdom” after political defeat. But in Blake, and others like him, Thompson found a tradition he thought could “leaven” the soulless materialism of contemporary socialist movements – and a history of forgotten struggles that deserved to be illuminated. Those insights set him on a path that led, six years later, to the 1963 publication of his masterpiece: The Making of the English Working Class.

“I am seeking to rescue,” Thompson wrote, in a sentence that would define his career, “the poor stockinger, the Luddite cropper, the ‘obsolete’ hand-loom weaver… from the enormous condescension of posterity.” If Eric Hobsbawm was correct in his judgement that Thompson was the only historian he knew who possessed true genius, Making was the work that exhibited it. As a tutor in adult education, Thompson was known for turning the lectern over to his working-class students: Making attempted to do the same on a larger scale, centring the ideas and experiences of those whom historians traditionally overlooked. He called it “history from below”.

Making’s groundbreaking reinterpretation of English social history in the late 18th and early 19th century infuriated conservative scholars – and challenged their counterparts on the left. Class wasn’t a thing, Thompson argued, the product of blind material forces: “peasants flocking to factories, processed into so many yards of class-conscious proletarians”. Class was a relationship, “embodied in real people and in a real context”, in real struggles. And those struggles were, Thompson wrote, not confined to wages and conditions. Workers fought for autonomy, independence from employers, freedom from the state. The fight for democracy wasn’t, therefore, a sidetrack but a touchpaper smouldering under the symbolic order of the age – one lit by a host of tenacious and persecuted campaigners for democracy.

To the new doctrine of economic utility, workers contrasted their own set of values: a “moral economy” of custom that maintained, from bread riots to Luddite machine-breakers, that production was subordinate to human needs. Beyond these “conservative revolutionaries”, Thompson’s cast of radicals ran from the familiar – early feminists, romantic poets, democratic pamphleteers – to the bizarre: popular belief in the rights of “freeborn Englishmen” rested on the wholesale fantasy of a pre-Norman constitution. But – excepting religious dissent, dismissed by Thompson as “psychic masturbation” – he warned readers not to mistake victory for truth. “Economic development” was never neutral; progress for the rich was often disaster for the poor; and these, Thompson told readers, were not merely historical insights. “Causes which were lost in England,” he wrote, “might, in Asia or Africa, yet be won.”

For two decades it was cited in Britain more often than any other history book; in the 1980s Thompson was cited more than any other British historian worldwide. The book was reprinted again and again and again. One of his great regrets, Thompson later admitted, was that – for all his animus toward “the species academia superculosis” – his later histories were written for academic audiences. One of his abiding satisfactions was that Making was not.

It was for his students: a history book where their experiences mattered, just as they had in his classroom, to remind them of the collective histories that consumer culture was stripping away. It was for the young people of the movement Thompson supported, the New Left, to let them know where their struggle for a humane society came from – and perhaps, also, where it was going.


And it was written, in some sense, for Thompson himself. “The New Left has dispersed itself,” Thompson wrote a few months after Making‘s publication. “We failed.” Thompson’s idiosyncratic Marxism – literary, moralistic, a kind of Methodism without God – was out of step with the “theatrical and irrational” left he saw emerging in the sixties. Seven years on from Making, Thompson admitted he felt “the whole idiom and tradition I thought and worked within” had been rejected by the left. In the next decade, Thompson compared himself to an old steam engine, swept off the tracks; to a great bustard, struggling to fly; to Blake, a mystic in a rationalist age. The conclusion is hard to avoid. EP Thompson wrote about history’s losers because he was one.

And just as Thompson drifted to the fringes, he saw the mainstream converge: Labour and Conservatives uniting in service to the emerging “managerial state”. New technologies had enabled, Thompson thought, unprecedented concentrations of power. The “arteries” of Britain were hardening, one by one: politics, industry, media, universities. Zones of freedom, of “open conflict of values and ideas” winnowed, year on year. In a 1971 essay, Thompson linked Harold Wilson’s managerialism with his administration’s continually shrinking political horizons: “The art of the possible can only be restrained from engrossing the whole universe if the impossible can find ways of breaking back into politics, again and again.” EP Thompson couldn’t find a way forwards. So, once again, he turned back.

But, to the surprise of his friends, it wasn’t to write a sequel to Making, a book on the Victorian socialist thinkers Thompson loved. He chose, instead, to study a time about which he knew little, landing, he said “like a parachutist into unknown territory”. The early 18th century was a harsh time: an era of private wealth and public decay; of a parasitic state owned wholesale by the predatory rich. In Whigs and Hunters Thompson investigated the “Black Act”, a law creating over 300 capital offences. The act, effectively criminalising whole communities of poor forest dwellers, was disproportionate by design. “Stability,” Thompson wrote, “no less than revolution, may have its own kind of Terror.”

Whigs and Hunters is a bleak, pessimistic book: published in 1975, it reflected a changed world – and an author that was changing, too. The Black Act shows us the harm done by bad laws, Thompson wrote, and by that fact it indicates that the converse is possible: that law isn’t, as conventional Marxism teaches, a simple mechanism of class rule. “The rule of law,” Whigs and Hunters concluded, “is an unqualified human good.” Making’s “freeborn Englishmen”, with their fantasised Anglo-Saxon liberties, look ludicrous to modern eyes. But they left an inheritance, Thompson realised, almost beyond price; a state penned by laws, watched by juries, circumscribed by popular suspicion of militaries and police. These fragile barriers, erected by “intense historic struggles”, form “precedents signed in blood”, preserving, imperfectly and unevenly, a political culture in which traditions of justice and honesty can survive.

And it was all slipping away. Under successive governments, juries were stacked and neutered; freedom of speech asphyxiated by official secrecy; the police murder of the teacher-activist Blair Peach in 1979, the investigation into which was obstructed by an increasingly aggressive, and unaccountable, police hierarchy. “There has never been such a bonfire of our ancient laws as has taken place in the last decade, and, dancing around the leaping flames,” Thompson wrote, “we find, not the extreme Left, but the ‘law and order brigade’.” Ruling-class “anarchs” appalled Thompson. He found popular indifference far harder to take. Anaesthetised by television, “gorged on consumer goodies and blood”, the tradition of dissidence that Making chronicled seemed dead. “An operation has been done on our culture,” he wrote in 1979, “and the guts taken out.” The arteries keep hardening; the walls closed in. And the man famous for rescuing the past began to wonder if it wasn’t the present that stood most in need of rescue.

On a 1976 visit to India, led by Indira Gandhi, Nehru’s daughter and a lifelong family friend, Thompson saw his worst fears realised. Empowered by a declaration of national emergency, Gandhi was ruling like a dictator: civil liberties suspended; dissidents crushed. In Making, Thompson expressed the hope that where the acquisitive society had triumphed in England, in the Global South something better might be born. But in the India of the Emergency, he saw something worse: a marriage between the managed society and the police state; the Black Act resurrected in the nuclear age. And the cause that lost in India could lose in England, too. Thompson’s suspicion of the secret state wasn’t groundless: we now know he was under police surveillance from when he was 18 years old. After India, the old fears took on apocalyptic dimensions. He told a 1979 conference of historians that, quite soon, they would all be in jail.

Indira Gandhi, prime minister of India, declared a state of emergency in 1975, allowing her to imprison her opponents. Photo by Bettmann/Getty Images

So it was something other than curiosity that brought Thompson, in the middle of the greatest crisis of his life, in late 1978, to Bulgaria, to Frank: to the mystery that made him a historian. As he retraced his brother’s final journey, Thompson found shadowy figures standing in his way. He called them anti-historians. Shredded documents, censored records, vicious rumours, convenient lies: if historians recover the past, anti-historians work to destroy it. Thompson still found enough to be deeply disturbed.

His brother’s mission was ill timed, badly planned and under-supplied: almost as if he was set up to fail. Thompson began to suspect he was: records suggested neither the Foreign Office nor the Soviets wanted the partisans to take power after the war. But even if Frank’s defeat was preordained, his death wasn’t. Eighteen days passed between Frank’s capture and his execution; 18 days that, Thompson discovered, the Bulgarian government had spent in continual communication with Allied intelligence, negotiating the country’s impending declaration of neutrality. Within that context, the state execution of a uniformed British officer seemed an unbelievable provocation. Unless it wasn’t a provocation at all, but a diplomatic headache neatly resolved: “somebody winked”. Frank Thompson died a hero because someone preferred it that way.

As for the story the Soviets told about his brother – the brave speeches, the final salute – it was just another work of the anti-historians. Just another lie that power tells. Thompson had long been wary of the state; after Bulgaria his scepticism hardened to contempt. His politics drifted, in the process, away from Marx’s contending classes, towards William Blake’s “radical constituency of ‘us’ or ‘the people’ against the ‘them’ of the State, or of Bishops”, as Thompson later described it, “or of the servitors of the Beast”. On his return to England, in the spring of 1980, Lieutenant Thompson went back to war.


He was outnumbered several million to one. The Cold War had reached fever pitch. Europe hosted hundreds of thousands of warships, tanks and planes; ten million men under arms; uncounted thousands of nuclear bombs. The West and the Soviet Union had at their disposal spies, police, anti-historians – the power, if they wished, to destroy the world. Once again, all EP Thompson had was the past.

It was all he needed. For all Thompson’s professed materialism, he never wavered in his conviction that history is not just a record of action but a means of exchange, a place of meeting: deep underneath the earth we walk on, in places the powerful never consider and the cynics can never reach, course underground rivers of struggle, friendship, justice, love. And that there are times in history when “the stored energies of the dead flow back into the living”, when those forgotten rivers burst to earth. Thompson didn’t know if he was living in such a time. But the hour was late. For four decades he’d fought and lost. The treaties had been abrogated; the powers were gearing for war – a war, many analysts thought, that would come quickly, and leave nothing behind. He had to try.

If the defining symbol of the 18th century was the constitution, Thompson wrote, ours is the bomb. Nuclear weapons necessitate centralisation, secrecy, deceit; and these in turn induce the hatred and fear justifying yet more weapons. As the nuclear state destroys the external forms of democracy, the city-killers themselves strike, invisibly, at its moral core. There’s something worse than a society which chooses, collectively, to burn millions of men, women and children alive. And that’s a society – like England, like America, like Russia – that doesn’t think it’s a choice at all. Thompson calls it exterminism: “the civilisation of death”.

European Nuclear Disarmament launched in April 1980. Calling for a nuclear-free Europe on both sides of the Iron Curtain, it was Thompson’s last campaign, and the strangest. A coalition of trade unionists, feminists, dissident communists, religious pacifists – even a poet or two – it didn’t look like anything Thompson had organised before. It looked, in fact, oddly like the radicalism he wrote about in Making. That was, Thompson chuckled in a later interview, why it worked. Modern means of communication rested firmly in the hands of corporate power and the managerial state. But not, Thompson noted, pre-modern means.

In the period covered by Making, the indispensable means of agitation was the pamphlet. In 1980, Thompson turned back the clock: his pamphlet “Protest and Survive” sold 100,000 copies. In the months that followed, Thompson was speaking at trade union branches and in public squares and from half the pulpits of England, the unclosed arteries of an alternative nation. Thompson’s life was a sequence of demonstrations and conventions and yet more pamphlets: he was quoting Cobbett against Thatcher and Milton against Reagan; writing a Swiftean sci-fi satire against the nuclear arms race (the historian Perry Anderson called it his most revealing work). Thompson was speaking directly to the people of England. And they were listening. In polls of public opinion, he was closing in on the Queen. He was reading Blake again, but not on his own: 100,000 people listened in Trafalgar Square as Thompson reissued a statement of defiance made 200 years before: “Against the kingdom of the beast, we witnesses rise.”

And all over Europe, they were rising. Hundreds of thousands marched in Paris, Rome, Bonn, Madrid – a movement Washington couldn’t manage, that Moscow couldn’t control. Thompson’s name was on the lips of generals and politicians, and it was not long before the anti-historians were out. From west and east rumours circulated that Thompson was a proxy, acting for someone else. They were right. In 1984, in an Italian city he had fought in 40 years before, he talked about the dreams he had then, about the brother he lost. If we can win this, he told them, “we will have liberated the intentions of the dead”.

He lost. But the defeat looked a little like victory. “Good years for peace,” Thompson wryly observed, “are not good years for the peace movement.” Whether this helped push the superpowers back into dialogue was a question, Thompson thought, for future historians. But when Mikhail Gorbachev became leader of the USSR in 1985, and asked his advisers if there was another way to live, it’s said they told him of a man called Thompson.

By the time the “polluting cloud” of the Cold War finally lifted, Thompson was dying. He spent his final years finishing the research he’d neglected. In one, his long-awaited book on Blake, Thompson corrected himself. Blake’s “inner kingdom” wasn’t an escape from politics but a precondition for it: a system of alternative values without which social change would be literally unimaginable. That doesn’t sound very Marxist: he wasn’t sure, Thompson told one interviewer, if he was a Marxist any more. Maybe, he suggested in one talk, the old political labels were obsolete: future politics will have to, as it did in Making, consist in part of finding the right names. And this, he said, was a job for poets. But Thompson wrote poetry all his life. And in his sequel to Making, the 1991 collection Customs in Common, he suggested the “conservative revolutionaries” he studied might make an unexpected return.

In an era of ecological crisis, he wrote, we may need the “rediscovery, in new forms, of a kind of ‘customary consciousness’, in which once again successive generations stand in apprentice relation to each other, in which material satisfactions remain stable (if more equally distributed) and only cultural satisfactions enlarge”. In a society that increasingly conformed to Thompson’s darkest visions, that work of rediscovery would be difficult. It could not be done alone. “What passes on the daily screen is so distracting,” Thompson wrote in 1975, “the presence of the status quo so palpable, that it is difficult to believe that any other form of energy exists.” But there will always be moments when the impossible breaks in – when “we become aware of other and older reserves of energy glowing all around us, just as, when the street-lights are dowsed, we become aware of the stars”. The past needs the present. The future needs the past.

At least one of Thompson’s youthful hopes was not disappointed: he did “grow more dangerous as he grew old”. His first book asked the security services why his brother died. So did Beyond a Frontier (1997), his last. Thompson had refused, to the end, to compromise with power; with the acquisitive society and the civilisation of death. In his life, as in all lives, the dead had not ceased to act. “Never, on any page of Blake,” Thompson wrote, a few months before his death on 28 August 1993, “is there the least complicity with the kingdom of the beast.”





Things I wish I knew before moving 50K lines of code to React Server Components

July 19, 2023


React Server Components are a lot. We recently rethought our docs and rebranded Mux and, while we were at it, moved all of mux.com and docs.mux.com over to Server Components. So… believe me. I know. I also know that it’s possible and not that scary and probably worth it.

Let me show you why by answering the following questions: Why do Server Components matter, and what are they good for? What are they not good for? How do you use them, how do you incrementally adopt them, and what kind of advanced patterns should you use to keep them under control? By the end of all this, you should have a pretty good idea of whether you should use React Server Components and how to use them effectively.

One great way to understand React Server Components is to understand what problem they’re solving. So let’s start there.

Long ago, in days of yore, we generated websites on servers using tech like PHP. This was great for fetching data by using secrets and doing CPU-heavy work on big computers so that clients could just get a nice, light HTML page, personalized to them.

Then, we started wondering: What if we wanted faster responses and more interactivity? Every time a user takes an action, do we really want to send cookies back to the server and make the server generate a whole new page? What if we made the client do that work instead? We can just send all the rendering code to the client as JavaScript!

This was called client-side rendering (CSR) or single-page applications (SPA) and was widely considered a bad move. Sure, it’s simple, which is worth a lot! In fact, for a long time, the React team recommended it as the default approach with their tool, create-react-app. And for frequently changing, highly interactive pages like a dashboard, it’s probably enough. But what if you want a search engine to read your page, and that search engine doesn’t execute JavaScript? What if you need to keep secrets on a server? What if your users’ devices are low-powered or have poor connections (as so many do)?

This is where server-side rendering (SSR) and static site generation (SSG) came in. Tools like Next.js and Gatsby used SSR and SSG to generate the pages on the server and send them to the client as HTML and JavaScript. The best of both worlds. The client can immediately show that HTML so the user has something to look at. Then, once the JS loads, the site becomes nice and interactive. Bonus: search engines can read that HTML, which is cool.

This is actually quite good! But there are still a few problems to solve. First: most SSR/SSG approaches send all the JavaScript used to generate the page to the client, where the client then runs it all again and marries that HTML with the JavaScript that just booted up. (This marriage, by the way, is called hydration — a term you’ll see a lot in this neck of the woods.) Do we really need to send and run all that JavaScript? Do we really need to duplicate all of the rendering work just to hydrate?

Second, what if that server-side render takes a long time? Maybe it runs a lot of code, maybe it’s stuck waiting for a slow database call. Then the user’s stuck waiting. Bummer.

This is where React Server Components come in.

React Server Components (RSCs) are, unsurprisingly, React components that run on the server instead of on the client. The “what” isn’t nearly as interesting as the “why,” though. Why do we want RSCs? Well, frameworks that support RSCs have two big advantages over SSR.

First, frameworks that support RSCs give us a way to define where our code runs: what needs to run only on the server (like in the good ol' PHP days) and what should run on the client (like SSR). These are called Server Components and Client Components, respectively. Because we can be explicit about where our code runs, we can send less JavaScript to the client, leading to smaller bundle sizes and less work during hydration.

The second advantage of RSC-driven frameworks: Server Components can fetch data directly from within the component. When that fetch is complete, Server Components can stream that data to the client.

This new data-fetching story changes things in two ways. First, fetching data in React is way easier to think about now. Any Server Component can just… fetch data directly, using a Node library or the fetch function we all know and love. Your user component can fetch user data, your movie component can fetch movie data, and so on and so forth. No more using a library or useEffect to manage complex loading states (react-query, I still love you), and no more fetching a bunch of data at the page level with getServerSideProps and then drilling it down to the component that needs it.

Second, it solves the problem we talked about earlier. Slow database call? No need to wait; we’ll just send that slow component to the client when it’s ready. Your users can enjoy the rest of the site in the meantime.

Bonus round: What if you need to fetch data on the server in response to a user’s action on the client (like a form submission)? We have a way to do that, too. The client can send data to the server, and the server can do its fetching or whatever, and stream the response back to the client just like it streamed that initial data. This two-way communication isn't technically React Server Components — this is React Actions — but it’s built on the same foundation and is closely related. We’re not going to talk much about React Actions here, though. Gotta save something for the next blog post.

Up until now, I’ve been painting a pretty rosy picture. If RSCs are so much better than CSR and SSR, why wouldn’t you use them? I was wondering the same thing, and I learned the hard way — as the title of this post suggests — that there is indeed a catch. A few, actually. Here are the three things we spent the most time on when migrating to React Server Components.

Turns out that, as of right now, CSS-in-JS doesn’t work in Server Components. This one hurt. Moving from styled-components to Tailwind CSS was probably the biggest part of our RSC conversion, although we thought it was worth the trouble.

So, if you went all-in on CSS-in-JS, you’ve got some work to do. At least it’s a great opportunity to migrate to something better, right?

You can access React Context only in Client Components. If you want to share data between Server Components without using props, you’ll probably have to use plain ol' modules.

And here’s the kicker: If you want some sort of data to be limited to a subtree of your React application, there is no great mechanism for doing that in Server Components. (If I'm wrong, please correct me. I really miss this.)
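For what it’s worth, here’s a minimal sketch of the plain-module approach (the file and names here are just illustrative). Note the tradeoff: the value is shared app-wide, which is exactly why it can’t replace subtree-scoped Context.

Sharing data between Server Components with a plain module

// theme.js
export const theme = { background: 'green', border: 'darkgreen' }

// PreFooter.jsx, a Server Component: no Context needed
import { theme } from './theme.js'

export default function PreFooter() {
  // every importer sees the same value, app-wide
  return <footer style={{ background: theme.background }}>...</footer>
}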

On our docs site, this wasn’t too big of a problem. The places where we used React Context heavily were also the places that were highly interactive and needed to be shipped to the client anyway. Our search experience, for example, shares state like queryString and isOpen throughout the component tree.

On our marketing site, though, this really got us. Our marketing site has areas that share a theme. For example, each component in our pre-footer needs to understand that it is on a green background so it knows to use the dark green border. Normally, I would’ve reached for Context to share that theme state, but since these are largely static components that are ideal candidates for Server Components, Context wasn’t an option. We worked around this by leaning hard on CSS custom properties (which is probably better, since this is a styling concern, not a data concern). But other developers may not be so lucky.
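Here’s roughly what that workaround looks like (component names are ours, for illustration): the wrapper sets a custom property, and descendants read it with var(), no Context required.

Theming a subtree with CSS custom properties

function PreFooter() {
  // the green section defines the theme for everything inside it
  return (
    <section style={{ '--border-color': 'darkgreen' }}>
      <Card />
    </section>
  )
}

function Card() {
  // any descendant can read the property, however deeply it's nested
  return <div style={{ border: '1px solid var(--border-color)' }}>...</div>
}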

Fundamentally, RSCs give you more flexibility about where your code runs and what your data fetching looks like. With flexibility comes complexity. No tool can completely paint over this complexity, so at some point, you’re going to have to understand it and confront it and communicate it to other developers.

Every time a new developer picked up our codebase, the questions came up: “What’s running on the server? What’s running on the client?” Every PR had feedback regarding something accidentally/unnecessarily shipped to the client. I frequently added console logs to my code to see if the server or the client would do the logging. And don’t even get me started on the complexity of caching.

This has gotten better with practice and with reliable patterns. So let’s talk about that. How do we use React Server Components? How do we suggest migrating incrementally? How do we do tricky things without creating an illegible hairball of spaghetti code?

You haven’t been scared away yet? Think the pros outweigh the cons? Great! Let’s dive in, starting with the basics.

As of the time of writing, the only production-ready implementation of RSCs is Next.js 13’s new app directory. You could roll your own RSC framework, but if you’re the kind of developer who does that, you’re probably not reading my blog post. Anyway, some notes here might be a bit specific to Next.js.

The mental model of Server Components may be complicated, but the syntax is blissfully simple. By default, any component you write in Next.js 13’s new app directory will be a Server Component. In other words, by default, none of your page’s code is getting sent to the client.

A basic Server Component

function Description() { 
  return (
    <p>
      None of this code is getting sent to the client. Just the HTML!
    </p>
  )
}

Add async to that Server Component and you can just… fetch data! Here’s what that might look like:

A Server Component with data fetching

async function getVideo(id) {
  const res = await fetch(`https://api.example.com/videos/${id}`)
  return res.json()
}

async function Description({ videoId }) {
  const video = await getVideo(videoId)
  return <p>{video.description}</p>
}

There’s one last ingredient to really unlock the power of RSCs. If you don’t want to be stuck waiting for one slow data fetch, you can wrap your Server Components in React.Suspense. React will show the client a loading fallback, and when the server is done with its data fetching, it will stream the result to the client. The client can then replace the loading fallback with the full component.

In the example below, the client will see “loading comments” and “loading related videos.” When the server is done fetching the comments, it will render the <Comments /> component and stream the rendered component to the client; likewise with related videos.

A Server Component with data fetching and streaming

import { Suspense } from 'react'

async function VideoSidebar({ videoId }) {
  return (
    <>
      <Suspense fallback={<p>loading comments...</p>}>
        <Comments videoId={videoId} />
      </Suspense>
      <Suspense fallback={<p>loading related videos...</p>}>
        <RelatedVideos videoId={videoId} />
      </Suspense>
    </>
  )
}

Embracing React.Suspense has advantages beyond streaming data when it’s ready. React can also take advantage of Suspense boundaries to prioritize hydrating certain parts of an app in response to user interaction. This is called selective hydration, and is probably a topic better left to the experts.

Now let’s say you have some code that needs to run on the client. For example, maybe you have an onClick listener, or you’re reacting to data stored in useState.

A component gets shipped to the client in one of two ways. The first: add “use client” at the top of a file, and that module will be shipped to the client so it can respond to user interaction.

A basic Client Component

"use client"
import { useState } from 'react'

function Counter() {
  const [count, setCount] = useState(0)
  const increment = () => setCount(count + 1)

  return (
    <button onClick={increment}>
      The count is {count}
    </button>
  )
}

The second way a component gets shipped to the client is if it’s imported by a Client Component. In other words, if you mark a component with “use client”, not only will that component be shipped to the client, but all the components it imports will also be shipped to the client.

(Does this mean that a Server Component can’t be a child of a Client Component? No, but it’s a little complicated. More on that later.)

If it’s helpful, you can think of it this way: “use client” is telling your bundler that this is the client/server boundary. If that’s not helpful, well, ignore the last sentence.

We can leverage this second way to solve a common problem. Let’s say you want to use a library that doesn’t yet support React Server Components, so it doesn’t have “use client” directives. If you want to make sure that library ships to the client, import it from a Client Component, and it will be shipped to the client too.

Converting a library to a Client Component

"use client"



import MuxPlayer from "@mux/mux-player-react"

function ClientMuxPlayer(props) {
  return <MuxPlayer {...props} />
}

Let’s take a step back and summarize.

Server Components are the brave new React world. They’re great for fetching data and running expensive code that you don’t want or need to send to the client: rendering the text of a blog post, for example, or syntax-highlighting a code block. When convenient, you should leave your code as Server Components to avoid bloating your client bundle.

Client Components are the React you know and love. They can be server-side rendered, and they’re sent to the client to be hydrated and executed. Client Components are great when you want to react to user input or change state over time.

If your whole app was made of Client Components, it would work just like it used to with yesterday’s SSR frameworks. So don’t feel pressured to convert your whole app to Server Components all at once! Adopt them incrementally in places that would stand to gain the most. And… speaking of incremental adoption…

This is the part of the show where folks tend to say, “Neat! But this seems like a lot of work, and I don’t have time to rewrite my whole codebase.” Well, I’m here to tell you that you don’t need to. Here’s the three-step playbook we used to bring most of our code to Server Components:

  1. Add the “use client” directive to the root of your app
  2. Move the directive as low in the rendering tree as you can
  3. Adopt advanced patterns when performance issues arise

Let’s walk through that.

Step 1 is exactly what it sounds like: if you’re in Next.js 13, go to your top-level page file and plop in a “use client” at the top. Yup. That’s it. Your page works just like it used to, except now you’re ready to take on the world of Server Components!

video/page.jsx

"use client"

export default function App() {
  return (
    <>
      <Player />
      <Title />
    </>
  )
}

Got any server-side data fetching? We can’t do that from a Client Component, so we’re going to add a Server Component. Let’s add it as a parent of the Client Component. That Server Component will perform the data fetching and pass it into our page. Here’s what that will look like:

video/page.jsx


// this Server Component does the server-side data fetching...
import VideoPageClient from './page.client.jsx'

async function fetchData() {
  const res = await fetch('https://api.example.com')
  return await res.json()
}

// ...and passes the result down to the Client Component as a prop
export default async function Page() {
  const data = await fetchData()
  return <VideoPageClient data={data} />
}

video/page.client.jsx


"use client"

export default function App({ data }) {
  <>
    <Player videoId={data.videoId} />
    <Title content={data.title} />
  </>
}

Next, take that “use client” directive and move it from that top-level component into each of its children. In our example, we’ll be moving it from our <VideoPageClient /> component into our <Player /> and <Title /> components.

video/Player.jsx

"use client"
import MuxPlayer from "@mux/mux-player-react"

function Player({ videoId }) {
  return <MuxPlayer streamType="on-demand" playbackId={videoId} />
}

export default Player

video/Title.jsx

"use client"

function Title({ content }) {
  return <h1>{content}</h1>
}

export default Title

And repeat! Except… neither <Player /> nor <Title /> has children into which we can push the “use client” directive, so let’s just remove the directive and see what happens.

<Title /> has no issues, because <Title /> doesn’t require any client-side code and can be shipped as pure HTML. Meanwhile, <Player /> throws an error.

Great. That’s as low as we can go. Let’s restore “use client” to the <Player /> component to address that error and call it a day.

See? That wasn’t too bad. We’ve moved our app to Server Components. Now, as we add new components and refactor old ones, we can write with Server Components in mind. And we’ve saved a bit of bundle size by not shipping <Title />!

Steps 1 and 2 should be enough for most cases. But if you’re noticing performance issues, there are still some wins you can squeeze out of your RSC conversion.

For example, when we migrated our docs site to RSCs, we leaned on two patterns to unlock deeper gains. The first was wrapping key Server Components in Suspense to enable streaming of slow data fetches (as demonstrated earlier). Our whole app is statically generated except for the changelog sidebar, which comes from a CMS. By wrapping that sidebar in Suspense, the rest of the app doesn’t have to wait for the CMS fetch to resolve. Beyond that, we leveraged Next.js 13’s loading.js convention, which uses Suspense/streaming under the hood.
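If you haven’t seen it, the loading.js convention is delightfully small: drop a loading file next to a page, and Next.js wraps that page in a Suspense boundary with your component as the fallback. A sketch (the path below is just an example):

app/changelog/loading.jsx

// Next.js renders this fallback while app/changelog/page.jsx
// is still waiting on its data
export default function Loading() {
  return <p>loading changelog...</p>
}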

The second optimization we applied was creatively rearranging Client and Server Components to ensure that large libraries (like Prism, our syntax highlighter) stayed on the server. And speaking of creatively rearranging Client and Server Components…

We established earlier that any component imported from a Client Component would itself become a Client Component. So… how do you make a Server Component a child of a Client Component? Long story short, pass Server Components as children or props instead of importing them. The Server Component will be rendered on the server, serialized, and sent to your Client Component.

This, imo, is the hardest thing to wrap your head around in this whole RSC mess. It gets easier with practice. Let’s check out some examples, starting with the wrong way.

How NOT to mix Client and Server Components

"use client"



import ServerComponentB from './ServerComponentB.js'

function ClientComponent() {
  return (
    <div>
      <button onClick={onClickFunction}>Button</button>
      {}
      <ServerComponentB />
    </div>
  )
}

By importing ServerComponentB in a Client Component, we shipped ServerComponentB to the client. Oh no! To do this properly, we have to go up a level to the nearest Server Component — in this case, ServerPage — and do our work there.

How to mix Client and Server Components

import ClientComponent from './ClientComponent.js'
import ServerComponentB from './ServerComponentB.js'

// option 1: pass the Server Component as children
function ServerComponentA() {
  return (
    <ClientComponent>
      <ServerComponentB />
    </ClientComponent>
  )
}

// option 2: pass the Server Component as a prop
function ServerPage() {
  return (
    <ClientComponent
      content={<ServerComponentB />}
    />
  )
}

Here’s a pattern we use a lot when we want part of a component’s functionality to stay on the server. Let’s say we’re making a <CodeBlock /> component. We might want the syntax highlighting to stay on the server so we don’t have to ship that large library, but we might also want some client functionality so that the user can switch between multiple code examples. First, we break the component into two halves: CodeBlock.server.js and CodeBlock.client.js. The former imports the latter. (The names could be anything; we use .server and .client just to keep things straight.)

components/CodeBlock/CodeBlock.server.js


// this expensive import stays on the server...
import Highlight from 'expensive-library'
import ClientCodeBlock from './CodeBlock.client.js'
import { example0, example1, example2 } from './examples.js'

function ServerCodeBlock() {
  // ...because the example markup is generated here, on the server,
  // and handed to the Client Component as a prop
  return (
    <ClientCodeBlock
      renderedExamples={[
        <Highlight code={example0.code} language={example0.language} />,
        <Highlight code={example1.code} language={example1.language} />,
        <Highlight code={example2.code} language={example2.language} />
      ]}
    />
  )
}

export default ServerCodeBlock

components/CodeBlock/CodeBlock.client.js

"use client"
import { useState } from 'react'

function ClientCodeBlock({ renderedExamples }) {
  // the interactive bits live on the client; we just swap
  // between examples the server has already rendered
  const [currentExample, setCurrentExample] = useState(0)
  
  return (
    <>
      <button onClick={() => setCurrentExample(0)}>Example 1</button>
      <button onClick={() => setCurrentExample(1)}>Example 2</button>
      <button onClick={() => setCurrentExample(2)}>Example 3</button>
      { renderedExamples[currentExample] }
    </>
  )
}

export default ClientCodeBlock

Now that we have those two components, let’s make them easy to consume with a delightful file structure. Let’s put those two files in a folder called CodeBlock and add an index.js file that looks like this:

components/CodeBlock/index.js

export { default } from './CodeBlock.server.js'

Now, any consumer can import CodeBlock from 'components/CodeBlock' and the client/server split stays invisible to them.
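A consumer can then use it like any other component, without ever knowing there are two halves:

Consuming the CodeBlock component

import CodeBlock from 'components/CodeBlock'

export default function DocsPage() {
  // renders the server half, which renders the client half
  return <CodeBlock />
}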

So how do you tell whether a given component actually ran on the server or the client? Honestly, at first, we just added console.log to our code during development and checked to see if that log came out of the server or the web browser. This was enough to begin with, but we did eventually find a better way.

If you want to be extra sure that your Server Component will never get included in a bundle, you can import the server-only package. This is extra handy if you want to make sure a large library or a secret key doesn’t end up where it shouldn’t. (Though if you’re using Next.js, it will protect you from accidentally shipping your environment variables.)
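Using it is a one-line import. Here’s a sketch (the endpoint and env var are made up): if this module ever sneaks into a Client Component’s import graph, the build fails loudly instead of quietly shipping the secret.

Guarding a module with server-only

import 'server-only'

// the build fails if a Client Component imports this file
export async function getStats() {
  const res = await fetch('https://api.example.com/stats', {
    headers: { Authorization: process.env.PRIVATE_API_KEY },
  })
  return res.json()
}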

Using server-only also had another subtle but meaningful benefit for us: legibility and maintainability. Maintainers who see server-only at the top of a file know exactly where that file is running without having to keep a complete mental model of the component tree.

At the end of the day, React Server Components don’t come for free. It’s not just those gotchas surrounding CSS-in-JS or React Context. It’s also the added complexity: understanding what’s running on the server and what’s running on the client, understanding hydration, incurring infrastructure costs, and of course, managing the code complexity (especially when mixing Client and Server Components). Every facet of complexity adds another surface for bugs to sneak in and for code to become less maintainable. Frameworks reduce this complexity, but they don’t eliminate it.

When deciding whether to adopt RSCs, weigh these costs against the benefits: smaller bundle sizes and faster execution (which can be critical to SEO), or advanced data-loading patterns that can be used to optimize complex, data-heavy sites. Jeff Escalante, trying to answer the same question, nailed this tradeoff in their Reactathon talk.

If your team is ready to take on the mental overhead and the performance benefits are worthwhile, then RSCs might just be for you.




from Hacker News https://ift.tt/8jSuMHm