Wednesday, June 30, 2021

eSafety says tweeting commissioner will not qualify as a formal Online Safety Act request

Australian eSafety Commissioner Julie Inman Grant is set to receive sweeping new powers in early 2022 as part of the Online Safety Act that passed Parliament last month. Among other things, the new Act extends the Commissioner's cyber takedown function to adults, granting the power to issue takedown notices directly to the services hosting abusive content and to the end users responsible for it.

The new powers have been labelled overbearing. As one Twitter user put it, the Commissioner is imminently receiving the "master on/off switch to the internet". Of concern to many is that it is not yet known what test or criteria will determine whether content warrants removal. There is much to take into account, especially given that much of "Australian culture" includes using a curse word as a term of endearment; that kind of tone can be hard to ascertain from a character-limited post.

The Act will formalise a voluntary scheme that eSafety already has in place. The agency has received 3,600 adult cyber abuse-related requests since it began taking them informally in 2017. Only 72 of them, however, were considered by eSafety to meet its existing threshold for "real harm". One of them, Inman Grant told the Senate in May, was "horrific", and a few involved domestic violence and stalking.

This week, Inman Grant found herself amid a Twitter dispute when she stepped in to offer advice to an individual who explicitly tagged her for help. The incumbent eSafety Commissioner then allegedly blocked another individual who claimed they were simply disagreeing with the first individual's vaccination opinions.

"Part of eSafety's role is to provide education, support, and advice. We frequently offer information to those in distress -- including offering advice about using the reporting tools available on the platforms," an eSafety spokesperson told ZDNet.

"Although we don't yet have laws in place that allow us to deal with serious adult cyber abuse, currently we can help informally by providing support and guidance on what to do."

The eSafety spokesperson did not respond, however, to questions on whether a banhammer would be swiftly waved over such disputes once the scheme is formalised.

"In this case, the eSafety Commissioner was tweeted at by a person in distress, and the Commissioner provided our standard advice, including encouraging people to report an issue to the platform in the first instance," they said. 

"This information is also available on the eSafety website, and advice that Twitter provides through its safety centre. This advice did not involve use of the Commissioner's powers, as tweeting at us (as described above) does not constitute a report that enlivens our powers."

The spokesperson then reiterated the office would take its obligations seriously under the Act and said the new laws would be critical in helping more Australians who are experiencing online harm. 

They also said the complaints mechanism for reporting adult cyber abuse would be robust and that a simple tag of eSafety or the eSafety Commissioner in posts or comments on social media would not be treated as a formal report, as per its current practice.




from Latest Topic for ZDNet in... https://ift.tt/3hrcU7Q

Mass Observation Project: recording everyday life in Britain

Drawing of an Observer's home, taken from the MOP

The Mass Observation Project (MOP) is a unique national life writing project about everyday life in Britain, capturing the experiences, thoughts and opinions of everyday people in the 21st century.

Launched in 1981, the project continues to commission new research and is a valuable resource for research, teaching and learning.  It is one of the major repositories of longitudinal qualitative social data in the UK.

New Research: Directives

Each year the project issues three 'Directives' (open questionnaires) to a panel of hundreds of volunteer writers nationally (known as 'Observers'). The Archive collates the responses and makes them available for research. Visit our collection pages to find out more about the collection and how to access it at The Keep.

It differs from other similar social investigations because of its historical link to the original Mass Observation movement and because of its focus on voluntary, self-motivated participation. The 'Observers' do not constitute a statistically representative sample of the population but can be seen as reporters or “citizen journalists” who provide a window on their world.

Database

You can also access information about the Mass Observation Project Panel on this database, which contains information about the biographical/demographic characteristics and writing behaviours of individual Mass Observation Project writers.

Collaborating on research

Researchers across a wide range of disciplines collaborate with us by commissioning their own Directive. Find out more about collaborating with Mass Observation on a Directive.

The Directive themes need to have some relevance to everyone who writes and we are therefore careful to maintain a balance between subjects such as:

- personal (Sex, The Home, Doing a Job)
- political (The General Election, Civil Disobedience, The Scottish Referendum)
- historical (The First World War, Doing Family History Research)

Funding

In order to sustain the Mass Observation Project and continue to create valuable research opportunities, generating funding is core to our activities. A fee is therefore charged to those who collaborate on the production of a Mass Observation Directive to support project costs.



from Hacker News https://ift.tt/3x60E3j

Download Speeds: What Do 2G, 3G, 4G and 5G Mean

You can access the internet on your smartphone using either a 2G, 3G, 4G or 5G connection. Find out how download speeds compare.

When it comes to mobile internet download speeds, terms like 2G, 3G, 4G and 5G are often used. Referring to four different generations of mobile technology, each of them gives a very different download speed.

Older 2G connections give a download speed of around 0.1Mbit/s, with this rising to around 8Mbit/s on the most advanced 3G networks. Speeds of around 60Mbit/s are available on 4G mobile networks in the UK (but this can be substantially higher in other countries like the US). Next-generation 5G mobile networks are targeting a download speed of over 1,000Mbit/s (1Gbit/s).

In this article, we take an in-depth look at download speeds and see how 2G, 3G, 4G and 5G mobile networks compare. We'll also consider real-world speeds and how they'll affect your actual day-to-day usage.

What is Download Speed?

The “download speed” is a measure of the rate at which data can be transferred from the internet to your smartphone. This data might be a web page or a photo you’re viewing, or it could be an application or video you’re downloading to your smartphone.

In its rawest form, download speed is measured in "bits per second" (bps), where a "bit" is a one or zero in binary. More commonly, however, we talk about download speeds in "megabits per second" (Mbit/s), where 1 megabit is equal to 1,000,000 bits.

In general, a faster download speed normally means that content from the internet loads faster and with less of a wait. A faster download speed also supports higher-quality streaming (e.g. you might be able to watch higher-definition video as it downloads without encountering buffering). Download speeds aren't the full picture, however: there is also the related concept of latency (discussed below), which affects the responsiveness of your internet connection.

2G, 3G, 4G & 5G Download Speeds

The following table shows a comparison of download speeds on various flavours of 2G, 3G, 4G and 5G mobile networks. The icon column refers to what you’ll most likely see in the notification bar of your smartphone when using one of these networks.

| Generation | Icon | Technology | Maximum Download Speed | Typical Download Speed |
| ---------- | ---- | ---------- | ---------------------- | ---------------------- |
| 2G | G | GPRS | 0.1Mbit/s | <0.1Mbit/s |
| 2G | E | EDGE | 0.3Mbit/s | 0.1Mbit/s |
| 3G | 3G | 3G (Basic) | 0.3Mbit/s | 0.1Mbit/s |
| 3G | H | HSPA | 7.2Mbit/s | 1.5Mbit/s |
| 3G | H+ | HSPA+ | 21Mbit/s | 4Mbit/s |
| 3G | H+ | DC-HSPA+ | 42Mbit/s | 8Mbit/s |
| 4G | 4G | LTE Category 4 | 150Mbit/s | 15Mbit/s |
| 4G | 4G+ | LTE-Advanced Cat6 | 300Mbit/s | 30Mbit/s |
| 4G | 4G+ | LTE-Advanced Cat9 | 450Mbit/s | 45Mbit/s |
| 4G | 4G+ | LTE-Advanced Cat12 | 600Mbit/s | 60Mbit/s |
| 4G | 4G+ | LTE-Advanced Cat16 | 979Mbit/s | 90Mbit/s |
| 5G | 5G | 5G | 1,000-10,000Mbit/s (1-10Gbit/s) | 150-200Mbit/s |

Our table provides two different download speeds. The first is the theoretical "maximum download speed", based on the limits of the technology, assuming perfect coverage and no congestion on the masts. We've also listed a "typical download speed", which is more representative of what you'd actually experience on a day-to-day basis.

The actual download speeds you get will depend on a number of factors such as your location, whether you are indoors or outdoors, the distance to nearby masts and the amount of congestion on them. You can measure the actual download speed of your connection using tools like Google’s Speed Test, Netflix’s Fast.com or Ookla’s SpeedTest.net.

Which Technologies Can I Access?

The latest iPhone supports Category 16 LTE-Advanced.

In order to access a certain technology, you'll need both a mobile phone and a mobile network that support it. For instance, if you want to access Category 6 LTE-Advanced, you'll need a mobile phone that supports it and a mobile network that has coverage in your area.

Most modern smartphones now support 4G technology, but they often differ in the maximum download speeds supported, or the maximum “category” of LTE they support. Some of the latest flagship smartphones like the iPhone XS and Galaxy S9 now support up to Category 16 LTE-Advanced.

Mobile networks will also differ in terms of the maximum download speeds and coverage they offer. In the UK, it’s possible to get up to Category 9 speeds on EE and Vodafone (up to 450Mbit/s), and up to Category 6 speeds on O2 and Three (up to 300Mbit/s) at the time of writing. In other countries, however, it often looks quite different. For instance, in the United States, it’s possible to get up to Category 16 speeds (up to 979Mbit/s) on all of the major mobile networks including AT&T, Sprint, T-Mobile and Verizon.

Impact on Download Times & Streaming

The following table shows how expected download times compare across the different technologies:

| Activity | 4G Download Time | 3G Download Time | 2G Download Time |
| -------- | ---------------- | ---------------- | ---------------- |
| Accessing typical web page | 0.5 seconds | 4 seconds | 3 minutes |
| Sending an e-mail without attachments | <0.1 seconds | <0.1 seconds | 1 second |
| Downloading high-quality photograph | 0.5 seconds | 4 seconds | 3 minutes |
| Downloading a music track (MP3) | 3 seconds | 10 seconds | 7 minutes |
| Downloading an application | 8 seconds | 1 minute | 40 minutes |

For this comparison table, we have used the average download speeds of 30Mbit/s (4G LTE Cat6), 4Mbit/s (3G HSPA+) and 0.1Mbit/s (2G EDGE). Typical file sizes used in our calculations: 2MB for a webpage, 10KB for a basic e-mail, 2MB for a high-quality photograph, 5MB for a music track and 30MB for a typical application download.
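
If you want to reproduce these figures yourself, the arithmetic is straightforward: convert the file size from megabytes to megabits (multiply by 8), then divide by the connection speed. Here is a minimal Python sketch of that calculation (the function name and structure are our own illustration; it assumes ideal conditions and ignores latency and protocol overhead):

def download_time_seconds(size_mb, speed_mbit_s):
    """Seconds to download a file of size_mb megabytes at speed_mbit_s Mbit/s."""
    return size_mb * 8 / speed_mbit_s  # 8 megabits in one megabyte

# Typical speeds and file sizes as listed in the text above
speeds = {"4G LTE Cat6": 30, "3G HSPA+": 4, "2G EDGE": 0.1}
files = {"web page": 2, "photo": 2, "MP3 track": 5, "application": 30}

for network, speed in speeds.items():
    for name, size_mb in files.items():
        print(f"{name} on {network}: {download_time_seconds(size_mb, speed):,.1f}s")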

We haven't listed 5G download times in the table above, but it's safe to say they would all be near-instantaneous!

Streaming Applications

Netflix is a video streaming application.

When it comes to certain applications that "stream" data, your connection will need to support a minimum download speed. This is because content from the internet is shown on your phone at the same time as you're downloading it (a concept commonly known as "streaming"). If the content can't be downloaded at a sufficient speed, you'll experience pauses during playback (also known as "buffering").

Applications that make use of streaming include voice over IP (e.g. calling via Skype or WhatsApp), online video apps (e.g. Netflix and YouTube) and online radio. The following table shows minimum download speeds you would require for this content to play smoothly without buffering:

| Activity | Required Download Speed |
| -------- | ----------------------- |
| Skype/WhatsApp phone call | 0.1Mbit/s |
| Skype video call | 0.5Mbit/s |
| Skype video call (HD) | 1.5Mbit/s |
| Listening to online radio | 0.2Mbit/s |
| Watching YouTube videos (basic quality) | 0.5Mbit/s |
| Watching YouTube videos (720p HD quality) | 2.5Mbit/s |
| Watching YouTube videos (1080p HD quality) | 4Mbit/s |
| Watching iPlayer/Netflix (standard definition) | 1.5Mbit/s |
| Watching iPlayer/Netflix (high definition) | 5Mbit/s |
| Watching iPlayer/Netflix (4K UHD) | 25Mbit/s |

A 3G connection or better should normally be able to sustain most of these activities. Having a faster 4G connection may also allow you to stream higher quality content (e.g. watching Netflix in 4K Ultra HD quality).

Latency

Besides download speed, latency is another really important concept that affects the experience you’ll get on your smartphone. It’s also known as the “lag” or “ping” if you’re familiar with online gaming.

When your mobile phone wants to download some content from the internet, there is an initial delay before the server on the other end starts to respond. Only once the server has responded can the download progress. For example, if it takes 0.5 seconds for the server to initially respond and then 1 second for the file to download, you'll need to wait a total of 1.5 seconds for the download to complete.

High latency connections can cause web pages to load slowly, and can also affect the experience in applications that require real-time connectivity (e.g. voice calling, video calling and gaming applications).

Across 2G, 3G, 4G and 5G technologies, there’s a major difference in the latency you can expect:

| Generation | Typical Latency |
| ---------- | --------------- |
| 2G | 500ms (0.5 seconds) |
| 3G | 100ms (0.1 seconds) |
| 4G | 50ms (0.05 seconds) |
| 5G | 1ms (0.001 seconds)* |

* The target latency of a 5G connection is 1ms (theoretical). Other figures are based on real-world usage.
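
As a rough illustration of how latency and download speed combine, here is a minimal Python sketch (the latencies come from the table above; the speeds come from the earlier comparison table, with 175Mbit/s assumed as a mid-range 5G figure). It contrasts a small 10KB request, where latency dominates, with a 2MB web page, where transfer time dominates:

def fetch_time_seconds(latency_ms, speed_mbit_s, size_mb):
    """Total fetch time: initial server delay plus transfer time."""
    return latency_ms / 1000 + size_mb * 8 / speed_mbit_s

# (generation, typical latency in ms, typical download speed in Mbit/s)
networks = [("2G", 500, 0.1), ("3G", 100, 4), ("4G", 50, 30), ("5G", 1, 175)]

for name, latency_ms, speed in networks:
    small = fetch_time_seconds(latency_ms, speed, 0.01)  # 10KB request
    page = fetch_time_seconds(latency_ms, speed, 2.0)    # 2MB web page
    print(f"{name}: 10KB in {small:.3f}s, 2MB page in {page:.2f}s")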

Many people argue that the benefits of 5G are more from having reduced latency and increased capacity rather than having faster download speeds. This is because the download speeds available on 4G are already fast enough for most uses (e.g. 5Mbit/s is already more than enough for high-definition video). However, despite faster download speeds not making a huge difference here, the reduction in latency from 5G technology will help overall response time.

The reduced latency of 5G technology is particularly important for some of the newer embedded applications of mobile technology. For instance, a connected car travelling on the motorway at 70mph (110km/h) would travel almost 2 meters in the amount of time it takes for a 4G mobile network to respond. The lower latency of a 5G connection will allow mobile technology to be used more safely in cars.

Download Speeds & Download Limits

Download speeds shouldn’t directly affect how much data you consume. This is because the web pages you visit and the files you download are still the same size (and hence will consume the same amount of data) regardless of which connection type you have. There are, however, two key exceptions to this:

  1. Adaptive Streaming on Videos. Some video providers (e.g. YouTube and Netflix) automatically adjust the quality of videos depending on what your connection can handle. For instance, you might receive standard-definition video on a 3G connection and high-definition video on a 4G or 5G connection. This may increase the amount of data you consume as you move to a faster connection.
  2. Increased Engagement. The increased download speed and improved experience of a faster internet connection may encourage you to consume more content and to use your phone more regularly on-the-go.

For both of these reasons, we'd typically advise choosing a larger data allowance when moving to a mobile network or tariff offering faster download speeds.

Terminology

Kbit/s, Mbit/s, Gbit/s

There are 1,000 kilobits in a megabit (1000kbit = 1Mbit) and 1,000 megabits in a gigabit (1,000Mbit = 1Gbit). This means a 1Mbit/s connection is twice as fast as a 500kbit/s connection. Wikipedia has a full explanation.

In everyday life, it is most useful to talk about download speeds in megabits per second (Mbit/s). 2G connections are sometimes specified in kbit/s (e.g. the maximum download speed for GPRS is 80kbit/s). Similarly, 5G connections are sometimes specified in Gbit/s (e.g. the target download speed for 5G technology is 1-10Gbit/s). For ease of comparison, we have converted these measurements to be in the common unit of Mbit/s.

Mbit/s vs Mbps

There’s no difference between Mbit/s and Mbps: they’re just two different ways of abbreviating “megabits per second”. At Ken’s Tech Tips, we prefer to use the term Mbit/s as we believe it ensures a little more clarity. The alternative abbreviation, Mbps, is often confused with “megabytes per second”.

It’s important to draw the distinction between bits and bytes. Whilst download speeds are normally measured in “megabits per second” (Mbit/s), download limits and download sizes are measured in megabytes (MB). As there are 8 bits in one byte (and hence 8 megabits in one megabyte), it would actually take you 8 seconds to download a 1MB file on a 1Mbit/s connection.

5G Wi-Fi

The term "5G Wi-Fi" is often confused with 5G mobile technology. In fact, the "5G" stands for 5GHz and relates to the frequencies used by the Wi-Fi network to communicate with your device (traditionally, Wi-Fi networks have used the spectrum around 2.4GHz).

As the “5G” in “5G Wi-Fi” has no relation to download speeds, it’s recommended that this technology is now referred to as Wi-Fi 5 or 802.11ac to reduce confusion.

More Information

For more information, please consult your mobile network’s website for details about the download speeds and coverage they’re able to offer. If you’re in the UK, please see the EE, O2, Three and Vodafone websites.



from Hacker News https://ift.tt/32Ixb27

Gap to go completely online, closing 81 stores in UK and Ireland

Gap blamed what it described as market dynamics - in other words, the huge shift to internet shopping. It's going online-only, just like Debenhams and Sir Philip Green's Arcadia group. It's yet another famous name beating a retreat from our High Streets, adding to the challenge of what to do with empty shops.



from Hacker News https://ift.tt/3w6gqdd

Alcohol, health, and the ruthless logic of the Asian flush

Say you're an evil scientist. One day at work you discover a protein that crosses the blood-brain barrier and causes crippling migraine headaches if someone's attention drifts while driving. Despite being evil, you're a loving parent with a kid learning to drive. Like everyone else, your kid is completely addicted to their phone and keeps refreshing their feeds while driving. Your suggestions that the latest squirrel memes be enjoyed later at home are repeatedly rejected.

Then you realize: You could just sneak into your kid’s room at night, anesthetize them, and bring them to your lair! One of your goons could then extract their bone marrow and use CRISPR to recode the stem-cells for an enzyme to make the migraine protein. Sure, the headache itself might distract them, but they’ll probably just stop using their phone while driving. Wouldn’t you be at least tempted?

This is an analogy for something about alcoholism, East Asians, Odysseus, evolution, tension between different kinds of freedoms, and an idea I thought was good but apparently isn’t.

It’s not good to drink too much

This is a surprise to no one, but let’s look at some numbers. Here’s data from a meta review on the relative risk of various health conditions as a function of the number of US standard drinks (14g of alcohol) someone has in a day:

Relative risk of various health conditions vs. number of daily drinks

The three small dots show that having 10 drinks a day is associated with a 9x risk of getting lip/oral cancer, a 3x risk of epilepsy, and a 1.5x risk of diabetes, as compared with not drinking at all. These are all associations, controlling only for age, sex, and drinking history. This makes the little dip around 1-2 drinks for heart disease and diabetes controversial. Still, the causal link is pretty clear in many cases, and for our purposes, all that matters is that heavy drinking is not good.

But who averages 10 drinks per day, you ask? The answer is an astonishing number of people. Half of Americans drink almost nothing, but the top 10% average more than 10 drinks per day. They’re responsible for around 75% of all alcohol consumption.

Some East Asians struggle with alcohol

Humans metabolize alcohol in various ways. The “normal” way is that an ADH enzyme converts alcohol to acetaldehyde after which an ALDH enzyme breaks the acetaldehyde down into acetate. Eventually the acetate is broken down into water and carbon dioxide. The intermediate product (acetaldehyde) is highly toxic and carcinogenic, while acetate is much less active. It appears that ethanol itself isn’t carcinogenic, but acetaldehyde is.

But guess what: Around 80% of East Asians have a variant of ADH (ADH1B or ADH1C) that converts alcohol to acetaldehyde more quickly. Also, around 50% of East Asians have a variant ALDH isoenzyme (ALDH2*2) that is much less effective. Both of these mean that acetaldehyde tends to accumulate, leading to a “flush” reaction.

Kang et al. (2014) recruited a bunch of healthy 20-something male Koreans. Here is the peak acetaldehyde concentration (ng/ml) of people with different genes after consuming 0.25 g/kg of ethanol (around 1.25 standard drinks for someone who weighs 70 kg / 154 lb):

| ALDH \ ADH | half variant | full variant |
| ---------- | ------------ | ------------ |
| standard | 167.9 | 190.1 |
| full variant | 736.6 | 1,613.6 |

The variant enzymes lead to much higher peak concentrations. Remember, we have two copies of every gene, one from mom and one from dad. The middle column shows people with one copy of the ADH1B variant that produces acetaldehyde faster, while the right column shows people with two copies. This doesn’t even include people with zero copies of the ADH1B variant, presumably because they couldn’t find enough of them. The top row shows people with the standard ALDH2 enzyme, the bottom row with the East Asian variant. This is dominant so you don’t have to worry about half-effects.

It’s likely that having these variant enzymes means that if you do drink, alcohol causes more problems. This is hard to study since you can’t do randomized tests and people with the mutation drink less, but there’s pretty strong evidence of this in humans for esophageal cancer. In mice, removing the ALDH enzyme greatly increases the DNA damage that alcohol causes to the stomach.

Those East Asians drink less

Now, why do East Asians have these genetic variants? I long assumed that this is because other people had a longer tradition of drinking alcohol, and so had evolved to do it painlessly. This is totally incorrect.

Let's back up. When Homo sapiens left Africa, these variants basically didn't exist. We evolved to be able to consume alcohol, probably because we're fond of not starving to death. (If rotting fruit is the only source of calories, it's better if you can eat it without getting incapacitated.) Rather, the genes for these variant enzymes probably arose in China.

Alcoholic beverage production started early in China. It's hard to say exactly when, since it pre-dates recorded history, but 9,000-year-old Chinese pottery already has residues of early beers. Alcohol production in Egypt seems to have started around 5,000 years ago, and in Europe around 4,000 years ago.

So, China is where alcohol first became common. China is also where genes that make alcohol consumption difficult first became common. Why would such “defective” genes arise in the place where alcohol has been around the longest?

The simplest explanation is that these genes are adaptive. It’s obvious in retrospect: Humans are prone to alcoholism. Alcoholics tend to get sick, commit suicide, and have accidents, all of which interfere with the business of having and raising kids.

A study in Taiwan found that 48% of the control population had a copy of the (dominant) ALDH2*2 mutation that slows the breakdown of acetaldehyde, but only 12% of alcoholics. Similarly, 93% had at least one copy of the ADH1B gene that speeds acetaldehyde production, compared to 64% of alcoholics. Other studies (Muramatsu et al. 1995, Hurley and Edenberg, 2012, Bierut et al., 2012) confirm the same basic picture, which is that these genes reduce alcoholism.

If these genes really are an adaptation, it shows how ruthless evolution can be. If you implanted a device in your kid that mildly poisoned them every time they drank, you’d be a monster. But evolution basically did that.

Constraints are sometimes a kind of freedom

No one cares about my freedom to rob convenience stores or burn down public buildings. We all understand that different people’s freedoms are in conflict, and we’ve invented things like “manners” and “property” and “noise ordinances” to navigate the tradeoff.

There’s a different tradeoff I think about a lot. We all know the story of Odysseus having his men block their ears and tie him to the mast of his ship. He knew he would go temporarily insane when going past the Sirens, so he wanted to remove freedom from himself to overcome that.

Odysseus and the Sirens

It’s a cute story, but it’s not typical. Odysseus constrained his future self with technology. Most real-world scenarios are different:

  • We need society to enforce constraints.
  • Those constraints affect everyone to some degree, even those who don’t want them.

For example, I almost never buy snack foods because once home, I can’t resist the urge to eat them. This works OK for me, but they’re sometimes available at conferences or parties or whatever, and I have a hard time saying no. What I’d really like is for society to criminalize all mint-chocolate flavored snacks.

(We'll get back to alcohol in a second.)

Cookies are laughable, but how about fentanyl? Some of the reason drugs are illegal is because of externalities or the idea that people don't know what's good for them. However, there's no doubt that some former or potential addicts would choose criminalization if it were up to them. Say you used to be addicted but now you've quit. If you could snap your fingers and make all drugs disappear, wouldn't you do that?

Obviously, criminalizing cookies (or fentanyl) is bad for both responsible users and people who can’t or don’t want to quit. I’m just trying to point out that there is a tradeoff. Society has decided that tradeoff in favor of responsible Twinkie users and against responsible fentanyl users.

Just as society made a different tradeoff for Twinkies and drugs, biology made a different one for alcohol, depending on if you got the East Asian variants or not.

Sometimes we can give people the chance to “Odysseus” themselves without intruding too much on the freedom of others. An example is gambling. Some locations allow people to “self-exclude” from gambling, after which casinos won’t let you play for a time period of your choice. This isn’t perfect, since now responsible gamblers have their ID checked, and addicts can still cross state lines or play the lotto or whatever.

We can informally picture the different regimes like so:

Freedoms and freedoms from temptation

A self-ban for alcohol

Roughly 10% of people in the US are raging alcoholics. Could we offer them the chance to self-exclude from alcohol?

Unfortunately, it seems very difficult. We'd have to impose some heavyweight process of checking IDs on all bars and liquor stores. Even then, it wouldn't be very effective, since people could still have their friends buy it for them. Do we want to make it illegal to hand out drinks at a party without checking everyone's ID against a database? It would be a nightmare.

A while ago, I had a strange idea. In principle, instead of having people ban themselves from alcohol legally, couldn’t it be done biologically? After all, this is the solution evolution came up with. Can we allow people to opt-in to getting the Asian flush?

Freedoms from temptation with an invention

Obviously, this is just hypothetical. For one thing, is it even possible? It seems hard, but perhaps with the full might of our modern nano RNAi cell-therapy quantum stem-cell CAR-T gene therapy arsenal, we could figure something out. It doesn’t matter though. We could never actually give such a drug to people, since it’s equivalent to poisoning them.

Just kidding. It’s called disulfiram, and it was approved by the FDA in 1951.

Does disulfiram work?

Still today, we don’t really know.

To be sure, disulfiram does what’s asked of it. It definitely blocks the ALDH enzyme and this definitely does lead to 5-10x higher acetaldehyde concentrations. This definitely causes “flushing” and other symptoms typical of those with a genetic predisposition against drinking. Early on, it was prescribed in such massive doses that patients who drank anyway sometimes went into cardiac arrest, or even died.

What’s unknown is if it helps with alcohol addiction. There have been a huge number of studies, but none of them give clear answers. As far as I can tell, there are three problems.

First, just imagine you give alcoholics a bottle of pills, explain that they make alcohol (more) toxic, and send them on their way. Obviously, almost nobody takes them, and those who do are ultra-determined and would probably have quit anyway. You can ask patients to come into the office to take the drugs, but then people drop out. It's just incredibly hard.

Second, there are all sorts of confounders and weirdness. Some studies show effects on the number of days without alcohol but not on the number of drinks, or vice versa. Some studies show great results for people who are married but not for single people.

Third, most of the studies are kind of… crap? Hughes and Cook reviewed the studies up to 1997. Their paper is a marvel of inventive euphemisms like "Nothing can be said directly", and "not strictly a controlled study", and "a very poor study, but the authors subsequently stated that they 'made no claims for methodological sophistication or statistical significance'."

The drug is still available today, though not much used for alcoholism except in Denmark, where it’s widely prescribed. (This seems to be pharma-nationalism, resulting from the drug being invented there.)

If people refuse to take the pills, couldn’t we just make some kind of implant? This too has been experimented with since 1968, and again we have no clear answers. One major problem is that it’s not clear how well the drug is absorbed from the implant. Another is that randomized trials require “sham implants” to blind participants. A third is that various trials used ridiculously low doses of the drug, far below the level that’s physiologically plausible.

Update: People have pointed out that disulfiram implants are apparently fairly popular in Eastern Europe (1, 2, 3). However, these implants typically contain a total of 1-2g, dispensed over something like 6-24 months. If you assume 1g dispensed over a year, that's 2.7 mg / day, around 1% of a typical oral dose of 250 mg / day. On top of that, the bioavailability of implanted disulfiram appears to be lower than oral. So I suspect these implants are almost entirely placebo.

So, it’s hardly been revolutionary. What explains this?

One possibility is that the drug could cure alcoholism, we just haven’t done enough studies, or found a sufficiently reliable way to deliver it yet.

Another possibility is that the alcohol intolerance in East Asians is just a “nudge”, which is often enough to prevent alcoholism from forming in the first place, but not strong enough to displace alcoholism once it’s taken root.

I favor the second possibility. Disulfiram definitely does make acetaldehyde build up when you drink. If that had a massive effect on alcoholism, it shouldn't be that hard to see it! Yet we still don't see much after 70 years. These days, the first-line drug treatments for alcoholism are acamprosate, which reduces the physical symptoms of alcohol withdrawal, and naltrexone, which screws around with the opioid receptors and probably reduces the pleasure people get from drinking.

What are we supposed to conclude from all this? That we should be careful about cute evolutionary explanations? That human fallibility means your individual freedom is in tension with my freedom from temptation? That “Odysseusing” is a way to resolve that tension? That our addictions run deep into us, and aren’t easy to remove? That there’s nothing new under the sun? That human behavior is complex, and harder to manipulate than mere biology? Take your pick. Honestly, I just thought it was a good story.



from Hacker News https://ift.tt/3fUmTll

Monochromatic Portraits with GLSL


1 Feb 2019

In my Computer Graphics Art class, we were assigned a monochromatic portrait project. Given a photograph of a subject, we were to split the image into a small number of discrete sections of varying brightnesses, all of the same colour. Typically, this process would be completed by hand in a tool like Krita or Photoshop.

Monochromatic portrait, processed from a webcam with this shader.

I chose GLSL.

Rather than manually producing the portrait, I realised that the project can be distilled into a number of per-pixel filters, a perfect fit for fragment shaders. We pass in the source photograph as a texture, transform it in our shader, and the filtered image will be written out to the framebuffer.

This post assumes a basic familiarity with the OpenGL Shading Language (GLSL). The interface between the fragment shader and the rest of the world is (relatively) trivial and will not be covered in-depth here. For my early experiments, I modified shaders from glmark’s effect2d scene, which allowed rapid prototyping. Later, I moved the demo into the web browser via three.js. Source code is available under the MIT license.

Without further ado, let’s get to work!

First, we include the sampler corresponding to our photograph, and a varying defined in the vertex shader corresponding to the texture coordinate.

uniform sampler2D frame;
varying vec2 v_coord;

Next, let’s start with a simple pass-through shader, reading the specified texel and outputting that to the screen.

void main(void)
{
    vec3 rgb = texture2D(frame, v_coord).rgb;

    gl_FragColor = vec4(rgb, 1.0);
}

The colour photograph shines through as-is – everything sanity checked. However, when we make monotone portraits, we don’t care about the colour, only the brightness. So, we need to convert the pixel to greyscale. There are various ways to do this, but the easiest is to multiply the RGB values with some “magic” coefficients. That is,

grey = c_r · red + c_g · green + c_b · blue

What coefficients do we choose? An obvious choice is 1/3 for each, taking equal parts red, green, and blue. However, human colour perception is not fair; a psych teacher told me that bias is literally in our DNA. Oops, wait, I'm not supposed to talk politics in here. Anyway!

Point is, we want coefficients corresponding to human perception. One choice is BT.709 coefficients, which are used when computing the luminance (Y) component of the YUV colour space. These coefficients correspond to a particular vector:

c = (0.2126, 0.7152, 0.0722)

We just take the dot product of those coefficients with our RGB value, et voila, we have a greyscale image instead:

vec3 coefficients = vec3(0.2126, 0.7152, 0.0722);
float grey = dot(coefficients, rgb);

At this point, we might adjust the image to taste. For instance, to make the greyscale image 20% brighter, we just multiply in the corresponding coefficient, clamping (saturating) between 0.0 and 1.0 to avoid out-of-bounds behaviour:

float brightness = 1.2;
grey = clamp(grey * brightness, 0.0, 1.0);

Now, here comes the magic. Beyond the artistic description "monotone portraits", the technical name for this effect is "posterization". Posterization, at its core, transforms an image with many colours and smooth transitions into an image with few colours and sharp transitions. There are many ways to approach this, but one is particularly simple: rounding!

All of our colour (and greyscale) values are within the range [0, 1], where 0 is black and 1 is white. So, if we simply round the value, the darks will become black and the lights will become white: posterization with two levels (colours)!

What if we want more than two levels? Well, think about what happens if we multiply the colour by an integer n greater than one, and then round: the rounded value will map linearly to n + 1 discrete values, from 0 to n. (Psst, where did the plus one come from? If we multiply by 1 – not changing anything from the black/white case – there are two possibilities, not one. It's a fencepost problem).

However, after we scale the grey value from [0, 1] to [0, n], we probably want to scale back to [0, 1]. That's achieved easily enough – divide the rounded value by n.

All in all, we can posterize to six levels, for instance, quite simply:

float levels = 6.0 - 1.0;
float posterized = round(grey * levels) / levels;

Et voila, we have a greyscale posterized image. For some OpenGL versions lacking a round function, just replace round with floor with 0.5 added to the argument.

Posterized greyscale, but feeling clustered

What's next? Well, the posterized values feel a little "clustered", for lack of a better word. They are faithful to the actual brightness in the image, but we're not going for photorealistic here – we want our colours to pop. So, increase the contrast by some factor; I chose 30%. How do we adjust contrast? Well, first we need to define contrast: contrast is how far everything is from grey. By grey, I mean 0.5, half-way between black and white. So, we can subtract 0.5 from our posterized colour value, multiply it by some contrast factor (think percentages), and add 0.5 back to return to the original range. Again, we saturate (clamp to [0, 1]) at the end to keep everything in-range.

float contrast = 1.3;
float contrasted = clamp(contrast * (posterized - 0.5) + 0.5, 0.0, 1.0);

If you're a geometric thinker, or if you have a little background in linear algebra, we are effectively scaling (dilating) pixel values with the "origin" set to grey (0.5), rather than black (0). You can express it nicely in terms of some simple composited affine transformations, but I digress.

Posterized with contrast adjusted

Anyway, with the above, we have a nice, posterized, grey image. Grey?! No fun. Let’s add a splash of colour.

Unfortunately, within RGB, adding colour can be tricky. Simply multiplying our base colour with the greyscale value will perform a tint, but it's a different effect than we want. For these monotone portraits, given grey values, we want 0 to correspond to black, 0.5 to a colour of our choosing, and 1 to white. Values in between should interpolate nicely.

This problem is nigh intractable in RGB… but we can take another trick out of linear algebra's book, and perform a change of basis! Or colour space, in this case.

In particular, the HSL (hue/saturation/lightness) colour space, modeled after artistic perception of colour rather than the properties of light, has exactly the property we want. Within HSL, zero lightness is black, half-lightness is a particular colour, and full-lightness is white. Hue and saturation decide the colour shade, and the lightness is decided by, well, the lightness.

So, we can pick a particular hue and saturation value, set the lightness to the greyscale lightness we calculated, and bingo! All that's left is to convert back from HSL to RGB, since our hardware does not feature native support for HSL. For instance, choosing a hue of 0.8 and a saturation of 0.6 – values corresponding to pastel blues – we compute:

vec3 rgb = hsl2rgb(vec3(0.8, 0.6, contrasted));

Finally, we just set the default alpha value and write that out!

gl_FragColor = vec4(rgb, 1.0);

“But wait,” you ask. “Where did hsl2rgb come from? I didn’t see it in the GLSL specification?”

A fair question; indeed, we have to define this routine ourselves. A straightforward implementation based on the definition of HSL does not take full advantage of the GPU's vectorization and parallelism capabilities. A discussion of the issue is found on the Lol engine blog, which includes a well-optimized GLSL routine for HSV to RGB conversions. The code is easily adapted to HSL to RGB (as HSV and HSL are closely related), so presented without proof is the following implementation of hsl2rgb. Verifying correctness is left as an exercise to the reader (please do!):

vec3 hsl2rgb(vec3 c) {
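    /* c = (hue, saturation, lightness).
       t is half the chroma; K packs per-channel hue phase offsets;
       p is a per-channel triangular wave derived from the hue. */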
    float t = c.y * ((c.z < 0.5) ? c.z : (1.0 - c.z));
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return (c.z + t) * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), 2.0 * t / (c.z + t));
}

And with that, we’re done.

Almost.

Everything we have so far works, but it’s noisy as seen above. The “jumps” from level to level are not smooth like we would like; they are awkward and jaggedy. Why? Stepping through, we see the major artefacts are introduced during the posterization routine. The input image is noisy, and then when we posterize (quantize) the image, small perturbations of noise around the edges correspond to large noisy jumps in the output.

What’s the solution? One easy fix is to smooth the input image, so there’s no perturbations to worry about in the first place. In an offline implementation of something like a cartoon cutout filter, like that included in G’MIC, a complex noise reduction algorithm would be used. G’MIC’s cutout filter uses a median filter; even better results can be achieved with a bilateral filter. Each of these filters attempts to reduce noise without reducing the edges. But they’re slow.

What can we do instead of a sophisticated noise reduction filter? An unsophisticated one! Experimenting with filters in Krita, I found that any blur of suitable size does the trick, not just an edge-preserving filter. Even something simple like a Gaussian blur or even a box blur does the trick. So, instead of reading a single texel rgb, we read a group of texels and average them to compute rgb. The demo code uses a single-pass blur, which suffers from performance issues; this initial blur is by far the slowest part of the pipeline. That said, for a production application, it would be trivial to optimize this section to use a two-pass blur, weighted as desired, and to take better advantage of native bilinear interpolation. Implementing fast GPU-accelerated blurs is out-of-the-scope of this article, however.

Regardless, with a blur added, results are much cleaner!

All in all, the algorithm fits in a short, efficient, single-pass fragment shader… which means even on low-end mobile devices, as long as there’s GPU-acceleration, we can run it on input from the webcam in real-time.

For best results, ensure good lighting. Known working on Firefox (Linux) and Chromium (Linux, Windows, Android). Known issues on iOS (?).

Try it!




from Hacker News https://ift.tt/3doDGMU

Tell HN: Don't root for WFH, it might destroy well-paying jobs

There is a vocal crowd on HN cheering for the WFH trend. But that is a big risk for US SW engineers and a great opportunity for the rest of the SW engineers in the Americas.

The crowd cheering for WFH seems to assume that the jobs will be available anywhere in the suburban US, which is ideal for WFH, with big houses with enough space for an office, a gym and whatever. They also assume SV-level salaries, maybe with a 10-15% haircut, and think they can live like kings.

But from my personal experience at a company going fully WFH, I can say that those are rosy dreams. I am in a position where I know about the company's hiring plans and salaries. Guess where the majority of new hires are going to come from? Other countries! Guess what their salaries will be? Much lower than what we are offering in the non-SV parts of the US.

If the WFH trend continues, it will do to the plum US tech job market what globalization did to manufacturing. In a WFH setup, someone in Mexico is on the same level as someone in Kansas or SV, but much cheaper.

It is time for tech workers to look out for themselves and root for the failure of the WFH trend. Else, for a short-term king-size lifestyle in your remote corner of the US, you will destroy one of the last remaining well-paying job markets in the long term.









from Hacker News https://ift.tt/3jxVKbs

Fusion and Magic

Audio brought to you by Curio, a Lapham’s Quarterly partner

Not so long ago, at the start of 2007, the world’s population lived with a vivid technological divide. Half had a mobile phone: three billion people. Not quite a quarter used the internet. The phones were for talking. The internet required a computer. Wheelers and dealers—lawyers, agents, politicos—had BlackBerrys for emails, which they pecked on Lilliputian keyboards. But otherwise being online was a physically static condition. One surfed sitting still. The internet of the 2000s was an indoor child, happiest on the couch or behind a desk.

That changed the second week of that new year, when Apple CEO Steve Jobs teased the first iPhone from his jeans pocket, on a conference stage in San Francisco. Gaunt from the pancreatic cancer that would eventually kill him, he was nonetheless at the height of his powers as a technologist. The iPod, released in 2001, had been a phenomenon—a worthy follow-up to Jobs’ first great success, the original Macintosh computer, released in 1984. This new thing would be bigger than both, Jobs boasted. “Every once in a while, a revolutionary product comes along that changes everything,” he said from the stage. This was “three revolutionary products”: a phone, “a breakthrough internet communicator,” and a “widescreen iPod.” On a screen behind him, square pictographs of the trio spun into one another like a superhero changing costumes. “Are you getting it?” Jobs shouted while the audience tittered and then roared, wide-eyed at this shared moment of technological alchemy, of transmogrification, of near transubstantiation—all of which might sound purple except for everything that has come since.

It can be hard to recall now, but before Jobs’ black mirror, we were a species of button pressers. Nothing before came close to the iPhone’s fluidity and so seemed as much like the infinite tablet of a prophecy. Little surprise then that Apple has sold around two billion iPhones, making it among the richest companies in history, valued at more than $2 trillion. Forty percent of the world’s population now uses a smartphone. Democracy and truth have themselves been diverted by our phones’ pull on our attention, darkened in the shadow of our doom scrolling, unmoored in the weightlessness of our fiddling.

Occasionally, a singular technological leap changes what we expect from the world and the ability of human ingenuity to shape it. The iron horse of the railroad bent the geopolitics of the nineteenth century, just as the internal combustion engine, and the fossil fuel it requires, did the twentieth. It is a straight line from the Wright brothers’ 1903 biplane made of bicycle parts to the latest carbon-fiber Boeing. All defied what was then thought possible. Each seemed at first like magic, if of two different types. They start out nearly supernatural, a kind of witchcraft, but soon resemble stage magic, a refined trick built on years of practice and iteration. But whereas stage magic relies on sleight of hand, or some other subterfuge, the iterative magic of technology requires its inner workings to be revealed, exposed, and understood—in order to be refined, built upon, and made more marvelous and consequential. In An Enquiry Concerning Human Understanding (1748), David Hume notices that miracles no longer count as miracles when broadly seen. “There is not to be found, in all history,” he writes, “any miracle attested by a sufficient number of men, of such unquestioned good sense, education, and learning, as to secure us against all delusion in themselves.” Explanation ruins miracles. Familiarity deflates them.

Doing research on the web is like using a library assembled piecemeal by pack rats and vandalized nightly.

—Roger Ebert, 1998

Technological magic is persistent. It stands up to scrutiny. It recharges overnight, cruises at highway speeds, and offers cocktails and in-flight movies. But Hume was right about the first impression of a miracle, that initial surprise and delight. There is a threshold between when a technology has to be imagined and when it is real enough to hold in your hand—a moment when the magical becomes real, when dreamed-of things finally happen, when a machine carries you into the sky, or a new medicine polishes a rough edge of human frailty.

Over this past year of human buckling, I have found myself craving that extreme ingenuity, those marvels that arrive infrequently but decisively. For a decade or two, it has become clear how desperately we need new energy technologies to provide the warmth, light, movement, and stuff we demand without catastrophically warming the planet. But this year brought a more specific and more desperate need: a microscopic technology to teach the body to fight the coronavirus, and the industrial and scientific infrastructure to manufacture it and deliver it into human arms. As vaccines have arrived, spectacularly if unevenly, one is tempted to be boastful: science did this. It is easy to worry that the miracle is, again, short-lived. But it has also made me wonder, What further leaps might we see? What else might be possible?

 

On the evening of March 18, 1987, the American Physical Society, physicists’ century-old professional organization, was midway through its annual meeting when the scientists in attendance crowded the hallways of the New York Hilton, eager to elbow their way into the big second-floor ballroom for a special evening session. The year before, two physicists, J. Georg Bednorz and Karl Alex Müller, had discovered that certain compounds of ceramic materials were remarkable “superconductors” of electricity: electrons flowed through them without any loss. Whereas resistance had been a given for any conductor of electricity—including the aluminum and copper used in power wires—these new ceramics had none, even at temperatures significantly higher than earlier superconductors. Condensed-matter scientists like Bednorz and Müller typically keep a lower profile than the astrophysicists or nuclear physicists accustomed to holding the spotlight with grand pronouncements about the nature of the universe. But in this instance, with these new superconductors raising the prospect of fantastic new applications—levitating trains, electric cars, new imaging tools like MRIs—the field went into a frenzy. Physicists around the world hoping to replicate (if not exceed) Bednorz and Müller’s superconducting success began testing new combinations of materials, looking for an organic mix that could begin superconducting at higher temperatures. Using the Kelvin scale (which starts at the scientific constant of absolute zero, or −273 degrees Celsius), Bednorz and Müller saw superconductivity at the then shocking 35 K (or −238 degrees Celsius). Soon others were leaping ahead, finding new material that worked at 38 K, then 52 K. According to Douglas Scalapino, now a professor emeritus at the University of California, Santa Barbara, it was as if everyone were running a four-minute mile: “You could go to any track meet, and some guy was breaking it.”

In that pre-internet era, with the rate of discovery outpacing the scientific publishing process, physicists were ravenous for news of the latest breakthroughs. Three thousand of them crammed into the Hilton ballroom for the High Temperature Superconductivity Symposium, while hundreds more watched on TVs set up in the hotel corridors. In a marathon session that soon became known as the “Woodstock of physics,” fifty-one separate presentations went on until 3:15 am, with example after example of new superconducting feats. It became a singular event in modern science, its legend fueled by a Nobel Prize that year for Bednorz and Müller and a cover story in Time magazine (“Superconductors!”).

Much of the hype was premature. Since these new superconductors were ceramics, rather than metals, they weren’t bendable like traditional conducting wires but instead were as brittle as dinner plates. To be useful, scientists—or really, engineers—had to manufacture these superconducting materials so that they could be coiled and wrapped. The practical applications would have to wait far longer than expected—not years but decades.

Not until Bob Mumgaard finished his PhD in applied plasma physics at the Massachusetts Institute of Technology, in 2015, did one particularly promising class of high-temperature superconductors made with material known as ReBCO—short for rare-earth barium copper oxide—reach a point of new potential. “The thing that really mattered was you could see this material in an adjacent field get better and better,” Mumgaard says, leaning toward me into his webcam one morning in 2020. Superconductors made from ReBCO operated at higher temperatures (100 K) and could be readily deposited into thin films that could in turn be wound into astonishingly strong and efficient electromagnets. Superconducting magnets had been used in hospital MRI machines and in the grand scientific experiments of particle accelerators, including the Large Hadron Collider outside of Geneva. But Mumgaard, a nuclear physicist, had locked in on their potential to fulfill the grandest promise of his field: fusion.

Copper model of a submarine, by Antoine Lipkens and Olke Uhlenbeck, 1836–39. Rijksmuseum.

The first extrasomatic energy source was fire, mastered by prehistoric societies 250,000 years ago. Pack animals provided ancient humans with an order of magnitude more energy. But not until waterwheels came into common use in the medieval era was there any common inanimate source to master. The Canadian historian Vaclav Smil notes that the Domesday Book records 5,624 water mills in southern and eastern England in the late eleventh century, one for every 350 people. Yet it would take another eight hundred years, into the Industrial Revolution, for their performance to be increased by another order of magnitude. Then things sped up. By the 1880s, the electrical system as we know it was recognizable, and crude oil began its rise to dominance for transportation. Ox by ox, waterwheel by waterwheel, engine by engine, the peak capacity of individual generating units rose approximately fifteen million times in ten thousand years, with more than 99 percent of that rise occurring during the twentieth century. Of those leaps, the most dramatic was nuclear fission, the breaking apart of atoms. Fission weapons shaped the century's geopolitics; fission power plants still supply 10 percent of the world's electricity.

Except now fission has run its course. In the wake of the Fukushima disaster, society’s appetite for nuclear risk has diminished. The costs of engineering even greater safety make fission power less economically viable compared to the falling costs of renewable sources like wind and solar. Averting further climate catastrophe requires broad policy changes—and some key new technologies. A step-change improvement in energy storage would open up new ways of using renewable energy, like solar energy at night and wind energy on calm days. More efficient ways of removing carbon from the atmosphere, at scale, might help change the climate again.

But the greatest potential for innovation—the closest thing to a technological silver bullet—remains fusion. Fusion is what powers the sun: a self-sustaining reaction in which isotopes of hydrogen at tens of millions of degrees fuse to form helium, releasing vast amounts of energy in the process. Fusion carries none of fission’s catastrophic risks. Its raw materials are abundant and safe, derived primarily from seawater. Its waste is minimally radioactive—more like what’s produced by hospitals than fission power plants. And there is no risk of meltdowns: when a fusion reactor’s power is shut off, its reaction stops.

The challenge is a different kind of control. Fusion reactions take the form of a roiling hot plasma, burning at more than 50 million degrees Celsius. Engineering its containment—putting “the sun in a bottle,” in a classic metaphor—has consumed scientists since the 1950s. The leading strategy is a type of reactor known as a tokamak, a doughnut-shaped chamber that uses electromagnets to hold the plasma in place. Since the tokamak was conceived in 1950, by the Soviet scientist Andrey Sakharov, the stumbling block has been finding magnets powerful enough to hold the plasma but efficient enough to require less energy than the fusion reaction itself creates. (Otherwise what, ultimately, is the point?) That’s where superconductors come in. “If you can figure out how to build a magnet out of this material, the material itself stops being the limit, and instead the engineering becomes the limit,” says Mumgaard. “And if you can do that, you can make small fusion reactors, and you can do that without having to have some big scientific breakthrough in plasma physics.” A working fusion reactor has been perennially out of reach. But a proper magnet made of superconductors offers a new path.

Inventor, n. A person who makes an ingenious arrangement of wheels, levers, and springs and believes it civilization.

—Ambrose Bierce, 1911

Mumgaard cofounded Commonwealth Fusion Systems in 2018, almost as soon as he deemed the technology ready for his uses. Throughout graduate school, he had kept a close eye on the progress of thin-film manufacturing, the process needed to shape superconducting ceramics into useful forms. For other thin-film products like solar panels and silicon computer chips, enormous economies of scale led to constant technological improvements. If the same could be done for ReBCO, then it could be worked and coiled into extremely powerful electromagnets—powerful enough, perhaps, to be fusion’s missing piece.

In 2021 Commonwealth will begin construction on a new headquarters campus, designed to accommodate the fabrication and testing of a two-step project. The first is a massively powerful magnet made with high-temperature superconductors. The second uses the magnet as the transformative component in a fusion reactor capable of producing more energy than it requires to operate. Called SPARC, it is especially remarkable for its relatively small size, with the entire unit taking up about the same space as a volleyball court. This is startlingly intimate in contrast with the grand scale of other fusion projects, most notably ITER, an international collaboration currently under construction in the South of France. Designed over decades, beginning in 1988, ITER’s tokamak doesn’t reap all the benefits of high-temperature superconductors, requiring its magnet to compensate with sheer size in order to generate enough force to contain the plasma. Its budget has stretched into the tens of billions of dollars (the precise number is a matter of considerable dispute). Despite beginning construction in 2013, its first fusion reaction is not expected until 2035.

For Mumgaard (and perhaps all of us), that is too late to be useful. On their current timeline, he and his colleagues hope to press a button that kick-starts a fusion reaction sometime before 2025. When they press it again, the reaction will stop. The net energy created will be the first glimmering light of—quite literally—a human-made sun. “We think it will be a really, really big deal,” he says with quiet understatement. “It’s already a big deal—fusion made all the atoms and all of us, right? It’s the most powerful thing in the universe: 99.999—all the way out—percent of all the energy in the universe starts with fusion.” Once that sun can be turned on and off at will, the challenge will be to extract its excess energy and package it into something resembling a power plant. Then build them, fast.

 

It is easy to be defeatist about climate change. The political dread of the past several years, the wildfires, the thawing permafrost, the unrelenting virus, make the specter of extreme disruption familiar. But to take just one slice of humanity’s challenge—halting the burning of fossil fuels—it seems possible, with a squint, to see this as a solution. Mumgaard likes to imagine the fuel truck backing up to the plant on the first day and pumping the totality of the hydrogen isotopes, delivered as a gas, required for its entire working life. No coal trains, no tank farms, no underground pipelines of rushing hydrocarbons. In the years following, one imagines, thousands of fossil-fuel power plants that dot the planet, emitting the gases that are cooking us all, can be replaced by thousands of fusion reactors. These would not displace the enormous progress already made with renewables but compensate for their limitations. To create a reliable electric grid exclusively out of variable generation—that is, dependent on the wind or the sun—means building the excess capacity required to cover calm or cloudy days, along with enormous batteries to cover the gaps and the nights. It is prohibitively inefficient. We need “dispatchable” power—easy on, easy off. Fusion could be that, and then some, eliminating (or minimizing) the need for nuclear fission, as well as fossil fuels.

A Cotton Gin, by Franklin G. Weller, c. 1870. The J. Paul Getty Museum, Los Angeles. Digital image courtesy the Getty’s Open Content Program.

If fusion works, the world will change. The hydrocarbons that established society as we know it will be replaced with a clean power source—a magic, at least for a moment, as if out of science fiction. (In the original Star Trek, first aired in 1966, the power source was described as a kind of fusion.) What then unravels? And what newly forms? Freshwater would be more abundant if the energy to desalinate it were, too. Absent the cost of power, more products and materials could be recycled economically, opening up new possibilities for circular material flows. It’s a tantalizing vision, and also totalizing. Fossil fuels have long been the scaffold of the global economy, but what happens when they are plausibly removed from the equation? “The two fundamental markets are human creativity and energy,” Mumgaard says—it’s the second of ten mantras by which Commonwealth guides itself. (Number one: “Energy and a livable environment are both fundamental human rights.”)

But what’s startling to me is that, as implausible as it sounds, transcendent, magical inventions have happened before. Fusion has always emitted the peculiar energy of magical thinking, even stranger as it gets closer to reality. What happens when the magical becomes real? When people fly through the air, when books light up with infinite knowledge, when energy is limitless? At the least there is hope in this space of possibility—this wide gap where reality and fantasy join. It is an abiding thrill of technology that there are moments when it asks us to look away from its externalities—from the conflict minerals, the emissions, the noise, the ad-supported models that suck our attention, destroy the livelihood of culture producers, and warp politics—and toward a miraculous, or just plain livable, future.

We are living on the knife edge of all that now. Will the vaccines beat the virus? Will the technological alternatives to fossil fuels come fast enough to limit the suffering—and perhaps the ultimate apocalypse—of climate change? The promise of technology is the possibility of all its little innovations, those miracles that become commonplace, to amount to the grand magic of ongoing life on earth.



from Hacker News https://ift.tt/3rBnrAx

For the Love of Troff (2020) [pdf]


from Hacker News https://ift.tt/2UgHsRN

A 1982 chess computer plays itself by mechanically moving the pieces [video]


from Hacker News https://www.youtube.com/watch?v=UxLd_wiGMA4

Handling errors with grace (and sometimes without it)

Reading Time: 6 minutes
On addressing the dangers that waddle into your way, whether you like it or not.

A year and a half ago I picked up Crafting Interpreters. The book guides programmers through building their own interpreters for the Lox programming language. I started writing a blog series about my progress. You can see all the posts so far right here.

In the last post, we dove into chapter 7 to write an interpreter with the visitor pattern. That’s all well and good when our Lox code is written correctly. But how do we handle the cases where it isn’t?

At the interpretation stage, two things can go sideways:

  1. Lox encounters an operator that it does not know about, like $ or #.
  2. Lox encounters an operator it knows about, but with operands that don’t work with that operator, like "two heads" > "one".

In our interpreter code, we have to account for those. We can do that by executing the appropriate checks at the appropriate times and then bubbling up any issues through the Lox run loop.

1. Catching Issues

Remember in the last post, when we looked at this method exemplifying the interpreter’s expression evaluation?

@Override
public Object visitUnaryExpr(Expr.Unary expr) {
        Object right = evaluate(expr.right);

        switch (expr.operator.type) {
            case MINUS:
                checkNumberOperand(expr.operator, right); //we'll get to this later
                return -(double) right;
            case BANG:
                return !isTruthy(right); //we'll also get to this later
        }

        ...
}

I promised to get back to checkNumberOperand() later. As it turns out, that method looks like this:

    private void checkNumberOperand(Token operator, Object operand) {
        if (operand instanceof Double) return;
        throw new RuntimeError(operator, "Operand must be a number.");
    }

We have a similar check embedded inside the interpreter method visitBinaryExpr() to check that both operands are numbers when the operator is something that operates exclusively on numbers, like one of these: > < >= <= * / -
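
That two-operand variant looks something like this (a sketch mirroring checkNumberOperand() above; the book names it checkNumberOperands()):

    private void checkNumberOperands(Token operator, Object left, Object right) {
        if (left instanceof Double && right instanceof Double) return;
        throw new RuntimeError(operator, "Operands must be numbers.");
    }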

A common lament about error handling is that there isn’t really a good way to completely separate it from the operational code. It sort of has to embed itself in each expression that could receive an input that produces an undesirable output.

At some point, most authors and teachers who talk about software architecture come around to this in one way or another. I’ve heard two solutions that I find myself returning to time and time again:

  1. Avdi Grimm in Confident Ruby: Accept dealing with nonnominal inputs as an expected and necessary part of your workflow, and group it as much as possible at the beginning of a function’s work rather than littering it all over the code (blog post here). That’s what you see exemplified above: we check that the operand is a number as soon as we know that the operator is a minus sign. This is also the idea behind the guard clause pattern.
  2. Michael Feathers, Edge Free Programming: Find ways to turn nonnominal inputs into nominal, expected inputs to keep your code as streamlined as possible (blog post here with examples of how to do this). That might mean, to use one of Bob’s examples, taking an operator Lox doesn’t use (such as + as a unary operator, a la +123 being the same as 123) and adding it to the operators we do parse, so that surfacing the message “Lox does not support the + operator for just one operand” becomes the default behavior, evaluated the same way as any other operator (see the sketch just after this list).
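
To make that second approach concrete, here’s a minimal sketch of my own (not from the book or this series) of what it might look like in visitUnaryExpr(), assuming the parser has been extended to accept + as a unary operator:

@Override
public Object visitUnaryExpr(Expr.Unary expr) {
        Object right = evaluate(expr.right);

        switch (expr.operator.type) {
            case MINUS:
                checkNumberOperand(expr.operator, right);
                return -(double) right;
            case BANG:
                return !isTruthy(right);
            case PLUS:
                // The formerly nonnominal input is now just another nominal case:
                // it flows through the same switch as every supported operator,
                // producing a deliberate message instead of tripping a guard clause.
                throw new RuntimeError(expr.operator,
                        "Lox does not support the + operator for just one operand.");
        }

        ...
}

Same RuntimeError machinery as before; the check has simply dissolved into the normal shape of the code.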

Though the second of those solutions sounds preferable to the first, I wanted to start with the more familiar example. In practice, I find that I don’t usually arrive at ‘good’ executions of the second solution until I have noticed patterns in how I’m using the first, or until I come back to a case where I used the first solution after having some time to think.

2. Surfacing Issues

So now we know there’s a problem. What do we do?

In the prior post, I mentioned that I would show where we’re calling the interpreter’s evaluate method in this post, since it includes error handling. Here it is, also on the interpreter:

void interpret(Expr expression) {
    try {
        Object value = evaluate(expression);
        // Print the result; stringify() is the book's helper that renders a Lox value as text.
        System.out.println(stringify(value));
    } catch (RuntimeError error) {
        Lox.runtimeError(error);
    }
}

So, when checkNumberOperand() throws that RuntimeError, this will catch it and call our own runtimeError method in the Lox runner:

    static void runtimeError(RuntimeError error) {
        System.err.println(error.getMessage() + "\n[line " + error.token.line + "]");
        hadRuntimeError = true;
    }
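
Put together: if a user evaluates something like -"muffin", checkNumberOperand() throws, interpret() catches, and runtimeError() prints something along these lines (a hypothetical run; the line number depends on where the expression sits in the source):

    Operand must be a number.
    [line 1]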

We’re calling that interpreter method in the Lox run loop itself. Lox has a static instance of the interpreter, and when some code is run, it scans, parses, and then interprets our code with this static method:

    private static void run(String source) {
        Scanner scanner = new Scanner(source);
        List<Token> tokens = scanner.scanTokens();

        // Debug output left over from the scanning chapters: print every token.
        for (Token token : tokens) {
            System.out.println(token);
        }

        Parser parser = new Parser(tokens);
        Expr expression = parser.parse();

        // Stop if there was a syntax error.
        if (hadError) return;

        interpreter.interpret(expression); // <===== HERE!!!
    }

We don’t catch the error here in run(). Instead, interpret() has already caught the RuntimeError and surfaced the message you see above. Worth noting: RuntimeError is a class of our own, which is different from (and inherits from) Java’s RuntimeException. We did this to maintain control over what the message is and to make it clear that this is an error coming from Lox, not the underlying implementing language:

class RuntimeError extends RuntimeException {
    final Token token;

    RuntimeError(Token token, String message) {
        super(message);
        this.token = token;
    }
}

Like many of the Gang of Four patterns or, say, the implementation of a decorator in Python, the process of throwing and catching errors as they rise through the call stack just kinda looks gnarly until you get familiar with it. At least, that has been my experience with it.

The next chapter of Crafting Interpreters is called “Statements and State.” I haven’t looked at it yet, but I’m excited about it. Expect more soon.

If you liked this post, you might also like…

The rest of the Crafting Interpreters series

This post about structural verification (I’m just figuring you’re into objectcraft, so)

This post about why use, or not use, an interface (specifically in Python because of the way Python does, or rather doesn’t exactly do, interfaces)




from Hacker News https://ift.tt/3gWG7sT

The City, the Sparrow, and the Tempestuous Sea


This article is part of Birdopolis, a three-part series that explores the lives of birds that are, by accident or design, spending more time in urban environments. The other stories are “The Gull Next Door” and “Honolulu: A Seabird’s Surprising Five Star Destination”.

For insights into the urban lives of another group of coastal birds—gulls—watch the recording of our webinar “Birdopolis: Coastal Birds at Home in the City.”

Today, the name of the park preserve—Idlewild—seems aspirational. Snugged up against the northwest border of John F. Kennedy International Airport in the New York borough of Queens, population 2.2 million, the green space, approximately one-fifth the size of nearby Central Park, is a remaining sliver of the expansive wetlands that once carpeted the Atlantic coastline. It’s also some of the only habitat remaining for one of North America’s endangered birds, the saltmarsh sparrow. And in their little patch of wild, female saltmarsh sparrows are hardly idle. Undeterred as jets fly overhead every five minutes or so, females flit and dip through the grasses, hurriedly building nests so that they can lay their eggs and raise them to fully fledged chicks, all within one lunar cycle.

I’ve joined Alex Cook, a biologist at the State University of New York College of Environmental Science and Forestry (SUNY ESF), and her team of four in Idlewild this July 2019 morning, already steamy at 5 a.m., to learn more about their work and the saltmarsh sparrow. As we load up the gear, I notice the stark contrast between our knee-high rubber boots—mine, shiny black and newly purchased; theirs, mud-caked and sun-bleached—and rightly predict what the day has in store. Within 100 meters of entering the wetlands, I’m breathing loudly and heavily, trying to keep up with the all-female team skirting along a barely discernible path through the sloppy mud.

With each step the mud reaches halfway up my boots. The suction feels powerful enough to pull them off my feet.

“Don’t worry,” Cook says. “It’ll go over your boots by the end of the day. It’s inevitable.”

Eventually, I free myself and carry on into the marsh.

When I looked up the saltmarsh sparrow in preparation for this trip, I knew I wouldn’t be much help in the ID department. To me, the bird looked like an LBB, or “little brown bird,” the informal name birders sometimes use for any small brownish bird that is difficult to identify. Its picture showed lots of grays and browns, streaks and spots, but the white throat and the orange “eyebrow” seemed like good clues. I am in excellent hands with Cook and her team, though. Cook’s program at SUNY ESF has been studying the birds since 2011 as part of the Saltmarsh Habitat and Avian Research Program (SHARP), an umbrella organization that amasses data on tidal marsh birds for various research groups. And I’d certainly be excused for never having seen a bird so rare. Once, a population of 250,000 saltmarsh sparrows bred in a 1,000-kilometer-long strip of coastal wetlands from Maine to the Chesapeake Bay. Now, only an estimated 60,000 birds hang on in a few remaining pockets of breeding habitat.


A team of researchers from the State University of New York College of Environmental Science and Forestry (SUNY ESF) search for saltmarsh sparrow nests in Idlewild Park Preserve, New York City, New York. Photo courtesy of Alex Cook

In addition to having specific breeding locations, saltmarsh sparrows also have a specific breeding time frame—one that is dictated by the movement of the moon around the Earth, and the tides this celestial dance creates. Because the sparrows build their nests exclusively in coastal wetlands that are susceptible to flooding, their reproductive cycle relies on this predictable sequence—they have carved out a niche to raise their young in these coastal wetlands between the lunar flooding that occurs every 28 days on the highest of high tides.

But global warming—with accelerated sea level rise, more volatile weather conditions, and even a shift in global wind directions that floods or dries out coastal wetlands—has disturbed this delicate balance. Habitat destruction has further whittled this already small strip of suitable breeding ground down to a sliver.

All of this leads to a bird with an uncertain future. In the last two decades, the saltmarsh sparrow population has declined by 75 percent, a shockingly precipitous decrease leading scientists to believe that within 15 years the saltmarsh sparrow could join the passenger pigeon and the Carolina parakeet on the list of birds of the continental United States that have been erased from the face of the Earth forever.


Carrying bags teeming with equipment—transponders, calipers, bamboo poles, folding chairs, umbrellas, water, food, a tarp, and thin netting bundled up in threadbare plastic bags—we push through hip-high mugwort plants and reeds that stretch over our heads into the hazy sky above. The marsh smell is a strange amalgamation of swamp stench—hydrogen sulfide, methane, sulfur—and jet-fuel exhaust. Likewise, there is a peculiar discordance of noise—pulsating airplane engines mixing with the sounds of nature. We set up a field camp on a patch of damp, tamped-down grass just a few feet from a canal. The canal water is interspersed with shimmering, iridescent oil slicks that capture the rising sun in psychedelic explosions of color. Waxy white patches slowly float out to sea like cartwheeling snowflakes. The humidity is thick enough to chew. It is barely daybreak, but already oppressively hot, as if the sun were boiling the marsh. Everything is soggy. If you stand too long in one spot, you begin to sink.

Despite the pollution, the airport, and surrounding development, the Idlewild Park Preserve remains a dynamic and vibrant ecosystem, teeming with vegetation, insects, and birds. The park is part of the greater Jamaica Bay wetlands, which are a renowned haven for over 325 species of birds—nearly a third of all species found in North America. As the researchers set up the field camp, I spot bleach-white great egrets, black skimmers, cedar waxwings, yellow-crowned night-herons, and red-winged blackbirds, to name only a few.


The SUNY ESF team sets up a mist net. They will use it to catch saltmarsh sparrows and briefly retain them for measurements and banding. Photo by Joseph Quaderer

Once the roar of the jets momentarily subsides, we can hear the calls and songs of the many birds that live in the preserve. The saltmarsh sparrow call is conspicuously absent. Most birds, especially songbirds, sing to declare a territory or to attract a mate. But saltmarsh sparrows are not like most birds—they are notoriously promiscuous. Typically, avian parents pair up to take care of young birds. Occasionally, birds will have “extra pair copulation,” behavioral ecologists’ term for screwing around, but saltmarsh sparrows bring extra pair copulation to a new level—males mate with multiple females and females mate with multiple males. Because saltmarsh sparrows don’t form pairs, the males don’t need to be territorial. The females never sing, and the males’ only song is a mating call. As a result, they are naturally more muted than other species, but even those muted calls are getting quieter and quieter each year.

Wild birds are subject to a litany of assaults—attacks by domestic cats (estimated to cause between 1.3 billion and 4 billion deaths a year in the United States alone), collisions with buildings and windows (up to another billion), pollution, and habitat destruction. But nothing poses an existential threat to birds the way climate change does. In 2014, the National Audubon Society conducted a study of 588 North American bird species and concluded that more than half of them will lose over 50 percent of their current climatic range by 2080.


Idlewild Park Preserve is adjacent to the John F. Kennedy International Airport, and wildlife and human visitors in the preserve are subject to a low-flying jet every five minutes or so. Photo by Anna Peel

One of the major consequences of climate change is an acceleration of sea level rise, which currently runs at three to five millimeters per year on the eastern coast of the United States. Even though the world’s oceans are connected, sea level rise is uneven, and the eastern coast of the United States is experiencing rises considerably faster and higher than the global mean.

Historically, in salt marshes such as Idlewild, gradual sea level rise wasn’t an issue because, among other things, the marshes had the capacity to expand inland, where conditions were more accommodating. But today, artificial barriers such as roads, buildings, and dams block this natural creep.

Sea level rise also affects how often salt marshes are flooded. There are multifarious types of grasses growing in the marshes and some are better suited than others to withstand these more frequent inundations. The increased flooding is affecting the specific grass the saltmarsh sparrow prefers to nest in. Ultimately, climate change is not only reducing the available saltmarsh breeding habitat size, but also its quality.


The verdantly green vegetation carpeting the interior of the Idlewild Park Preserve is beautiful—fields of long, swaying grass sigh in the salty offshore winds of the Atlantic. To the untrained eye, the salt marsh looks homogenous, but it’s a finely tuned ecosystem calibrated to even the slightest variances in elevation. Height above sea level affects the likelihood of an area being inundated with salt water, and that in turn affects the vegetation that grows there. Mere centimeters can demarcate which areas are considered “high marsh” or “low marsh.” The vegetation the saltmarsh sparrow nests in is supremely attuned to these nuances.

The saltmeadow cordgrass (Spartina patens), a wispy, hay-like species native to the Atlantic Coast, grows in higher elevations in the marsh that are less likely to flood with storm surges. It’s the favored nesting habitat for saltmarsh sparrows, but with global warming causing more volatile weather conditions and increased flooding, the saltmeadow cordgrass is declining.


Biologist Alex Cook, lead of the SUNY ESF team, with several saltmarsh sparrow chicks briefly removed from their nest. Photo courtesy of Alex Cook

Cook and I trudge through the fields of saltmeadow cordgrass, sticking bamboo poles into the muck, and stretching thin black mist nets—finely woven nets that will be used to catch the birds—between them. Even though we’re at a relatively high elevation in the marsh, larger detritus deposited during storm surges peeks through the bursts of bright green grass: car and truck tires, some with metal rims; glass bottles—Johnnie Walker Black Label, Bud Light; and a once-black Valvoline oil container bleached gray by the sun. Throughout the marsh, the intersection of human refuse and natural bounty is omnipresent, yet perpetually jarring.

Closer to the tendrils of water—stretching like crooked fingers into the salt marsh—is the smooth cordgrass (Spartina alterniflora), a thicker, ribbonlike grass that can withstand more frequent flooding.

A few weeks later, I’ll meet with Chris Elphick, an avian specialist in the Department of Ecology and Evolutionary Biology at the University of Connecticut, in the salt marsh at Hammonasset Beach State Park to learn more about these grasses and saltmarsh sparrows, which he’s studied for 20 years. As we walk through the Connecticut salt marsh, our shadows long in the mid-July, late-afternoon sun, he tells me about a study he did in the early 2000s, in which he analyzed vegetation around the saltmarsh sparrow nests in nearly every major marsh system in Connecticut, around 60 study plots in all. When the same vegetation plots were resurveyed in 2013, everything had changed—across the board, the smooth cordgrass was more common, and the saltmeadow cordgrass was less common.

“So, the vegetation is shifting in a way that indicates the marshes are being flooded more often,” Elphick tells me.

Back in the Idlewild Park Preserve, as Cook and I walk through the marsh setting up the final mist nets, I note that the perimeter of the marsh is lined with the European common reed (Phragmites australis), which is invasive in American wetlands and is dramatically altering the sparrow’s already dwindling habitat. The hardier, more robust species, which grows to four meters in height, is not only altogether unsuitable for nesting, but it also hoards the sunlight and further decimates the grasses the saltmarsh sparrow can breed in.

“The vegetation for saltmarsh sparrows keeps getting worse and worse, but they’re still here,” Cook says as we set up the last mist net. “Either they’re adapting or they’re at their limit.”


Available vegetation dictates where saltmarsh sparrows breed, but the tidal cycles dictate when they breed. The highest spring tide, most likely to flood a salt marsh, occurs every 28 days. Saltmarsh sparrows have evolved to breed in between these flooding periods.

It takes saltmarsh sparrow chicks 22 days to develop into fully functional birds in the best-case scenario, and 27 days in the worst. Because the sparrows primarily build their nests higher up in the marsh, in the saltmeadow cordgrass, the nests typically flood only during the month’s highest tide. If the salt marshes flood every 28 days, the mothers still have enough time to lay the eggs, and the chicks have enough time to develop and fledge.

But if the nests flood more frequently, it can be calamitous.


Saltmarsh sparrow nests are particularly vulnerable to flooding. If the nest washes away on a high tide, the birds often try again—making a new nest, laying a fresh clutch of eggs, and seeing them hatch all within one lunar tide cycle. Photo courtesy of Alex Cook

During some spring tides or when there are storm surges—caused primarily by the strong winds in a hurricane or tropical storm, which have both worsened with global warming—the mother is forced to flee when the nest floods. Saltmarsh sparrow eggs can remain submerged for up to 90 minutes with no adverse consequences, but if they float out of the nest they die because there is no way for the mother to get the eggs back in the nest to properly brood them.

In 2009, Elphick worked with a graduate student whose study included locating saltmarsh sparrow nests in two Connecticut salt marshes. The student found over 200 nests at Hammonasset Beach State Park and another nearby marsh. But it was a wet and stormy year, with increased wind and flooding. The nests flooded so frequently that the saltmarsh sparrows never had enough time to raise their young. Out of over 200 nests, only about 10 saltmarsh sparrow chicks survived.

“There will be a threshold when the flooding comes too often to allow the birds time to raise their young,” Elphick says. “After that threshold is crossed, the birds may have five or six years before they’re extinct.”


Thirty minutes after setting up the mist nets, we go back into the field, collect a half-dozen captured birds, and bring them back to camp. The researchers work quickly and systematically, recording the specifics of each bird. Using calipers, they measure obscure bird body parts: tarsus, wing chord, skeletal culmen, and nalospi. They poke a needle into a vein under each bird’s wing and use capillary tubes to withdraw fire truck–red blood for DNA and mercury testing. They place the birds in thin tan stockings to weigh them. All of this information will go to SHARP to help assess the strength and migratory habits of the population and note the development of birds they had previously examined.

After a brief lunch, the team sets out upon their second task of the day—monitoring the nests. Although each nest is marked by a fluorescent orange flag, they are still surprisingly hard to spot. We remain on the paths so we don’t step on the nests or wash out the eggs with the splashing from our boots.

Cook, technician Anna Peel, and I examine nearby nests while the rest of the team checks nests in the far corners of the salt marsh. Even though the researchers monitor the nests every three or four days, it still takes a bit of searching to find them. In addition to understanding which areas of the marsh and which types of vegetation the birds are selecting for nests, the researchers want to gather information about the specifics of each nest.


The SUNY ESF research team quickly takes a suite of measurements on a saltmarsh sparrow caught in a mist net. Photo by Joseph Quaderer

First, we come across an abandoned nest. While Cook stands with a clipboard waiting to record information, Peel crouches down—her multi-pocketed field vest chock-full of scientific equipment brushing against the top of the grass—and sets up a one-meter perimeter around the empty nest, gently burrowing a measuring stick into the grass and thatch. Then she uses another measuring stick—green and barely thicker than a drinking straw—to note the average height of the grass and thatch.

We continue walking and spot another palm-sized nest nestled deep in the grass. Although we approach cautiously and gently, the mother flees before we ever see her. Furtive creatures. At active nests, the researchers move quickly to minimize the disturbance for the mother. Peel peeks into the nest to see how any fledgling birds or eggs are faring.

Saltmarsh sparrow clutches typically contain three to five eggs. This nest has one hatchling, a tiny featherless creature with its eyes still glued closed. Cook briefly removes it and colors its leg red with a harmless marker so the other researchers will know it has been discovered and recorded. She holds it for me to look at. The bird is calm and quiet. At the second active nest, she counts one egg. She plucks it from the nest and cups it in her palm to gauge its temperature.

“It’s cold,” she says. That means it’s dead. Cook is matter-of-fact when noting this. It’s not something she wants to see, but with the tide rising this morning the team needs to work quickly and there’s no time to linger. The third active nest has four live eggs. The last has three cold eggs: two in the nest and one on the ground. Cook gathers the three eggs and places them in my hand. They’re small—like oblong marbles—and beautiful, with a terrazzo-like pattern of tan and caramel splotches.

The cold eggs—which researchers are finding more and more these days—are collected in an empty Skippy peanut butter jar for another scientist in SHARP to examine. The field crew calls it the jar of death. “It smells like death,” Peel says as she unscrews the lid, arching her face away to avoid its repugnant odor.


A newly hatched saltmarsh sparrow chick. A dab of red is added to its leg, so it won’t be counted again during other studies. Photo by Joseph Quaderer

In recent years, some of the nests have been fitted with chrome-plated, nickel-sized thermometers set to automatically record the temperature every 15 minutes, like an airliner’s black box. The aggregated data from the thermometers has not yet been analyzed, but the heat tells the story of each individual nest. An active nest is almost constantly warm. The mother broods on it, keeping it at a steady temperature day and night, with only slight fluctuations for the 20 to 30 minutes when she needs to leave the nest to feed. If the thermometers show a temperature drop that lasts an hour or two, it means the nest was underwater. If the flooding isn’t too extreme, and the chicks don’t drown or the eggs don’t float away, the mother will continue to brood after the flooding subsides. But if the chicks drown or the eggs float away, the mother will abandon the nest. In studying all this information, scientists have determined that nests can flood up to nine times with the eggs still hatching, and that a variance in nest height of six centimeters can be the difference between a successful and a flooded nest.

But even if flooding does wipe out all their young, saltmarsh sparrows get right back to laying eggs. Elphick tells me that he doesn’t think female saltmarsh sparrows can somehow divine where they are in the lunar cycle when they start laying eggs. Rather, they arbitrarily start their reproductive cycle, and if it’s not in sync with the lunar cycle, the flooding will likely wipe out the nest. Undeterred, as soon as the flooding subsides, they’ll start laying eggs again—this time perfectly synchronized with the lunar cycle and with a full 28 days to raise their young. It’s a brutal yet efficient way for these tiny birds to harmonize with the moon spinning around the Earth.

At 11 a.m., Cook, Peel, and I head back to the field camp. High tide is just 30 minutes away, and we don’t want to be splashing around with big boots, creating waves that could lift the eggs out of their nests.

The water level has risen nearly a meter, and the desiccated canal behind the camp has become a teeming rivulet. A flock of Canada geese in the canal, debating whether to come ashore, watches us suspiciously as we fold chairs, wrap bungee cords around the bamboo poles, load the scientific gear into bags, and roll up the muddied tarp.


As we’re preparing to leave the salt marsh, my mind drifts back to the eight eggs we found, half of which were added to the jar of death. I wonder what the next tide will bring for these beleaguered creatures.

When I meet with Elphick in Connecticut, he tells me that the clichéd metaphor of a canary in the coal mine is very applicable to the saltmarsh sparrow’s saga.

“The point is not that the canary didn’t do well,” he says. “The point is that the miners took the canary down into the mine because the bird was more sensitive to the gases than humans, but eventually those gases would have affected the humans. The saltmarsh sparrows are more affected by climate change, but it’s a sign of what is coming for us.”


Saltmarsh sparrows are one of North America’s most threatened birds. Much of their habitat is in or near heavily urbanized areas on the Atlantic Coast and is impacted by development, habitat degradation or loss, and the consequences of sea level rise. Photo by Raymond Hennessy/Alamy Stock Photo

Birds and people share the same spaces, and they need those spaces to be healthy. In addition to providing a natural habitat for plants, fish, and other wildlife, the marshes that line the world’s waterways provide buffers against waves and sea level rise, improve water quality, and reduce the damaging effects of hurricanes by absorbing storm energy in ways that neither solid land nor open water can.

There are approximately 1,600 hectares of salt marsh remaining in New York City today, less than 20 percent of the land that existed before human intervention. Slightly over 600 hectares are owned and managed by the New York City Department of Parks and Recreation, which is acquiring additional salt marsh land from other city agencies and private owners so that existing and newly acquired salt marshes can be properly maintained through a number of measures—adding sand to increase the marsh surface elevation, restoring eroded marsh edges, and removing large-scale marine debris.

It is unclear whether these changes will save the saltmarsh sparrow. The city has rebuilt 60 hectares of salt marshes to fortify the New York City coastline and protect its residents, but, thus far, the efforts haven’t appeared to boost the saltmarsh sparrow population.


After finishing all the surveys, Cook’s team grabs the bags full of equipment and we march single file through the noticeably muddier and wetter trail back toward dry land. As we bushwhack back through the taller vegetation, I can no longer see the marsh, but I can still hear the persistent buzz of insects and the staccato calls and songs of the birds behind me. I don’t hear the saltmarsh sparrow. One rarely does. The females never sing, of course, and the males, when they do sing, emit muted wheezy trills and oddly accented syllables. They are quiet birds, but with their vastly declining populations, saltmarsh sparrows are calling out to us as loudly as they can.



from Hacker News https://ift.tt/3vMJ7M8