Tuesday, August 31, 2021

The $150M Machine Keeping Moore’s Law Alive

In 1965, Gordon Moore, an electronics engineer and one of the founders of Intel, wrote an article for the 35th anniversary issue of Electronics, a trade magazine, that included an observation that has since taken on a life of its own. In the article, Moore noted that the number of components on a silicon chip had roughly doubled each year until then, and he predicted the trend would continue.

A decade later, Moore revised his estimate to two years rather than one. The march of Moore’s law has come into question in recent years, although new manufacturing breakthroughs and chip design innovations have kept it roughly on track.

Extreme ultraviolet lithography, or EUV, uses some extraordinary engineering to shrink the wavelength of light used to make chips, and it should help continue that streak. The technology will be crucial for making more advanced smartphones and cloud computers, and also for key areas of emerging technology such as artificial intelligence, biotechnology, and robotics. “The death of Moore’s law has been greatly exaggerated,” says Jesús del Alamo, an MIT professor who works on new transistor technologies. “I think it’s going to go on for quite some time.”

Amid the recent chip shortage, triggered by the pandemic’s economic shock waves, ASML’s products have become central to a geopolitical struggle between the US and China, with Washington making it a high priority to block China's access to the machines. The US government has successfully pressured the Dutch not to grant the export licenses needed to send the machines to China, and ASML says it has shipped none to the country.

“You can’t make leading-edge chips without ASML’s machines,” says Will Hunt, a research analyst at Georgetown University studying the geopolitics of chipmaking. “A lot of it comes down to years and years of tinkering with things and experimenting, and it’s very difficult to get access to that.”

Each component that goes into an EUV machine is “astonishingly sophisticated and extraordinarily complex,” he says.

Making microchips already requires some of the most advanced engineering the world has ever seen. A chip starts out life as a cylindrical chunk of crystalline silicon that is sliced into thin wafers, which are then coated with layers of light-sensitive material and repeatedly exposed to patterned light. The parts of silicon not touched by the light are then chemically etched away to reveal the intricate details of a chip. Each wafer is then chopped up to make lots of individual chips.

Shrinking the components on a chip remains the surest way to squeeze more computational power out of a piece of silicon: electrons pass more efficiently through smaller components, and packing more of them onto a chip increases its capacity to compute.

Lots of innovations have kept Moore’s law going, including novel chip and component designs. This May, for instance, IBM showed off a new kind of transistor, sandwiched like a ribbon inside silicon, that should allow more components to be packed into a chip without shrinking the resolution of the lithography.

But reducing the wavelength of light used in chip manufacturing has helped drive miniaturization and progress from the 1960s onwards, and it is crucial to the next advance. Machines that use visible light were replaced by those that use near-ultraviolet, which in turn gave way to systems that employ deep-ultraviolet in order to etch ever smaller features into chips.

A consortium of companies including Intel, Motorola, and AMD began studying EUV as the next step in lithography in the 1990s. ASML joined in 1999 and, as a leading maker of lithography technology, sought to develop the first EUV machines. EUV allows a much shorter wavelength of light (13.5 nanometers) to be used, compared with the 193 nanometers of deep ultraviolet, the previous lithographic method.

But it has taken decades to iron out the engineering challenges. Generating EUV light is itself a big problem. ASML’s method involves directing high-power lasers at droplets of tin 50,000 times per second to generate high-intensity light. Lenses absorb EUV frequencies, so the system uses incredibly precise mirrors coated with special materials instead. Inside ASML’s machine, EUV light bounces off several mirrors before reflecting off the reticle, the patterned mask, which moves with nanoscale precision to align the layers on the silicon.

“To tell you the truth, nobody actually wants to use EUV,” says David Kanter, a chip analyst with Real World Technologies. “It's a mere 20 years late and 10X over budget. But if you want to build very dense structures, it’s the only tool you’ve got.”



from Hacker News https://ift.tt/3gGWOYs

Automatic Extraction of Secrets from the Transistor Jungle Using Laser-Assisted [pdf]


from Hacker News https://ift.tt/3mQNEw4

Show HN: Compile for Arm at native speeds in an emulated system

Prebuilt images available on Docker Hub

https://hub.docker.com/repository/docker/valkmit/aws-graviton2-on-intel

What is this?

An Arm filesystem based on an AWS Graviton 2 system. All binaries are emulated under Qemu, EXCEPT for a custom toolchain built with Buildroot. The toolchain is run natively, allowing for up to 20x faster compile times than when emulated.

# file $(which bash)
/usr/bin/bash: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 3.7.0, BuildID[sha1]=03b374959d488851f8b6ef51a6a16e55eaedea98, stripped

# file $(realpath $(which aarch64-linux-gcc))
/x86_64/host/bin/toolchain-wrapper: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 3.2.0, BuildID[sha1]=9c88d4609953b73d518a95e44ebc93d642a8174e, stripped

Usage

If you wish to build the Docker image from source, run ./build.sh. At a minimum, you must provide the rootfs as a .tar.gz with -r; the script will download Buildroot if it is not provided with -b. For a prebuilt image, see the Docker Hub page linked above.
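
For example, a from-source build might look something like this (the tarball names are purely illustrative; only the -r and -b flags come from the description above):

./build.sh -r graviton2-rootfs.tar.gz -b buildroot-2021.02.tar.gz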

The only requirement to build is Docker - both Qemu and the toolchain are built in Docker containers based on Debian 10.

All of the native cross-compile tools are available prefixed with aarch64-linux; for example, aarch64-linux-gcc or aarch64-linux-g++. To see how the toolchain is configured, view buildroot-graviton2-config.

Spin up an image quickly to play around with the compilers:

docker run -it --rm valkmit/aws-graviton2-on-intel
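
As a quick, hypothetical sanity check of the toolchain, a session along these lines should work; hello.c and the /work mount are my own illustration, while aarch64-linux-gcc and the expected file output come from the README above:

docker run -it --rm -v "$PWD":/work valkmit/aws-graviton2-on-intel
# inside the container, the cross compiler runs natively on the x86 host:
aarch64-linux-gcc -O2 -o /work/hello /work/hello.c
# the result should be reported as an ARM aarch64 ELF executable:
file /work/hello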

Justification

When compiling for a target architecture different from the host architecture, developers have roughly two options:

  1. Set up a cross toolchain, which has fast performance, but makes it very difficult to automatically satisfy complex dependencies. For a sufficiently large project, this could mean compiling dozens of projects by hand using the newly built cross toolchain

  2. Emulate the target system, which makes dependency satisfaction easier, since a native package manager can be used, but slows down compile times - the compiler has to be emulated, too!

This project is designed to give developers the best of both worlds - the ease of using the target system's package manager (in this case, Yum configured with all of Amazon's aarch64 repos) - and the speed of native compilers.
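
Concretely, the intended workflow is something like the sketch below; the package and source file are stand-ins, and only yum and the aarch64-linux- prefix are taken from this README:

# fetch an Arm build dependency with the target's own package manager (emulated, but fast enough)
yum install -y openssl-devel
# then build against it with the natively-running cross toolchain
aarch64-linux-gcc -O2 -o myapp myapp.c -lssl -lcrypto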

Long story short, a native cross-compiler is transplanted onto the Graviton 2 Arm filesystem. This project could probably be extended to other systems, too.

Isn't a Docker container configured with binfmt-misc superior?

Normally, yes. However, binfmt-misc currently doesn't have namespace support (though this is in the pipeline!). This means that if you choose to go the binfmt-misc route, you must run a privileged container. This project does not require a privileged Docker container.
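
For comparison, the usual binfmt-misc route looks roughly like the following; the image names and flags are the standard multiarch/qemu-user-static ones rather than anything from this project, and note the --privileged flag that this project avoids:

# register qemu interpreters via binfmt_misc (requires a privileged container)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# afterwards Arm images run transparently, but every binary, including the compiler, is emulated
docker run --rm arm64v8/debian uname -m   # prints aarch64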

Because no privileged container is required, you can use this on public build systems that do not have Arm support and expect much better performance.

Credits

Lots of back and forth with gh:MelloJello



from Hacker News https://ift.tt/3jv9tPW

Unproblematize

The essence of software engineering is solving problems.

The first impression of this insight will almost certainly be that it seems like a good thing. If you have a problem, then solving it is great!

But software engineers are more likely to have mental health problems than those who perform mechanical labor, and I think our problem-oriented world-view has something to do with that.

So, how could solving problems be a problem?


As an example, let’s consider the idea of a bug tracker.

For many years, in the field of software, any system used to track work has been commonly referred to as a “bug tracker”. In recent years, the labels have become more euphemistic and general, and we might now call them “issue trackers”. We have Sapir-Whorfed our way into the default assumption that any work that might need performing is a degenerate case of a problem.

We can contrast this with other fields. Any industry will need to track work that must be done. For example, in doing some light research for this post, I discovered that the relevant term of art in construction is typically “Project Management” or “Task Management” software. “Projects” and “Tasks” are no less hard work, but the terms do have a different valence than “Bugs” and “Issues”.

I don’t think we can start to fix this ... problem ... by attempting to change the terminology. Firstly, the domain inherently lends itself to this sort of language, which is why it emerged in the first place.

Secondly, Atlassian has been desperately trying for years to get everybody to call its bug tracker a “software development tool” where you write “stories”, and nobody does. It’s an issue tracker where you file bugs, and that’s what everyone calls it and how everyone describes what they do with it. Even they have to protest, perhaps a bit too much, that it’s “way more than a bug and issue tracker”.


This pervasive orientation towards “problems” as the atom of work does extend to any knowledge work, and thereby to any “productivity system”. Any to-do list is, at its core, a list of problems. You wouldn’t put an item on the list if you were happy with the way the world was. Therefore every unfinished item in any to-do list is a little pebble of worry.

As of this writing, I have almost 1000 unfinished tasks on my personal to-do list.

This is to say nothing of any tasks I have to perform at work, not to mention the implicit א‎0 of additional unfinished tasks once one considers open source issue trackers for projects I work on.

It’s not really reasonable to opt out of this habit of problematizing everything. This monument to human folly that I’ve meticulously constructed out of the records of aspirations which exceed my capacity is, in fact, also an excellent prioritization tool. If you’re a good engineer, or even just good at making to-do lists, you’ll inevitably make huge lists of problems. On some level, this is what it means to set an intention to make the world — or at least your world — better.

On a different level though, this is how you set out to systematically give yourself anxiety, depression, or both. It’s clear from a wealth of neurological research that repeated experiences and thoughts change neural structures. Thinking the same thought over and over literally re-wires your brain. Thinking the thought “here is another problem” over and over again forever is bound to cause some problems of its own.

The structure of to-do apps, bug trackers and the like is such that when an item is completed — when a problem is solved — it is subsequently removed from both physical view and our mind’s eye. What would be the point of simply lingering on a completed task? All the useful work is, after all, problems that haven’t been solved yet. Therefore the vast majority of our time is spent contemplating nothing but problems, prompting the continuous potentiation of neural pathways which lead to despair.


I don’t want to pretend that I have a cure for this self-inflicted ailment. I do, however, have a humble suggestion for one way to push back just a little bit against the relentless, unending tide of problems slowly eroding the shores of our souls: a positivity journal.

By “journal”, I do mean a private journal. Public expressions of positivity can help; indeed, some social and cultural support for expressing positivity is an important tool for maintaining a positive mind-set. However, it may not be the best starting point.

Unfortunately, any public expression becomes a discourse, and any discourse inevitably becomes a dialectic. Any expression of a view in public is seen by some as an invitation to express its opposite. Therefore one either becomes invested in defending the boundaries of a positive community space — a psychically exhausting task in its own right — or one must constantly entertain the possibility that things are, in fact, bad, when one is trying to condition one’s brain to maintain the ability to recognize when things are actually good.

Thus my suggestion to write something for yourself, and only for yourself.

Personally, I use a template that I fill out every day, with four sections:

  • “Summary”. Summarize the day in one sentence that encapsulates its positive vibes. Honestly I put this in there because the Notes app (which is what I’m using to maintain this) shows a little summary of the contents of the note, and I was getting annoyed by just seeing “Proud:” as the sole content of that summary. But once I did so, I found that it helps to try to synthesize a positive narrative, as your brain may be constantly trying to assemble a negative one. It can help to write this last, even if it’s up at the top of your note, once you’ve already filled out some of the following sections.

  • “I’m proud of:”. First, focus on what you personally have achieved through your skill and hard work. This can be very difficult if you are someone who has a habit of putting yourself down. Force yourself to acknowledge that you did something useful: even if you didn’t finish anything, you almost certainly made progress, and that progress deserves celebration.

  • “I’m grateful to:”. Who are you grateful to? Why? What did they do for you? Once you’ve made the habit of allowing yourself to acknowledge your own accomplishments, it’s easy to see those; pay attention to the ways in which others support and help you. Thank them by name.

  • “I’m lucky because:”. Particularly in post-2020 hell-world it’s easy to feel like every random happenstance is an aggravating tragedy. But good things happen randomly all the time, and it’s easy to fail to notice them. Take a moment to notice things that went well for no good reason, because you’re definitely going to feel attacked by the universe when bad things happen for no good reason; and they will.

Although such a journal is private, it’s helpful to actually write out the answers, to focus on them, to force yourself to get really specific.

I hope this tool is useful to someone out there. It’s not going to solve any problems, but perhaps it will make the world seem just a little brighter.



from Hacker News https://ift.tt/3kHmNzW

The Secret Codes of Lady Wroth, the First Female English Novelist

Two summers ago, I found myself face to face with a 400-year-old mystery. I was trying to escape the maze of books at Firsts, London’s Rare Book Fair, in Battersea Park. The fair was a tangle of stalls overflowing with treasures gleaming in old leather, paper and gold. Then, as I rounded a corner, a book stopped me. I felt as though I had seen a ghost—and, in a sense, I had.

Stamped onto its cover was an intricate monogram that I recognized instantly. It identified the book as the property of Lady Mary Wroth. She was a pathbreaker. A contemporary of Shakespeare in the early 17th century, Wroth was England’s first female writer of fiction. The startling thing about seeing this book was that her house in England burned down two centuries ago, and her extensive library with it; not one book was believed to exist. As a literary scholar specializing in rare books, I had seen a photograph of the monogram five years earlier on the bound leather manuscript of a play Wroth had written that was not in the library at the time of the fire. Now it appeared that the volume I was staring at—a biography of the Persian emperor Cyrus the Great—had escaped the inferno as well.

The monogram was not merely a few fancy initials, although fashionable nobles of Wroth’s period were known to adorn their books, jewelry and portraits with elaborate designs. This was more: a coded symbol, a cipher. It was unmistakable to me. Ciphers conceal meanings in plain sight and require the viewer to possess some secret knowledge, or key, to understand their meaning, one which the creator wants only a few to know. To most people, Wroth’s cipher would look like a pretty decoration.

Little known today, Wroth was notorious in her time. A noblewoman at the court of King James I, Wroth was a published author at a time when the culture demanded a woman’s silence and subservience. Queen Elizabeth I’s Master of the Revels, Edmund Tilney, went so far as to say in 1568 that a husband should “steal away [his wife’s] private will.”

This copy of Xenophon’s Cyropaedia belonged to Lady Wroth’s son. On the cover are entwined letters, a cipher, referring to her illicit love affair with his father. (Courtesy Vanessa Braganza)

But an author she was. In 1621, Wroth’s first and only printed work caused a scandal. A romance entitled The Countess of Montgomery’s Urania, often called simply the Urania, it’s the forerunner of modern novels. At nearly 600 pages, it contains more characters than War and Peace or Middlemarch, and is based largely on Wroth’s own family and acquaintances at court—some of whom were outraged to find their lives and exploits published under a veil of fiction. One aristocrat wrote a scathing invective about the impropriety of Wroth’s work. She fired back, calling him a “drunken poet” who penned “vile, railing and scandalous things” and brazenly challenged him to “Aver it to my face.” Later women novelists, such as Jane Austen, Charlotte Brontë and George Eliot, owed a historical debt to Mary Wroth’s 17th-century struggle to be heard.

Perhaps the defining point of Wroth’s life was when she fell in love with a man who was not her husband. He was William Herbert—the dashing 3rd Earl of Pembroke. Herbert had a reputation as a patron of the arts and was something of a cad. In 1609, Shakespeare dedicated his sonnets to “W.H.,” and scholars still speculate that William Herbert was the beautiful young man to whom the first 126 love sonnets are addressed.

Although we don’t know whether Wroth and Herbert’s romance began before or after her husband’s death in 1614, it continued into the early 1620s and lasted at least a few years, producing two children, Katherine and William. Wroth modeled the Urania’s main characters, a pair of lovers named Pamphilia and Amphilanthus, after herself and Herbert.

In the Urania, Pamphilia writes love poems and gives them to Amphilanthus. In real life, Wroth wrote a romantic play entitled Love’s Victory and gave a handwritten manuscript of it to Herbert. This volume, bound in fine leather, is the only other known to be marked with her cipher; designed with the aid of a bookbinder or perhaps by Wroth alone, the cipher must have been intended to remind Herbert of their love, for the jumbled letters unscramble to spell the fictional lovers’ names, “Pamphilia” and “Amphilanthus.”

Wroth’s romantic bliss was not to last. By the mid-1620s, Herbert abandoned her for other lovers. Around this time, she was at work on a sequel to the Urania. This second book, handwritten but never published, sees Pamphilia and Amphilanthus marry other people. It also introduces another character, a knight called “Fair Design.” The name itself is mysterious. To Wroth, “fair” would have been synonymous with “beautiful,” while “design” meant “creation.” Fair Design, then, was the fictionalized version of Wroth and Herbert’s son, William. The story’s secret, hinted at but never revealed, is that Amphilanthus is Fair Design’s father—and that Amphilanthus’ failure to own up to his paternity is why the boy lacks a real, traditional name.

William Herbert, 3rd Earl of Pembroke, cut a dashing figure in 17th-century England, intriguing not only Lady Wroth but also, apparently, Shakespeare. (© National Trust Images)

So, too, did William lack the validation his mother longed to see. In 17th-century England, being fatherless was as good as having no identity at all. Property and noble titles passed down from father to son. But William did not inherit his father’s lands or title. Herbert died in 1630, never having acknowledged his illegitimate children with Wroth.

The monogrammed book staring saucily back at me from a glass bookcase that day in Battersea could not have been a gift from Wroth to Herbert: It was published in 1632, two years after his death. I think Wroth intended to give her son this book, stamped with its elaborate cipher, the intertwined initials of his fictionalized mother and father. The book itself was a recent English translation of the Cyropaedia, a kind of biography of Cyrus the Great of Persia, written by the Greek scholar Xenophon in the fourth century B.C. It was a staple text for young men beginning political careers during the Renaissance, and Wroth took the opportunity to label it with the cipher, covertly legitimizing William even though his father had not. To his mother, William was the personification of Wroth’s fair design.

Although Wroth camouflaged her scandalous sex life in a coded symbol, others may have known of her hopes and dashed dreams. William’s paternity was probably an open secret. Wroth’s and Herbert’s families certainly knew about it, and so, in all likelihood, did William. The symbol’s meaning would have been legible to a small social circle, according to Joseph Black, a University of Massachusetts historian specializing in Renaissance literature. “Ciphers, or monograms, are mysterious: They draw the eye as ostentatious public assertions of identity. Yet at the same time, they are puzzling, fully interpretable often only to those few in the know.”

Wroth was a firebrand fond of secrets. She was also an obstinate visionary who lived inside her revolutionary imagination, inhabiting and retelling stories even after they ended. Writing gave her a voice that speaks audaciously across history, unfolding the fantasy of how her life should have turned out. This discovery of a book from Wroth’s lost library opens a tantalizing biographical possibility. “If this book survived,” Black says, “maybe others did as well.”

In the end, the cipher and its hidden meanings outlived its referents. William died fighting for the Royalist cause in the English Civil War in the 1640s. Wroth is not known to have written another word after Herbert’s death. She withdrew from court life and died in 1651, at the age of 63. Sometime thereafter, daughter Katherine probably gathered up some keepsakes from her mother’s house before it burned. They included the manuscript of the Urania’s sequel and William’s copy of the Cyropaedia, which survived to haunt the present and captivate a book detective one day in Battersea. As a student I lacked the means to buy Wroth’s orphaned book. But I told a Harvard curator exactly where he could find it. Today Lady Wroth’s Cyropaedia is shelved in the university’s Houghton Rare Books Library.

Hiding in Plain Sight

In early-modern Europe, ciphers expressed romance, friendship and more. Some remain mysteries to this day —By Ted Scheinman

Paying Court

(© The Trustees of the British Museum)

Hans Holbein the Younger, the German artist who served in Henry VIII’s court, created this plan for a small shield, likely when the king was romancing Anne Boleyn; the pair’s initials are joined in a lover’s knot. The image appears in Holbein’s Jewellery Book, now in the British Museum.  


This article is a selection from the September issue of Smithsonian magazine



from Hacker News https://ift.tt/2W8VUMC

Apple and Google must allow other payment systems, new Korean law declares

South Korea has passed a bill written to prevent major platform owners from restricting app developers to built-in payment systems, The Wall Street Journal reports. The bill is now expected to be signed into law by President Moon Jae-in, whose party championed the legislation.

The law comes as a blow to Google and Apple, which both require in-app purchases to flow only through their own systems, instead of outside payment processors, allowing the tech giants to collect a 30 percent cut. If the companies fail to comply with the new law, they could face fines of up to 3 percent of their South Korea revenue.

The law is an amendment to South Korea’s Telecommunications Business Act, and it could have a large impact on how Google’s Play Store and Apple’s App Store do business globally. South Korea’s National Assembly passed the bill on Tuesday.

Neither company is happy about it. The Verge reached out to both Google and Apple before the law passed. Google’s Senior Director of Public Policy responded with the following statement:

While the law has not yet been passed we worry that the rushed process hasn’t allowed for enough analysis of the negative impact of this legislation on Korean consumers and app developers. If passed, we will review the final law when available and determine how best to continue providing developers with the tools they need to build successful global businesses while delivering a safe and trustworthy experience for consumers.

An Apple spokesperson responded with a statement as well.

The proposed Telecommunications Business Act will put users who purchase digital goods from other sources at risk of fraud, undermine their privacy protections, make it difficult to manage their purchases, and features like “Ask to Buy” and Parental Controls will become less effective. We believe user trust in App Store purchases will decrease as a result of this proposal—leading to fewer opportunities for the over 482,000 registered developers in Korea who have earned more than KRW8.55 trillion to date with Apple.

Lobbyists for the two companies have reportedly argued to American officials that the Korean legislation violates a trade agreement, as it seeks to control the actions of US-based companies.

South Korea isn’t the only country that’s trying to bend American tech giants to its will. Russia requires that gadgets come pre-installed with apps made by Russian developers, and Australia is looking into regulating services like Apple Pay and Google Pay. Some in the US government have even proposed legislation similar to what was passed by South Korea. The Wall Street Journal notes that South Korea’s new legislation could end up being referenced by regulators in other countries.

Both Apple and Google have been trying to stave off such actions through changes to their store policies. Apple introduced its App Store Small Business Program, which halved Apple’s cut from developers earning less than a million dollars a year on its store. It also agreed to let developers inform their users about payment options outside the App Store, using the email addresses that users gave them. Google said that it would only take 15 percent of developers' first million dollars instead of 30 percent.

Both Apple and Google have faced legal challenges despite the changes, with the most notable coming from Epic Games. Epic argued that Apple and Google used their dominant positions to dictate what could and could not be done with their phones. While Epic’s arguments against the two companies differ, they share the same core complaint: Apple’s and Google’s dominance over their app stores. Both cases are still ongoing.



from Hacker News https://ift.tt/38sjIy6

Judge in Nokia and Apple lawsuit owned Apple stock during proceedings

A federal judge was recently found to have owned Apple stock while presiding over a case brought against the tech giant by Nokia, though the discovery is unlikely to lead to further legal action.

Apple and Nokia were embroiled in a bitter patent dispute from 2009 to 2011, with both companies filing a series of legal complaints and regulatory challenges as competition in the smartphone market came to a head. The issue was ultimately settled in June 2011, and while terms of the agreement were kept confidential, Apple was expected to make amends with a one-time payment and ongoing royalties.

According to a new court filing on Monday, a federal judge presiding over one of many scattershot legal volleys filed by Nokia owned stock in Apple when the suit was lodged in 2010. Judge William M. Conley of the U.S. District Court for the Western District of Wisconsin disclosed the potential conflict of interest in a letter to both parties dated Aug. 27.

"Judge Conley informed me that it has been brought to his attention that while he presided over the case he owned stock in Apple," writes Joel Turner, the court's chief deputy clerk. "His ownership of stock neither affected nor impacted his decisions in this case."

It is unclear how many shares Judge Conley possessed during the case, but ownership of company stock in any capacity would have required his recusal under the Code of Conduct for United States Judges.

An advisory from the Judicial Conference Codes of Conduct Committee explains that disqualifying factors should be reported "as soon as those facts are learned," even if the realization occurs after a judge issues a decision.

"The parties may then determine what relief they may seek and a court (without the disqualified judge) will decide the legal consequence, if any, arising from the participation of the disqualified judge in the entered decision," Advisory Opinion 71 reads, as relayed by Turner.

Apple and Nokia are invited to respond to Conley's disclosure by Oct. 27 should they wish to seek redress, though the companies are unlikely to take action considering the case was not a lynchpin in Nokia's overarching strategy.

In hindsight, the 2011 settlement was a favorable outcome for Nokia, whose phone business withered and was sold first to Microsoft before landing at Foxconn. Once the world's dominant cellphone manufacturer, Nokia — the corporate entity — is no longer a player in the mobile market. It has, however, licensed its name to smartphones built by HMD.



from Hacker News https://ift.tt/3kGj5Xg

Wild claims about K performance

Sometimes I see unsourced, unclear, vaguely mystical claims about K being the fastest array language. It happens often enough that I'd like to write a long-form rebuttal to these, and a demand that the people who make these do more to justify them.

This isn't meant to put down the K language! K is in fact the only APL-family language other than BQN that I would recommend without reservations. And there's nothing wrong with the K community as a whole. Go to the k tree and meet them! What I want to fight is the myth of K, which is carried around as much by those who used K once upon a time, and no longer have any connection to it, as by active users.

The points I argue here are narrow. To some extent I'm picking out the craziest things said about K to argue against. Please don't assume whoever you're talking to thinks these crazy things about K just because I wrote them here. Or, if they are wrong about these topics, that they're wrong about everything. Performance is a complicated and often counter-intuitive field and it's easy to be misled.

On that note, it's possible I've made mistakes, such as incorrectly designing or interpreting benchmarks. If you present me with concrete evidence against something I wrote below, I promise I'll revise this page to include it, even if I just have to quote verbatim because I don't understand a word of it.

When you ask what the fastest array language is, chances are someone is there to answer one of k, kdb, or q. I can't offer benchmarks that contradict this, but I will argue that there's little reason to take these people at their word.

The reason I have no measurements is that every contract for a commercial K includes an anti-benchmark clause. For example, Shakti's license says users cannot "distribute or otherwise make available to any third party any report regarding the performance of the Software benchmarks or any information from such a report". As I would be unable to share the results, I have not taken benchmarks of any commercial K. Or downloaded one for that matter. Shakti could publish benchmarks; they choose to publish a handful of comparisons with database software and none with array languages or frameworks. I do run tests with ngn/k, which is developed with goals similar to Whitney's K; the author says it's slower than Shakti but not by too much.

The primary reason I don't give any credence to claims that K is the best is that they are always devoid of specifics. Most importantly, the same assertion is made across decades even though performance in J, Dyalog, and NumPy has improved by leaps and bounds in the meantime—I participated in advances of 26% and 10% in overall Dyalog benchmarks in the last two major versions. Has K4 (the engine behind kdb and Q) kept pace? Maybe it's fallen behind since Arthur left but Shakti K is better? Which other array languages has the poster used? Doesn't matter—they are all the same but K is better.

A related theme I find is equivocating between different kinds of performance. I suspect that for interpreting scalar code K is faster than APL and J but slower than Javascript, and certainly any compiled language. For operations on arrays, maybe it beats Javascript and Java but loses to current Dyalog and tensor frameworks. Simple database queries, Shakti says it's faster than Spark and Postgres but is silent about newer in-memory databases. The most extreme K advocates sweep away all this complexity by comparing K to weaker contenders in each category. Just about any language can be "the best" with this approach.

Before getting into array-based versus scalar code, here's a simpler case. It's well known that K works on one list at a time, that is, if you have a matrix—in K, a list of lists—then applying an operation (say sum) to each row works on each one independently. If the rows are short then there's function overhead for each one. In APL, J, and BQN, the matrix is stored as one unit with a stride. The sum can use one metadata computation for all rows, and there's usually special code for many row-wise functions. I measured that Dyalog is 30 times faster than ngn/k to sum rows of a ten-million by three float (double) matrix, for one fairly representative example. It's fine to say—as many K-ers do—that these cases don't matter or can be avoided in practice; it's dishonest (or ignorant) to claim they don't exist.

I have a suspicion that users sometimes think K is faster than APL because they try out a Fibonacci function or other one-number-at-a-time code. Erm, your boat turns faster than a battleship, congratulations? Python beats these languages at interpreted performance. By like a factor of five. The only reason for anyone to think this is relevant is if they have a one-dimensional model where J is "better" than Python, so K is "better" than both.

Popular APL and J implementations interpret source code directly, without even building an AST. This is very slow, and Dyalog has several other pathologies that get in the way as well. Like storing the execution stack in the workspace to prevent stack overflows, and the requirement that a user can save a workspace with paused code and resume it in a later version. But the overhead is per token executed, and a programmer can avoid the cost by working on large arrays where one token does a whole lot of work. If you want to show a language is faster than APL generally, this is the kind of code to look at.

K is designed to be as fast as possible when interpreting scalar code, for example using a grammar that's much simpler than BQN's (speed isn't the only benefit of being simpler of course, but it's clearly a consideration). It succeeds at this, and K interpreters are very fast, even without bytecode compilation in advance.

But K still isn't good at scalar code! It's an interpreter (if a good one) for a dynamically-typed language, and will be slower than compiled languages like C and Go, or JIT-compiled ones like Javascript and Java. A compiler generates code to do what you want, while an interpreter is code that reads data (the program) to do what you want. Once the code is compiled, the interpreter has an extra step and has to be slower. This is why BQN uses compiler-based strategies to speed up execution, first compiling to bytecode to make syntax overhead irrelevant and then usually post-processing that bytecode. Compilation is fast enough that it's perfectly fine to compile code every time it's run.

A more specific claim about K is that the key to its speed is that the interpreter, or some part of it, fits in L1 cache. I know Arthur Whitney himself has said this; I can't find that now but here's some material from KX about the "L1/2 cache". Maybe this was a relevant factor in the early days of K around 2000—I'm doubtful. In the 2020s it's ridiculous to say that instruction caching matters.

Let's clarify terms first. The CPU cache is a set of storage areas that are smaller and faster than RAM; memory is copied there when it's used so it will be faster to access it again later. L1 is the smallest and fastest level. On a typical CPU these days it might consist of 64KB of data cache for memory to be read and written, and 64KB of instruction cache for memory to be executed by the CPU. When I've seen it the L1 cache claim is specifically about the K interpreter (and not the data it works with) fitting in the cache, so it clearly refers to the instruction cache.

(Unlike the instruction cache, the data cache is a major factor that makes array languages faster. It's what terms like "cache-friendly" typically refer to. I think the reason KX prefers to talk about the instruction cache is that it allows them to link this well-known consideration to the size of the kdb binary, which is easily measured and clearly different from other products. Anyone can claim to use cache-friendly algorithms.)

A K interpreter will definitely benefit from the instruction cache. Unfortunately, that's where the truth of this claim runs out. Any other interpreter you use will get just about the same benefit, because the most used code will fit in the cache with plenty of room to spare. And the best case you get from a fast core interpreter loop is fast handling of scalar code—exactly the case that array languages typically ignore.

So, 64KB of instruction cache. That would be small even for a K interpreter. Why is it enough? I claim specifically that while running a program might cause a cache miss once in a while, the total cost of these will only ever be a small fraction of execution time. This is because an interpreter is made of loops: a core loop to run the program as a whole and usually smaller loops for some specific instructions. These loops are small, with the core loop being on the larger side. In fact it can be pretty huge if the interpreter has a lot of exotic instructions, but memory is brought to the cache in lines of around 64 bytes, so that unused regions can be ignored. The active portions might take up a kilobyte or two. Furthermore, you've got the L2 and L3 caches as backup, which are many times larger than L1 and not much slower.

So a single loop doesn't overflow the cache. And the meaning of a loop is that it's loaded once but run multiple times—for array operations, it could be a huge number. The body of an interpreter loop isn't likely to be fast either, typically performing some memory accesses or branches or both. An L1 instruction cache miss costs tens of cycles if it's caught by another cache layer and hundreds if it goes to memory. Twenty cycles would be astonishingly fast for a go around the core interpreter loop, and array operation loops are usually five cycles or more, plus a few tens in setup. It doesn't take many loops to overcome a cache miss, and interpreting any program that doesn't finish instantly will take millions of iterations or more, spread across various loops.

Look, you can measure this stuff. Linux has a nice tool called perf that can track all sorts of hardware events related to your program, including cache misses. You can pass in a list of events with -e followed by the program to be run. It can even distinguish instruction from data cache misses! I'll be showing the following events:

perf stat -e cycles,icache_16b.ifdata_stall,cache-misses,L1-dcache-load-misses,L1-icache-load-misses

cycles is the total number of CPU cycles run. L1-dcache-load-misses shows L1 data cache misses and L1-icache-load-misses shows the instruction cache misses; cache-misses shows accesses that miss every layer of caching, which is a subset of those two (more detailed explanation here). icache_16b.ifdata_stall is a little fancy. Here's the summary given by perf list:

  icache_16b.ifdata_stall
       [Cycles where a code fetch is stalled due to L1 instruction cache miss]

That's just the whole cost (in cycles) of L1 misses, exactly what we want! First I'll run this on a J program I have lying around, building my old Honors thesis with JtoLaTeX.

 Performance counter stats for 'jlatex document.jtex nopdf':

     1,457,284,402      cycles:u
        56,485,452      icache_16b.ifdata_stall:u
         2,254,192      cache-misses:u
        37,849,426      L1-dcache-load-misses:u
        28,797,332      L1-icache-load-misses:u

       0.557255985 seconds time elapsed

Here's the BQN call that builds CBQN's bytecode sources:

 Performance counter stats for './genRuntime /home/marshall/BQN/':

       241,224,322      cycles:u
         5,452,372      icache_16b.ifdata_stall:u
           829,146      cache-misses:u
         6,954,143      L1-dcache-load-misses:u
         1,291,804      L1-icache-load-misses:u

       0.098228740 seconds time elapsed

And the Python-based font tool I use to build font samples for this site:

 Performance counter stats for 'pyftsubset […more stuff]':

       499,025,775      cycles:u
        24,869,974      icache_16b.ifdata_stall:u
         5,850,063      cache-misses:u
        11,175,902      L1-dcache-load-misses:u
        11,784,702      L1-icache-load-misses:u

       0.215698059 seconds time elapsed

Dividing the stall number by total cycles gives us the percentage of program time that can be attributed to L1 instruction misses.

    "J"‿"BQN"‿"Python" ≍˘ 100 × 56‿5.4‿25 ÷ 1_457‿241‿499
┌─                             
╵ "J"      3.8435140700068633  
  "BQN"    2.240663900414938   
  "Python" 5.01002004008016    
                              ┘

So, roughly 4%, 2%, and 5%. The cache miss counts are also broadly in line with these numbers. Note that full cache misses are pretty rare, so that most misses just hit L2 or L3 and don't suffer a large penalty. Also note that instruction cache misses are mostly lower than data misses, as expected.

Don't get me wrong, I'd love to improve performance even by 2%. But it's not exactly world domination, is it? And it doesn't matter how cache-friendly K is, that's the absolute limit.

For comparison, here's ngn/k (which does aim for a small executable) running one of its unit tests—test 19 in the a20/ folder, chosen because it's the longest-running of those tests.

 Performance counter stats for '../k 19.k':

     3,341,989,998      cycles:u
        21,136,960      icache_16b.ifdata_stall:u
           336,847      cache-misses:u
        10,748,990      L1-dcache-load-misses:u
        20,204,548      L1-icache-load-misses:u

       1.245378356 seconds time elapsed

The stalls are less than 1% here, so maybe the smaller executable is paying off in some way. I can't be sure, because the programs being run are very different: 19.k is 10 lines while the others are hundreds of lines long. But I don't have a longer K program handy to test with (and you could always argue the result doesn't apply to Whitney's K anyway). Again, it doesn't matter much: the point is that the absolute most the other interpreters could gain from being more L1-friendly is about 5% on those fairly representative programs.



from Hacker News https://ift.tt/3Bq6WfP

What Slime Knows

“Nothing from nothing ever yet was born.”

— Lucretius, On the Nature of Things

 

IT IS SPRING IN HOUSTON, which means that each day the temperature rises and so does the humidity. The bricks of my house sweat. In my yard the damp air condenses on the leaves of the crepe myrtle tree; a shower falls from the branches with the slightest breeze. The dampness has darkened the flower bed, and from the black mulch has emerged what looks like a pile of snotty scrambled eggs in a shade of shocking, bilious yellow. As if someone sneezed on their way to the front door, but what came out was mustard and marshmallow.

I recognize this curious specimen as the aethalial state of Fuligo septica, more commonly known as “dog vomit slime mold.” Despite its name, it’s not actually a mold—not any type of fungus at all—but rather a myxomycete (pronounced MIX-oh-my-seat), a small, understudied class of creatures that occasionally appear in yards and gardens as strange, Technicolor blobs. Like fungi, myxomycetes begin their lives as spores, but when a myxomycete spore germinates and cracks open, a microscopic amoeba slithers out. The amoeba bends and extends one edge of its cell to pull itself along, occasionally consuming bacteria and yeast and algae, occasionally dividing to clone and multiply itself. If saturated with water, the amoeba can grow a kind of tail that whips around to propel itself; on dry land the tail retracts and disappears. When the amoeba encounters another amoeba with whom it is genetically compatible, the two fuse, joining chromosomes and nuclei, and the newly fused nucleus begins dividing and redividing as the creature oozes along the forest floor, or on the underside of decaying logs, or between damp leaves, hunting its microscopic prey, drawing each morsel inside its gooey plasmodium, growing ever larger, until at the end of its life, it transforms into an aethalium, a “fruiting body” that might be spongelike in some species, or like a hardened calcium deposit in others, or, as with Stemonitis axifera, grow into hundreds of delicate rust-colored stalks. As it transitions into this irreversible state, the normally unicellular myxomycete divides itself into countless spores, which it releases to be carried elsewhere by the wind, and if conditions are favorable, some of them will germinate and the cycle will begin again.

From a taxonomical perspective, the Fuligo septica currently “fruiting” in my front yard belongs to the Physaraceae family, among the order of Physarales, in class Myxogastria, a taxonomic group that contains fewer than a thousand individual species. These creatures exist on every continent and almost everywhere people have looked for them: from Antarctica, where Calomyxa metallica forms iridescent beads, to the Sonoran Desert, where Didymium eremophilum clings to the skeletons of decaying saguaro cacti; from high in the Spanish Pyrenees, where Collaria chionophila fruit in the receding edge of melting snowbanks, to the forests of Singapore, where the aethalia of Arcyria denudata gather on the bark of decaying wood, like tufts of fresh cotton candy.

Although many species are intensely colored—orange, coral pink, or red—others are white or clear. Some take on the color of what they eat: ingesting algae will cause a few slime molds to turn a nauseous green. Physarum polycephalum, which recently made its debut at the Paris Zoo, is a bright, egg yolk yellow, has 720 sexual configurations and a vaguely fruity smell, and appears to be motivated by, among other things, a passionate love of oatmeal.

Throughout their lives, myxomycetes only ever exist as a single cell, inside which the cytoplasm always flows—out to its extremities, back to the center. When it encounters something it likes, such as oatmeal, the cytoplasm pulsates more quickly. If it finds something it dislikes, like salt, quinine, bright light, cold, or caffeine, it pulsates more slowly and moves its cytoplasm away (though it can choose to overcome these preferences if it means survival). In one remarkable study published in Science, Japanese researchers created a model of the Tokyo metropolitan area using oat flakes to represent population centers, and found that Physarum polycephalum configured itself into a near replica of the famously intuitive Tokyo rail system. In another experiment, scientists blasted a specimen with cold air at regular intervals, and found that it learned to expect the blast, and would retract in anticipation. It can solve mazes in pursuit of a single oat flake, and later, can recall the path it took to reach it. More remarkable still, a slime mold can grow indefinitely in its plasmodial stage. As long as it has an adequate food supply and is comfortable in its environment, it doesn’t age and it doesn’t die.

Here in this little patch of mulch in my yard is a creature that begins life as a microscopic amoeba and ends it as a vibrant splotch that produces spores, and for all the time in between, it is a single cell that can grow as large as a bath mat, has no brain, no sense of sight or smell, but can solve mazes, learn patterns, keep time, and pass down the wisdom of generations.

 

 

Trichia decipiens

 

How do you classify a creature such as this? In the ninth century, Chinese scholar Twang Ching-Shih referred to a pale yellow substance that grows in damp, shady conditions as kwei hi, literally “demon droppings.” In European folklore, slime mold is depicted as the work of witches, trolls, and demons—a curse sent from a neighbor to spoil the butter and milk. In Carl Linnaeus’s Species Plantarum—a book that aspires to list every species of plant known at the time (nearly seven thousand by the 1753 edition)— he names only seven species of slime molds. Among those seven we recognize Fuligo in the species he calls Mucor septicus (“rotting mucus”), which he classifies, incorrectly, as a type of fungus.

At the time, life hadn’t been studied in detail at the microscopic level, and Linnaeus’s taxonomic classifications, few of which have withstood the scrutiny of modern science, were based almost entirely on observable phenotype—essentially, how they looked to the naked eye. He placed Mucor septicus in the same genus as Mucor mucedo, because, well, they both looked like mucus. The fruiting bodies of both of these species looked like a type of fungus, and fungus looked like a type of plant.

We now call Linnaeus the “father” of taxonomy. Though he wasn’t the first to try to impose order on nature—naturalists, philosophers, and artists had constructed their own schema as far back as Aristotle—he was the first to classify our own species within his system, naming us Homo sapiens and placing us, scandalously, within the animal kingdom. That idea, that humans were “natural” beings, “Anthropomorpha” in the same order as chimpanzees and gorillas and sloths, drew the ire of Linnaeus’s fellow naturalists, whose intellectual lineage could be traced back at least to Aristotle, who had ordered the physical world along a continuum from inanimate objects through plants and then to animals. These “ladders” or “scales of ascent,” in turn, inspired the “Great Chain of Being”—the Christian worldview, central to European thought from the end of the Roman Empire through the Middle Ages, that ordered all of creation from lowest to highest, beginning with the inanimate world, through plants and animals, placing humans just below angels, and angels just below God. If anything like slime mold appeared there, it would no doubt be near the very bottom, just above dirt.

Over time, Linnaeus revised his classifications of Homo sapiens, naming “varieties” that at first corresponded to what he saw as the four geographic corners of the planet, but which became hierarchical, assigned different intellectual and moral value based on phenotypes and physical attributes. The idea that humans could and should be ordered—that some were superior to others, that this superiority had a physical as well as social component—was deeply embedded in many previous schema. But Linnaeus’s taxonomy, unlike the systems that came before, gave these prejudices the appearance of objectivity, of being backed by scientific proof. When Darwin’s On the Origin of Species was published in 1859, it was on the foundation of this “science,” which had taught white Europeans to reject the idea of evolution unless it crowned them in glory.

But the history of taxonomic classification has always been about establishing hierarchy, beginning with Linnaeus, who offered the world his binomial naming system as well as its first three taxonomic kingdoms: plants (Regnum Vegetabile), animals (Regnum Animale), and minerals (Regnum Lapideum, which Linnaeus himself later abandoned). Ernst Haeckel—biologist, artist, philosopher, and fervent disciple of Darwin—expanded Linnaeus’s model in 1866. To the plant and animal kingdoms, Haeckel added a third: Protista, for the various microscopic organisms known but not understood at that time. These included sponges and radiolaria and myxomycetes, the term Heinrich Friedrich Link had proposed for slime molds in 1833. Developments in microscope technology in the nineteenth century had given Haeckel and his fellow biologists a glimpse into the world of organisms too small to see with the naked eye, and with it, a keen interest in accounting for the evolutionary relationships of all species on Earth in ever more minute detail. Haeckel called this new science “phylogeny,” and he filled pages and pages of his works with intricately illustrated phylogenetic trees—beautiful in their execution but diabolical in their implications. In perhaps his best-known illustration, “The Pedigree of Man,” he places “man” at the highest point of a great oak, while apes, ungulates, “skull-less animals,” worms, and amoeba are lower down because he saw them as less evolved and therefore closer to the root of creation. Elsewhere he similarly categorized humanity into as many as twelve different species with different evolutionary histories— white Europeans, in his view, being the most evolved, important, and civilized.

Taxonomy has evolved in the centuries since Haeckel and Linnaeus, but much of their thinking still remains. Even if science no longer views humans as divided into different and unequal species, we continue to refer to “race” as if it were a natural, biological category rather than a social one created in service of white supremacy. The myth that humans are superior to all other species—that we are complex and intelligent in a way that matters, while the intelligence and complexity of other species does not—also exists in service to white supremacy, conferring on far too many people an imagined right of total dominion over one another and the natural world.

 

Badhamia utricularis


 

In high school I learned that humans reigned over five kingdoms: animals, plants, fungi, protists, and bacteria. We came only from ourselves; we owed one another nothing. I learned this in my parents’ church too, that the world was made for men, that every life (my own included) was under their dominion. I did not learn until college about a taxonomic category that superseded kingdom, proposed in the 1970s by biologists Carl Woese and George Fox and based on genetic sequencing, that divided life into three domains: Bacteria, Eukarya, and Archaea, a recently discovered single-celled organism that has survived in geysers and swamps and hydrothermal vents at the bottom of the ocean for billions of years.

Perhaps a limit of our so-called intelligence is that we cannot fathom ourselves in the context of time at this scale, and that so many of us fail, so consistently, to marvel at any lives but our own. I remember a recent visit to the Morian Hall of Paleontology at the Houston Museum of Natural Science. I moved with the exhibit through geologic time, beginning with trilobite fossils from more than 500 million years ago, toward creatures that become larger and more terrifying before each of five extinction events, in all of which climate change has been a factor. Each time, millions of species have disappeared from the planet, but thanks to small, simple organisms, life has somehow carried on.

 


 

The hall’s high ceilings and gentle lighting make it feel more like a contemporary art exhibit than a scientific display, and though scientists might object to this approach, for a layperson like me, it fostered wonder, and wonder has often been my antidote to despair. At the very end of the winding geologic maze, I encountered mammals and megafauna before arriving in the smallest exhibit in the entire hall, where a wall case contained the fossilized skulls of the various human lineages, mapping the web of their links and connections. So much damage has been done by the lie that this world belongs only to a few, that some lives matter more than others. The consequences of that lie have changed Earth more in a few decades than in the previous several million years. Outside, the next extinction looms.

But it is also possible to move through the exhibit in the opposite way, beginning with the urgency of the present and journeying back through time—to pass through doorways in this history that show us unexpected connections, to see the web of life spread out before us in all its astonishing diversity. Any system that claims to impose a hierarchy of value on this web is, like petri dishes and toasters and even the very idea of nature, a human invention. Superiority is not an inherent reality of the natural world.

 

Humans have been lumbering around the planet for only a half million years, the only species young and arrogant enough to name ourselves sapiens in genus Homo. We share a common ancestor with gorillas and whales and sea squirts, marine invertebrates that swim freely in their larval phase before attaching to rocks or shells and later eating their own brain. The kingdom Animalia, in which we reside, is an offshoot of the domain Eukarya, which includes every life-form on Earth with a nucleus—humans and sea squirts, fungi, plants, and slime molds that are ancient by comparison with us—and all these relations occupy the slenderest tendril of a vast and astonishing web that pulsates all around us and beyond our comprehension.

The most recent taxonomies—those based on genetic evidence that evolution is not a single lineage, but multiple lineages, not a branch that culminates in a species at its distant tip, but a network of convergences—have moved away from their histories as trees and chains and ladders. Instead, they now look more like sprawling, networked webs that trace the many points of relation back to ever more ancient origins, beyond our knowledge or capacity for knowing, in pursuit of the “universal ancestors,” life-forms that came before metabolism, before self-replication—the several-billion-year-old plasmodial blobs from which all life on Earth evolved. We haven’t found evidence for them yet, but we know what we’re looking for: they would be simple, small, and strange.

 

Willkommlangea reticulata

 

A few years ago, near a rural village in Myanmar, miners came across a piece of amber containing a fossilized Stemonitis slime mold dating from the mid-Cretaceous period. Scientists were thrilled by the discovery, because few slime mold fossils exist, and noted that the 100-million-year-old Stemonitis looks indistinguishable from the one oozing around forests today. Perhaps slime mold hasn’t evolved much in that time, they speculated. Recent genetic analyses have suggested that slime molds are perhaps as old as one or two billion years—which would make them hundreds of millions of years older than plants, and would mean they pulled themselves out of the ocean on their cellbows at a time when the only land species were giant mats of bacteria. One special ability of slime molds that supports this possibility is their capacity for cryptobiosis: the process of exchanging all the water in one’s body for sugars, allowing a creature to enter a kind of stasis for weeks, months, years, centuries, perhaps even for millennia. Slime molds can enter stasis at any stage in their life cycle—as an amoeba, as a plasmodium, as a spore— whenever their environment or the climate does not suit their preferences or needs. The only other species who have this ability are the so-called “living fossils” such as tardigrades and Notostraca (commonly known as water bears and tadpole shrimp, respectively). The ability to become dormant until conditions are more favorable for life might be one of the reasons slime mold has survived as long as it has, through dozens of geologic periods, countless ice ages, and the extinction events that have repeatedly wiped out nearly all life on Earth.

Slime mold might not have evolved much in the past two billion years, but it has learned a few things during that time. In laboratory environments, researchers have cut Physarum polycephalum into pieces and found that it can fuse back together within two minutes. Or, each piece can go off and live separate lives, learn new things, and return later to fuse together, and in the fusing, each individual can teach the other what it knows, and can learn from it in return.

Though, in truth, “individual” is not the right word to use here, because “individuality”—a concept so central to so many humans’ identities—doesn’t apply to the slime mold worldview. A single cell might look to us like a coherent whole, but that cell can divide itself into countless spores, creating countless possible cycles of amoeba to plasmodium to aethalia, which in turn will divide and repeat the cycle again. It can choose to “fruit” or not, to reproduce sexually or asexually or not at all, challenging every traditional concept of “species,” the most basic and fundamental unit of our flawed and imprecise understanding of the biological world. As a consequence, we have no way of knowing whether slime molds, as a broad class of beings, are stable or whether climate change threatens their survival, as it does our own. Without a way to count their population as a species, we can’t measure whether they are endangered or thriving. Should individuals that produce similar fruiting bodies be considered a species? What if two separate slime molds do not mate but share genetic material? The very idea of separateness seems antithetical to slime mold existence. It has so much to teach us.

 

In 1973, in a suburb of Dallas, a sudden, particularly spectacular appearance of Fuligo septica across lawns sparked a panic. Firemen blasted the plasmodia with water, breaking the creatures to pieces, but those pieces continued to slime around and grow larger. The townspeople speculated that an indestructible alien species had invaded Earth, perhaps recalling the plot of the 1958 movie The Blob starring a young Steve McQueen. Scientists arrived in the panicked neighborhood to take samples, reassuring the community that what they had experienced was just a stage in the life cycle of a poorly understood organism: “a common worldwide occurrence,” they said. “Texas scientists think backyard blob is dead,” read a headline in the New York Times.

The slime mold in my yard is also dead, I think. The aethalium is pale, hardened, and calcified, with the texture and color of a summer cast protecting a child’s broken arm, browned by a season without washing. A breath of wind arrives and black dust lifts from the slime mold’s surface, blown toward the edge of my yard, and the next one over. And the next.

Spring in Houston is the season for working in the garden. We replant our tall ornamental grasses, killed in the recent unseasonable freeze; ours are a hybrid Pennisetum species, from the family Poaceae, a large taxonomic group that also contains Zea (a genus that includes corn), Oryza (rice), Saccharum (sugar cane), and Triticum (wheat). Fungi live on and among these plants, bringing them water and nourishment through the threadlike mycelium to keep them alive and aiding their decomposition when they die. As the plants decompose, they provide the food that bacteria eat, and myxomycete amoeba prey on these bacteria when they hatch from their spores. We plunge our shovels and hands in the dirt, the living substrate—alive in ways I have only just begun to fathom. We plant the grasses, fill the holes, lay down fresh mulch. We collect our tools and retreat indoors to the comforts of our home—our refrigerated food, our instant oatmeal, our beloved air conditioning.

Days later, I am leaving my house to walk the dogs, the air hanging dankly all around me, and out of the corner of my eye I see dozens of bright coral pink beads scattered across the surface of the fresh mulch, a new species I learn is Lycogala epidendrum, “wolf’s milk slime mold.” I know very little about it, but receive this marvelous arrival in the only way I know how: we are made by, and for, one another. O

 

Lacy M. Johnson is the author of several books, including The Reckonings and The Other Side, both National Book Critics Circle Award finalists. She teaches at Rice University and is the founding director of the Houston Flood Museum.

More of Alison Pollack’s work can be seen on Instagram @Marin_mushrooms and Facebook @AlisonKPollack

 

 


from Hacker News https://ift.tt/3DdFcwS

Brooks, Wirth and Go


It’s 1975.

The programmers have come back with the FORTRAN code, now in punch card form. The cards are carried over to the mainframe with great care not to drop them. By the time they’ve been fed in, read, compiled, linked and executed by the computer, more than two weeks have passed, and the result that comes back is “[File name specification error]”. At this stage, the code has passed through a lot of hands and consumed weeks of work hours.

Meanwhile, another engineer programming in Smalltalk and Interlisp is writing and running their implementation directly against a system console. After a few seconds, they have their result.
“Ah” they go, fixing their mistake right then and there. Done.

The difference in turnaround time between these two approaches is about four orders of magnitude.

Forget about a “10X programmer”, how about a “10 000X programmer”?

As the hardware of modern computers has evolved to be hundreds of billions of times faster than the machines that put humans on the moon, these kinds of discrepancies have shrunk drastically. Gone are the days of timesharing, of waiting hours for even simple computations to return a result. Even a cellphone is powerful enough to redo every computation humanity had completed by the end of the 20th century.

Software, perhaps, hasn’t moved forward as much. One could argue that not much has happened to solve the software crisis since ALGOL 68. Perhaps even worse is how little we have (collectively) learnt from the giants of that era. I want to highlight two of these giants and the lessons they can teach us.

Brooks.

In 1964, IBM announced its most ambitious project to date: the IBM 360. The project was architected by none other than Gene Amdahl, and managed by Frederick Brooks.

It was the world’s first family of compatible, general-purpose mainframe computers, opening up the notion that computers could be reprogrammed to suit new problems instead of being replaced by newer models. The architecture of the system introduced a lot of standards that we still use today, such as 8-bit bytes, 32-bit words and more.


What is perhaps even more interesting is the project itself. The project was ... more expensive than first thought. It blew the budget by a factor of 200: from $25 million to $5 billion. For reference, the Manhattan Project was budgeted at $2 billion.

The project ran into every development and management problem you can think of.

Years afterwards, Brooks decided that the best way to answer the question "Why do software projects so often go awry?" would be to write a book about his experiences and lessons from IBM. That book was the now fabled The Mythical Man-Month.



It’s perhaps one of the best reads on software management there is.

One essay from it is ‘No Silver Bullet’, which states that:

'There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity.'

Given how quickly a modern programmer could correct their mistakes compared to their punch-card predecessors, Brooks contended that most of the remaining complexity was essential, inherent in the problem itself, and that accidental complexity had largely been solved.

That is not to say that productivity hasn’t increased since the 60s. On the contrary.

Take these examples:

  • Free/open source software.
  • Absurdly fast hardware.
  • Generalized computers.
  • Quick compilers.
  • The Internet.

Together, they have pushed our overall productivity to great heights. They’ve also re-introduced a lot of the accidental complexity that our predecessors fought so hard to remove in the first place.

(more on this later)

“Programmers aren’t quite as productive these days as they used to be”.

This notion of reducing accidental complexity to the bare minimum is the key to a lot of our problems, and there is no greater champion of this principle than Niklaus Wirth.

Wirth.

Having created PASCAL, MODULA and MODULA-2, Wirth set out to develop the OBERON family of languages in order to build his operating system on his workstation.

To say that Wirth accomplished a lot of great things in his career would be an understatement, and the examples given above are a fraction of his achievements.

He managed to execute on all these ideas by following a set of principles that can be summarized as follows:



You have to completely comprehend your idea in order to fully realize it.

'The language Oberon emerged from the urge to reduce the complexity of programming languages, of Modula in particular. This effort resulted in a remarkably concise language. The extent of Oberon, the number of its features and constructs, is smaller even than that of Pascal. Yet it is considerably more powerful.'

The man concluded that Pascal was too complicated. Pascal.

With his newfound power, he built his operating system on top of his own hardware from scratch in 12K SLOC, with a footprint of 200 kilobytes. For comparison, OSX runs on ~86M SLOC with a footprint of 3 gigabytes, built by one of the wealthiest companies in the world. Now, perhaps OSX is more feature complete than Oberon, but certainly not by a factor of ~7,000X in code size, let alone ~15,000X in footprint. Something was lost along the way.

Where Brooks’s notion of ‘No Silver Bullet’ and Wirth’s philosophy intersect is here:

'You cannot reduce the complexity of your problem by increasing the complexity of your language.'

The greater the surface area of your language, the more gusts of sand there will be to hide its essence. At some point, the needle has moved so far forward that a subset of the old becomes the new, and the cycle starts over once again.

This notion of ‘less is more’ reminds me of a quote of the same nature, from another giant:

'There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other is to make it so complicated that there are no obvious deficiencies.'~ Tony Hoare

Rejecting Wirth’s premise in theory inevitably leads to Hoare’s second option [0].

'Somewhere between Objective-C and Swift 1-5 you ended up with one framework from the past, one framework from the future, and one tangled mess in the present.'

What are the costs of taking this route?

The stones that bind us.

  • Training.

    • Learning a new operating system bound to your tech.
    • Learning a new IDE bound to your tech.
    • Learning a new framework to replace the one that already works.
    • Learning to use the new version of your old language.

    Your old skills carry years of experience, but like the Ship of Theseus they are replaced piece by piece, and there comes a point where those skills account for less and less. Experience should add value, not subtract it.
  • Hamster wheeling.

    • Previously working projects are broken after an update.
    • Other people’s previously working projects that you depend on are broken after an update.
    • Sifting through pages of documentation and StackOverflow posts that are no longer relevant.
    • Having to keep up with the news in order to anticipate your next on-call headache.

    Being forced to fix problems generated by external forces outside of your project, company, customer or continent is not helping anyone, especially not you.

    The Mess We're In by Joe Armstrong.
    Why Do We Have So Many Screwdrivers?
    The Thirty Million Line Problem.
    The Left Pad Story.
    • The trade is exceedingly hard to learn even without yaks to shave.
    • Everything but the kitchen sink is not a great way to introduce newcomers.
    • Every moment spent learning tools could have been spent getting to know the project, or learning general skills that carry over to the next one.

    Most juniors you run into are overwhelmed, confused and pressured to keep up with the constantly changing layers of clothes of the Emperor.

    How do you teach .NET to beginners?
    How it feels to learn JavaScript in 2016.
    'The tailor is canonized as the patron saint of all consultants, because in spite of the enormous fees he extracted, he was never able to convince his clients of his dawning realization that their clothes have no Emperor.'~ Tony Hoare

With the exception of the Gray Beards and perhaps kernel developers, the industry at large tends to be unaware of, to ignore, or to reject this premise. Instead, each revolution of the wheel is spun until it arrives exactly where it started, with the promise of new beginnings.

Luckily, there are exceptions. Here’s one of them.

Go.

This wonderfully tiny, famously “stuck in the 70s” language ticks all the necessary boxes to avoid most (if not all) of these issues, drawing inspiration from older languages but with a modern touch.

  • Hit the ground running.
    • Single install, no licenses/registration/sacrificial ceremony.
    • Can run on anything, even if that thing is a dusty old laptop.
    • Language is (comparatively) easy to pick up.
    • Straight up procedural programming, with sprinkles of FP and OOP.
  • No IDE coupling.
    • No need to buy licences, no need to have engineers blocked by expired licenses.
    • No need to re-train engineers to put text into a textfile. If they have decades of experience using one editor, they can use it.
    • No solution files or complicated build systems that require IDE compilation in order to work.
  • Instantly compile to a static binary.
    • No need to sit around doing nothing whilst the project compiles.
    • No need to cook yourself as all your cores spin to 100% in order to compile one kind of text into another kind of text.
    • Deploy by running a single executable.
  • If it worked ten years ago, it works now.
    • Being stuck in the 70s means no breaking changes since flared pants.
    • Batteries-included standard library for everything under the sun.
    • Every line of code is inspectable, no closed source libraries.

It was designed by Ken Thompson, Rob Pike and Robert Griesemer (a student of Wirth). The introductory book to the language was co-written by none other than Brian Kernighan (with Alan Donovan). If it’s not already apparent, this language is the spiritual successor of C.

It’s been two years since I first picked up Go, and I can’t think of anything better for general[1] software development, especially when it comes to respecting my own time and that of others. It’s one of the few languages that allows me to program freely, without having to consult the internet or prod more experienced colleagues for things that should be self-evident. There is less magic, less hiding, which yields much, much greater clarity. No surprises; ‘it just works’.
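To make that concrete, here is a minimal sketch (mine, not the author’s) of the kind of program this describes: a tiny HTTP service written against nothing but the standard library and built into a single executable with go build. The route and port below are illustrative placeholders, not anything from the article.

    package main

    // A minimal, self-contained sketch of the "batteries included" workflow:
    // one file, standard library only, built into a single executable with
    // "go build". The route and port are illustrative placeholders.

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            // No framework, no config files: just answer with the current time.
            fmt.Fprintf(w, "ok %s\n", time.Now().Format(time.RFC3339))
        })

        log.Println("listening on :8080")
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Building and deploying look the same on a dusty old laptop as on a server: go build emits one executable, and copying and running that executable is essentially the entire deployment.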

This isn’t to say that everyone feels this way; on the contrary, critique is abundant. The discussions about Go’s missing features (say, the lack of generics) have been going on for years (more than a decade by now), and I can only assume they will keep going for the foreseeable future.

In the meantime, I urge you to give it a Go. Perhaps you’ll like it.

Fredrik Holmqvist




[0]

"For none of the evidence we have so far can inspire confidence that ADA has avoided any of the problems that have afflicted other complex language projects of the past ... I believe that by careful pruning of ADA, it is still possible to select a very powerful subset that would be reliable and efficient in implementation and safe and economic in use ... that is the great strength of PASCAL, that there are so few unnecessary features and almost no need for subsets." Tony Hoare's 1980 ACM Turing Award Lecture

[1] There are of course better languages for specific types of software.



from Hacker News https://ift.tt/3k7OEsZ

Readers reply: when and why did men stop wearing hats?

Why did men stop wearing hats? I saw a video clip recently of the crowd at the 1923 FA Cup final and virtually every man’s head was covered. These days, almost no one wears a hat as a matter of course. When did this change occur – and what prompted it? Dawn Welcher, Connah’s Quay

Send new questions to nq@theguardian.com.

Readers reply

Better layering and clothes meant no need for a hat to keep you warm. MrFabJP

I think it may be to do with a move towards less formal dress. I’m hoping for a hat revival soon, for everyone! Sophie8927

Hat-wearing seemed to be inculcated into every male from an early age. At my prep school caps had to be worn out of doors at all times, and for the generation that did national service, it would be unthinkable to be seen without headgear. Youthful rebellion almost certainly put an end to that – that and the change in hair fashions. Besides which, having a hat is such a pain in the arse when you go indoors – so easy to lose it, sit on it, etc. The demise of the cloakroom doubtless sped the fashion on its way, too. CaptainGinger

It used to be something of a class signifier (flat cap for the working class, bowler hat for civil service types etc), and related to jobs with uniforms. As dress became less formal, and hair fashion became more widespread, the hat lost its cultural significance. Most of the lads at that Cup final would have the exact same short back and sides cut, or be balding. It’s an interesting topic, though, I was talking to my dad about it last week funnily enough. IDOBEEF

My mum (b.1952) said that one of the main things she noticed when JFK came on the scene was that he was shown on the television being interviewed and on the news without a hat, something very unusual at that time, and that “virtually overnight” all the men in Manchester stopped wearing hats. This led us to hypothesise that poor President Kennedy may have been knocked off by a group of irate milliners. northernslag

This is an easy one: cars.

The rise of private car ownership meant that more and more men weren’t standing around waiting for buses getting cold and wet. Plus, when you had a car – what did you do with your hat? You’d have the ludicrous situation where you’d put on your hat to go to the car – take it off, drive to work, put it on again to walk into the office, and then take it off again. So car owners gave up hats as too much faff.

Then ordinary aspiration cuts in. Even if you can’t afford a car, you’d like other people to think you did. Car owners didn’t wear hats, so if you wore one people would know you couldn’t afford one. So everyone stopped wearing hats. oldtimer1955

My father, who flew Lancaster bombers during the second world war and afterwards wore his Harris tweed trilby every day, always said that many of his forces contemporaries discarded their military issue headwear when they were demobbed and swore they would never wear a hat again. I can imagine this also applied to those who experienced national service, too, until that ended in 1963.

Combined with a freer lifestyle, TV showing US programmes where hats were for cowboys and cops, and simply changing habits, hats became for specific people and certain occasions. Maybe some satirical shows that poked fun at hat-wearing linked to social status (I am thinking of the famous “I know my place” sketch featuring Cleese, Barker and Corbett from The Frost Report in 1966 most particularly) reinforced to the new generation that hats were, quite literally, “old hat”!

Now they seem to have polarised to the highly formal and distinctly informal. My partner and I, both redheads, have hats for dogwalking and days out. I really enjoy knitting Peruvian hats with ear flaps for my friends. I suppose we now see hats as appropriate to the circumstances rather than something imposed on us by societal norms.

An interesting query today, which would have been my father’s 100th birthday. JayeKaye

My father – RAF 1944-46 – never wore a hat after demob. There’s a War Office photo of him which was taken as a publicity shot – wearing a forage cap, of course. However, as a working-class grammar schoolboy he’d had to wear a school cap throughout his schooldays – which might have attracted a bit of comment from his contemporaries – so maybe that had played a part, too. My grammar school, recently founded when I attended in the 1960s, was among the first to decide that caps weren’t part of the compulsory uniform. It was considered revolutionary at the time! richardarmstrong

The men’s fashion for longer hair from the 50s resulted in a new phenomenon: ‘Hat-hair’, defined by GQ as: “A flattened crown, paired with those distinct ridges carved like cave drawings into the sides of your skull.” The best way to avoid hat-hair is obviously not to wear a hat. The first intimation that things were heading that way was the Hat Council, which in 1952 felt the need to introduce the advertising slogan: “If you want to get ahead, get a hat.” areader10

The Hat Research Foundation (HRF), which was apparently a real thing, found that 19% of men in 1947 who didn’t wear hats said it was because they triggered the trauma of war associated with their uniforms. Maybe that’s when the decline began. ufs1968

In Madrid once the civil war had ended in 1939 a hat shop decided to advertise its wares with the slogan “Los rojos no usaban sombreros” (“the Reds didn’t wear hats”), with the clear implication that you’d better buy yourself a hat pdq or risk being suspected of having been a Red, and possibly ending up in front of a firing squad. Headgear was also a class signifier in Britain well into the 1960s: cf the cartoon working-class hero Andy Capp, and also the ludicrous spectacle of Tory politicians like MacMillan and Lord Hailsham donning cloth caps in 1963 when they realised that they might lose to Labour at the next election. AnChiarogEile

We stopped wearing hats because we didn’t need them. You need a hat if you have to walk, or ride, or sit in a freezing carriage or omnibus. From the 1960s onwards there was better (and warmer) public transport but more importantly the car became widely available. What’s more – an old-fashioned hat doesn’t fit inside a car. It’s a nuisance. goodgollymissmolly

Partly to do with the advent of antibiotics? Before the war, people knew you could realistically die of quite minor infections, and they also knew that you could realistically die of a cold if it deteriorated into pneumonia. Houses, public transport, workplaces, etc were not well heated, or not heated at all, and covering your head is one of the most important things you can do to conserve body heat and help yourself stay as healthy as possible. Wearing a hat in winter was pretty vital.

Nowadays we know that if a cold gets out of hand and turns into a bacterial lung infection, we have a medical safety net, so hat wearing in winter is not such an ingrained cultural habit. alisoncowe

As a bald old man who lives in a very cold climate and walks to run most of his errands – I think the comments are missing some of the more practical roots of hats. Hats are very effective at keeping you warm. My wife with her luxurious mane will occasionally mock my hat while we are out for a walk. She simply doesn’t understand how much heat my bald pate releases.

So it may be coincidence that hats lost attractiveness about the same time people got central heating and stopped spending near as much time outdoors. But I very much doubt it.

I also wear a hat more in warm weather now if it is very sunny. This, in contrast to cold weather, is rooted in my being informed of the harm that extended sun does to my skin and eyes.

I really think that up to a couple of hundred years ago, 90% of the population would have viewed covering your head to provide warmth or protection from the sun, as self-evident. DaveCanuk



from Hacker News https://ift.tt/3zukkyW