Thursday, June 30, 2022

The Supreme Court’s Latest Decision Is a Blow to Stopping Climate Change

The Supreme Court’s decision in the case known as West Virginia et al. v. Environmental Protection Agency et al. is a serious blow to the EPA’s ability to fight climate change—and could have dangerous repercussions beyond this case. The timing of the decision feels especially harsh, as the nation is in the throes of the “Danger Season” for hazards such as heat waves, drought, wildfires and hurricanes, all worsened by climate change.

The majority 6–3 decision sharply curtails the EPA’s authority to set standards based on a broad range of flexible options to cut carbon emissions from the power sector—options such as replacing polluting fossil fuels with cheap and widely available wind and solar power coupled with battery storage. Instead, the Court has ruled that, though the agency can still regulate carbon emissions, it must do so narrowly and set standards solely based on options available at individual power plant facilities, such as efficiency measures to improve plant-level heat rates.

This decision wrongfully precludes the agency’s authority to set robust power plant carbon pollution standards in line with today’s technologies and practices adopted on a sector-wide basis. In fact, utilities are increasingly turning to these options—although not fast enough—and many had weighed in in support of EPA power plant carbon standards. The limited approach permitted by the court ruling will constrain the ability to drive the major cuts in emissions that are necessary to meet climate goals. Had the court ruled fully in favor of the EPA—or not taken the case at all—a much more meaningful dent in power plant carbon emissions would be within reach, while also delivering much greater reductions in other dangerous co-pollutants from burning fossil fuels such as particulate matter, mercury, nitrogen oxides and sulfur dioxide.

The petitioners who brought this case include state-level political officials and coal companies who are single-mindedly determined to block climate action and perpetuate fossil fuel dependence to serve their narrow political or business interests. And as I wrote previously, there are strong grounds to argue that this case should never have been taken up by the Supreme Court in the first place because there is no rule on the books to challenge. Given the deep skepticism this Court's majority has expressed toward the authority and expertise of federal agencies, today's decision is not surprising, but it is deeply troubling nevertheless.

After years of setbacks and delays to implementing EPA power plant carbon standards, and at a time when the climate crisis is so clearly unfolding all around us, this decision flies in the face of the urgent need for deep cuts in heat-trapping emissions to protect public health and the environment. Power plants are the second-largest source of U.S. carbon emissions today. Decarbonizing the power sector is also a linchpin of economy-wide efforts to cut emissions, through electrification of energy use for transportation, industrial purposes and in residential and commercial buildings. While clean energy progress is definitely underway, it is not happening fast enough or on the scale necessary to limit the threat from climate change.

The West Virginia v. EPA decision is also harmful in a broader sense because it goes to the heart of federal agencies’ abilities to interpret existing laws based on the best available science, and to then set robust standards accordingly. Once Congress passes protective laws like the Clean Air Act, agencies have generally had deference to implement those laws based on the latest scientific evidence of harms caused by pollutants and options to limit those harms. With this decision, the Court has instead hamstrung that authority. This deeply concerning precedent could potentially put other important environmental and public health policies at risk too. This development has come about as part of a decades-long well-funded and coordinated strategy by industry interests and their political allies aimed at protecting polluters and undermining public health safeguards.

The Supreme Court’s decision is out of step with legal precedent because prior court rulings have given deference to agency expertise in interpreting and implementing laws passed by Congress. It is also contrary to what the latest science shows is necessary and does not reflect the full potential to reduce heat-trapping emissions from the power sector using widely available and cost-effective technologies. As Justice Elena Kagan notes in the dissenting opinion, “Whatever else this Court may know about, it does not have a clue about how to address climate change.... The Court appoints itself—instead of Congress or the expert agency—the decision-maker on climate policy. I cannot think of many things more frightening.”

Despite this deeply harmful and ideologically motivated ruling, the EPA’s authority and responsibility to curtail heat-trapping emissions still stands. The EPA must now act promptly to propose and finalize as robust a set of power plant carbon standards as possible within the scope it has.

Congress, too, must act quickly to pass the months-long stalled budget reconciliation bill—with critical climate and energy components. That legislation must include tax credits to help advance renewable energy and electric vehicles; investments that will help communities become more resilient to climate change, especially low-income communities and communities of color that bear a disproportionate brunt of impacts; and strong labor and environmental justice provisions. Companies must also step up and do their part—net zero pledges on distant timelines mean little without concrete actions to make deep, absolute near-term cuts in emissions. The country needs a strong suite of policies at the federal, state and local level, across every sector of the economy, to deliver on its commitment to cut its heat-trapping emissions 50 to 52 percent below 2005 levels by 2030. Every hindrance, every delay, is deeply problematic given the urgency highlighted by the latest science.

With this decision, this Supreme Court has willfully made it much more difficult to make meaningful progress on climate change. Meanwhile global carbon emissions continue to rise at an alarming rate, sharply rebounding from the brief dip during the first year of the COVID-19 pandemic. Atmospheric concentrations of heat-trapping emissions are on a relentless upward trajectory, as is the increase in global average temperatures. There is no time to waste. As the Intergovernmental Panel on Climate Change stated in its recent report, “Any further delay in concerted anticipatory global action on adaptation and mitigation will miss a brief and rapidly closing window of opportunity to secure a livable and sustainable future for all.”

Unlike the Court’s ultraconservative majority, most people in the U.S. recognize the harm being wrought by climate change and want strong policies to address it. This decision is a warning that going forward, securing desperately needed progress on urgent priorities such as climate change will require an engaged and informed electorate and the protection of elections and voting rights. Rooting out the fossil fuel industry’s corrupting influence on our democracy is also vital. We must hold our policy makers’ feet to the fire and be willing to speak up in every venue—from corporate shareholder meetings to public utility commission hearings—where decisions about the future of our planet are being made.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.



from Hacker News https://ift.tt/ajFp14Z

Ask HN: My product isn't making any money, and I have rent and food to pay for

Comments

from Hacker News https://ift.tt/9FYuDmn

Meta slashes hiring plans, girds for 'fierce' headwinds

June 30 (Reuters) - Facebook-owner Meta Platforms Inc (META.O) has cut plans to hire engineers by at least 30% this year, CEO Mark Zuckerberg told employees on Thursday, as he warned them to brace for a deep economic downturn.

"If I had to bet, I'd say that this might be one of the worst downturns that we've seen in recent history," Zuckerberg told workers in a weekly employee Q&A session, audio of which was heard by Reuters.

Meta has reduced its target for hiring engineers in 2022 to around 6,000-7,000, down from an initial plan to hire about 10,000 new engineers, Zuckerberg said.


Meta confirmed hiring pauses in broad terms last month, but exact figures have not previously been reported.

In addition to reducing hiring, he said, the company was leaving certain positions unfilled in response to attrition and "turning up the heat" on performance management to weed out staffers unable to meet more aggressive goals.

"Realistically, there are probably a bunch of people at the company who shouldn't be here," Zuckerberg said.

"Part of my hope by raising expectations and having more aggressive goals, and just kind of turning up the heat a little bit, is that I think some of you might decide that this place isn't for you, and that self-selection is OK with me," he said.

The social media and technology company is bracing for a leaner second half of the year, as it copes with macroeconomic pressures and data privacy hits to its ads business, according to an internal memo seen by Reuters on Thursday.

The company must "prioritize more ruthlessly" and "operate leaner, meaner, better executing teams," Chief Product Officer Chris Cox wrote in the memo, which appeared on the company's internal discussion forum Workplace before the Q&A.

"I have to underscore that we are in serious times here and the headwinds are fierce. We need to execute flawlessly in an environment of slower growth, where teams should not expect vast influxes of new engineers and budgets," Cox wrote.

The memo was "intended to build on what we've already said publicly in earnings about the challenges we face and the opportunities we have, where we're putting more of our energy toward addressing," a Meta spokesperson said in a statement.

The guidance is the latest rough forecast to come from Meta executives, who already moved to trim costs across much of the company this year in the face of slowing ad sales and user growth.

Tech companies across the board have scaled back their ambitions in anticipation of a possible U.S. recession, although the slide in stock price at Meta has been more severe than at competitors Apple (AAPL.O) and Google (GOOGL.O).

The world's biggest social media company lost about half its market value this year, after Meta reported that daily active users on its flagship Facebook app had experienced a quarterly decline for the first time.

Its austerity drive comes at a tricky time, coinciding with two major strategic pivots: one aimed at re-fashioning its social media products around "discovery" to beat back competition from short-video app TikTok, the other an expensive long-term bet on augmented and virtual reality technology.

In his memo, Cox said Meta would need to increase fivefold the number of graphics processing units (GPUs) in its data centers by the end of the year to support the "discovery" push, which requires extra computing power for artificial intelligence to surface popular posts from across Facebook and Instagram in users' feeds.

Interest in Meta's TikTok-style short video product Reels was growing quickly, said Cox, with users doubling the amount of time they were spending on Reels year over year, both in the United States and globally.

Some 80% of the growth since March came from Facebook, he added.

That user engagement with Reels could provide a key route to bolster the bottom line, making it important to boost ads in Reels "as quickly as possible," he added.

Chief Executive Mark Zuckerberg told investors in April that executives viewed Reels as "a major part of the discovery engine vision," but at the time described the short video shift as a "short-term headwind" that would increase revenue gradually as advertisers became more comfortable with the format.

Cox said Meta also saw possibilities for revenue growth in business messaging and in-app shopping tools, the latter of which, he added, could "mitigate signal loss" created by Apple-led privacy changes.

He said the company's hardware division was "laser-focused" on successfully launching its mixed-reality headset, code-named "Cambria," in the second half of the year.


Reporting by Katie Paul; Editing by Kenneth Li, Peter Henderson and Lisa Shumaker




from Hacker News https://ift.tt/Ex9DGcb

The Water Babies: Professor Ptthmllnsprts versus Old Bones

Richard Owen and Thomas Henry Huxley Inspecting a Water-Baby. Illustration by Linley Sambourne, from The Water-Babies, 1885. Alamy.

Born into an illustrious scientific family, the future biologist Julian Huxley was a precocious child. At the age of five he came across a caricature by the eminent illustrator Linley Sambourne in the enormously successful children’s fairytale, The Water-Babies, written by Charles Kingsley. First published in 1863 with only two plates, it had sold so well that Sambourne was commissioned to produce a lavish new edition. The central character is an ill-treated chimney sweep called Tom, who falls into a river and is transformed into a water baby. During a long series of encounters with other children and teachers living beneath the surface, he eventually learns to follow the codes of Christian morality – the book’s major theme. In the illustration Julian recognised his square-jawed grandfather, Thomas Henry Huxley, brandishing a magnifying glass as he peers at a small naked boy imprisoned in a flask of liquid. Next to him is a balding scientist in a check jacket – Huxley’s arch rival, Richard Owen, an obstinate man renowned for nurturing enmities. A museum-based expert on fossils, Owen coined the word dinosaur, yet adamantly refused to accept that human beings had evolved from apes.

‘Dear Grandpater’, Julian wrote, ‘Have you seen a Waterbaby? Did you put it in a bottle? Did it wonder if it could get out? Could I see it some day?’ The elderly scientist replied diplomatically but enigmatically to these questions: posed innocently, they nevertheless went right to the core of the scientific quandaries addressed by Kingsley in his book.

 

Science and religion

As in previous centuries, there was no straightforward, black-and-white opposition between religion and science. Kingsley was a cleric and a naturalist who endorsed evolution by natural selection, while Owen was an eminent scientist who opposed Darwinism yet believed that God superintended the development of the living world. Some of his most famous discoveries were of extinct creatures, even though, according to his strict interpretations of the Bible, God initially created the living world just as it is now.

Sambourne and Kingsley both knew Huxley, Charles Darwin’s self-appointed publicity agent who is now best known for confronting Bishop ‘Soapy Sam’ Wilberforce during a fierce debate at Oxford about Darwinian evolution. Wilberforce had reportedly asked sarcastically whether Huxley had descended from an ape on his grandmother’s or grandfather’s side. Still only an undergraduate, his victim promptly retorted that he would rather be related to an ape than to an intellectually dishonest man. 

In his long allegorical narrative, Kingsley imagined his own small son voicing similar concerns to Julian Huxley’s. The fictionalised boy argued that, if Tom – the water baby of Kingsley’s title – really had been found, surely someone would have ‘put it into spirits [or] cut it into two halves, poor dear little thing, and sent one to Professor Owen, and one to Professor Huxley, to see what they would each say about it’. Kingsley reassured his young readers that Huxley was ‘a very kind old gentleman’, who would have kept Tom alive and petted him. Nevertheless, in The Water-Babies he fondly parodied Huxley as the foolish Professor Ptthmllnsprts – Put-them-all-in-spirits – who believed that anatomical analysis would yield the secrets of life.

Kingsley and Huxley admired each other’s intellect but were fundamentally opposed on the subject of religion. Whereas Kingsley was a devout Anglican clergyman, Huxley invented the term agnosticism to cover his own position, maintaining that, without definitive evidence, he lacked the certainty to be either an atheist or a Christian. After Huxley’s son died of scarlet fever Kingsley was his friend’s main source of comfort. He never convinced the grieving man to put his faith in God. 

‘Dinner in the Iguanodon Model, at the Crystal Palace, Sydenham’, London Illustrated News, 7 January 1854. Alamy.

In The Water-Babies, Kingsley derides the value of Huxley’s logical rationality for solving the world’s mysteries. At one level a fairy story, the book is laced with multiple themes, although Kingsley stated his central message clearly: ‘Your soul makes your body, just as a snail makes its shell.’ Much as he appreciated the value of objective scientific research, Kingsley attached great significance to nature’s wonder, creating his book to refute the materialist view that life can exist independently of a spiritual God. 

 

A celebrated cleric

When he wrote The Water-Babies, Kingsley was Regius Professor of History at Cambridge, but – as is apparent from his evocative accounts of the creatures and landscapes Tom encounters – he was also a keen geologist and zoologist. Uniquely among theologians, Kingsley immediately supported the theory of evolution by natural selection when it was proposed by Darwin in On the Origin of Species in 1859. Gratified but surprised, Darwin opportunistically incorporated his approving comments in all future editions of the book, anonymising Kingsley as a ‘celebrated cleric’. According to Kingsley’s singular interpretation, Darwin’s theory relied on the noble concept of a deity who had ‘created primal forms capable of self development’. This far-seeing God had made living beings that could improve themselves, so there was no need for repeated divine interventions to fill in the gaps between different types of living creature.

Victorian families often read books aloud to one another and Kingsley evidently assumed that helpful adults would be on hand to clarify the obscurer points of his topical jokes. In an aside to his young readers he points out that, as a logical rationalist, Professor Ptthmllnsprts should recognise the inherent impossibility of proving a universal negative. How can you be sure that some strange creature does not exist just because you have never seen it? As the learned professor wanders along a rocky shore with Tom’s subaquatic friend Ellie, she asks the professor how he can possibly know that there is no such thing as a water baby. Forced into a logical cul-de-sac, Ptthmllnsprts resorts to a most unsatisfactory reply: ‘Because there ain’t.’ Turning away, he begins angrily poking among the weeds, but is horrified to see that Tom has become tangled up in his net. Instead of welcoming the living proof he has demanded, he retreats into outright denial and throws the water baby back into the water, ignoring the children’s protests. 

 

Of moas and men

Although he refused to accept that people had evolved from apes, Owen was a renowned fossil expert, celebrated for his ability to reconstruct an entire extinct creature from a small fragment. In 1839, as a young unknown anatomist, he concluded that the broken shaft of a thigh bone sent to him from New Zealand had originally belonged to a sluggish struthious bird (the technical term for ‘ostrich-like’). Despite scathing criticisms Owen persevered, distributing 100 copies of his article to rouse interest among naturalists. Within a few years, he had been sent so many bones that he was able to identify more than a dozen species of vanished moa birds. In a striking photograph a massive skeleton dwarfs, yet strangely mirrors, its human discoverer with his concave face, large eyes and thin, stooped body.

Owen was the first to provide definitive evidence that these giant birds had once existed in New Zealand and he became fondly known as ‘Old Bones’. According to Maori tradition, moas had brightly coloured necks, sported a crest on their heads and devoured unsuspecting forest travellers. Even though there was no confirmed report of a European sighting, before the islands had been thoroughly explored it was tempting to imagine that a few might survive in remote regions. It seemed that these giant birds had only very recently become extinct. Owen later bought for the British Museum a specimen from New Zealand bearing skin, ligaments and grayish-brown feathers. Some moa bones bear marks that might have been made with iron blades; he blamed human beings for hunting the birds so heavily that they could not survive. European travellers may well have been the last people to dine on moa.

Richard Owen and a moa skeleton, 1878. Alamy.

By coining the word dinosaur, Owen helped to bolster an evolutionary theory in which he did not believe. Infuriatingly for him, it was his arch-rival Gideon Mantell who had been the first to unearth a few mysterious teeth in a quarry in 1822, which he claimed belonged to exceptionally large relatives of iguanas (subsequently called iguanodons to incorporate the Greek word for ‘tooth’). Two years later William Buckland – a clergyman and palaeontologist based in Oxford – made a still more momentous discovery of some gigantic fossil bones and so became the first person to describe what is now known as a dinosaur. But it was Owen who later supplied the label. Interpreted charitably, Owen was trying to make sense of all the fossil reptiles that had so far been dug up by bracketing them within a single group of dinosaurs. Viewed more cynically, he was an ambitious young career scientist keen to attack Mantell, an older doctor who worked on fossils in his spare time.

 

Champion chomper

Buckland never did get the opportunity to sample moa flesh, but he was an extremely adventurous Victorian gourmet who chomped his way through an extraordinary variety of animals. Although after one of these gastronomic experiments he refused to eat mole again, he regularly served his guests with delicacies such as panther steaks and mice on toast. Then, after the meal, he would reveal that the strange oval objects set inside the glass tabletop were sliced coprolites (fossilised dinosaur faeces). 

A more orthodox menu was supplied for a bizarre seven-course dinner party held as a publicity stunt to celebrate the New Year of 1854. This banquet took place at Crystal Palace inside the mould that was being prepared to produce a model iguanodon. Owen took his place at the head of a table with some 20 guests surrounded by pink and white drapery. When completed, the iguanodon contained 600 bricks and 100 feet of iron hooping, as well as tiles, stone and 38 casks of cement, although Owen – who had been excluded from the decision-making – repeatedly and tactlessly stressed that there was no evidence for the prominent horn on its snout. Over the next half-century more than a million visitors a year came to admire this massive beast and its 32 antediluvian companions, a supposedly realistic group assembled in Crystal Palace Park on the outskirts of London, where they can still be admired today.

 

Hippopotamus on the brain

Over many years Huxley and Owen confronted each other in savage debates that were fuelled by personal animosity, but which also represented contrasting approaches to the processes of evolution. Symbolically, their protracted row hinged on one tiny anatomical feature – a fold in the layers at the base of the brain called the hippocampus minor. With his reputation for inferring big conclusions from minute scraps of bony evidence, Owen insisted that this feature was found only in human brains. For him, the hippocampus minor provided clinching evidence against human beings having primate ancestors. But many other scientists were reluctant to dismiss evolution by natural selection on such an apparently flimsy basis. Huxley repeatedly accused him of ‘lying & shuffling’ and of dishonestly ignoring a host of other observed features.

In The Water-Babies, Kingsley parodied Owen’s position succinctly but ruthlessly. Your appearance and your behaviour are irrelevant, he sarcastically informed his Victorian audience; ‘the one true, certain, final and all-important difference between you and an ape is, that you have a hippopotamus major in your brain, and it has none’. But he also mocked Huxley’s avatar, Professor Ptthmllnsprts, for adopting a similarly narrow-minded approach and seeking to prove Owen wrong by focusing on anatomic features of non-human primates. Other differences are more important, Kingsley argued, ‘such as being able to speak, and make machines, and know right from wrong, and say your prayers …’.

Charles Darwin must have felt both disconcerted and delighted to find this theological naturalist in his camp. The subtleties of The Water-Babies may have been beyond the grasp of many children, but – as Julian Huxley exemplifies – the book and its illustrations provided a wonderfully effective piece of propaganda for the subversive doctrines of Darwinism. 

 

Patricia Fara is an Emeritus Fellow of Clare College, Cambridge. Her latest book is Life after Gravity: The London Career of Isaac Newton (Oxford University Press, 2021). 



from Hacker News https://ift.tt/FMHupJr

Minecraft content creator Technoblade has died

Minecraft streamer and content creator Technoblade has died following an extended battle with stage four cancer, his family announced today in a public statement and video. 

Techno previously revealed he had been diagnosed with cancer last August and had been sporadically updating his community on his situation while continuing to receive treatment and create content on YouTube.

In the video, posted on June 30, Techno’s father narrated a final, posthumous message titled “so long nerds,” written for such an occasion. It was filled to the brim with the creator’s signature dry and often dark humor, but it also acted as one final ‘thank you’ to everyone who supported the Minecraft star over the years.

“Thank you all for supporting my content over the years. If I had another 100 lives, I think I would choose to be Technoblade again every single time. As those were the happiest years of my life,” Techno said.

“I hope you guys enjoyed my content and that I made some of you laugh. I hope you all go on to live long, prosperous, and happy lives because I love you guys. Technoblade out.”

In his message, Techno, who reveals his name is actually Alex, talks about how the money from merch and other “sell-out” pushes over the last year is being used to send his siblings to college, if they want to go, and thanks his viewers for giving him such happy moments—all while a slideshow of images shows him at various points of his treatment.

Following Techno’s final message, his father details how he originally planned to write and record the video himself prior to his passing, but his deteriorating health and other factors kept it from happening. According to his father, he wrote the message from bed and passed away around eight hours later. 

“He was the most amazing kid anyone could ever ask for,” Technoblade’s father said. “I miss Technoblade. Thank you to all of you, for everything. You meant a lot to him.”

Technoblade’s family will continue operating his merch store, with all proceeds from orders being donated to his preferred charity, The Sarcoma Foundation of America, as they want to “continue spreading his message.” This includes, in the most Technoblade fashion ever, launching a “so long nerds” collection celebrating his life.



from Hacker News https://ift.tt/RziHlGS

Doom Builder

What is Doom Builder?

Doom Builder is an advanced 3D map editor for Doom and games based on the Doom engine, such as Heretic, Hexen and Strife. This editor is highly extensible for the different game engines of the Doom community. Doom Builder introduced the 3D editing mode in the Doom community and is still the leading editor for Doom levels today. See the screenshots page for a quick look at this editor or see the downloads page for the real deal. You can find extensions at the plugins page.

Forks / sourceports

Since the first release of Doom Builder there has been great enthusiasm from other community members for continued development and the addition of more advanced features specialized to certain Doom sourceports (forks). One such branch (Doom Builder X) focuses on continuous improvements while keeping the editor in its classic fashion and generic for all Doom sourceports. Another branch (Ultimate Doom Builder) has completely specialized the editor for the GZDoom sourceport and has added tools to edit special GZDoom features such as dynamic lighting and 3D floors. Similarly, the Doom Builder 64 branch has added features specific to the Doom 64 game. Please check out these flavors of Doom Builder editors; they are worth your time and can help you get the most out of your mapping project!

Doom, that shooter from 1993

Despite the age of this game, it is still very popular and the best-known first-person shooter. Doom has a large community of players, map authors and even mod authors. Making maps for this game is relatively simple, and yet it allows for a great gameplay experience. Especially with the addition of scripting in the maps, it allows for interesting puzzles that rival even today's next-gen games! The community has spawned several new Doom engines that have remarkable new features to play around with in your map (and you can use those in Doom Builder!). Click the links on the right of this website for your starting points into the Doom community.



from Hacker News http://doombuilder.com/

Severe thermal throttling discovered in Apple's M2 MacBook Pro



from Hacker News https://twitter.com/VadimYuryev/status/1542188250697039872

Linux Kernel Module written in Scratch (a visual programming language for kids)

Ready to have your mind blown… just a little? Check this out:

“What am I looking at?” you ask.

On the left is Scratch — a visual programming tool, primarily geared towards kids, to help with learning concepts of coding. Instead of typing out your code, you drag and drop blocks of programming logic into place, snapping them together like a jigsaw puzzle.

In this case, instead of a programming tutorial or a simple children’s game, the Scratch project is an actual Linux Kernel Module.

On the right is some output from the Linux Kernel Log.

Those messages in the Linux Kernel Log were put there by that Linux Kernel Module (using the “printk” function) built in Scratch.

That’s right. It is now possible to build a functioning Linux Kernel Module entirely in a visual programming tool intended for kids. Because this is 2022, and we deserve to have some fun.
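For reference, this is roughly what the conventional, hand-written version of such a module looks like in C — log a message via printk when the module is loaded and another when it is unloaded. This is a generic sketch assuming a standard kernel build setup; the names here are placeholders, not taken from the Scratch project:

```c
// Minimal "hello" kernel module: prints to the kernel log on load/unload.
// Build against the kernel headers with the usual kbuild Makefile;
// view the output with `dmesg`. Names below are illustrative placeholders.
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    // Runs when the module is inserted (e.g. via insmod)
    printk(KERN_INFO "hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    // Runs when the module is removed (e.g. via rmmod)
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal printk example module");
```

The Scratch version expresses the same init/exit/printk structure with drag-and-drop blocks, which scratchnative then lowers to compilable code.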

This bit of (awesome) madness is made possible thanks to the “scratchnative” project, which takes a Scratch project and converts it to C++. Thus opening up some truly ridiculous possibilities (such as creating this kernel module or even writing a whole gosh darned operating system).





from Hacker News https://ift.tt/lL9BGrE

Ask HN: Can I see your cheatsheet?

Comments

from Hacker News https://ift.tt/CsxOeYf

Clang IR (CIR): A New IR for Clang

Clang IR (CIR)

Clang IR (CIR) is a new IR for Clang. The RFC on LLVM's Discourse goes in depth on the project's motivation, status and design choices.

The source of truth for CIR is found at https://github.com/facebookincubator/clangir.

The main branch contains a stack of commits, occasionally rebased on top of LLVM upstream, tracked in latest-upstream-llvm branch.


Getting started

Git repo

$ git clone https://github.com/facebookincubator/clangir.git llvm-project

Remote

Alternatively, one can just add remotes:

$ cd llvm-project
$ git remote add fbi git@github.com:facebookincubator/clangir.git
$ git checkout -b clangir fbi/main

Building

To enable CIR-related functionality, add mlir and clang to the CMake list of enabled projects and do a regular LLVM build.

... -DLLVM_ENABLE_PROJECTS="clang;mlir;..." ...

See the steps here for general instructions on how to build LLVM.

For example, building and installing CIR enabled clang on macOS could look like:

$ CLANG=`xcrun -f clang`
$ INSTALLDIR=/tmp/install-llvm

$ cd llvm-project/llvm
$ mkdir build-release; cd build-release
$ /Applications/CMake.app/Contents/bin/cmake -GNinja \
 -DCMAKE_BUILD_TYPE=Release \
 -DCMAKE_INSTALL_PREFIX=${INSTALLDIR} \
 -DLLVM_ENABLE_ASSERTIONS=ON \
 -DLLVM_TARGETS_TO_BUILD="X86" \
 -DLLVM_ENABLE_PROJECTS="clang;mlir" \
 -DCMAKE_CXX_COMPILER=${CLANG}++ \
 -DCMAKE_C_COMPILER=${CLANG} ../
$ ninja install

Check for cir-tool to confirm all is fine:

$ /tmp/install-llvm/bin/cir-tool --help

Running tests

Tests are an important part of preventing regressions and covering new feature functionality. There are multiple ways to run CIR tests.

The more aggressive (slower) one:

CIR-specific test targets, using ninja:

$ ninja check-clang-cir
$ ninja check-clang-cir-codegen

Using lit from build directory:

$ cd build
$ ./bin/llvm-lit -a ../clang/test/CIR

How to contribute

Any change to the project should be made via GitHub pull requests; anyone is welcome to contribute!


Documentation

Operations

cir.alloca (::mlir::cir::AllocaOp)

Defines a scope-local variable

Syntax:

operation ::= `cir.alloca` $type `,` `cir.ptr` type($addr) `,` `[` $name `,` $init `]` attr-dict

The cir.alloca operation defines a scope-local variable.

Initialization style must be one of:

  • uninitialized
  • paraminit: alloca to hold a function argument
  • callinit: Call-style initialization (C++98)
  • cinit: C-style initialization with assignment
  • listinit: Direct list-initialization (C++11)

The result type is a pointer to the input’s type.

Example:

// int count = 3;
%0 = cir.alloca i32, !cir.ptr<i32>, ["count", cinit] {alignment = 4 : i64}

// int *ptr;
%1 = cir.alloca !cir.ptr<i32>, cir.ptr <!cir.ptr<i32>>, ["ptr", uninitialized] {alignment = 8 : i64}
...

Attributes:

Attribute MLIR Type Description
type ::mlir::TypeAttr any type attribute
name ::mlir::StringAttr string attribute
init ::mlir::cir::InitStyleAttr initialization style
alignment ::mlir::IntegerAttr 64-bit signless integer attribute whose minimum value is 0

Results:

Result Description
addr CIR pointer type

cir.binop (::mlir::cir::BinOp)

Binary operations (arith and logic)

Syntax:

operation ::= `cir.binop` `(` $kind `,` $lhs `,` $rhs  `)` `:` type($lhs) attr-dict

cir.binop performs the binary operation according to the specified opcode kind: [mul, div, rem, add, sub, shl, shr, and, xor, or].

It requires two input operands and has one result, all types should be the same.

%7 = binop(add, %1, %2) : i32
%7 = binop(mul, %1, %2) : i8

Traits: SameOperandsAndResultType, SameTypeOperands

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
kind ::mlir::cir::BinOpKindAttr binary operation (arith and logic) kind

Operands:

Operand Description
lhs any type
rhs any type

Results:

Result Description
result any type

cir.brcond (::mlir::cir::BrCondOp)

Conditional branch

Syntax:

operation ::= `cir.brcond` $cond
              $destTrue (`(` $destOperandsTrue^ `:` type($destOperandsTrue) `)`)?
              `,`
              $destFalse (`(` $destOperandsFalse^ `:` type($destOperandsFalse) `)`)?
              attr-dict

The cir.brcond %cond, ^bb0, ^bb1 branches to ‘bb0’ block in case %cond (which must be a !cir.bool type) evaluates to true, otherwise it branches to ‘bb1’.

Example:

  ...
    cir.brcond %a, ^bb3, ^bb4
  ^bb3:
    cir.return
  ^bb4:
    cir.yield

Traits: SameVariadicOperandSize, Terminator

Interfaces: BranchOpInterface, NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
cond CIR bool type
destOperandsTrue any type
destOperandsFalse any type

Successors:

Successor Description
destTrue any successor
destFalse any successor

cir.br (::mlir::cir::BrOp)

Unconditional branch

Syntax:

operation ::= `cir.br` $dest (`(` $destOperands^ `:` type($destOperands) `)`)? attr-dict

The cir.br branches unconditionally to a block. It is used to represent C/C++ goto statements and general block branching.

Example:

  ...
    cir.br ^bb3
  ^bb3:
    cir.return

Traits: Terminator

Interfaces: BranchOpInterface, NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
destOperands any type

Successors:

Successor Description
dest any successor

cir.cast (::mlir::cir::CastOp)

Conversion between values of different types

Syntax:

operation ::= `cir.cast` `(` $kind `,` $src `:` type($src) `)`
              `,` type($result) attr-dict

Apply C/C++ usual conversions rules between values. Currently supported kinds:

  • int_to_bool
  • array_to_ptrdecay
  • integral

This is effectively a subset of the rules from llvm-project/clang/include/clang/AST/OperationKinds.def; but note that some of the conversions aren’t implemented in terms of cir.cast, lvalue-to-rvalue for instance is modeled as a regular cir.load.

%4 = cir.cast (int_to_bool, %3 : i32), !cir.bool
...
%x = cir.cast(array_to_ptrdecay, %0 : !cir.ptr<!cir.array<i32 x 10>>), !cir.ptr<i32>
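To make the cast kinds concrete, here is a small C function (the name `demo_cast` is illustrative, not from the docs) exercising the constructs these casts model: subscripting a global array involves an array_to_ptrdecay, and using the loaded element in a condition involves an int_to_bool.

```c
#include <assert.h>

static int arr[3] = {0, 7, 9};

/* p[1] involves array_to_ptrdecay on arr, a pointer stride for the
   subscript, a load, and an int_to_bool cast for the condition. */
int demo_cast(void) {
    int *p = arr;        /* array_to_ptrdecay */
    if (p[1]) {          /* load + int_to_bool */
        return p[1];
    }
    return -1;
}
```

This is a sketch of the source-level constructs involved, not CIR output generated by the tool.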

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
kind ::mlir::cir::CastKindAttr cast kind

Operands:

Operand Description
src any type

Results:

Result Description
result any type

cir.cmp (::mlir::cir::CmpOp)

Compares two values and produces a boolean result

Syntax:

operation ::= `cir.cmp` `(` $kind `,` $lhs `,` $rhs  `)` `:` type($lhs) `,` type($result) attr-dict

cir.cmp compares two input operands of the same type and produces a cir.bool result. The kinds of comparison available are: [lt,gt,ge,eq,ne]

%7 = cir.cmp(gt, %1, %2) : i32, !cir.bool

Traits: SameTypeOperands

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
kind ::mlir::cir::CmpOpKindAttr compare operation kind

Operands:

Operand Description
lhs any type
rhs any type

Results:

Result Description
result any type

cir.cst (::mlir::cir::ConstantOp)

Defines a CIR constant

Syntax:

operation ::= `cir.cst` `(` custom<ConstantValue>($value) `)` attr-dict `:` type($res)

The cir.cst operation turns a literal into an SSA value. The data is attached to the operation as an attribute.

  %0 = cir.cst(42 : i32) : i32
  %1 = cir.cst(4.2 : f32) : f32
  %2 = cir.cst(nullptr : !cir.ptr<i32>) : !cir.ptr<i32>

Traits: ConstantLike

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
value ::mlir::Attribute any attribute

Results:

Result Description
res any type

cir.get_global (::mlir::cir::GetGlobalOp)

Get the address of a global variable

Syntax:

operation ::= `cir.get_global` $name `:` `cir.ptr` type($addr) attr-dict

The cir.get_global operation retrieves the address pointing to a named global variable. If the global variable is marked constant, writing to the resulting address (such as through a cir.store operation) is undefined. Resulting type must always be a !cir.ptr<...> type.

Example:

%x = cir.get_global @foo : !cir.ptr<i32>

Interfaces: NoSideEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
name ::mlir::FlatSymbolRefAttr flat symbol reference attribute

Results:

Result Description
addr CIR pointer type

cir.global (::mlir::cir::GlobalOp)

Declares or defines a global variable

Syntax:

operation ::= `cir.global` ($sym_visibility^)?
              (`constant` $constant^)?
              $sym_name
              custom<GlobalOpTypeAndInitialValue>($sym_type, $initial_value)
              attr-dict

The cir.global operation declares or defines a named global variable.

The backing memory for the variable is allocated statically and is described by the type of the variable.

The operation is a declaration if no initial_value is specified; otherwise it is a definition.

The global variable can also be marked constant using the constant unit attribute. Writing to such constant global variables is undefined.

Symbol visibility is defined in terms of MLIR’s visibility, and C/C++ linkage types are still TBD.

Example:

// Public and constant variable with initial value.
cir.global public constant @c : i32 = 4;
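The C equivalent of the cir.global/cir.get_global pair is sketched below (the function name `read_c` is an assumption for illustration): a constant global, plus a function that takes its address and loads through it.

```c
#include <assert.h>

/* A constant global like the cir.global example; taking its address
   corresponds to cir.get_global, and reading it to cir.load. */
static const int c = 4;

int read_c(void) {
    const int *addr = &c;   /* cir.get_global @c */
    return *addr;           /* cir.load */
}
```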

Interfaces: Symbol

Attributes:

Attribute MLIR Type Description
sym_name ::mlir::StringAttr string attribute
sym_visibility ::mlir::StringAttr string attribute
sym_type ::mlir::TypeAttr any type attribute
initial_value ::mlir::Attribute any attribute
constant ::mlir::UnitAttr unit attribute
alignment ::mlir::IntegerAttr 64-bit signless integer attribute

cir.if (::mlir::cir::IfOp)

The if-then-else operation

The cir.if operation represents an if-then-else construct for conditionally executing two regions of code. The operand is a cir.bool type.

Examples:

cir.if %b  {
  ...
} else {
  ...
}

cir.if %c  {
  ...
}

cir.if %c  {
  ...
  cir.br ^a
^a:
  cir.yield
}

cir.if defines no values and the ‘else’ can be omitted. cir.yield must explicitly terminate the region if it has more than one block.

Traits: AutomaticAllocationScope, NoRegionArguments, RecursiveSideEffects

Interfaces: RegionBranchOpInterface

Operands:

Operand Description
condition CIR bool type

cir.load (::mlir::cir::LoadOp)

Load value from memory address

Syntax:

operation ::= `cir.load` (`deref` $isDeref^)? $addr `:` `cir.ptr` type($addr) `,`
              type($result) attr-dict

cir.load reads a value (lvalue-to-rvalue conversion) given an address backed by a cir.ptr type. A unit attribute deref can be used to mark the resulting value as used by another operation to dereference a pointer.

Example:


// Read from local variable, address in %0.
%1 = cir.load %0 : !cir.ptr<i32>, i32

// Load address from memory at address %0. %3 is used by at least one
// operation that dereferences a pointer.
%3 = cir.load deref %0 : cir.ptr <!cir.ptr<i32>>
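In C terms, the two loads above look like the following sketch (the name `load_through` is illustrative): the first load produces a pointer that is immediately dereferenced (the deref case), and the second is a plain lvalue-to-rvalue load of the int.

```c
#include <assert.h>

int load_through(int **pp) {
    int *p = *pp;   /* cir.load deref: result is used to dereference */
    return *p;      /* cir.load: lvalue-to-rvalue on the int */
}
```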

Attributes:

Attribute MLIR Type Description
isDeref ::mlir::UnitAttr unit attribute

Operands:

Operand Description
addr CIR pointer type

Results:

Result Description
result any type

cir.loop (::mlir::cir::LoopOp)

Loop

Syntax:

operation ::= `cir.loop` $kind
              `(`
              `cond` `:` $cond `,`
              `step` `:` $step
              `)`
              $body
              attr-dict

cir.loop represents C/C++ loop forms. It defines 3 blocks:

  • cond: region can contain multiple blocks, terminated by regular cir.yield when control should yield back to the parent, and cir.yield continue when execution continues to another region. The region destination depends on the loop form specified.
  • step: region with one block, containing code to compute the loop step, must be terminated with cir.yield.
  • body: region for the loop’s body, can contain an arbitrary number of blocks.

The loop form (for, while, or dowhile) must also be specified, and each implies its own execution order for the loop regions.

  // while (true) {
  //  i = i + 1;
  // }
  cir.loop while(cond :  {
    cir.yield continue
  }, step :  {
    cir.yield
  })  {
    %3 = cir.load %1 : cir.ptr <i32>, i32
    %4 = cir.cst(1 : i32) : i32
    %5 = cir.binop(add, %3, %4) : i32
    cir.store %5, %1 : i32, cir.ptr <i32>
    cir.yield
  }
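For reference, here is a small C function in the shape of the while form above, with the infinite condition replaced by a bounded one so it terminates (the name `count_to` is illustrative, not from the docs): the condition maps to the cond region (cir.yield continue vs. cir.yield), and the increment to the body's load/binop/store sequence.

```c
#include <assert.h>

int count_to(int limit) {
    int i = 0;
    while (i < limit) {   /* cond region: yield continue / yield */
        i = i + 1;        /* body region: load, binop(add), store */
    }
    return i;
}
```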

Traits: NoRegionArguments, RecursiveSideEffects

Interfaces: LoopLikeOpInterface, RegionBranchOpInterface

Attributes:

Attribute MLIR Type Description
kind ::mlir::cir::LoopOpKindAttr Loop kind

cir.ptr_stride (::mlir::cir::PtrStrideOp)

Pointer access with stride

Syntax:

operation ::= `cir.ptr_stride` `(` $base `:` type($base) `,` $stride `:` type($stride) `)`
              `,` type($result) attr-dict

Given a base pointer as operand, provides a new pointer after applying a stride. Currently only used for array subscripts.

%3 = cir.cst(0 : i32) : i32
%4 = cir.ptr_stride(%2 : !cir.ptr<i32>, %3 : i32), !cir.ptr<i32>

Traits: SameFirstOperandAndResultType

Interfaces: NoSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
base any type
stride integer

Results:

Result Description
result any type

cir.return (::mlir::cir::ReturnOp)

Return from function

Syntax:

operation ::= `cir.return` ($input^ `:` type($input))? attr-dict

The “return” operation represents a return operation within a function. The operation takes an optional operand and produces no results. The operand type must match the signature of the function that contains the operation.

  func @foo() -> i32 {
    ...
    cir.return %0 : i32
  }

Traits: HasParent<FuncOp, ScopeOp, IfOp, SwitchOp, LoopOp>, Terminator

Operands:

Operand Description
input any type

cir.scope (::mlir::cir::ScopeOp)

Represents a C/C++ scope

cir.scope contains one region and defines a strict “scope” for all new values produced within its blocks.

Its region can contain an arbitrary number of blocks but usually defaults to one. The cir.yield terminator is required, though it may be left implicit in simple cases.

A result value can also be specified, though it is not currently used; together with cir.yield, it should help represent lifetime extension out of short-lived scopes in the future.

Traits: AutomaticAllocationScope, NoRegionArguments, RecursiveSideEffects

Interfaces: RegionBranchOpInterface

Results:

Result Description
results any type

cir.store (::mlir::cir::StoreOp)

Store value to memory address

Syntax:

operation ::= `cir.store` $value `,` $addr attr-dict `:` type($value) `,` `cir.ptr` type($addr)

cir.store stores a value (first operand) to the memory address specified in the second operand.

Example:

// Store a function argument to local storage, address in %0.
cir.store %arg0, %0 : i32, !cir.ptr<i32>

Operands:

Operand Description
value any type
addr CIR pointer type

cir.struct_element_addr (::mlir::cir::StructElementAddr)

Get the address of a member of a struct

The cir.struct_element_addr operation gets the address of a particular named member from the input struct.

Example:

!22struct2EBar22 = type !cir.struct<"struct.Bar", i32, i8>
...
%0 = cir.alloca !22struct2EBar22, cir.ptr <!22struct2EBar22>
...
%1 = cir.struct_element_addr %0, "Bar.a"
%2 = cir.load %1 : cir.ptr <int>, int
...
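The C equivalent of the example above is sketched below (the function name `get_a` is an assumption for illustration): take the address of member a of struct Bar, then load through it.

```c
#include <assert.h>

struct Bar { int a; char b; };

int get_a(struct Bar *s) {
    int *addr = &s->a;   /* cir.struct_element_addr %s, "Bar.a" */
    return *addr;        /* cir.load */
}
```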

Attributes:

Attribute MLIR Type Description
member_name ::mlir::StringAttr string attribute

Operands:

Operand Description
struct_addr CIR pointer type

Results:

Result Description
result CIR pointer type

cir.switch (::mlir::cir::SwitchOp)

Switch operation

Syntax:

operation ::= `cir.switch` custom<SwitchOp>(
              $regions, $cases, $condition, type($condition)
              )
              attr-dict

The cir.switch operation represents C/C++ switch functionality for conditionally executing multiple regions of code. The operand to a switch is an integral condition value.

A variadic list of “case” attribute operands and regions track the possible control flow within cir.switch. A case must be in one of the following forms:

  • equal, <constant>: equality of the second case operand against the condition.
  • anyof, [constant-list]: matches any of the values in the given list.
  • default: any other value.

Each case region must be explicitly terminated.

Examples:

cir.switch (%b : i32) [
  case (equal, 20) {
    ...
    cir.yield break
  },
  case (anyof, [1, 2, 3] : i32) {
    ...
    cir.return ...
  }
  case (default) {
    ...
    cir.yield fallthrough
  }
]
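A C switch whose control flow maps onto the example above might look like this (the name `classify` is illustrative): break corresponds to cir.yield break, the grouped cases to an anyof list, and an early return to a cir.return inside a case region.

```c
#include <assert.h>

int classify(int b) {
    int r;
    switch (b) {
    case 20:
        r = 1;
        break;           /* cir.yield break */
    case 1:
    case 2:
    case 3:              /* anyof, [1, 2, 3] */
        return 2;        /* cir.return inside a case region */
    default:
        r = -1;          /* default region */
        break;
    }
    return r;
}
```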

Traits: AutomaticAllocationScope, NoRegionArguments, RecursiveSideEffects, SameVariadicOperandSize

Interfaces: RegionBranchOpInterface

Attributes:

Attribute MLIR Type Description
cases ::mlir::ArrayAttr cir.switch case array attribute

Operands:

Operand Description
condition integer

cir.yield (::mlir::cir::YieldOp)

Terminate CIR regions

Syntax:

operation ::= `cir.yield` ($kind^)? ($args^ `:` type($args))? attr-dict

The cir.yield operation terminates regions on different CIR operations: cir.if, cir.scope, cir.switch and cir.loop.

It might yield an SSA value; the semantics of how values are yielded are defined by the parent operation. Note: there are currently no uses of cir.yield with operands; it should help represent lifetime extension out of short-lived scopes in the future.

Optionally, cir.yield can be annotated with extra kind specifiers:

  • break: breaking out of the innermost cir.switch / cir.loop semantics, cannot be used if not dominated by these parent operations.
  • fallthrough: execution falls to the next region in cir.switch case list. Only available inside cir.switch regions.
  • continue: only allowed under cir.loop, continue execution to the next loop step.

As a general rule, cir.yield must be used explicitly whenever a region has more than one block and no terminator, or within cir.switch regions that are not terminated by cir.return.

Example:

cir.if %4 {
  ...
  cir.yield
}

cir.switch (%5) [
  case (equal, 3) {
    ...
    cir.yield fallthrough
  }, ...
]

cir.loop (cond : {...}, step : {...}) {
  ...
  cir.yield continue
}

Traits: HasParent<IfOp, ScopeOp, SwitchOp, LoopOp>, ReturnLike, Terminator

Attributes:

Attribute MLIR Type Description
kind ::mlir::cir::YieldOpKindAttr yield kind

Operands:

Operand Description
args any type

Passes

-cir-lifetime-check: Check lifetime safety and generate diagnostics

This pass relies on a lifetime analysis pass and uses the diagnostics mechanism to report to the user. It does not change any code.

Options

-history : List of history styles to emit as part of diagnostics. Supported styles: {all|null|invalid}
-remarks : List of remark styles to enable as part of diagnostics. Supported styles: {all|pset}

-cir-merge-cleanups: Remove unnecessary branches to cleanup blocks

The canonicalize pass is too aggressive for CIR when the pipeline is used for C/C++ analysis. This pass runs some rewrites for scopes, merging some blocks and eliminating unnecessary control flow.



from Hacker News https://ift.tt/EIX8YLv

Wednesday, June 29, 2022

CandyCodes: Simple unique edible identifiers for authenticating pharmaceuticals

Comments




from Hacker News https://ift.tt/qptZ5hX

Development Environments

Comments

from Hacker News https://ift.tt/Chka15n

10 Years of Meteor


My experience with a pioneering JavaScript framework

Note: I am not currently affiliated, nor have I ever been at any time, with Meteor Development Group, Apollo, or Tiny. These are my own personal thoughts.

If you only got into web development in the past couple of years, you might well have never heard of Meteor.

But when it first came out in 2012 (before React or Vue even existed!) it was the hottest thing around for a while. Not only that, to this day Meteor still does many incredible things that the rest of the ecosystem has yet to catch up with.

And what’s more, even if you’ve never used Meteor I’m willing to bet you’ve used software that was influenced by it, one way or another. So read on to learn more about Meteor’s early rise, why it may have burned too bright, and the lasting impact it had on modern JavaScript development.

Before we move on: this essay is a reflection of my own personal experiences with Meteor and its community, and is not meant to be an exhaustive or impartial history of the project.

2012

Take yourself back to 2012. It’s the aftermath of the financial crisis, but Barack Obama is president of the U.S., assuring the world that there’s hope. “Call Me Maybe” is topping the charts. And tech founders are to be admired (or at most made fun of), not yet feared for their outsized power.

Deep in Silicon Valley, a team of brilliant engineers currently enrolled in YCombinator has a realization: building web apps is too damn hard!

Out of this realization is born Meteor: a unified architecture on top of Node.js that bundles client, server, and database all in one neat little package that you can run (or even deploy for free!) with a single command.

What’s more, it was all reactive and in real-time! Can you imagine never having to write a fetch() or worrying about updating your client data cache? And all that using the same familiar MongoDB API syntax both on the server and client!

Yep, it turns out Meteor was already tackling a huge chunk of the problems of modern web development in 2012.

State of GraphQL 2022 developer survey

Do you use GraphQL? If you do, and have 15 minutes to spare, please consider taking the first ever State of GraphQL community survey! We just launched it, and we think it’s going to be a huge help to figuring out which GraphQL tools and features people actually enjoy using.

A Lone Pixel-Pusher

Way back in the early ’10s I was earning a living tweaking drop shadows in Photoshop. As strange as it seems today, this was the tool of choice when it came to designing websites. This being the heyday of iPhone-inspired skeuomorphism, your paycheck depended on your ability to simulate a realistic leather texture for that call-to-action button.

Like many other UI designers though, I felt disillusioned that my beautiful creations didn’t always survive the rocky transition from Photoshop to browser in one piece. This, along with a generous dose of Silicon Valley kool-aid (I had just been to the Valley for the first time in the summer of 2011, and it made a huge impression on me), pushed me to try and strike it rich on my own rather than keep playing second fiddle to unappreciative developers.

I wasn’t starting from zero either: long before pushing pixels I had obtained my computer science diploma, even if I wasn’t quite top of my class. For example, I was once suspected of plagiarism because I was incapable of explaining what my own code did (my defense: if I was going to copy somebody else’s code, I would’ve copied something that works!). But web development had to be easier than C and Java, right?

I went through Michael Hartl’s Rails Tutorial but got overwhelmed with too many new concepts at once: routes, models, controllers, views, auth… All this combined with having to learn a new language meant Rails never quite clicked for me.

On the other hand, something that did click was jQuery. It was JavaScript, which was a “normal-looking” programming language compared to Ruby, but with all the weird DoSomethingWithTheDOM() parts massaged into a sensible API.

The first project I built with jQuery was a little tool called Patternify, which still exists to this day! I had a ton of fun, but at some point I started being frustrated once more: playing around in the browser was fine, but to achieve anything big you needed to involve a server and a database at some point.

Patternify

My first client-side only web app: a pixel pattern drawing app.

So here I was, a designer-slash-front-end-developer who loved JavaScript but was deathly afraid of servers. In other words, the perfect audience for Meteor.

Discovering Meteor

Meteor made a big splash right from the get-go, with one of the biggest Hacker News launches of all time. As one commenter put it:

My first impression of this: wow. If Meteor is all it appears to be, this is nothing short of revolutionary.

And revolutionary it was. Once my friend Sean Grove pointed me to it (he was in the same YCombinator batch as the Meteor guys), Meteor immediately appealed to me.

At the time, I was trying to build a “Hacker News for designers”, something that for whatever reason nobody has ever quite succeeded in doing. With its real-time capabilities, Meteor seemed like the perfect fit for this kind of web app. I got to work, deciding to make the project open-source in the hopes of attracting like-minded contributors, and Telescope was born.

Telescope

The default Telescope theme.

Throughout this whole process I got to know the other members of the budding Meteor community, one of whom was Tom Coleman, creator of Meteor’s first package manager (Meteor launched without any kind of first-party package manager and only supported regular npm packages much later, which seems hard to imagine today).

Spotlight

Tom Coleman


Tom founded development agency Percolate Studio with longtime collaborators Dominic Nguyen and Zoltan Olah. Later, the company was acquired by Meteor Development Group, and after that the trio went on to found Chromatic and maintain Storybook, the popular UI testing tool.

I was fresh off writing a somewhat successful design e-book so I approached Tom with a proposal: we’d combine his deep knowledge of Meteor and my design and marketing skills to write a Meteor tutorial book and establish ourselves as the de facto way to learn the framework just as it was taking off.

And guess what: we did exactly that! Discover Meteor launched in 2013 and thanks in no small part to a big boost from the Meteor team quickly became one of the main resources to learn Meteor, just as we had hoped. The book quickly became a big part of both our lives, and was also a big financial success (you can read this Gumroad case study to learn more).

Discover Meteor

Discover Meteor is no longer maintained and you can now read it for free.

We even had a podcast and a t-shirt, and I’m not sure how many other programming books can say the same. And did I mention Discover Meteor was also translated into dozens of other languages by volunteers (we made all the translations freely available)?

Fun fact: to this day I’ve still only met Tom in person once, for the Discover Meteor launch!

Things Cool Down

At first, it seemed like everything was great in Meteor land. Tom eventually went to work for the Meteor Development Group (the company behind Meteor, also known as MDG) itself, the Meteor community was growing, and the book was doing great.

Spotlight

Arunoda Susiripala


Arunoda created libraries to address many under-served aspects of Meteor's early days, including routing, server-side rendering, deployment, and performance monitoring. He went on to create Storybook and work for Vercel before founding gamedev company GDi4K.

But as the rest of the JavaScript ecosystem kept evolving (this is around the time React was gaining traction), many voices inside the Meteor community started questioning Meteor’s idiosyncratic approach.

It soon became clear that the community was splitting into two camps: those who appreciated Meteor’s clear value proposition of simplifying web development by providing an all-in-one environment, and those who wanted more openness towards the npm ecosystem to avoid being left behind by the rest of the webdev community.

Here’s the catch: the contents of the book clearly targeted the first camp, but we as programmers were firmly in the second. Were we supposed to keep advocating for practices we didn’t follow ourselves? Or should we rewrite the book from scratch to match what we actually did, in the process risking killing the very simplicity that made the book appealing in the first place?

We could never decide, and Discover Meteor slowly got out of date. And once Tom left the Meteor Development Group and stopped using Meteor altogether, it was clear that our adventure had come to an end.

7 Principles, 10 Years Later

Meteor was famous for its “7 Principles”, which made up the core of its philosophy. Let’s look back to see how they hold up 10 years later when applied to the JavaScript apps of today.

1. Data on the Wire. Don’t send HTML over the network. Send data and let the client decide how to render it.

Verdict: 👎

It’s since become apparent that you often do need to send HTML over the network, and things seem to be moving back towards handling as much as possible of your HTML compilation on the server, not on the client.

2. One Language. Write both the client and the server parts of your interface in JavaScript.

Verdict: 👍

It’s now become commonplace to reuse components on both the server and client but that wasn’t the case 10 years ago. And to this day Meteor still handles many aspects of server/client code sharing better than most other frameworks.

3. Database Everywhere. Use the same transparent API to access your database from the client or the server.

Verdict: 👎

The ability to write MongoDB commands (with some security constraints) from your browser was a big innovation at the time, but that paradigm never got popular beyond the borders of the Meteor community.

4. Latency Compensation. On the client, use prefetching and model simulation to make it look like you have a zero-latency connection to the database.

Verdict: 🤷

Also referred to as Optimistic UI, Latency Compensation is the idea of immediately showing the user the result of their actions by simulating the server response. While a nice idea in theory, in practice the complexity of updating the client cache and handling error states often makes it more trouble than it’s worth.

5. Full Stack Reactivity. Make realtime the default. All layers, from database to template, should make an event-driven interface available.

Verdict: 🤷

While many frameworks embraced client-side reactivity (starting with, well, React), Meteor’s full-stack reactivity has never been replicated quite the same way. While invaluable for chat apps, games, and other real-time apps, it can often become a performance-hungry liability for more “traditional” web apps.

6. Embrace the Ecosystem. Meteor is open source and integrates, rather than replaces, existing open source tools and frameworks.

Verdict: 👍

If anything, Meteor didn’t go far enough in this direction, since it still had its own build tool, package manager, etc. Today using the ecosystem is just a no-brainer for any new project.

7. Simplicity Equals Productivity. The best way to make something seem simple is to have it actually be simple. Accomplish this through clean, classically beautiful APIs.

Verdict: 👍

Meteor was, and still is, a pioneer in terms of simplicity and ease of use.

Burning Out

Tom wasn’t the only one who wanted to explore new skies. MDG itself had ventured into the GraphQL space, and soon pivoted to become Apollo, the company responsible for Apollo Client and Apollo Server among many other key building blocks of the GraphQL ecosystem.

Apollo

Out of Meteor, Apollo was born.

So what went wrong? I’m not privy to any inside information so this is just speculation on my part, but I think the team realized that the amount of work required to make Meteor work on a large enough scale to repay their investors was just too much. As it existed, Meteor could work as a niche tool for a specific kind of project, but it could never establish itself as a dominant force in enterprise software, which is where all the real money is.

Achieving this would not only have required years of additional effort, but also throwing away most of the past five years of work, alienating their current community in the process. So just like there was never a newer, better Discover Meteor 2.0, MDG also never launched a Meteor 2.0 to address its current flaws and instead wisely chose to focus on Apollo and move on.

From Telescope to Vulcan

While others had moved on, I wasn't out of Meteor land by any means.

Right from the start, my goal had been to find a framework that would let me launch web apps quickly and easily with JavaScript (the famed “Rails for JavaScript”, if you will), and although Meteor came close, it wasn’t quite there yet. Things like forms, models, internationalization, permissions, etc. were all missing, and I’d had to handle them myself for Telescope.

But what’s interesting is that although I had intended for Telescope to be a Hacker News clone, people had started using it for all kinds of different community-based apps, tweaking the templates to match their use case. This was such a cool direction that I decided to pivot the project to become a full-fledged general purpose web framework, and thus Vulcan.js was born.

Vulcan.js

Vulcan.js: a valiant attempt at creating the fabled “Rails for JavaScript”.

Looking back, building Vulcan on top of Meteor instead of starting from scratch was not a good idea to say the least. I was hamstrung by all the same issues that Meteor was plagued with, without any of Meteor’s advantages since I had eschewed the traditional Meteor building blocks in favor of React & co.

I had made the bet that the current Meteor community would see Vulcan as its escape raft towards the larger JavaScript ecosystem, a way to port their pure-Meteor apps to a hybrid Meteor-and-React model. But by then, the people who wanted to abandon Meteor had already long done so, and the remaining community of die-hards used Meteor because they actually liked it.

Then again, there was no way I could’ve built something like Vulcan from scratch without standing on Meteor’s shoulders, so maybe it had to be this way? And despite its relative obscurity, Vulcan has been used by others to build startups, was used for a time by the popular community LessWrong, and has been used by myself to build multiple sites, including Sidebar (a design newsletter with over 80,000 subscribers) and an AirBnB-like apartment rental platform.

Today, thanks to the efforts of Eric Burel, the new version of Vulcan runs on Next.js. We’re trying to take the philosophy of Meteor, along with everything we learned in the past 10 years, and port it to the modern JavaScript ecosystem, similar to something like RedwoodJS. The amount of work involved is daunting, and makes me admire the original Meteor team all the more, since they didn’t have React, Next.js, or GraphQL to leverage.

Meteor Today

In 2019, MDG sold Meteor to Tiny, a Canadian company that also owns Dribbble, Creative Market, and many other companies you probably know.

This marked a turning point for the framework: rather than try to conquer the world, Meteor would now focus on pleasing its existing community and growing at its own pace. This leaves us with a paradox: technically speaking, Meteor is the best and most stable it’s ever been; yet there is little interest in learning it from people who aren’t already using it.

State of JS

According to the State of JavaScript survey, the percentage of developers interested in learning Meteor (lighter teal bars) has been steadily going down.

And maybe that’s fine. After all, as the creator of a very-niche-but-beloved framework myself, I’m in no position to be casting any stones.

Still… I can’t help but think about what could have been. What if Meteor had taken over the world like we all hoped it would? What if the problems that still plague web development to this day (state management, reactivity, data fetching) had been dealt with once and for all a decade ago?

Meteor

Meteor today: much improved, but still very niche.

The Meteor Legacy

At this point you might be forgiven for thinking that Meteor burned too bright and fizzled out in the night sky without leaving a trace. But you’d be mistaken. Meteor did make a lasting impact, in more ways than one.

Spotlight

Evan You

Probably the most famous Meteor alumnus, Evan You worked on Meteor’s own Blaze UI framework before going on to revolutionize the front-end space with Vue.js.

One of the new front-end frameworks that quickly outgrew Meteor’s own popularity was Vue, created by Evan You. And where did Evan work before creating Vue? You guessed it, MDG.

Or, you might’ve heard of a tool named Storybook. It was created by Arunoda Susiripala, who was by far the most active open-source Meteor contributor (in fact a running joke was to talk about “the Arunodas”, plural, because his output was too much to be the work of a single individual).

Today, Storybook is maintained by a company called Chromatic, which was co-founded by a certain Tom Coleman. Remember him?

And like I mentioned previously, the original Meteor team itself went on to create Apollo.

And well, I guess there’s also me. Back in 2016 I launched the annual State of JavaScript developer survey. It’s no Vue or Storybook, but it did establish itself over the years as one of the largest independent developer surveys around, and went on to spawn first the State of CSS and now the State of GraphQL surveys. And the whole reason why I launched this survey in the first place was because I was so confused by the JavaScript ecosystem at large, after being coddled by Meteor’s walled garden for so long!

A New Direction for Web Apps?

As I reflect on Meteor, an old debate has come back from the grave to once more rock the web development world: SPAs vs MPAs.

Single-Page Applications are apps that load all their JavaScript with one request, and then function more or less as a self-contained little piece of software from then on. On the other hand, Multi-Page Applications make back-and-forth requests to the server every time you navigate to a new page, thus their name. SPAs tend to have much of their logic run on the client (and also on the server for that first load, if server-side rendering (SSR) is supported), while MPAs have much lighter clients that are less resource-hungry.

Spotlight

Jan Dvorak

Jan is one of the many people that keep the Meteor community alive today thanks to initiatives like the Meteor Impact online conference.

While I’ve always been bothered by the downsides of SPAs, I always thought the gap would be bridged sooner or later, and that performance concerns would eventually vanish thanks to things like code splitting, tree shaking, or SSR. But ten years later, many of these issues remain. Many SPA bundles are still bloated with too many dependencies, hydration is still slow, and content is still duplicated in memory on the client even if it already lives in the DOM.

Yet something might be changing: for whatever reason, it feels like people are finally starting to take note and ask why things have to be this way. Remix has taken the React community by storm by offering a back-to-basic, performance-first approach, while Next.js is embracing React Server Components to solve some of the same issues.

Even tools that are more on the static side of things are evolving, with a new crop of options like Astro, Capri, or Slinkity letting you control which components to hydrate or not.

To say nothing of up-and-coming front-end frameworks like Marko, Qwik, and Solid, which are all about giving you better control over rendering and being more efficient. While over in Deno land, Fresh also embraces so-called “island architecture”.
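The “islands” idea these tools share can be sketched as a tiny scheduling function (a hypothetical sketch — the directive names loosely mimic Astro’s `client:*` convention, but this scheduler is mine, not any framework’s API):

```javascript
// Islands architecture in miniature: the page ships as static HTML,
// and only components explicitly marked interactive receive JavaScript.
// islands: [{ name, directive }] where directive is
//   'static'  -> ship zero JS for this component
//   'load'    -> hydrate as soon as the page loads
//   'visible' -> hydrate lazily, when the component scrolls into view
function planHydration(islands) {
  return islands
    .filter((c) => c.directive !== 'static') // static parts cost nothing
    .map((c) => ({
      name: c.name,
      when: c.directive === 'visible' ? 'on-intersection' : 'on-load',
    }));
}
```

The payoff is that a mostly-static page hydrates only its interactive islands, instead of re-running the whole app on the client as a classic SPA does.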

Ten Years

A lot has changed in 10 years: between political upheavals, a devastating pandemic, and of course worsening climate change, the naive optimism of the early 2010s seems almost quaint. And we’re also more conscious of the fact that access to fast internet connections and modern devices is not always equally distributed throughout the population.

In this new context, Meteor’s all-real-time-all-the-time approach can seem a bit wasteful and excessive.

Don’t get me wrong, I enjoyed pushing the boundaries of the browser and redefining what a web app could be as much as anybody else. But as we enter JavaScript’s third age, maybe it’s time to slow down and build web apps in a more efficient and sustainable manner. All I can hope for is that we still manage to preserve just a tiny bit of that original Meteor magic and simplicity.

After all it’s been 10 years, but there’s yet to be another web framework that burns quite as bright as Meteor did at its zenith. Meteor logo

Webmentions

I attended the Meteor book release party in May 2013. Those truly were inspiring times to be a front end engineer and I thank you and others for the effort you put in those long years ago. Discover Meteor remains one of the finest pieces of tech writing I have ever seen. 👏

https://twitter.com/BradLedford/status/1541973372719902721

Thanks Sachs, what a great write up! Meteor and Discover Meteor had a huge impact on me. They were the perfect tools and environment to learn about full stack. I loved I could easily deploy code and get feedback from friends!

https://twitter.com/ojschwa/status/1541968590928498689


from Hacker News https://ift.tt/gQaNTM0

Tuesday, June 28, 2022

The Path Is Set for PCI-Express 7.0 in 2025

The ink is barely dry on the PCI-Express 6.0 specification, which was released after years of development in January 2022, and we hardly have PCI-Express 5.0 peripherals in the market. Yet the PCI-SIG organization that controls the PCI-Express standard for peripheral interconnects already has us coveting the bandwidth that will come later in the decade with PCI-Express 7.0 interconnects.

With I/O becoming ever more central to system architecture, the Peripheral Component Interconnect Special Interest Group (PCI-SIG), the body that drives the peripheral bus in systems, is always looking far into the future to find the materials and the new signaling and encoding methods that will keep bandwidth improvements on the PCI-Express bus growing at a reasonably steady cadence. During the hegemony of the Intel Xeon processor in the datacenter in the 2010s, Intel faced little competition in processors, and there was therefore not enough pressure to keep PCI-Express moving at roughly a three-year cadence. And so the move from PCI-Express 3.0 to PCI-Express 4.0 took seven years.

To be fair, there were some pretty serious materials science and signaling barriers at the same time, which also hit datacenter switch and router ASICs and caused all kinds of issues with the normal bandwidth increases we have seen historically in inter-node interconnects.

The good news is that both PCI-Express interconnects for inside systems and now across a few racks and the Ethernet and InfiniBand interconnects that span racks and whole datacenters are both picking up the innovation pace. It is beginning to look like there will be a three year cadence that, hopefully, all vendors and customers can line up against. (It might be more like 30 months than 36 months. We shall see.) It was beginning to look like it might be the fast paced two year cadence we saw in the move from PCI-Express 4.0 to PCI-Express 5.0, but perhaps that was a bit optimistic. That optimism was reflected in our August 2020 coverage of the PCI-Express 6.0 specification as it was moving towards ratification.

What we know for sure is that it can never take seven years again to do a PCI-Express speed hike. We also know that Intel is most definitely not alone in the CPU driver’s seat and needs PCI-Express to keep advancing steadily to support CXL-connected accelerators, storage, and main memory as much as any other compute engine vendor, and so it is now pushing PCI-Express as hard as others have been pulling it for years.

All’s well that ends better.

What we also know is that the advances in PCI-Express 6.0 lay a good foundation for PCI-Express to keep rolling well out into the next decade. That foundation includes the PAM-4 signaling that has made cheaper and cooler 100 Gb/sec Ethernet and InfiniBand possible and that laid the foundation for 200 Gb/sec, 400 Gb/sec, and now 800 Gb/sec ports on switches and routers. But it also includes lightweight forward error correction (FEC), which is necessary because signals get progressively fuzzier as bandwidth goes up with the addition of PAM-4. And of course, the new flow control unit, or FLIT, method of encoding data in fixed-size packets, which is radically different from how it has been done in the past on the PCI, PCI-X, and PCI-Express buses.

We have tweaked this bandwidth chart from PCI-SIG, which incorrectly showed PCI-Express 6.0 being released in 2021 rather than 2022.

We said this two years ago, and it bears repeating now. On switch ASICs with PAM-4 encoding, there is a 100 nanosecond or so overhead that comes with forward error correction. The PCI-Express bus cannot sustain such a latency hit, and the PCI-Express 6.0 spec said it had to be under 10 nanoseconds, and in fact, the goal was to keep it down to 1 nanosecond or maybe 2 nanoseconds. And the engineers came up with the FLIT method of checking and encoding bits that overlays PAM-4 and that meets this ambitious – some might have said crazy – goal for error correction without a massive latency penalty.

As far as we can tell, they did it, but we won’t know for sure until the first PCI-Express 6.0 devices hit the streets in maybe early 2023 to late 2024. It usually takes 12 months to 18 months for new devices supporting the spec to get into the field, but a lot depends on when the CPUs get each generation, since that drives the peripherals. The desire to move to CXL main memory is pretty strong, and that requires lots of bandwidth and low latency, so we think engineers will be working on the PCI-Express specifications for 7.0, 8.0, and 9.0 with a kind of energy we have not seen in the past, and there will be a lot more of them, too, which increases the odds of breakthroughs.

The PCI-SIG has not released a lot of information about what the plan is for PCI-Express 7.0, but it will employ the same PAM-4 signaling and not move to PAM-8 or PAM-16 encoding, which the network ASIC folks have not moved to yet, either. (That could come in a few years, though, if clock speeds hit some walls.) A single lane of PCI-Express 7.0 will run at 128 Gb/sec without encoding overhead, which is four times PCI-Express 5.0 and two times PCI-Express 6.0, which was just ratified in January. (That was later than expected, but not hugely so.)

Here is how the PCI-Express lanes map out over the generations:

At these bandwidths, you can see why everyone is excited about the prospect of having the PCI-Express bus replace DDR4 memory controllers as we know them, or at the very least augment memory bandwidth on CPUs with CXL-attached memory. At the bandwidth and latencies that the PCI-SIG has been able to drive and is expected to drive, why shouldn’t there be one less thing to design in a CPU? Why shouldn’t there be generic PCI-Express controllers that can be used to implement memory, NUMA buses for CPU interconnect in a shared memory system, and peripheral attachment?
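The generation-over-generation doubling is easy to tabulate (a quick back-of-the-envelope sketch; these are raw per-lane signaling rates before encoding overhead, and the formula only holds from Gen 3 onward, since Gen 1 and Gen 2 ran at 2.5 GT/sec and 5 GT/sec with different encoding):

```javascript
// Per-lane rate doubles each PCI-Express generation from Gen 3's 8 Gb/sec.
// Valid for gen >= 3 only (earlier generations used 8b/10b encoding and
// did not follow a clean doubling from this baseline).
function pcieLaneGbps(gen) {
  return 8 * 2 ** (gen - 3); // Gen 3 = 8, Gen 5 = 32, Gen 7 = 128 Gb/sec
}

// Aggregate bandwidth of an x16 slot in one direction, in GB/sec
// (16 lanes, 8 bits per byte; delivered throughput is slightly lower
// once encoding and FLIT overhead are accounted for).
function x16GBps(gen) {
  return (pcieLaneGbps(gen) * 16) / 8;
}
```

By this arithmetic an x16 PCI-Express 7.0 slot carries roughly 256 GB/sec per direction in raw signaling — which is the kind of number that makes CXL-attached memory plausible.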

Here is what the past of the PCI, PCI-X, and PCI-Express speed jumps have looked like, and how we can roughly project out with a three-year cadence for specifications:

The PCI-Express 7.0 spec is not expected to be ratified until 2025, and that means we won’t see it appearing in systems until 2026 or 2027. That’s a long way off, of course. Beyond that, it is hard to say what will happen with electrical signaling for peripherals and we might find ourselves in a world where CXL is running over optical links, some with outboard lasers and some with silicon photonics on the die.

But assuming electrical signaling can keep moving ahead – it is a better than even assumption that it can – then PCI-Express 10.0 should be in products in 2035 or 2036 and should be driving 1 Tb/sec signaling lanes and 4 TB/sec across an x16 duplex slot in a server. If we even have a thing called a “server” then, that is. By then, a server might be an abstraction of interconnected components, with an interconnect hypervisor standing in for a printed circuit motherboard and slots.



from Hacker News https://ift.tt/HxhgaGQ