Friday, June 30, 2023

Those Disturbers of My Rest: The First Treatise on Bedbugs (1730)

He starts to breed bedbugs, admiring them under microscopes. “A Bugg’s Body is shaped and shelled, and the Shell as transparent and finely striped as the most beautiful amphibious Turtle”. For eighteen months, Southall mates a new pair of bugs every fortnight, recording their reactions to various foods. “Their beloved Foods are Blood, dry’d Paste, Size, Deal, Beach, Osier, and some other Woods, the Sap of which they suck”. They don’t care for oak, walnut, cedar, or mahogany. In temperament, bedbugs are “watchful and cunning”, “timorous of us”, but when fighting each other, they war “as eagerly as Dogs or Cocks”, waging internecine battles where both parties “have died on the Spot”. He becomes intimate with their sexual habits. “They are hot in Nature, generate often, and shoot their Spawn all at once, and then leave it”. And he uses their bites as a heuristic of his personal health: “I daily am bit when practicing and at work in my Business, destroying them; and as they never swell me but when out of order, from thence I infer, that not only myself, but all such who are among Buggs, and do not swell with their Bites, are certainly in good Habit of Body.” One section of Southall’s bugbook approaches something like nature writing as he attends to the shifting colors of his developing subjects.



from Hacker News https://ift.tt/mslOG4A

The one-shot drug that keeps on dosing

On average, patients with chronic illnesses follow their prescribed treatments about 50 percent of the time. That’s a problem. If drugs aren’t taken regularly, on time, and in the right doses, the treatment may not work, and the person’s condition can worsen.

The issue isn’t that people are unwilling to take their prescriptions. It’s that some drugs, like HIV medications, require unwavering commitment. And essential medicines, like insulin, can be brutally expensive. Plus, the Covid pandemic illustrated the difficulties of delivering perishable follow-up vaccine shots to regions with no cold chain. “Are we really squeezing all the utility out of those drugs and vaccines?” asks Kevin McHugh, a bioengineer at Rice University. “The answer is, in general, no. And sometimes we’re missing out on a lot.”

For example, the injectable drug bevacizumab can be used to treat macular degeneration, a leading cause of blindness. But even though it’s effective, dosing adherence is notoriously low. “People hate getting injections into their eyes,” McHugh says. “And I don’t blame them at all—that’s terrible.”

McHugh’s lab is in the drug delivery business. The goal is to give patients what they want—less hassle—while also giving them what they need: consistent dosing. The lab’s answer is an injection of drug-delivering microparticles that release their contents in timed delays that can span days or even weeks. “We’re trying to engineer these delivery systems to work in the real world, as opposed to in this idealized version of the world,” McHugh says.

In the June issue of Advanced Materials, McHugh’s team described how their system works. It starts with an injection containing hundreds of tiny microplastic particles, each encapsulating a small dose of a drug. These minuscule capsules are made of the polymer PLGA, which our bodies break down safely. By adjusting the molecular weight of the polymer used for each capsule, the scientists can control how fast the capsules erode and release medication. In this study, the team demonstrated a single shot containing four groups of microparticles that released their contents at 10, 15, 17, and 36 days after injection.
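Conceptually, each microparticle group is a dose on a timer: a lower-molecular-weight PLGA shell erodes sooner. A toy sketch of the schedule reported in the study (treating each group's release as an instantaneous burst, which is a simplifying assumption, not the paper's actual kinetics):

```python
RELEASE_DAYS = [10, 15, 17, 36]   # one entry per microparticle group, from the study

def doses_released(day, release_days=RELEASE_DAYS):
    """How many particle groups have eroded open by a given day."""
    return sum(1 for d in release_days if day >= d)

for day in (5, 12, 20, 40):
    print(f"day {day:2d}: {doses_released(day)} of {len(RELEASE_DAYS)} doses released")
```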

“Having long-acting delivery strategies is a great unmet need,” says SriniVas Sadda, an ophthalmologist with UCLA and the Doheny Eye Institute who was not involved in the study. The patients Sadda sees are elderly. They are often dependent on family members for transportation and may skip appointments because of other health problems. “Maybe they’ve fallen and broke their hip and they end up not coming in,” he says. “Missed visits can be a big problem because you miss treatment and the disease could get worse. And it’s not always possible to recover.”

It’s hard to have delicate control over the levels of a drug in your body, in part because most medications operate like sledgehammers. Pop an ibuprofen or an antidepressant, and those levels will spike as the drug quickly passes through your gastrointestinal tract. Extended-release pills prolong a drug’s effect but still taper off from a peak. And you can’t simply front-load a steep dose to delay the next one, since some drugs, like insulin, have a narrow “therapeutic window” between being helpful and dangerous.
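To see why the window matters, here is a toy one-compartment model (the half-life, dose sizes, and the 20-60 "window" are made-up numbers for illustration only): one big front-loaded bolus overshoots and then sinks, while smaller repeated doses stay in range.

```python
import math

HALF_LIFE_H = 6.0                  # assumed elimination half-life (hours); illustrative
K = math.log(2) / HALF_LIFE_H      # first-order elimination rate constant

def level(doses, t):
    """Drug level at time t (hours) from a list of (dose_time, amount)
    boluses, each decaying exponentially after it is given."""
    return sum(a * math.exp(-K * (t - t0)) for t0, a in doses if t >= t0)

# A single front-loaded bolus spikes high, then sinks...
single = [(0, 100)]
# ...while smaller repeated doses stay inside a hypothetical 20-60 window.
repeated = [(0, 40), (8, 30), (16, 30)]

for t in (0, 8, 16, 24):
    print(f"t={t:2d}h  single: {level(single, t):6.1f}   repeated: {level(repeated, t):6.1f}")
```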



from Hacker News https://ift.tt/Ij68WP5

Thursday, June 29, 2023

Crossword Builder

Pick one of the available unfilled grids. For now, only these presets are provided; in the future you will be able to modify them or create your own unfilled grids. Then proceed to the next step.

Note: coming back to the first step will clear the existing grid of any words.

Click or tap on the grid to select a cell, then use the keyboard to fill in as much of the grid as you'd like and the arrow keys to move around. To toggle between horizontal and vertical words, press Space or click on the already selected square.

"Auto-fill" button becomes disabled when the grid has letters that do not yet belong to a complete word, so either erase those, or complete the words. Once you are satisfied, hit the button to auto-fill in the rest. After auto-fill runs you can explore the solution and use arrows to move around, as before, but you won't be able to change words.

If the auto-fill isn't starting, the server is likely too busy. Your request will be queued and processed once it reaches the front of the queue.

Auto-fill can also take a particularly long time to run; the fill logic is randomized, so slow runs occasionally happen. That said, the provided puzzles and dictionary should make this unlikely.
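The site doesn't document its auto-fill algorithm, but randomized backtracking search is a common way to build crossword fillers, and it would explain the variable run time: words are tried in random order, and choices that violate a crossing are undone. A minimal sketch with a toy dictionary (a real filler would use a large word list and slots derived from the chosen grid):

```python
import random

WORDS = ["cat", "car", "arc", "tar", "rat", "art"]   # toy dictionary

def fill(slots, crossings, assignment=None, rng=random.Random(0)):
    """Randomized backtracking fill.

    slots:     {slot_name: length}
    crossings: (slot_a, i, slot_b, j) tuples meaning slot_a's i-th letter
               must equal slot_b's j-th letter.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(slots):
        return assignment
    slot = next(s for s in slots if s not in assignment)
    candidates = [w for w in WORDS if len(w) == slots[slot]]
    rng.shuffle(candidates)              # randomized order => run time varies
    for word in candidates:
        assignment[slot] = word
        if all(assignment[a][i] == assignment[b][j]
               for a, i, b, j in crossings
               if a in assignment and b in assignment):
            result = fill(slots, crossings, assignment, rng)
            if result is not None:
                return result
        del assignment[slot]             # undo and try the next word
    return None

# Two 3-letter slots that must share their first letter.
solution = fill({"1-across": 3, "1-down": 3}, [("1-across", 0, "1-down", 0)])
print(solution)
```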

You can share a partially-filled puzzle either by clicking the "link" button (next to the "?"), or simply by copying the URL, which is exactly what that share button does.

Sharing the complete solution is also possible, albeit indirectly, by right-clicking on the grid and saving it as an image.

I've included a couple of links below; feel free to reach out to me with ideas, suggestions, or other feedback.



from Hacker News https://ift.tt/cAxiBOT

What to know about a fake job scam impersonating GitLab

The GitLab Security Incident Response Team (SIRT) is aware of a fake job scam targeting job seekers by impersonating the GitLab name and GitLab team member names. Scammers have been observed requesting job seekers pay thousands of dollars for “technology equipment” after job seekers completed an in-depth, fake job application interview process. 

To help ensure you’re safe and secure, see the recommendations below in the section titled, "How to protect yourself."

Fake GitLab jobs: Warning signs

As of this blog post, scammers have been posting fake GitLab jobs and subsequently following up with victims, using the following patterns.

Initial communications

  • Scammers are sending job seekers text messages claiming to be a GitLab recruiter. 
  • The scammers then send the job seeker a Microsoft Teams meeting link for the fake interview.
    • GitLab recruiters do not initially contact candidates via text message. Also, GitLab recruiters only use Zoom for video conferencing.

Interviews and continued communication 

  • Once on Microsoft Teams, the scammer requests the job seeker join a voice- or chat-only interview. 
  • Scammers were observed contacting job seekers from Outlook email accounts following the pattern: name.gitlab@outlook.com.
    • Email addresses from GitLab team members end in @gitlab.com.
  • Scammers used a “gitlabinc.com” domain in email signatures. That domain is not owned by or affiliated with GitLab. 

Fake job offer and onboarding steps

  • Scammers requested job seekers create a Gmail email address with the pattern of firstname.gitlab@gmail.com.
    • GitLab assigns new team members official email addresses and does not request that new team members create their own.
  • Scammers sent poorly formatted letters of employment, benefits overviews, and background checks. 
  • The fake benefits overview document describes "efg&m" as the program administrator for GitLab benefits.
    • GitLab does not use "efg&m" for benefits management. 
  • The fake background check document requests full personal information, including a U.S. Social Security number.
    • GitLab does not request details such as a Social Security number via email. 

Request for money

  • In at least one case, scammers ultimately requested USD $11,000 from a job seeker for “start-up equipment,” including a MacBook Pro.
    • GitLab follows a published technology purchasing process, as outlined in our handbook, and won’t ask you to pay for technology equipment up front.   

How to protect yourself

Job seekers should refer to GitLab’s Candidate Handbook page to understand the GitLab job application and interviewing process.

If you think you may be a victim of a fake job scam impersonating GitLab, there are a number of ways to protect yourself and ensure that the proper authorities are aware. It is a good idea to check for signs of identity theft or other potential theft. The Los Angeles Times has a great article describing how to avoid job scams, with useful links on checking for potential identity theft, reporting job scams, alerting the FTC, and more. 




from Hacker News https://ift.tt/BdiHbj4

Wednesday, June 28, 2023

Vaporwave and Unicode Analysis

This article will explore the unique role that text plays in vaporwave music and art. Why do vaporwave tracks, albums, and artist names use stretched out fullwidth text, Japanese writing 変, and 𐒖Ƭᖇ𝚫ƝǤⵟ looking Unicode characters? Why are track titles sometimes formatted to look like FILENAME.AVI or Muzak Corp™ Song Title?
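That "stretched out" look is mechanical underneath: Unicode's fullwidth forms (U+FF01 through U+FF5E) sit at a fixed offset of 0xFEE0 from the printable ASCII range, so a vaporwave-style converter fits in a few lines. A sketch (mapping the ASCII space to the ideographic space U+3000 is the usual convention):

```python
def vaporwave(text: str) -> str:
    """Map printable ASCII to its fullwidth Unicode counterpart."""
    out = []
    for ch in text:
        if "!" <= ch <= "~":
            out.append(chr(ord(ch) + 0xFEE0))   # shift into U+FF01-U+FF5E
        elif ch == " ":
            out.append("\u3000")                # ideographic space
        else:
            out.append(ch)                      # leave non-ASCII alone
    return "".join(out)

print(vaporwave("MACINTOSH PLUS"))  # ＭＡＣＩＮＴＯＳＨ　ＰＬＵＳ
```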

Analyzing the text characters that accompany vaporwave can help us understand vaporwave as a whole. How so? In her book "Because Internet", linguist Gretchen McCulloch compares the challenges of analyzing speech at scale with the ease of analyzing internet writing. Speech requires time and resources to decipher and process. Internet language, on the other hand, is easier to work with. Because text on the internet is public and digitized, McCulloch says it readily "brings new insight to classic linguistic questions".

The same logic applies to vaporwave. While scholars, podcasters, YouTubers, and music fans have worked to decipher vaporwave's sounds and images... we will focus on the text.

We've analyzed the text that accompanies more than 800,000 vaporwave tracks, from 2010 to the present day. We've scraped track titles, album titles, artist names, and metadata from SoundCloud and Bandcamp to learn how vaporwave came to be, how the aesthetic has evolved, and where it's headed next.

We have a YouTube video exploring many of the same ideas in this article. If you prefer watching to reading, you can check out the video below.

Also, if you're just looking for our vaporwave text generator, you can find that tool here. Otherwise, read on...



from Hacker News https://ift.tt/ISFtsD2

Tuesday, June 27, 2023

Way out in deep space, astronomers spot precursor of carbon-based life

Astronomers wielding the James Webb Space Telescope have detected methyl cations – important precursor molecules needed to create proteins and DNA and therefore fundamental to carbon-based life forms.

The molecules were spotted 1,350 light years away in a protoplanetary disk known as d203-506, located in the Orion Nebula.

The international team of researchers believe strong UV rays emanating from young nearby stars that lie outside the system provided the energy to form the molecules. It's not clear exactly how they were created in the disk – it's possible that methane molecules (CH4) dissociated to form methyl cations and hydrogen atoms.

These types of particles are vital for kickstarting the reactions that create more complex carbon-based molecules key to supporting life as we know it, Olivier Berne, lead scientist of the study published in Nature on Monday and an astrophysicist at the French National Centre for Scientific Research (CNRS), explained to The Register.

"CH3+ can react with many molecules and form other more complex species. However, it is an early step in the process which leads to the formation of even more complex molecules which are crucial to life – like proteins and DNA. The methyl molecule also plays a crucial role in genetics: in a process called methylation, it regulates gene expression," he said.

Although scientists began speculating that the molecule was a key ingredient for organic chemistry in interstellar space in the 1970s, it has proven difficult to find. It requires a sensitive infrared detector like the NIRSpec and the MIRI instruments aboard the James Webb Space Telescope to do the job. 

"This detection not only validates the incredible sensitivity of Webb, but also confirms the postulated central importance of (CH3+) in interstellar chemistry," Marie-Aline Martin-Drumel, co-author of the paper, and a researcher at Université Paris-Saclay in France, enthused in a statement.

Finding the compounds around a protoplanetary disk – a giant hot swirling dense mass of gas and dust which forms around stars from which planets and asteroids can be born – raises questions about what precursors might be key for the development of life. Although UV radiation helps create methyl cations, it could also strip away the water needed for life as we know it.

Scientists, however, have found evidence from meteorites suggesting that the protoplanetary disk from which Earth was created was also subjected to intense UV rays – meaning it may have contained methyl cations but no water. Yet, somehow, life on Earth exists. How?

"This is an open question. It could be that some water is present in ices inside grains in those UV irradiated disks, but so far we have not been able to detect it. We will be searching for it," Berne told us. ®



from Hacker News https://ift.tt/aV4vP17

There's a Severe Shortage of Cancer Drugs

[UPDATED at 3:15 p.m. ET]

On Nov. 22, three FDA inspectors arrived at the sprawling Intas Pharmaceuticals plant south of Ahmedabad, India, and found hundreds of trash bags full of shredded documents tossed into a garbage truck. Over the next 10 days, the inspectors assessed what looked like a systematic effort to conceal quality problems at the plant, which provided more than half of the U.S. supply of generic cisplatin and carboplatin, two cheap drugs used to treat as many as 500,000 new cancer cases every year.

Seven months later, doctors and their patients are facing the unimaginable: In California, Virginia, and everywhere in between, they are being forced into grim contemplation of untested rationing plans for breast, cervical, bladder, ovarian, lung, testicular, and other cancers. Their decisions are likely to result in preventable deaths.

Cisplatin and carboplatin are among scores of drugs in shortage, including 12 other cancer drugs, attention-deficit/hyperactivity disorder pills, blood thinners, and antibiotics. Covid-hangover supply chain issues and limited FDA oversight are part of the problem, but the main cause, experts agree, is the underlying weakness of the generic drug industry. Made mostly overseas, these old but crucial drugs are often sold at a loss or for little profit. Domestic manufacturers have little interest in making them, setting their sights instead on high-priced drugs with plump profit margins.

The problem isn’t new, and that’s particularly infuriating to many clinicians. President Joe Biden, whose son Beau died of an aggressive brain cancer, has focused his Cancer Moonshot on discovering cures — undoubtedly expensive ones. Indeed, existing brand-name cancer drugs often cost tens of thousands of dollars a year.

But what about the thousands of patients today who can’t get a drug like cisplatin, approved by the FDA in 1978 and costing as little as $6 a dose?

“It’s just insane,” said Mark Ratain, a cancer doctor and pharmacologist at the University of Chicago. “Your roof is caving in, but you want to build a basketball court in the backyard because your wife is pregnant with twin boys and you want them to be NBA stars when they grow up?”

“It’s just a travesty that this is the level of health care in the United States of America right now,” said Stephen Divers, an oncologist in Hot Springs, Arkansas, who in recent weeks has had to delay or change treatment for numerous bladder, breast, and ovarian cancer patients because his clinic cannot find enough cisplatin and carboplatin. Results from a survey of academic cancer centers released June 7 found 93% couldn’t find enough carboplatin and 70% had cisplatin shortages.

“All day, in between patients, we hold staff meetings trying to figure this out,” said Bonny Moore, an oncologist in Fredericksburg, Virginia. “It’s the most nauseous I’ve ever felt. Our office stayed open during covid; we never had to stop treating patients. We got them vaccinated, kept them safe, and now I can’t get them a $10 drug.”

Isabella McDonald with her father, Brent. (Rachel McDonald)

The 10 cancer clinicians KFF Health News interviewed for this story said that, given current shortages, they prioritize patients who can be cured over later-stage patients, in whom the drugs generally can only slow the disease, and for whom alternatives — though sometimes less effective and often with more side effects — are available. But some doctors are even rationing doses intended to cure.

Isabella McDonald, then a junior at Utah Valley University, was diagnosed in April with a rare, often fatal bone cancer, whose sole treatment for young adults includes the drug methotrexate. When Isabella’s second cycle of treatment began June 5, clinicians advised that she would be getting less than the full dose because of a methotrexate shortage, said her father, Brent.

“They don’t think it will have a negative impact on her treatment, but as far as I am aware, there isn’t any scientific basis to make that conclusion,” he said. “As you can imagine, when they gave us such low odds of her beating this cancer, it feels like we want to give it everything we can and not something short of the standard.”

Brent McDonald stressed that he didn’t blame the staffers at Intermountain Health who take care of Isabella. The family — his other daughter, Cate, made a TikTok video about her sister’s plight — was simply stunned at such a basic flaw in the health care system.

Cate McDonald used this TikTok video to let people know about her sister’s osteosarcoma, a rare and dangerous bone cancer. She wanted to raise awareness of the critical shortages of generic drugs in the United States, including methotrexate, which her sister, Isabella, desperately needs. (Cate McDonald)

At Moore’s practice, in Virginia, clinicians gave 60% of the optimal dose of carboplatin to some uterine cancer patients during the week of May 16, then shifted to 80% after a small shipment came in the following week. The doctors had to omit carboplatin from normal combination treatments for patients with recurrent disease, she said.

On June 2, Moore and her colleagues were glued to their drug distributor’s website, anxious as teenagers waiting for Taylor Swift tickets to go on sale — only with mortal consequences at stake.

She later emailed KFF Health News: “Carboplatin did NOT come back in stock today. Neither did cisplatin.”

Doses remained at 80%, she said. Things hadn’t changed 10 days later.

Generics Manufacturers Are Pulling Out

The causes of shortages are well established. Everyone wants to pay less, and the middlemen who procure and distribute generics keep driving down wholesale prices. The average net price of generic drugs fell by more than half between 2016 and 2022, according to research by Anthony Sardella, a business professor at Washington University in St. Louis.

As generics manufacturers compete to win sales contracts with the big negotiators of such purchases, such as Vizient and Premier, their profits sink. Some are going out of business. Akorn, which made 75 common generics, went bankrupt and closed in February. Israeli generics giant Teva, which has a portfolio of 3,600 medicines, announced May 18 it was shifting to brand-name drugs and “high-value generics.” Lannett Co., with about 120 generics, announced a Chapter 11 reorganization amid declining revenue. Other companies are in trouble too, said David Gaugh, interim CEO of the Association for Accessible Medicines, the leading generics trade group.

The generics industry used to lose money on about a third of the drugs it produced, but now it’s more like half, Gaugh said. So when a company stops making a drug, others do not necessarily step up, he said. Officials at Fresenius Kabi and Pfizer said they have increased their carboplatin production since March, but not enough to end the shortage. On June 2, FDA Commissioner Robert Califf announced the agency had given emergency authorization for Chinese-made cisplatin to enter the U.S. market, but the impact of the move wasn’t immediately clear.

Cisplatin and carboplatin are made in special production lines under sterile conditions, and expanding or changing the lines requires FDA approval. Bargain-basement prices have pushed production overseas, where it’s harder for the FDA to track quality standards. The Intas plant inspection was a relative rarity in India, where the FDA in 2022 reportedly inspected only 3% of sites that make drugs for the U.S. market. Sardella, the Washington University professor, testified last month that a quarter of all U.S. drug prescriptions are filled by companies that received FDA warning letters in the past 26 months. And pharmaceutical industry product recalls are at their highest level in 18 years, reflecting fragile supply conditions.

The FDA listed 137 drugs in shortage as of June 13, including many essential medicines made by few companies.

Intas voluntarily shut down its Ahmedabad plant after the FDA inspection, and the agency posted its shocking inspection report in January. Accord Healthcare, the U.S. subsidiary of Intas, said in mid-June it had no date for restarting production.

Asked why it waited two months after its inspection to announce the cisplatin shortage, given that Intas supplied more than half the U.S. market for the drug, the FDA said via email that it doesn’t list a drug in shortage until it has “confirmed that overall market demand is not being met.”

Prices for carboplatin, cisplatin, and other drugs have skyrocketed on the so-called gray market, where speculators sell medicines they snapped up in anticipation of shortages. A 600-milligram bottle of carboplatin, normally available for $30, was going for $185 in early May and $345 a week later, said Richard Scanlon, the pharmacist at Moore’s clinic.

“It’s hard to have these conversations with patients — ‘I have your dose for this cycle, but not sure about next cycle,’” said Mark Einstein, chair of the Department of Obstetrics, Gynecology and Reproductive Health at Rutgers New Jersey Medical School.

Should Government Step In?

Despite a drug shortage task force and numerous congressional hearings, progress has been slow at best. The 2020 CARES Act gave the FDA the power to require companies to have contingency plans enabling them to respond to shortages, but the agency has not yet implemented guidance to enforce the provisions.

As a result, neither Accord nor other cisplatin makers had a response plan in place when Intas’ plant was shut down, said Soumi Saha, senior vice president of government affairs for Premier, which arranges wholesale drug purchases for more than 4,400 hospitals and health systems.

Premier understood in December that the shutdown endangered the U.S. supply of cisplatin and carboplatin, but it also didn’t issue an immediate alarm, she said. “It’s a fine balance,” she said. “You don’t want to create panic-buying or hoarding.”

More lasting solutions are under discussion. Sardella and others have proposed government subsidies to get U.S. generics plants running full time. Their capacity is now half-idle. If federal agencies like the Centers for Medicare & Medicaid Services paid more for more safely and efficiently produced drugs, it would promote a more stable supply chain, he said.

“At a certain point the system needs to recognize there’s a high cost to low-cost drugs,” said Allan Coukell, senior vice president for public policy at Civica Rx, a nonprofit funded by health systems, foundations, and the federal government that provides about 80 drugs to hospitals in its network. Civica is building a $140 million factory near Petersburg, Virginia, that will produce dozens more, Coukell said.

Ratain and his University of Chicago colleague Satyajit Kosuri recently called for the creation of a strategic inventory buffer for generic medications, something like the Strategic Petroleum Reserve, set up in 1975 in response to the OPEC oil crisis.

In fact, Ratain reckons, selling a quarter-million barrels of oil would probably generate enough cash to make and store two years’ worth of carboplatin and cisplatin.

“It would almost literally be a drop in the bucket.”

[Clarification: This article was updated at 3:15 p.m. ET on June 21, 2023, to clarify the role of Vizient and Premier. They negotiate drug purchases but don’t purchase the drugs themselves.]



from Hacker News https://ift.tt/BlfZaDs

Working with CSV files on shell/terminal

The Command Line is a powerful tool for processing data. With the right combination of commands, you can quickly and easily manipulate data files to extract the information you need. In this blog post, we will explore some of the ways you can use the command line to process data.

One of the key benefits of using the command line to process data is its flexibility. It provides a wide variety of tools and utilities for data processing tasks. For example, you can use the awk command to extract specific fields from a delimited data file, or the sort command to order a file by the values in a particular column.

Another benefit of the command line is its scriptability. Because the command line is a text-based interface, you can easily create scripts that combine multiple commands to perform complex operations on data files. This can be particularly useful for automating repetitive tasks, such as cleaning up data files or performing data transformations.

The command line also offers a high level of control over the data processing process. Because you have direct access to the data files and the tools that are used to process them, you can easily fine-tune the behavior of the commands and customize the output to suit your specific needs.

Overall, the command line is a powerful and flexible tool for processing data. With the right combination of commands and scripts, you can easily manipulate data files to extract the information you need. Whether you are a data scientist, a system administrator, or a developer, the command line offers a wealth of opportunities for working with data.


Here are some one-liners to help you get started with processing data on the command line:

  1. To print the first column of a CSV file:
    awk -F, '{print $1}' file.csv
    
  2. To print the first and third columns of a CSV file:
    awk -F, '{print $1 "," $3}' file.csv
    
  3. To print only the lines of a CSV file that contain a specific string:
    grep "string" file.csv
    
  4. To sort a CSV file based on the values in the second column:
    sort -t, -k2,2 file.csv
    
  5. To remove the first row of a CSV file (the header row):
    tail -n +2 file.csv
    
  6. To remove duplicates from a CSV file based on the values in the first column:
    awk -F, '!seen[$1]++' file.csv
    
  7. To calculate the sum of the values in the third column of a CSV file:
    awk -F, '{sum+=$3} END {print sum}' file.csv
    
  8. To convert a CSV file to a JSON array:
    jq -R -s 'split("\n") | map(select(length > 0) | split(",") | {name: .[0], age: .[1]})' file.csv
    
  9. To convert a CSV file to a SQL INSERT statement:
    awk -F, '{printf "INSERT INTO table VALUES (\"%s\", \"%s\", \"%s\");\n", $1, $2, $3}' file.csv
    

Lastly, these are just a few examples of the many things you can do with one-liners to process CSV data. With the right combination of commands, you can quickly and easily manipulate CSV files to suit your needs.
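If a one-liner starts straining (quoted fields with embedded commas will trip up a bare `-F,` split), the same transformations can be written with Python's standard csv module, which parses quoting correctly. A sketch of the sum and dedupe examples above (the file name is just a placeholder):

```python
import csv

def sum_third_column(path):
    """Equivalent of the awk sum one-liner: total of column 3."""
    with open(path, newline="") as f:
        return sum(float(row[2]) for row in csv.reader(f) if row)

def dedupe_on_first_column(path):
    """Equivalent of awk's '!seen[$1]++': keep the first row per key."""
    seen, kept = set(), []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0] not in seen:
                seen.add(row[0])
                kept.append(row)
    return kept
```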

I hope you enjoyed reading this post and learned something new. If you have any one-liners for processing data, especially CSV files, feel free to comment below.


I write occasionally; feel free to follow me on Twitter.



from Hacker News https://ift.tt/6rNbeSg

Sweden wants to build an entire city from wood

There is a global race to build the tallest wooden skyscraper. The record was held by Mjostarnet, an 85-metre tower on the shore of Lake Mjosa in Norway, which hosts flats, a hotel and a swimming pool—until Ascent, an 87-metre structure, was completed in Wisconsin in July 2022. And it will be put in the shade in turn by other buildings: a 90-metre tower is planned for Ontario, and a 100-metre one for Switzerland. (By way of comparison, St Paul’s cathedral in London is 110 metres tall.)


This week, though, a Swedish firm announced it was going for a different sort of record. It unveiled plans to build what could be the world’s biggest wooden city. Stockholm Wood City will be built in Sickla, an area in the south of the Swedish capital. Construction on the 250,000 square-metre site will begin in 2025. When complete, ten years later, it will contain 2,000 homes and 7,000 offices, along with restaurants and shops. The 12bn-krona ($1.4bn) project is led by Atrium Ljungberg, a Swedish urban development company.

By using wood the company hopes to reduce the project’s carbon footprint by up to 40%, compared with building in concrete and steel, says Annica Anäs, the company’s boss. Wood is a sustainable material that can be produced from renewable forests, which Sweden has in plenty. When used for building, it locks up the carbon that the trees absorbed from the atmosphere while growing. As with other modern construction projects using timber, Wood City will still use some concrete and steel in places like the foundations, but the overall amounts will be greatly reduced. As wooden buildings are much lighter, their foundations can be smaller.

The Swedish project will, as existing wooden skyscrapers do, employ large prefabricated sections made from what is called “engineered timber”. Instead of ordinary lumber, chipboard or plywood, engineered timber is a composite in which layers of wood are laminated together in specific ways. The wood grains in each layer are aligned to provide individual components of the building, such as floors, walls, cross braces and beams, with extremely high levels of strength. And because these parts can be manufactured in a factory, where tolerances are finer and quality control is easier to maintain than on a building site, the use of prefabricated sections cuts down on the delivery of raw materials and allows construction to proceed more quickly.

The burning question

Another advantage is that construction will not be as noisy as it would be if the town were built from concrete and bricks, adds Ms Anäs. This makes wooden buildings particularly suitable for urban redevelopment in general, since putting them up is less likely to annoy the neighbours. It should also be profitable. Ms Anäs is looking for a return on investment of 20% or better. “Sweden is progressive when it comes to wood construction,” she says. “But I don’t see any reason why it shouldn’t work elsewhere.”

The biggest concern most people have about wooden buildings is the risk of fire. The buildings in Wood City will be fitted with several fire-protection systems, such as sprinklers and flame-resistant layers, as would also be found on their concrete or brick counterparts.

At the same time, researchers are coming to believe that engineered timber is, by its nature, extremely fire resistant. To help win approval for the construction of the Ascent building, the US Forest Service carried out tests on the laminated timber columns it would use. The columns proved difficult to burn and maintained their structural integrity, earning a three-hour fire-resistance rating.

Without a sustained heat source the charring of the outer layer of a big piece of timber protects the structure inside—try lighting a camp fire when you only have logs. Many of the large urban fires of old, like the Great Fire of London in 1666, were mostly fuelled by small sections of timber acting as kindling. So when it comes to building in wood, it is best to think big.




from Hacker News https://ift.tt/Aue6ftU

Monday, June 26, 2023

The Best Place to Drink Is the Emptiest Bar in the City


from Hacker News https://ift.tt/DEYhPgJ

Cloud Why So Difficult?

A manifesto for cloud-oriented programming.

Don't get me wrong, I love the cloud! It has empowered me to build amazing things, and completely changed the way I use software to innovate and solve problems.

It's the "new computer", the ultimate computer, the "computerless computer". It can elastically scale, it's always up, it exists everywhere, it can do anything. It's boundless. It's definitely here to stay.

But holy crap, there is no way this is how we are going to be building applications for the cloud in the next decade. As the cloud evolved from "I don't want servers under my desk" to "my app needs 30 different managed services to perform its tasks", we kind of lost track of what a great developer experience looks like.

Building applications for the cloud sometimes feels like spilling my kids' bag of unused Lego blocks all over the living room floor, and trying to build a castle. After going through torn-up play cards, scary Barbie-doll heads, and leaking dead batteries, you read the instructions for the millionth time, only to realize you ended up building basically the same thing you built last time.

Sorting Lego blocks is fun! It passes the time with the kiddos. It even feeds my OCD… But hell, this is not how I want to build professional software!

Let me try to describe what my developer friends and I are struggling with.

I want to focus on creating value for my users

When I build professional software, I want most of my time to be spent within the functional domain of my application, instead of non-functional mechanics of the platform I use.

It doesn't make sense that every time I want to execute code inside an AWS Lambda function, I have to understand that it needs to be bundled with tree-shaken dependencies, uploaded as a zip file to S3 and deployed through Terraform. Or that in order to be able to publish a message to SNS, my IAM policy must have a statement that allows the sns:Publish action on the topic's ARN. And does every developer need to understand what ARNs are at all?
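For illustration, such an IAM policy looks roughly like this (the region, account ID and topic name below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:my-topic"
    }
  ]
}
```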

All that stuff doesn't have anything to do with the value I am trying to create for my users. It's pure mechanics. Can we get rid of it?

I want to be independent

One of the most frustrating and flow-killing situations for me as a developer is when I have to stop and wait for someone or something in order to continue.

It's like gliding happily in the air, enjoying the view, beautiful music in the background, and suddenly, BAM! A concrete wall.

This concrete wall takes many shapes and sizes when you build applications for the cloud. It's the DevOps person with an endless ticket queue; it's the IAM policy that needs to be updated; it's the deployment failure that only the external part-time consultant knows how to debug; it's the endless backlog of missing knobs and APIs in the internal platform that we hoped would change everything.

These barriers are frustrating because they force me to switch context, to apply "temporary" security policies and to invent ugly hacks that I don't want to talk about. It's a broken world.

I want to be independent. I want to be able to get things done, to stay in the flow. I want to improve the world one commit at a time, and move on to the next thing after I am finished. I want that dopamine rush of completing a task, not the shameful feeling of yet another unfinished thread.

I want instant feedback

I said I want independence, but don't mistake that for a belief that I write perfect code. That is exactly why I want to write code with a pencil, not with a pen.

Some developers can spend a full day coding without even invoking their compiler, and at the end of the day, they compile and deploy, and it just works.

I admire them, but I am not that type of developer. No sir. For me it's about iterations, iterations, iterations. I start small, sketch with a light pencil, take a look, erase a bunch of stuff, draw a thicker line, take a step back, squint, draw more and erase more, and take another look, rinse and repeat.

This is why, for me, the single most important thing is iteration speed. The sooner I can run my application and test it, the faster I can go back and iterate. This is where my flow is.

When I started programming, I used Borland C++. It used to take about 100ms to compile and run a program on an IBM PC AT machine (TURBO ON). An average iteration cycle in the cloud takes minutes. Minutes! Sometimes dozens of minutes!

Here's what an iteration looks like in the cloud today: I make a change to my code; then I need to compile it; deploy it to my test account; find my way around the management console to actually trigger it; wait for it to run and go search for the logs on another service. Then I realize there is an error response that tells me that I'm stupid, because how come I didn't know that I have to pass in Accept: application/json, because otherwise I get some weird result called "XML" that I have no idea what to do with (just kidding, XML is great, no really). Now all over again...

So "write unit tests", you say, in a patronizing attempt to justify the current reality. "Great developers write unit tests". OK! So now I need to take my code, which makes about 20 external API calls, and somehow mock out the API responses by copying and pasting them from outdated documentation, only to figure out that my requests are rejected because I am missing some implicit action in my IAM security statement. We've all been there.

To be honest, give me the developer experience of the 90s. I want to make a change, and I want to be able to test this change either interactively or through a unit test within milliseconds, and I want to do this while sitting in an airplane with no WiFi, okay? (we didn't have WiFi in the 90s).

So this is just a rant?

Hell no! I am a programmer. I sometimes feel like I've been writing software since birth. I've been doing it in socially perilous times, when being a computer geek was not cool.

What I have always loved about being a developer is that if I was not happy with my tools, I could make my own. Building tools is in our DNA, after all - humans have been building tools for over a million years.

And I am not happy with my tools.

In March 2022, I joined forces with Shai Ber, a good friend and a former Microsoft colleague, and we founded Monada with the mission to unlock the cloud for developers. We've assembled an incredible crew of beautiful geeks that share our passion for developer experience and open-source, and started our journey to empower developers (i.e. ourselves) to solve these fundamental problems.

Compilers to the rescue

So how are we going to solve all of these problems at once? We are building a programming language for the cloud.

"A programming language!?" you ask. "Doesn't the world have enough programming languages?" "Isn't it really hard to write a compiler?" "What are the chances that developers will want to learn a whole new language?" "Why can't you hack into an existing language toolchain, squint your eyes tight enough and call it a day?"

I am not one to build programming languages on a whim. In fact, I've spent the last five years building the AWS CDK, which is a multi-language library that addresses some of the challenges I am talking about by allowing developers to define cloud infrastructure using their favorite programming language.

To "meet developers where they are" is a beautiful tenet of AWS, and of the CDK, and inspired us to create awesome technology such as JSII and constructs.

But sometimes, "where they are" is not a good enough model for creating the desired experience.

Defining infrastructure with code does enable us to create a higher-level of abstraction, but as long as my application code needs to interact with this infrastructure, the abstraction becomes too leaky. I'm yanked back down to having to understand more than I need to, and I have to be an expert in things like IAM, VPC, ALB, EBS and basically more TLAs than I would ever want to keep in my head.

The languages we use today are all designed around the idea that the computer is a single machine. They've reached the point in which they are able to offer us solid abstractions over these machines. They abstract away the CPU, memory, file system, process management and networking. As a developer, I don't have to care how a file is laid out on disk, or even how much memory I need for my hash map. I simply write readFile() or new Dictionary() and go about my day. Yes, it's not a bad idea for me to have some sense of what's happening under the hood, but I am not forced to.

Most of these languages also offer me type-safety. When I call a function with the wrong number of arguments, I get yelled at by my compiler. I don't have to wait until my application is running only to realize I forgot an argument, or passed in the wrong type.

In the cloud, I'm on my own. Every time my code needs to interact with a cloud resource or a service - and that's happening more and more as the industry evolves - I have to leave the comfort and safety of my programming language. I must jump outside the boundaries of the machine and into the wild wild west of the internet, and my compiler is none the wiser.

And suddenly, it's almost painfully obvious where all the pain came from. Cloud applications today are simply a patchwork of disconnected pieces. I have a compiler for my infrastructure, another for my functions, another for my containers, another for my CI/CD pipelines. Each one takes its job super seriously, and keeps me safe and happy inside each of these machines, but my application is not running on a single machine anymore, my application is running on the cloud.

The cloud is the computer.

Wing, a cloud-oriented programming language

When new programming paradigms emerge, it takes languages time to catch up. I used to love building object-oriented code in C, but it was a leaky abstraction. I had to understand how objects are laid out in memory, how V-tables work, and remember to pass the object as the first argument for each function. When programming languages started to support object-oriented concepts as first-class citizens, this paradigm was democratized, and today most developers don't even know what V-tables are, and the world keeps spinning.

Wing, or winglang if you want to be cute about it, has all the good stuff you would expect from a modern, object-oriented, strongly-typed and general-purpose language, but it also includes a few additional primitives designed to support the distributed and service-based nature of the cloud as first-class citizens.

Check it out

We have been working on Wing for almost a year now, and I am excited to invite you to check it out and let me know what you think.

While still in Alpha and not yet ready for production use, it's already possible to build some real applications with it.

Check out https://github.com/winglang/wing for more details.



from Hacker News https://ift.tt/1eaZWcp

Build Your Own Docker with Linux Namespaces Cgroups and Chroot

Introduction

Containerization has transformed the world of software development and deployment. Docker ↗️, a leading containerization platform, leverages Linux namespaces, cgroups, and chroot to provide robust isolation, resource management, and security.

In this hands-on guide, we’ll skip the theory (go through the attached links above if you want to learn more about the mentioned topics) and jump straight into the practical implementation.


Before we delve into building our own Docker-like environment using namespaces, cgroups, and chroot, it's important to clarify that this hands-on guide is not intended to replace Docker or replicate its functionality.

Docker has features such as layered images, networking, container orchestration, and extensive tooling that make it a powerful and versatile solution for deploying applications.

The purpose of this guide is to offer an educational exploration of the foundational technologies that form the core of Docker. By building a basic container environment from scratch, we aim to gain a deeper understanding of how these underlying technologies work together to enable containerization.

Let’s build Docker

Step 1: Setting Up the Namespace

To create an isolated environment, we start by setting up a new namespace. We use the unshare command, specifying different namespaces (--uts, --pid, --net, --mount, and --ipc), which provide separate instances of system identifiers and resources for our container.

unshare --uts --pid --net --mount --ipc --fork
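A quick way to convince yourself the isolation is real is sketched below (an assumption: you run it as root, ideally in a disposable VM). A hostname set inside the new UTS namespace never reaches the host:

```shell
# Sketch only (assumes root, disposable VM): a hostname set inside the
# unshare'd UTS namespace is invisible outside of it.
demo_uts_isolation() {
  unshare --uts sh -c 'hostname container1; hostname'  # prints "container1"
  hostname                                             # host name, unchanged
}
```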

Step 2: Configuring the cgroups

Cgroups (control groups) help manage resource allocation and control the usage of system resources by our containerized processes.

We create a new cgroup for our container and assign CPU quota limits to restrict its resource usage.

mkdir /sys/fs/cgroup/cpu/container1
echo 100000 > /sys/fs/cgroup/cpu/container1/cpu.cfs_quota_us
echo 0 > /sys/fs/cgroup/cpu/container1/tasks
echo $$ > /sys/fs/cgroup/cpu/container1/tasks

On the third and fourth lines we write to the tasks file within the /sys/fs/cgroup/cpu/container1/ directory. The tasks file controls which processes belong to a cgroup: writing a PID moves that task into the group, and writing 0 is kernel shorthand for "the current task". So the third line already attaches our shell to the cgroup.

$$ is a special shell variable that expands to the process ID (PID) of the current shell or script, so the fourth line does the same thing, just more explicitly. Either way, the current process is now assigned to the container1 cgroup.

This ensures that any subsequent child processes spawned by the shell or script will also be part of the container1 cgroup, and their resource usage will be subject to the specified CPU quota limits.
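One detail the commands above leave implicit: the quota is measured against a scheduler period. A small sketch of the arithmetic (the paths used here are cgroup v1; on cgroup v2 systems the equivalent knob is the cpu.max file):

```shell
# cpu.cfs_quota_us is the CPU time (in microseconds) the cgroup may use per
# scheduling period; cpu.cfs_period_us defaults to 100000 us. A quota of
# 100000 therefore allows up to one full CPU; 50000 would allow half a CPU.
quota_us=100000
period_us=100000   # kernel default
echo "CPU share: $((quota_us * 100 / period_us))%"
```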

Step 3: Building the Root File System

To create the file system for our container, we use debootstrap to set up a minimal Ubuntu environment within a directory named "ubuntu-rootfs". This serves as the root file system for our container.

debootstrap focal ./ubuntu-rootfs http://archive.ubuntu.com/ubuntu/

Step 4: Mounting and Chrooting into the Container

We mount essential file systems, such as /proc, /sys, and /dev, within our container’s root file system. Then, we use the chroot command to change the root directory to our container’s file system.

mount -t proc none ./ubuntu-rootfs/proc
mount -t sysfs none ./ubuntu-rootfs/sys
mount -o bind /dev ./ubuntu-rootfs/dev
chroot ./ubuntu-rootfs /bin/bash

The first command mounts the proc filesystem into the ./ubuntu-rootfs/proc directory. The proc filesystem provides information about processes and system resources in a virtual file format.

Mounting the proc filesystem in the specified directory allows processes within the ./ubuntu-rootfs/ environment to access and interact with the system’s process-related information.

The next command mounts the sysfs filesystem into the ./ubuntu-rootfs/sys directory. The sysfs filesystem provides information about devices, drivers, and other kernel-related information in a hierarchical format.

Mounting the sysfs filesystem in the specified directory enables processes within the ./ubuntu-rootfs/ environment to access and interact with system-related information exposed through the sysfs interface.

Finally, we bind the /dev directory to the ./ubuntu-rootfs/dev directory. The /dev directory contains device files that represent physical and virtual devices on the system.

By binding the /dev directory to the ./ubuntu-rootfs/dev directory, any device files accessed within the ./ubuntu-rootfs/ environment will be redirected to the corresponding devices on the host system.

This ensures that the processes running within the ./ubuntu-rootfs/ environment can interact with the necessary devices as if they were directly accessing them on the host system.
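When you are done experimenting, those mounts should be torn down again. A minimal teardown sketch (an assumption: run as root, after exiting the chroot):

```shell
# Teardown sketch (assumes root, run after exiting the chroot): unmount in
# reverse order of mounting; /dev was a bind mount, so a plain umount detaches it.
cleanup_rootfs() {
  umount ./ubuntu-rootfs/dev
  umount ./ubuntu-rootfs/sys
  umount ./ubuntu-rootfs/proc
}
```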

Step 5: Running Applications within the Container

Now that our container environment is set up, we can install and run applications within it. In this example, we install the Nginx web server to demonstrate how applications behave within the container.

(container) $ apt update
(container) $ apt install nginx
(container) $ service nginx start

Conclusion

By taking a hands-on approach and exploring the code and command examples, we’ve gained a practical understanding of building our own Docker-like environment using Linux namespaces, cgroups, and chroot.

Of course, Docker containerization is a lot more than what we just explored above, but these fundamentals empower us to create isolated and efficient environments for our applications.



from Hacker News https://ift.tt/KGRrf7H

The dangers of tea drinking in nineteenth century Ireland

In many places around the world, hospitality means offering guests a cup of tea. As historian Tricia Cusack writes, this was increasingly true in nineteenth-century Ireland. But when the people doing the drinking were from the lower classes, many medical and social commentators raised alarms.

Cusack writes that the practice of taking afternoon tea spread from fashionable Dublin to upper and middle classes around Ireland in the 1800s. Women could demonstrate their families’ status with tasteful tea parties governed by rules of etiquette imported from England. Among these were that the tea must be of good quality, refreshments should be placed on a silver tray, and nothing serious or controversial should be discussed. Moderation was also crucial. As one etiquette manual advised, “It is not usual for a lady to take more than one cup of tea.” Tea was part of a larger package of Victorian femininity, which called for women to focus a great deal of effort on keeping a bright, clean home.

But when it came to the urban poor and farm laborers, popular discourse was very different. As early as 1745, a treatise on tea by British writer Simon Mason promoted afternoon tea drinking as a digestive aid for elites who enjoyed large meals and many glasses of wine. On the other hand, he discouraged “an imprudent Use of Tea, by Persons of an inferior Rank, and mean Abilities.” In particular, when it came to women who “work hard and live low,” he argued, tea “makes them peevish and unkind to their husbands… These poor Creatures, to be fashionable and imitate their Superiors, are neglecting their Spinning, Knitting, etc spending what their Husbands are working hard for.”

Cusack writes that many commentators also disapproved of the way the lower classes prepared tea. Where genteel guides called for tea to be steeped only briefly, the practice among the poor was to keep a kettle continually brewing on the hob or in the ashes of a fire, ready to share with neighbors who stopped by or to drink with meals. Medical authorities argued that the continual brewing extracted all the tannins from tea, resulting in gastric distress, nervous disorders, and even hallucinations.

Upper-class commenters warned that tea-drinking by the poor, particularly poor women, was not just unhealthy but dangerous to social order. For example, in one nineteenth-century “improvement” story, a young woman warns a servant that if she began drinking tea “you would be hankering after it, when you got the way of it.” Another describes an unwholesome family in which the wife’s tea-drinking habit drives her to thievery and threatens financial ruin.

Cusack concludes that the very different judgements placed on tea drinking reflected its place as “an ambivalent practice, deemed as important for supporting civilized social life as it was claimed to be instrumental in undermining it.”



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

By: Tricia Cusack

The Canadian Journal of Irish Studies, Vol. 41, THE FOOD ISSUE (2018), pp. 178–209

Canadian Journal of Irish Studies



from Hacker News https://ift.tt/HEw8emN

Sunday, June 25, 2023

John Goodenough has died

John Bannister Goodenough, the American co-inventor of the lithium-ion battery and a co-winner of the 2019 Nobel Prize in Chemistry, has passed away. He was just a month short of turning 101.

Goodenough’s death has been confirmed to businessline by his student Nicholas Grundish.

British-American scientist Stan Whittingham, who shared the Nobel prize with Goodenough, was the first to reveal that lithium can be stored within sheets of titanium sulphide. Goodenough perfected it with a cobalt-based cathode to create a product that today touches nearly everyone’s life.

Goodenough also played a significant role in the development of Random Access Memory (RAM) for computers.

John Goodenough was born to American parents in Jena, Germany, per the Nobel Prize website. After studying mathematics at Yale University, he served in the US Army during the Second World War as a meteorologist.

He then studied at the University of Chicago and received a doctorate in physics in 1952. He subsequently worked at the Massachusetts Institute of Technology and Oxford University in the UK.

He had been a professor at the University of Texas at Austin.

In 2008, Goodenough wrote his autobiography, Witness to Grace, which he called “my personal history”. The book touches upon science and spirituality.

businessline carried an article about Goodenough on July 10, 2022, just before the scientist turned 100.

Prof Preetham Singh of IIT-BHU, who was one of Goodenough’s students, recalls that the Nobel laureate was “a great soul, very humanistic, whose doors were always open to anyone for discussion, suggestion and help.”


Published on June 26, 2023



from Hacker News https://ift.tt/d9rl2eS

XML is the future

My first hype exposure was "use the Extensible Markup Language for everything". Learning from it allowed me to live through the front end stack explosion, the micro-service overdose and many, many more silly trends.

It turns out Grandma was right. Eat vegetables, exercise, sleep well.

And use the right tool for the right job.

Well, she didn't say that last one.

But she could have.

When I started programming, XML was going to replace everything. HTML with XHTML, validation with DTD, transformation and presentation with XSLT, communication with SOAP.

I missed the train on the OOP hype, that was the generation before me, but I read so many articles warning me about it, that I applied the reasoning to XML: let's wait and see if this is actually as good as they say before investing everything in it.

Turns out, XML was not the future. It was mostly technical debt.

It was mostly useful for things like documents, and I believe its most successful use is still the MS Office and LibreOffice file formats. They are just zips of XML.

I was lucky to learn this lesson very early in my career: there is no silver bullet, any single tool, no matter how good it is, must be evaluated from the engineering point of view of pros and cons. Everything has a cost, and implies compromises. It's a matter of ROI. Which is hard to evaluate without experience.

Bottom line, time is once again the great equalizer: there is no substitute for watching how a complex system evolves, no matter your model of the world.

But above all, I learned that geeks think they are rational beings, while they are completely influenced by buzz, marketing, and their emotions. Even more so than the average person, because they believe they are less susceptible to it than normies, so they have a blind spot.

XML was just the beginning of many, many waves of hype.

When MongoDB came around (it's web scale!), suddenly you had to use NoSQL for everything. Didn't matter that there was absolutely no relation between 2 NoSQL systems. It's like labeling a country as "doesn't speak English". Didn't matter MongoDB was a terrible product at the time that was destroying your data (they did fix that, it's now a good DB to have in your toolbox). Didn't matter that most people using it didn't need free replication because their data could fit in a SQLite file.

So we watched beginners put their data with no schema, no consistency, and broken validation into a big bag of blobs. The projects failed en masse.

Then the node era arrived. Isomorphic JavaScript was all the rage, you had to use the same language in the frontend and the backend, and make everything async. But JS sucked, so most JS projects were created... to avoid writing ES5. I mean, no import, no namespace, terrible scoping, schizophrenic this, prototype-based inheritance, weak types, come on! So we got coffeescript, then babel, webpack, typescript, react + JSX, etc.

We were told to stay on top of the most modern ecosystem, and by that I mean dealing with compatibility being broken every two months. That’s the price of cutting edge tree-shaking transpilation. That, and a left-pad way of life you couldn’t debug because the map files were generated wrong.

At this point, everything needed to be a Single Page Application with client-side routing, immutable data structures and some kind of store. That is, if you could choose between flux, redux, alt, reflux, flummox, fluxible, fluxxor, marty.js, fynx, MacFly, DeLorean.js, fluxify, fluxury, exim, fluxtore, Redx, fluxx… No, I’m not making that up.

But because you still had to pass a lot of data through the wire, and since everything had to be on the client, GraphQL was born. Of course, all that stuff had terrible accessibility, SEO and first-rendering time issues, leading to the rise of Server-Side Rendering, aka CGI with extra steps. This couldn’t stop there, so the community added hydration on top.

This turned out to be an immense addition in complexity, and created tons and tons of disposable code base, leading to, you get it, failed projects and waste of money.

Because, of course, most of those tasks could have been done with Ruby on Rails, Symfony or Django and a pinch of jQuery. At least, they would have been finished with those boring techs. Instead, dead projects began to accumulate, and for one Figma shining, you had a trail of hidden bodies behind corporate walls nobody dared to talk about.

It was taboo to speak about this madness. You were the one not getting it.

You would think people drowning while trying to produce a basic CRUD app would have been a red flag.

Instead, it inspired teams everywhere in the world to make things harder on themselves.

First, the "everything should be a micro-service" crowd started to take over. Every single small website had a docker container for the restish API, plus one for the front end, and one for the database. Indirection layers on top of indirection layers. To communicate between all that, why not a little message queue? ZeroMQ, RabbitMQ... And a good exchange format, like grpc with protobuff.

Believe it or not, it became very hard to make your todo-list app work with all those, so a solution was found: adding orchestration. Docker swarm, and now kubernetes.

At this stage, so much time and money had been obliterated that the cloud felt like a savior: they would do all that for you, for a fee. You just had to learn their entire way of doing things, debug their black box, be locked into their ecosystem, and carefully optimize and configure - using state-of-the-art templated YAML files and hostile UIs - your entire project, so that you could only spend 10 times more on hosting, and not 10000 times by mistake.

Easy.

Second, big data arrived. You had to store every single click of your users. A/B test everything as well, so that you consistently annoy 10% of your customers and make support unbearable. Now the data you had was gigantic! And if it was not, you had to believe it, and you needed some kind of Dynamo data lake. Or maybe a time series db. Or a graph one. You needed something, that's for sure.

Third, all of that stuff was now very slow. It was not because of the terrible technical decisions leading to use Google level industrial architectures for your 100 request/seconds website, no. It was because you used a slow language. So let's rewrite everything in Go. Or Rust.

The compilation step is not going to have any impact on the feedback loop anyway, since the CI pipeline already takes 73 minutes.

That was the last straw, so out of tiredness, devs went back to simple ways...

Just kidding, they went in flocks to serverless lambda and SaaS services you call from the edge, cause not owning your stack is the future!

Meanwhile, while the blog posts about burn out were increasing tenfold, somewhere at the top, leaders heard the call of money.

You can't grow without making everything social.

Gamify, gamify, gamify.

Block chain will change the universe.

You need an AMP website.

Your stuff is not competitive without Machine Learning.

If you lived through all those, you know what remains about it: almost nothing.

A few "share" buttons and "login with" workflows. Some points and badges. Graphs.

Things either died, or filled the niche they were good at, as they should.

Some were replaced by the future of today.

I like the new hype: YAGNI is popular again.

Projects like Vue, HTMX and unpoly, alpine.js or just vanilla are getting traction.

There is talk of coming back to using Postgres for most things.

37signals is on the spotlight once more, because they left the cloud.

It will, of course, be overdone. Because minimalism being hyped is still... hype.

You do need the cloud, containers, nosql, go, rust and js build systems. Modern software requirements, customers’ expectations and incredible new features are not to be ignored.

Just not for everything.

Nothing is ever needed for everything.



from Hacker News https://ift.tt/xEHq8K5

NRF52840 Connect Kit Rapid prototyping kit for your next connected projects

nRF52840 Connect Kit


Rapid prototyping kit for your next connected projects

Introduction

nRF52840 Connect Kit is an open-source prototyping kit designed for connected projects. It is built using the nRF52840 SoC, which has protocol support for Bluetooth LE, Bluetooth mesh, Thread, Zigbee, 802.15.4, ANT and 2.4 GHz proprietary stacks. It provides Arm TrustZone® CryptoCell cryptographic unit as well as numerous peripherals such as USB 2.0, NFC-A, GPIO, UART, SPI, TWI, PDM, I2S, QSPI, PWM, ADC, QDEC to support a wide range of applications.

The design is available in an easy-to-use form factor with USB-C and 40 pin DIP/SMT type, including up to 32 multi-function GPIO pins (7 can be used as ADC inputs) and Serial Wire Debug (SWD) port. It features RGB LED, Buttons, external 64 Mbit QSPI flash and flexible power management with various options for easily powering the unit from USB-C, external supplies or batteries, and also has Chip antenna and U.FL receptacle options to support various wireless scenarios.

nRF52840 Connect Kit supports the nRF Connect SDK, which integrates the Zephyr RTOS, protocol stacks, samples, hardware drivers and much more. We also offer Python support, allowing you to access hardware-specific functionality and peripherals using the Python programming language.


Key Features

  • Nordic Semiconductor nRF52840 SoC

    • 64 MHz Arm® Cortex-M4 with FPU
    • 1 MB Flash + 256 KB RAM
    • Bluetooth LE, Bluetooth mesh, Thread, Zigbee, 802.15.4, ANT and 2.4 GHz proprietary
    • Arm TrustZone® Cryptocell 310 Security Subsystem
    • 2.4 GHz Transceiver with +8 dBm TX Power
    • GPIO, UART, SPI, TWI(I2C), PDM, I2S, QSPI, PWM, QDEC, 12-bit ADC support
    • Integrated USB 2.0 Full-speed Controller
    • Integrated NFC-A Tag
  • Ultra low power 64 Mbit QSPI flash memory

  • User-programmable RGB LED and Buttons

  • Up to 32 multi-function General Purpose IOs (7 can be used as ADC inputs)

  • Arm Serial Wire Debug (SWD) port via edge pins

  • Flexible power management with various options for easily powering the unit

  • Wide input voltage range: 1.8 V to 5.5 V, output 3.3V and up to 2A when Input ≥ 2.3 V

  • 3.3V IO Operating Voltage

  • Reversible USB-C connector

  • Available in Chip antenna and U.FL receptacle options

  • 40 pin 55.88mm x 20.32mm (2.2" x 0.8") DIP/SMT form factor

  • Shipped with UF2 Bootloader supporting Drag-and-drop programming over USB drive

  • Built on open source, supporting nRF Connect SDK, Zephyr RTOS, Python, etc.

Hardware Diagram

The following figure illustrates the nRF52840 Connect Kit hardware diagram. The design is available in Chip antenna and U.FL receptacle options; both share the same components except for the antenna interface.

Hardware Diagram

Documentation

We offer an extensive set of documentation, including out-of-box experience, getting started and developer guides, which can help you reduce development effort.

Where to buy

nRF52840 Connect Kit is available on the following channels (click to go directly to the product):

makerdiary store Taobao Tindie

Community Support

Community support is provided via GitHub Discussions. You can also reach us on Makerdiary Community.

We would love to have more developers contribute to this project! If you're passionate about making this project better, see our Contributing Guidelines for more information.

License

Copyright (c) 2016-2023 Makerdiary. See LICENSE for further details.



from Hacker News https://ift.tt/Nq1EW2k

Saturday, June 24, 2023

Bcachefs generates executable code at runtime to unpack btree nodes
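The listing below is the bcachefs bkey packing/unpacking code. The underlying scheme: each key field is stored as (value − field_offset) in bits_per_field bits, packed MSB-first across 64-bit words, and the runtime-generated machine code at the end is simply an unpacker specialized for one such format. As a rough host-side illustration of that scheme only — hypothetical names and a single-word layout, not the real bcachefs structures:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified sketch: three key fields stored MSB-first in one 64-bit word,
 * each as (value - offset) in a fixed number of bits, mirroring the idea
 * behind set_inc_field()/get_inc_field() below (real keys span several
 * words and carry a header). */
struct fmt {
        unsigned bits[3];       /* bits_per_field */
        uint64_t offset[3];     /* field_offset: smallest value stored */
};

static uint64_t pack3(const struct fmt *f, const uint64_t v[3])
{
        uint64_t w = 0;
        unsigned shift = 64;

        for (int i = 0; i < 3; i++) {
                shift -= f->bits[i];
                w |= (v[i] - f->offset[i]) << shift;
        }
        return w;
}

static void unpack3(const struct fmt *f, uint64_t w, uint64_t out[3])
{
        unsigned shift = 64;

        for (int i = 0; i < 3; i++) {
                shift -= f->bits[i];
                uint64_t mask = f->bits[i] ? ~0ULL >> (64 - f->bits[i]) : 0;
                out[i] = ((w >> shift) & mask) + f->offset[i];
        }
}
```

Narrow per-field widths plus per-field offsets are what let most keys pack into far fewer bits than the unpacked struct bkey; a JIT-compiled unpacker for one fixed format avoids the per-field loop entirely.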

// SPDX-License-Identifier: GPL-2.0

#include "bcachefs.h"
#include "bkey.h"
#include "bkey_cmp.h"
#include "bkey_methods.h"
#include "bset.h"
#include "util.h"

#undef EBUG_ON

#ifdef DEBUG_BKEYS
#define EBUG_ON(cond)             BUG_ON(cond)
#else
#define EBUG_ON(cond)
#endif

const struct bkey_format bch2_bkey_format_current = BKEY_FORMAT_CURRENT;

void bch2_bkey_packed_to_binary_text(struct printbuf *out,
                                     const struct bkey_format *f,
                                     const struct bkey_packed *k)
{
        const u64 *p = high_word(f, k);
        unsigned word_bits = 64 - high_bit_offset;
        unsigned nr_key_bits = bkey_format_key_bits(f) + high_bit_offset;
        u64 v = *p & (~0ULL >> high_bit_offset);

        if (!nr_key_bits) {
                prt_str(out, "(empty)");
                return;
        }

        while (1) {
                unsigned next_key_bits = nr_key_bits;

                if (nr_key_bits < 64) {
                        v >>= 64 - nr_key_bits;
                        next_key_bits = 0;
                } else {
                        next_key_bits -= 64;
                }

                bch2_prt_u64_binary(out, v, min(word_bits, nr_key_bits));

                if (!next_key_bits)
                        break;

                prt_char(out, ' ');

                p = next_word(p);
                v = *p;
                word_bits = 64;
                nr_key_bits = next_key_bits;
        }
}

#ifdef CONFIG_BCACHEFS_DEBUG

static void bch2_bkey_pack_verify(const struct bkey_packed *packed,
                                  const struct bkey *unpacked,
                                  const struct bkey_format *format)
{
        struct bkey tmp;

        BUG_ON(bkeyp_val_u64s(format, packed) !=
               bkey_val_u64s(unpacked));

        BUG_ON(packed->u64s < bkeyp_key_u64s(format, packed));

        tmp = __bch2_bkey_unpack_key(format, packed);

        if (memcmp(&tmp, unpacked, sizeof(struct bkey))) {
                struct printbuf buf = PRINTBUF;

                prt_printf(&buf, "keys differ: format u64s %u fields %u %u %u %u %u\n",
                      format->key_u64s,
                      format->bits_per_field[0],
                      format->bits_per_field[1],
                      format->bits_per_field[2],
                      format->bits_per_field[3],
                      format->bits_per_field[4]);

                prt_printf(&buf, "compiled unpack: ");
                bch2_bkey_to_text(&buf, unpacked);
                prt_newline(&buf);

                prt_printf(&buf, "c unpack:        ");
                bch2_bkey_to_text(&buf, &tmp);
                prt_newline(&buf);

                prt_printf(&buf, "compiled unpack: ");
                bch2_bkey_packed_to_binary_text(&buf, &bch2_bkey_format_current,
                                                (struct bkey_packed *) unpacked);
                prt_newline(&buf);

                prt_printf(&buf, "c unpack:        ");
                bch2_bkey_packed_to_binary_text(&buf, &bch2_bkey_format_current,
                                                (struct bkey_packed *) &tmp);
                prt_newline(&buf);

                panic("%s", buf.buf);
        }
}

#else
static inline void bch2_bkey_pack_verify(const struct bkey_packed *packed,
                                        const struct bkey *unpacked,
                                        const struct bkey_format *format) {}
#endif

struct pack_state {
        const struct bkey_format *format;
        unsigned                bits;   /* bits remaining in current word */
        u64                     w;      /* current word */
        u64                     *p;     /* pointer to next word */
};

__always_inline
static struct pack_state pack_state_init(const struct bkey_format *format,
                                         struct bkey_packed *k)
{
        u64 *p = high_word(format, k);

        return (struct pack_state) {
                .format    = format,
                .bits      = 64 - high_bit_offset,
                .w = 0,
                .p = p,
        };
}

__always_inline
static void pack_state_finish(struct pack_state *state,
                              struct bkey_packed *k)
{
        EBUG_ON(state->p <  k->_data);
        EBUG_ON(state->p >= k->_data + state->format->key_u64s);

        *state->p = state->w;
}

struct unpack_state {
        const struct bkey_format *format;
        unsigned                bits;   /* bits remaining in current word */
        u64                     w;      /* current word */
        const u64               *p;     /* pointer to next word */
};

__always_inline
static struct unpack_state unpack_state_init(const struct bkey_format *format,
                                             const struct bkey_packed *k)
{
        const u64 *p = high_word(format, k);

        return (struct unpack_state) {
                .format    = format,
                .bits      = 64 - high_bit_offset,
                .w = *p << high_bit_offset,
                .p = p,
        };
}

__always_inline
static u64 get_inc_field(struct unpack_state *state, unsigned field)
{
        unsigned bits = state->format->bits_per_field[field];
        u64 v = 0, offset = le64_to_cpu(state->format->field_offset[field]);

        if (bits >= state->bits) {
                v = state->w >> (64 - bits);
                bits -= state->bits;

                state->p = next_word(state->p);
                state->w = *state->p;
                state->bits = 64;
        }

        /* avoid shift by 64 if bits is 0 - bits is never 64 here: */
        v |= (state->w >> 1) >> (63 - bits);
        state->w <<= bits;
        state->bits -= bits;

        return v + offset;
}

__always_inline
static bool set_inc_field(struct pack_state *state, unsigned field, u64 v)
{
        unsigned bits = state->format->bits_per_field[field];
        u64 offset = le64_to_cpu(state->format->field_offset[field]);

        if (v < offset)
                return false;

        v -= offset;

        if (fls64(v) > bits)
                return false;

        if (bits > state->bits) {
                bits -= state->bits;
                /* avoid shift by 64 if bits is 0 - bits is never 64 here: */
                state->w |= (v >> 1) >> (bits - 1);

                *state->p = state->w;
                state->p = next_word(state->p);
                state->w = 0;
                state->bits = 64;
        }

        state->bits -= bits;
        state->w |= v << state->bits;

        return true;
}

/*
 * Note: does NOT set out->format (we don't know what it should be here!)
 *
 * Also: doesn't work on extents - it doesn't preserve the invariant that
 * if k is packed bkey_start_pos(k) will successfully pack
 */
static bool bch2_bkey_transform_key(const struct bkey_format *out_f,
                                   struct bkey_packed *out,
                                   const struct bkey_format *in_f,
                                   const struct bkey_packed *in)
{
        struct pack_state out_s = pack_state_init(out_f, out);
        struct unpack_state in_s = unpack_state_init(in_f, in);
        u64 *w = out->_data;
        unsigned i;

        *w = 0;

        for (i = 0; i < BKEY_NR_FIELDS; i++)
                if (!set_inc_field(&out_s, i, get_inc_field(&in_s, i)))
                        return false;

        /* Can't happen because the val would be too big to unpack: */
        EBUG_ON(in->u64s - in_f->key_u64s + out_f->key_u64s > U8_MAX);

        pack_state_finish(&out_s, out);
        out->u64s       = out_f->key_u64s + in->u64s - in_f->key_u64s;
        out->needs_whiteout = in->needs_whiteout;
        out->type       = in->type;

        return true;
}

bool bch2_bkey_transform(const struct bkey_format *out_f,
                        struct bkey_packed *out,
                        const struct bkey_format *in_f,
                        const struct bkey_packed *in)
{
        if (!bch2_bkey_transform_key(out_f, out, in_f, in))
                return false;

        memcpy_u64s((u64 *) out + out_f->key_u64s,
                    (u64 *) in + in_f->key_u64s,
                    (in->u64s - in_f->key_u64s));
        return true;
}

struct bkey __bch2_bkey_unpack_key(const struct bkey_format *format,
                              const struct bkey_packed *in)
{
        struct unpack_state state = unpack_state_init(format, in);
        struct bkey out;

        EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
        EBUG_ON(in->u64s < format->key_u64s);
        EBUG_ON(in->format != KEY_FORMAT_LOCAL_BTREE);
        EBUG_ON(in->u64s - format->key_u64s + BKEY_U64s > U8_MAX);

        out.u64s   = BKEY_U64s + in->u64s - format->key_u64s;
        out.format = KEY_FORMAT_CURRENT;
        out.needs_whiteout = in->needs_whiteout;
        out.type   = in->type;
        out.pad[0] = 0;

#define x(id, field)      out.field = get_inc_field(&state, id);
        bkey_fields()
#undef x

        return out;
}

#ifndef HAVE_BCACHEFS_COMPILED_UNPACK
struct bpos __bkey_unpack_pos(const struct bkey_format *format,
                                     const struct bkey_packed *in)
{
        struct unpack_state state = unpack_state_init(format, in);
        struct bpos out;

        EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
        EBUG_ON(in->u64s < format->key_u64s);
        EBUG_ON(in->format != KEY_FORMAT_LOCAL_BTREE);

        out.inode  = get_inc_field(&state, BKEY_FIELD_INODE);
        out.offset = get_inc_field(&state, BKEY_FIELD_OFFSET);
        out.snapshot       = get_inc_field(&state, BKEY_FIELD_SNAPSHOT);

        return out;
}
#endif

/**
 * bch2_bkey_pack_key -- pack just the key, not the value
 */
bool bch2_bkey_pack_key(struct bkey_packed *out, const struct bkey *in,
                   const struct bkey_format *format)
{
        struct pack_state state = pack_state_init(format, out);
        u64 *w = out->_data;

        EBUG_ON((void *) in == (void *) out);
        EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
        EBUG_ON(in->format != KEY_FORMAT_CURRENT);

        *w = 0;

#define x(id, field)      if (!set_inc_field(&state, id, in->field)) return false;
        bkey_fields()
#undef x
        pack_state_finish(&state, out);
        out->u64s       = format->key_u64s + in->u64s - BKEY_U64s;
        out->format     = KEY_FORMAT_LOCAL_BTREE;
        out->needs_whiteout = in->needs_whiteout;
        out->type       = in->type;

        bch2_bkey_pack_verify(out, in, format);
        return true;
}

/**
 * bch2_bkey_unpack -- unpack the key and the value
 */
void bch2_bkey_unpack(const struct btree *b, struct bkey_i *dst,
                 const struct bkey_packed *src)
{
        __bkey_unpack_key(b, &dst->k, src);

        memcpy_u64s(&dst->v,
                    bkeyp_val(&b->format, src),
                    bkeyp_val_u64s(&b->format, src));
}

/**
 * bch2_bkey_pack -- pack the key and the value
 */
bool bch2_bkey_pack(struct bkey_packed *out, const struct bkey_i *in,
               const struct bkey_format *format)
{
        struct bkey_packed tmp;

        if (!bch2_bkey_pack_key(&tmp, &in->k, format))
                return false;

        memmove_u64s((u64 *) out + format->key_u64s,
                     &in->v,
                     bkey_val_u64s(&in->k));
        memcpy_u64s_small(out, &tmp, format->key_u64s);

        return true;
}

__always_inline
static bool set_inc_field_lossy(struct pack_state *state, unsigned field, u64 v)
{
        unsigned bits = state->format->bits_per_field[field];
        u64 offset = le64_to_cpu(state->format->field_offset[field]);
        bool ret = true;

        EBUG_ON(v < offset);
        v -= offset;

        if (fls64(v) > bits) {
                v = ~(~0ULL << bits);
                ret = false;
        }

        if (bits > state->bits) {
                bits -= state->bits;
                state->w |= (v >> 1) >> (bits - 1);

                *state->p = state->w;
                state->p = next_word(state->p);
                state->w = 0;
                state->bits = 64;
        }

        state->bits -= bits;
        state->w |= v << state->bits;

        return ret;
}

#ifdef CONFIG_BCACHEFS_DEBUG
static bool bkey_packed_successor(struct bkey_packed *out,
                                  const struct btree *b,
                                  struct bkey_packed k)
{
        const struct bkey_format *f = &b->format;
        unsigned nr_key_bits = b->nr_key_bits;
        unsigned first_bit, offset;
        u64 *p;

        EBUG_ON(b->nr_key_bits != bkey_format_key_bits(f));

        if (!nr_key_bits)
                return false;

        *out = k;

        first_bit = high_bit_offset + nr_key_bits - 1;
        p = nth_word(high_word(f, out), first_bit >> 6);
        offset = 63 - (first_bit & 63);

        while (nr_key_bits) {
                unsigned bits = min(64 - offset, nr_key_bits);
                u64 mask = (~0ULL >> (64 - bits)) << offset;

                if ((*p & mask) != mask) {
                        *p += 1ULL << offset;
                        EBUG_ON(bch2_bkey_cmp_packed(b, out, &k) <= 0);
                        return true;
                }

                *p &= ~mask;
                p = prev_word(p);
                nr_key_bits -= bits;
                offset = 0;
        }

        return false;
}
#endif

/*
 * Returns a packed key that compares <= in
 *
 * This is used in bset_search_tree(), where we need a packed pos in order to be
 * able to compare against the keys in the auxiliary search tree - and it's
 * legal to use a packed pos that isn't equivalent to the original pos,
 * _provided_ it compares <= to the original pos.
 */
enum bkey_pack_pos_ret bch2_bkey_pack_pos_lossy(struct bkey_packed *out,
                                           struct bpos in,
                                           const struct btree *b)
{
        const struct bkey_format *f = &b->format;
        struct pack_state state = pack_state_init(f, out);
        u64 *w = out->_data;
#ifdef CONFIG_BCACHEFS_DEBUG
        struct bpos orig = in;
#endif
        bool exact = true;
        unsigned i;

        /*
         * bch2_bkey_pack_key() will write to all of f->key_u64s, minus the 3
         * byte header, but pack_pos() won't if the len/version fields are big
         * enough - we need to make sure to zero them out:
         */
        for (i = 0; i < f->key_u64s; i++)
                w[i] = 0;

        if (unlikely(in.snapshot <
                     le64_to_cpu(f->field_offset[BKEY_FIELD_SNAPSHOT]))) {
                if (!in.offset-- &&
                    !in.inode--)
                        return BKEY_PACK_POS_FAIL;
                in.snapshot        = KEY_SNAPSHOT_MAX;
                exact = false;
        }

        if (unlikely(in.offset <
                     le64_to_cpu(f->field_offset[BKEY_FIELD_OFFSET]))) {
                if (!in.inode--)
                        return BKEY_PACK_POS_FAIL;
                in.offset  = KEY_OFFSET_MAX;
                in.snapshot        = KEY_SNAPSHOT_MAX;
                exact = false;
        }

        if (unlikely(in.inode <
                     le64_to_cpu(f->field_offset[BKEY_FIELD_INODE])))
                return BKEY_PACK_POS_FAIL;

        if (unlikely(!set_inc_field_lossy(&state, BKEY_FIELD_INODE, in.inode))) {
                in.offset  = KEY_OFFSET_MAX;
                in.snapshot        = KEY_SNAPSHOT_MAX;
                exact = false;
        }

        if (unlikely(!set_inc_field_lossy(&state, BKEY_FIELD_OFFSET, in.offset))) {
                in.snapshot        = KEY_SNAPSHOT_MAX;
                exact = false;
        }

        if (unlikely(!set_inc_field_lossy(&state, BKEY_FIELD_SNAPSHOT, in.snapshot)))
                exact = false;

        pack_state_finish(&state, out);
        out->u64s       = f->key_u64s;
        out->format     = KEY_FORMAT_LOCAL_BTREE;
        out->type       = KEY_TYPE_deleted;

#ifdef CONFIG_BCACHEFS_DEBUG
        if (exact) {
                BUG_ON(bkey_cmp_left_packed(b, out, &orig));
        } else {
                struct bkey_packed successor;

                BUG_ON(bkey_cmp_left_packed(b, out, &orig) >= 0);
                BUG_ON(bkey_packed_successor(&successor, b, *out) &&
                       bkey_cmp_left_packed(b, &successor, &orig) < 0);
        }
#endif

        return exact ? BKEY_PACK_POS_EXACT : BKEY_PACK_POS_SMALLER;
}

void bch2_bkey_format_init(struct bkey_format_state *s)
{
        unsigned i;

        for (i = 0; i < ARRAY_SIZE(s->field_min); i++)
                s->field_min[i] = U64_MAX;

        for (i = 0; i < ARRAY_SIZE(s->field_max); i++)
                s->field_max[i] = 0;

        /* Make sure we can store a size of 0: */
        s->field_min[BKEY_FIELD_SIZE] = 0;
}

void bch2_bkey_format_add_pos(struct bkey_format_state *s, struct bpos p)
{
        unsigned field = 0;

        __bkey_format_add(s, field++, p.inode);
        __bkey_format_add(s, field++, p.offset);
        __bkey_format_add(s, field++, p.snapshot);
}

/*
 * We don't want it to be possible for the packed format to represent fields
 * bigger than a u64... that will cause confusion and issues (like with
 * bkey_packed_successor())
 */
static void set_format_field(struct bkey_format *f, enum bch_bkey_fields i,
                             unsigned bits, u64 offset)
{
        unsigned unpacked_bits = bch2_bkey_format_current.bits_per_field[i];
        u64 unpacked_max = ~((~0ULL << 1) << (unpacked_bits - 1));

        bits = min(bits, unpacked_bits);

        offset = bits == unpacked_bits ? 0 : min(offset, unpacked_max - ((1ULL << bits) - 1));

        f->bits_per_field[i] = bits;
        f->field_offset[i]   = cpu_to_le64(offset);
}

struct bkey_format bch2_bkey_format_done(struct bkey_format_state *s)
{
        unsigned i, bits = KEY_PACKED_BITS_START;
        struct bkey_format ret = {
                .nr_fields = BKEY_NR_FIELDS,
        };

        for (i = 0; i < ARRAY_SIZE(s->field_min); i++) {
                s->field_min[i] = min(s->field_min[i], s->field_max[i]);

                set_format_field(&ret, i,
                                 fls64(s->field_max[i] - s->field_min[i]),
                                 s->field_min[i]);

                bits += ret.bits_per_field[i];
        }

        /* allow for extent merging: */
        if (ret.bits_per_field[BKEY_FIELD_SIZE]) {
                ret.bits_per_field[BKEY_FIELD_SIZE] += 4;
                bits += 4;
        }

        ret.key_u64s = DIV_ROUND_UP(bits, 64);

        /* if we have enough spare bits, round fields up to nearest byte */
        bits = ret.key_u64s * 64 - bits;

        for (i = 0; i < ARRAY_SIZE(ret.bits_per_field); i++) {
                unsigned r = round_up(ret.bits_per_field[i], 8) -
                        ret.bits_per_field[i];

                if (r <= bits) {
                        set_format_field(&ret, i,
                                         ret.bits_per_field[i] + r,
                                         le64_to_cpu(ret.field_offset[i]));
                        bits -= r;
                }
        }

        EBUG_ON(bch2_bkey_format_validate(&ret));
        return ret;
}

const char *bch2_bkey_format_validate(struct bkey_format *f)
{
        unsigned i, bits = KEY_PACKED_BITS_START;

        if (f->nr_fields != BKEY_NR_FIELDS)
                return "incorrect number of fields";

        /*
         * Verify that the packed format can't represent fields larger than the
         * unpacked format:
         */
        for (i = 0; i < f->nr_fields; i++) {
                unsigned unpacked_bits = bch2_bkey_format_current.bits_per_field[i];
                u64 unpacked_max = ~((~0ULL << 1) << (unpacked_bits - 1));
                u64 packed_max = f->bits_per_field[i]
                        ? ~((~0ULL << 1) << (f->bits_per_field[i] - 1))
                        : 0;
                u64 field_offset = le64_to_cpu(f->field_offset[i]);

                if (packed_max + field_offset < packed_max ||
                    packed_max + field_offset > unpacked_max)
                        return "field too large";

                bits += f->bits_per_field[i];
        }

        if (f->key_u64s != DIV_ROUND_UP(bits, 64))
                return "incorrect key_u64s";

        return NULL;
}

/*
 * Most significant differing bit
 * Bits are indexed from 0 - return is [0, nr_key_bits)
 */
__pure
unsigned bch2_bkey_greatest_differing_bit(const struct btree *b,
                                          const struct bkey_packed *l_k,
                                          const struct bkey_packed *r_k)
{
        const u64 *l = high_word(&b->format, l_k);
        const u64 *r = high_word(&b->format, r_k);
        unsigned nr_key_bits = b->nr_key_bits;
        unsigned word_bits = 64 - high_bit_offset;
        u64 l_v, r_v;

        EBUG_ON(b->nr_key_bits != bkey_format_key_bits(&b->format));

        /* for big endian, skip past header */
        l_v = *l & (~0ULL >> high_bit_offset);
        r_v = *r & (~0ULL >> high_bit_offset);

        while (nr_key_bits) {
                if (nr_key_bits < word_bits) {
                        l_v >>= word_bits - nr_key_bits;
                        r_v >>= word_bits - nr_key_bits;
                        nr_key_bits = 0;
                } else {
                        nr_key_bits -= word_bits;
                }

                if (l_v != r_v)
                        return fls64(l_v ^ r_v) - 1 + nr_key_bits;

                l = next_word(l);
                r = next_word(r);

                l_v = *l;
                r_v = *r;
                word_bits = 64;
        }

        return 0;
}

/*
 * First set bit
 * Bits are indexed from 0 - return is [0, nr_key_bits)
 */
__pure
unsigned bch2_bkey_ffs(const struct btree *b, const struct bkey_packed *k)
{
        const u64 *p = high_word(&b->format, k);
        unsigned nr_key_bits = b->nr_key_bits;
        unsigned ret = 0, offset;

        EBUG_ON(b->nr_key_bits != bkey_format_key_bits(&b->format));

        offset = nr_key_bits;
        while (offset > 64) {
                p = next_word(p);
                offset -= 64;
        }

        offset = 64 - offset;

        while (nr_key_bits) {
                unsigned bits = nr_key_bits + offset < 64
                        ? nr_key_bits
                        : 64 - offset;

                u64 mask = (~0ULL >> (64 - bits)) << offset;

                if (*p & mask)
                        return ret + __ffs64(*p & mask) - offset;

                p = prev_word(p);
                nr_key_bits -= bits;
                ret += bits;
                offset = 0;
        }

        return 0;
}

#ifdef HAVE_BCACHEFS_COMPILED_UNPACK

#define I(_x)                     (*(out)++ = (_x))
#define I1(i0)                                            I(i0)
#define I2(i0, i1)                (I1(i0),                I(i1))
#define I3(i0, i1, i2)            (I2(i0, i1),            I(i2))
#define I4(i0, i1, i2, i3)        (I3(i0, i1, i2),        I(i3))
#define I5(i0, i1, i2, i3, i4)    (I4(i0, i1, i2, i3),    I(i4))

static u8 *compile_bkey_field(const struct bkey_format *format, u8 *out,
                              enum bch_bkey_fields field,
                              unsigned dst_offset, unsigned dst_size,
                              bool *eax_zeroed)
{
        unsigned bits = format->bits_per_field[field];
        u64 offset = le64_to_cpu(format->field_offset[field]);
        unsigned i, byte, bit_offset, align, shl, shr;

        if (!bits && !offset) {
                if (!*eax_zeroed) {
                        /* xor eax, eax */
                        I2(0x31, 0xc0);
                }

                *eax_zeroed = true;
                goto set_field;
        }

        if (!bits) {
                /* just return offset: */

                switch (dst_size) {
                case 8:
                        if (offset > S32_MAX) {
                                /* mov [rdi + dst_offset], offset */
                                I3(0xc7, 0x47, dst_offset);
                                memcpy(out, &offset, 4);
                                out += 4;

                                I3(0xc7, 0x47, dst_offset + 4);
                                memcpy(out, (void *) &offset + 4, 4);
                                out += 4;
                        } else {
                                /* mov [rdi + dst_offset], offset */
                                /* sign extended */
                                I4(0x48, 0xc7, 0x47, dst_offset);
                                memcpy(out, &offset, 4);
                                out += 4;
                        }
                        break;
                case 4:
                        /* mov [rdi + dst_offset], offset */
                        I3(0xc7, 0x47, dst_offset);
                        memcpy(out, &offset, 4);
                        out += 4;
                        break;
                default:
                        BUG();
                }

                return out;
        }

        bit_offset = format->key_u64s * 64;
        for (i = 0; i <= field; i++)
                bit_offset -= format->bits_per_field[i];

        byte = bit_offset / 8;
        bit_offset -= byte * 8;

        *eax_zeroed = false;

        if (bit_offset == 0 && bits == 8) {
                /* movzx eax, BYTE PTR [rsi + imm8] */
                I4(0x0f, 0xb6, 0x46, byte);
        } else if (bit_offset == 0 && bits == 16) {
                /* movzx eax, WORD PTR [rsi + imm8] */
                I4(0x0f, 0xb7, 0x46, byte);
        } else if (bit_offset + bits <= 32) {
                align = min(4 - DIV_ROUND_UP(bit_offset + bits, 8), byte & 3);
                byte -= align;
                bit_offset += align * 8;

                BUG_ON(bit_offset + bits > 32);

                /* mov eax, [rsi + imm8] */
                I3(0x8b, 0x46, byte);

                if (bit_offset) {
                        /* shr eax, imm8 */
                        I3(0xc1, 0xe8, bit_offset);
                }

                if (bit_offset + bits < 32) {
                        unsigned mask = ~0U >> (32 - bits);

                        /* and eax, imm32 */
                        I1(0x25);
                        memcpy(out, &mask, 4);
                        out += 4;
                }
        } else if (bit_offset + bits <= 64) {
                align = min(8 - DIV_ROUND_UP(bit_offset + bits, 8), byte & 7);
                byte -= align;
                bit_offset += align * 8;

                BUG_ON(bit_offset + bits > 64);

                /* mov rax, [rsi + imm8] */
                I4(0x48, 0x8b, 0x46, byte);

                shl = 64 - bit_offset - bits;
                shr = bit_offset + shl;

                if (shl) {
                        /* shl rax, imm8 */
                        I4(0x48, 0xc1, 0xe0, shl);
                }

                if (shr) {
                        /* shr rax, imm8 */
                        I4(0x48, 0xc1, 0xe8, shr);
                }
        } else {
                align = min(4 - DIV_ROUND_UP(bit_offset + bits, 8), byte & 3);
                byte -= align;
                bit_offset += align * 8;

                BUG_ON(bit_offset + bits > 96);

                /* mov rax, [rsi + byte] */
                I4(0x48, 0x8b, 0x46, byte);

                /* mov edx, [rsi + byte + 8] */
                I3(0x8b, 0x56, byte + 8);

                /* bits from next word: */
                shr = bit_offset + bits - 64;
                BUG_ON(shr > bit_offset);

                /* shr rax, bit_offset */
                I4(0x48, 0xc1, 0xe8, shr);

                /* shl rdx, imm8 */
                I4(0x48, 0xc1, 0xe2, 64 - shr);

                /* or rax, rdx */
                I3(0x48, 0x09, 0xd0);

                shr = bit_offset - shr;

                if (shr) {
                        /* shr rax, imm8 */
                        I4(0x48, 0xc1, 0xe8, shr);
                }
        }
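
        /*
         * Aside (illustration, not emitted code): for the single-qword path
         * above, the shl/shr pair is equivalent to this portable C, where the
         * field occupies bits [bit_offset, bit_offset + bits) of the qword
         * loaded from src + byte:
         *
         *      v = (word << (64 - bit_offset - bits)) >> (64 - bits);
         *
         * shifting the field to the top of the register and back down to bit
         * 0 also clears the neighbouring fields on both sides.
         */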

        /* rax += offset: */
        if (offset > S32_MAX) {
                /* mov rdx, imm64 */
                I2(0x48, 0xba);
                memcpy(out, &offset, 8);
                out += 8;
                /* add rax, rdx */
                I3(0x48, 0x01, 0xd0);
        } else if (offset + (~0ULL >> (64 - bits)) > U32_MAX) {
                /* add rax, imm32 */
                I2(0x48, 0x05);
                memcpy(out, &offset, 4);
                out += 4;
        } else if (offset) {
                /* add eax, imm32 */
                I1(0x05);
                memcpy(out, &offset, 4);
                out += 4;
        }
set_field:
        switch (dst_size) {
        case 8:
                /* mov [rdi + dst_offset], rax */
                I4(0x48, 0x89, 0x47, dst_offset);
                break;
        case 4:
                /* mov [rdi + dst_offset], eax */
                I3(0x89, 0x47, dst_offset);
                break;
        default:
                BUG();
        }

        return out;
}

/*
 * Compile @format into machine code at @_out that unpacks a packed key into a
 * struct bkey; returns the number of bytes emitted:
 */
int bch2_compile_bkey_format(const struct bkey_format *format, void *_out)
{
        bool eax_zeroed = false;
        u8 *out = _out;

        /*
         * rdi: dst - unpacked key
         * rsi: src - packed key
         */

        /* k->u64s, k->format, k->type */

        /* mov eax, [rsi] */
        I2(0x8b, 0x06);

        /* add eax, BKEY_U64s - format->key_u64s */
        I5(0x05, BKEY_U64s - format->key_u64s, KEY_FORMAT_CURRENT, 0, 0);

        /* and eax, imm32: mask out k->pad: */
        I5(0x25, 0xff, 0xff, 0xff, 0);

        /* mov [rdi], eax */
        I2(0x89, 0x07);

#define x(id, field)                                                    \
        out = compile_bkey_field(format, out, id,                       \
                                 offsetof(struct bkey, field),          \
                                 sizeof(((struct bkey *) NULL)->field), \
                                 &eax_zeroed);
        bkey_fields()
#undef x

        /* retq */
        I1(0xc3);

        return (void *) out - _out;
}

#else
#endif

__pure
int __bch2_bkey_cmp_packed_format_checked(const struct bkey_packed *l,
                                          const struct bkey_packed *r,
                                          const struct btree *b)
{
        return __bch2_bkey_cmp_packed_format_checked_inlined(l, r, b);
}

__pure __flatten
int __bch2_bkey_cmp_left_packed_format_checked(const struct btree *b,
                                               const struct bkey_packed *l,
                                               const struct bpos *r)
{
        return bpos_cmp(bkey_unpack_pos_format_checked(b, l), *r);
}

__pure __flatten
int bch2_bkey_cmp_packed(const struct btree *b,
                         const struct bkey_packed *l,
                         const struct bkey_packed *r)
{
        return bch2_bkey_cmp_packed_inlined(b, l, r);
}

__pure __flatten
int __bch2_bkey_cmp_left_packed(const struct btree *b,
                                const struct bkey_packed *l,
                                const struct bpos *r)
{
        const struct bkey *l_unpacked;

        return unlikely(l_unpacked = packed_to_bkey_c(l))
                ? bpos_cmp(l_unpacked->p, *r)
                : __bch2_bkey_cmp_left_packed_format_checked(b, l, r);
}

void bch2_bpos_swab(struct bpos *p)
{
        u8 *l = (u8 *) p;
        u8 *h = ((u8 *) &p[1]) - 1;

        while (l < h) {
                swap(*l, *h);
                l++;
                --h;
        }
}

void bch2_bkey_swab_key(const struct bkey_format *_f, struct bkey_packed *k)
{
        const struct bkey_format *f = bkey_packed(k) ? _f : &bch2_bkey_format_current;
        u8 *l = k->key_start;
        u8 *h = (u8 *) (k->_data + f->key_u64s) - 1;

        while (l < h) {
                swap(*l, *h);
                l++;
                --h;
        }
}

#ifdef CONFIG_BCACHEFS_DEBUG
void bch2_bkey_pack_test(void)
{
        struct bkey t = KEY(4134ULL, 1250629070527416633ULL, 0);
        struct bkey_packed p;

        struct bkey_format test_format = {
                .key_u64s  = 3,
                .nr_fields = BKEY_NR_FIELDS,
                .bits_per_field = {
                        13,
                        64,
                        32,
                },
        };

        struct unpack_state in_s =
                unpack_state_init(&bch2_bkey_format_current, (void *) &t);
        struct pack_state out_s = pack_state_init(&test_format, &p);
        unsigned i;

        for (i = 0; i < out_s.format->nr_fields; i++) {
                u64 a, v = get_inc_field(&in_s, i);

                switch (i) {
#define x(id, field)      case id: a = t.field; break;
        bkey_fields()
#undef x
                default:
                        BUG();
                }

                if (a != v)
                        panic("got %llu actual %llu i %u\n", v, a, i);

                if (!set_inc_field(&out_s, i, v))
                        panic("failed at %u\n", i);
        }

        BUG_ON(!bch2_bkey_pack_key(&p, &t, &test_format));
}
#endif

