Wednesday, November 30, 2022

Scientists Discover Two ‘Alien’ Minerals in Meteorite

A slice of the El Ali meteorite, now housed in the U of A's Meteorite Collection, contains two minerals never before seen on Earth.

University of Alberta

A team of researchers has discovered at least two minerals never before seen on Earth in a 15-ton meteorite found two years ago in Somalia, the ninth-largest meteorite ever found.

"Whenever you find a new mineral, it means that the actual geological conditions, the chemistry of the rock, was different than what's been found before," says Chris Herd, a professor in the Department of Earth & Atmospheric Sciences and curator of the University of Alberta's Meteorite Collection. "That's what makes this exciting: In this particular meteorite you have two officially described minerals that are new to science."

The two minerals came from a single 70-gram slice that was sent to the university for classification, and a potential third mineral already appears to be under consideration. If researchers were to obtain more samples from the massive meteorite, there's a chance that even more might be found, Herd notes.

The two newly discovered minerals have been named elaliite and elkinstantonite. The first receives its name from the meteorite itself, dubbed the "El Ali" meteorite because it was found lying on the ground of a valley used as a pasture for camels near the town of El Ali, in the Hiiraan region of Somalia. Herd named the second mineral after Lindy Elkins-Tanton, vice president of the ASU Interplanetary Initiative, professor at Arizona State University's School of Earth and Space Exploration and principal investigator of NASA's upcoming Psyche mission, a rendezvous with a unique metal-rich asteroid orbiting the Sun between Mars and Jupiter.

"Lindy has done a lot of work on how the cores of planets form, how these iron nickel cores form, and the closest analog we have are iron meteorites. So it made sense to name a mineral after her and recognize her contributions to science," Herd explains.

In collaboration with researchers at UCLA and the California Institute of Technology, Herd classified the El Ali meteorite as an iron-silicate meteorite, one of over 350 known.

As Herd was analyzing the meteorite to classify it, he saw something that caught his attention. He brought in the expertise of Andrew Locock, head of the U of A's Electron Microprobe Laboratory, who has been involved in other new mineral descriptions.

"The very first day he did some analyses, he said, "You've got at least two new minerals in there,'" says Herd. "That was phenomenal. Most of the time it takes a lot more work than that to say there's a new mineral."

Researchers are continuing to examine the minerals to determine what they can tell us about the conditions in the meteorite when it formed.

"That's my expertise—how you tease out the geologic processes and the geologic history of the asteroid this rock was once part of," says Herd. "I never thought I'd be involved in describing brand new minerals just by virtue of working on a meteorite."

"Whenever there's a new material that's known, material scientists are interested too because of the potential uses in a wide range of things in society," Herd notes. Some minerals found in meteorites could be used as templates to create synthetic materials with unique magnetic properties and applications in new technologies.

While the future of the meteorite remains uncertain, Herd says the researchers have received news that it appears to have been moved to China in search of a potential buyer. It remains to be seen whether additional samples will be available for scientific purposes.

The researchers described the findings at the Space Exploration Symposium on Nov. 21. Material provided by the University of Alberta.



from Hacker News https://ift.tt/oHUwSuQ

New Browser Dynamic Viewport Units

The viewport and its units

To size something relative to the viewport, you can use the vw and vh units.

  • vw = 1% of the width of the viewport.
  • vh = 1% of the height of the viewport.

Give an element a width of 100vw and a height of 100vh, and it will cover the viewport entirely.
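
As a minimal sketch of that rule (the class name is only for illustration):

/* Stretch an element to cover the whole viewport. */
.fullscreen {
  width: 100vw;   /* 100% of the viewport width */
  height: 100vh;  /* 100% of the viewport height */
}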

A light blue element set to be 100vw by 100vh, covering the entire viewport. The viewport itself is indicated using a blue dashed border.

The vw and vh units landed in browsers along with these additional units:

  • vi = 1% of the size of the viewport’s inline axis.
  • vb = 1% of the size of the viewport’s block axis.
  • vmin = the smaller of vw or vh.
  • vmax = the larger of vw or vh.

These units have good browser support.

Browser support: Chrome 20, Edge 12, Firefox 19, Safari 6.
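
To see how the logical and min/max variants relate to vw and vh, here is a small sketch (selectors are illustrative only; in a horizontal writing mode vi maps to vw and vb to vh, and the mapping flips in a vertical writing mode):

/* Sized along the viewport's logical axes. */
.panel {
  inline-size: 50vi;  /* half of the viewport's inline axis */
  block-size: 50vb;   /* half of the viewport's block axis */
}

/* A square that always fits on screen, whichever viewport side is smaller. */
.square {
  width: 90vmin;
  height: 90vmin;
}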

The need for new viewport units

While the existing units work well on desktop, it's a different story on mobile devices. There, the viewport size is influenced by the presence or absence of dynamic toolbars: user-interface elements such as address bars and tab bars.

Although the viewport size can change, the vw and vh sizes do not. As a result, elements sized to be 100vh tall will bleed out of the viewport.

100vh on mobile is too tall on load.

When scrolling down, these dynamic toolbars retract. In this state, elements sized to be 100vh tall will cover the entire viewport.

100vh on mobile is “correct” when the User-Agent user interfaces are retracted.

To solve this problem, the various states of the viewport have been specified by the CSS Working Group.

  • Large viewport: the viewport size assuming any UA interfaces that dynamically expand and retract are retracted.
  • Small viewport: the viewport size assuming any UA interfaces that dynamically expand and retract are expanded.

Visualizations of the large and small viewports.

The new viewports also have units assigned to them:

  • Units representing the large viewport have the lv prefix. The units are lvw, lvh, lvi, lvb, lvmin, and lvmax.
  • Units representing the small viewport have the sv prefix. The units are svw, svh, svi, svb, svmin, and svmax.

The sizes of these viewport-percentage units are fixed (and therefore stable) unless the viewport itself is resized.
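
For example, an element that must remain fully visible even while the toolbars are expanded can use the small viewport unit, while a backdrop that may sit partly behind the retractable UI can use the large one (selectors are illustrative):

/* Always fits on screen, even with the browser toolbars expanded. */
.top-section {
  height: 100svh;
}

/* Fills the largest possible viewport; may be partly covered
   while the toolbars are shown. */
.backdrop {
  height: 100lvh;
}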

Two mobile browser visualizations positioned next to each other. One has an element sized to be 100svh and the other 100lvh.

In addition to the large and small viewports, there's also a dynamic viewport, which takes the dynamic state of the UA UI into account:

  • When the dynamic toolbars are expanded, the dynamic viewport is equal to the size of the small viewport.
  • When the dynamic toolbars are retracted, the dynamic viewport is equal to the size of the large viewport.

Its accompanying units have the dv prefix: dvw, dvh, dvi, dvb, dvmin, and dvmax. Their sizes are clamped between their lv* and sv* counterparts.
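
A common pattern is to declare a classic vh value first as a fallback, and let browsers that understand the dynamic unit override it; this is a sketch rather than a prescription:

.hero {
  min-height: 100vh;  /* fallback for browsers without dynamic viewport units */
  min-height: 100dvh; /* tracks the toolbars as they expand and retract */
}

Because the dynamic value changes as the UA UI moves, layouts sized with it can shift while the user scrolls, so it is worth testing on real devices.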

100dvh adapts itself to be either the large or small viewport size.

These units ship in Chrome 108, joining Safari and Firefox, which already support them.

Browser support: Chrome 108, Edge 108, Firefox 101, Safari 15.4.

Caveats

There are a few caveats to know about viewport units:

  • None of the viewport units take the size of scrollbars into account. On systems that have classic scrollbars enabled, an element sized to 100vw will therefore be a little bit too wide. This is as per specification.

  • The values for the dynamic viewport do not update at 60fps. In all browsers, updating is throttled as the UA UI expands or retracts. Some browsers even debounce updating entirely, depending on the gesture used (a slow scroll versus a swipe).

  • The on-screen keyboard (also known as the virtual keyboard) is not considered part of the UA UI and therefore does not affect the size of the viewport units. In Chrome you can opt in to a behavior where the presence of the virtual keyboard does affect the viewport units; a sketch follows this list.
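
A sketch of that Chrome opt-in, assuming the interactive-widget key of the viewport meta tag is the mechanism being referred to; treat the exact key and value as an assumption to verify against Chrome's documentation.

<!-- resizes-content asks the browser to shrink the viewport (and with it
     the viewport units) when the virtual keyboard is shown. -->
<meta name="viewport"
      content="width=device-width, initial-scale=1, interactive-widget=resizes-content">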

Additional resources

To learn more about viewports and these units check out this episode of HTTP 203. In it, Bramus tells Jake all about the various viewports and explains how exactly the sizes of these units are determined.



from Hacker News https://ift.tt/dS0XCEy

Mozilla, Microsoft yank TrustCor's root certificate authority

Reardon and Egelman alerted Google, Mozilla and Apple to their research on TrustCor in April. They said they had heard little back until The Post published its report.



from Hacker News https://ift.tt/Ej2g9vF

Vector Search Engine

Qdrant is a vector similarity engine. It deploys as an API service providing search for the nearest high-dimensional vectors.
With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more!



from Hacker News https://qdrant.tech/

Couple Died by Suicide After the DEA Shut Down Their Pain Doctor (2022)

It was a Tuesday in early November when federal agents from the Drug Enforcement Administration paid a visit to the office of Dr. David Bockoff, a chronic pain specialist in Beverly Hills. It wasn’t a Hollywood-style raid—there were no shots fired or flash-bang grenades deployed—but the agents left behind a slip of paper that, according to those close to the doctor’s patients, had consequences just as deadly as any shootout.

On Nov. 1, the DEA suspended Bockoff’s ability to prescribe controlled substances, including powerful opioids such as fentanyl. While illicit fentanyl smuggled across the border by Mexican cartels has fueled a record surge in overdoses in recent years, doctors still use the pharmaceutical version during surgeries and for soothing the most severe types of pain. But amid efforts to shut down so-called “pill mills” and other illegal operations, advocates for pain patients say the DEA has gone too far, overcorrecting to the point that people with legitimate needs are blocked from obtaining the medication they need to live without suffering. 

One of Bockoff’s patients who relied on fentanyl was Danny Elliott, a 61-year-old native of Warner Robins, Georgia. In March 1991, Elliott was nearly electrocuted to death when a water pump he was using to drain a flooded basement malfunctioned, sending high-voltage shocks through his body for nearly 15 minutes until his father intervened to save his life. Elliott was never the same after the accident, which left him with debilitating, migraine-like headaches. Once a class president and basketball star in high school, he found himself spending days on end in a darkened bedroom, unable to bear sunlight or the sound of the outdoors. 

“I have these sensations like my brain is loose inside my skull,” Elliott told me in 2019, when I first interviewed him for the VICE News podcast series Painkiller. “If I turn my head too quickly, left or right, it feels like my brain sloshes around. Literally my eyes burn deep into my skull. My eyes hurt so bad that it hurts to blink.”

After years of trying alternative pain treatments such as acupuncture, along with other types of opioids, around 2002 Elliott found a doctor who prescribed fentanyl, which gave him some relief. But keeping a doctor proved nearly impossible amid the ongoing federal crackdown on opioids. Bockoff, Elliott said, was his third doctor to be shut down by the DEA since 2018. As Elliott described it, each transition meant weeks or months of desperate scrambling to find a replacement, plus excruciating withdrawals due to his physical dependence on opioids, followed by the return of that burning eyeball pit of despair.

After the DEA visited Bockoff on Nov. 1, Elliott posted on Twitter: “Even though I knew this would happen at some point, I'm stunned. Now I can't get ANY pain relief as a #cpp [chronic pain patient.] So I'm officially done w/ the US HC [healthcare] system.”

Privately, Elliott and his wife Gretchen, 59, were frantically trying to find another doctor. He sent a text to his brother, Jim Elliott, saying he was “praying for help but not expecting it.” 

Jim, a former city attorney for Warner Robins who is now in private practice, was traveling when he received his brother’s message. They made plans to talk later in the week, after Danny had visited a local physician for a consultation. In subsequent messages, Danny told Jim that Gretchen had reached out to more than a dozen doctors. Each one had responded saying they would not take him as a patient.

Jim recalled sensing in Danny “a level of desperation I hadn't seen before.” Then, on the morning of Nov. 8, he woke up to find what he called “a suicide email” from his brother. Jim called the local police department in Warner Robins to request a welfare check. The officers arrived a few minutes before 8:30 a.m. to find both Danny and Gretchen dead inside their home. 

A police report obtained by VICE News lists a handgun as the only weapon found at the scene. Warner Robins police said additional records could not be released because the case is “still active.” The department issued a press release calling the deaths a “dual suicide.” 

Jim shared a portion of a note that Danny left behind: “I just can't live with this severe pain anymore, and I don't have any options left,” he wrote. “There are millions of chronic pain patients suffering just like me because of the DEA. Nobody cares. I haven't lived without some sort of pain and pain relief meds since 1998, and I considered suicide back then. My wife called 17 doctors this past week looking for some kind of help. The only doctor who agreed to see me refused to help in any way. What am I supposed to do?”

At a joint funeral for Danny and Gretchen Elliott on Nov. 14 in Warner Robins, mourners filled a mortuary chapel to overflow capacity. Eulogies recalled a couple completely devoted to each other. They were doting cat owners, dedicated fans of Georgia Tech and Atlanta sports teams, and devout Christians, even as Danny’s chronic pain increasingly left him unable to attend church. In photos, the Elliotts radiate happiness with their smiles. But their lives were marred by pain: Gretchen was a breast cancer survivor. She married Danny in 1996, well after his accident, signing up to be his caregiver as part of their life partnership.

“It was a Romeo and Juliet story. They didn't want to live without each other,” said Chuck Shaheen, Danny’s friend since childhood and Warner Robins’ former mayor. “I understand the DEA and other law enforcement, they investigate and then act. But what do they do with the patients that are no longer able to have treatment?”

Shaheen and Danny both worked in years past for Johnson & Johnson, which is among the companies sued for allegedly causing the opioid crisis. Shaheen was also previously a salesperson for Purdue Pharma, the maker of OxyContin, another company blamed for spreading addiction. But Shaheen said Danny was not among those chasing a high—he, like others with severe chronic pain, was just seeking a semblance of normalcy.

“They're not doctor shopping,” Shaheen said. “They're not trying to escalate their dose. They're trying to function.”

Danny told me in 2019 that the relief he obtained from fentanyl didn’t make him feel euphoric or even completely pain free. He was using fentanyl patches and lozenges designed for people with terminal cancer pain, at extremely high doses that raised eyebrows whenever he was forced to switch doctors. But it was the only thing that worked for him.

“I call it turning the volume of my pain down from an eight or nine or even 10 sometimes to a six or a five,” he said. “The pain doesn't get much lower than that, but for me, that's almost pain free. It was the happiest thing I've ever experienced in my life.”

Gretchen’s brother, Eric Welde, choked up as he spoke with VICE News at the funeral about his perspective on the family’s loss: “In my mind, what the DEA is essentially doing is telling a diabetic who's been on insulin for 20 years that they no longer need insulin and they should be cured. They just don't understand what chronic pain is.”

So far, no criminal charges have been filed against Bockoff. In response to an inquiry from VICE News about the deaths of Danny and Gretchen Elliott, the doctor emailed a statement that said: “I am unable to participate in an interview except to say: Their blood is on the DEA’s hands.”

The DEA responded to a list of questions about Bockoff and the Elliotts’ suicides with an email saying Bockoff received what’s known as an “Immediate Suspension Order,” which according to public records is warranted in cases where the agency believes the prescriber poses “an imminent danger to public health or safety.” The DEA said local public health partners were notified in advance to coordinate under a federal program designed to mitigate overdose risks among patients who lose access to doctors. The agency offered no further comment.

Data on suicides by chronic pain patients is scarce, but experts who study these cases estimate that hundreds—perhaps thousands—of Americans have taken their own lives in the aftermath of losing access to prescription opioids and other medications. Some cases have occasionally made news, like a woman in Tennessee who was arrested for buying a gun to assist her husband’s suicide after his doctor abruptly cut down on his medication used to treat back pain.

Starting around 2016, a backlash to prescribing opioids began to spread across the U.S. healthcare system, sparked in part by guidance from the Centers for Disease Control and Prevention (CDC) that prompted scrutiny of patients on doses equivalent to over 90 milligrams of morphine per day.

The National Committee for Quality Assurance, which develops quality metrics for the healthcare industry, has implemented its own 90 milligram threshold, and patients over that baseline count as receiving “poor care,” regardless of their dose history. In practice that means doctors have strong incentives to reduce the dosage, even for someone like Elliott, who had been taking the same prescription for years, and even if it’s not necessarily in the best interests of the patient.

Since 2018, the CDC has run an initiative called the Opioid Rapid Response Program, which is supposed to assist when doctors lose the ability to prescribe pain medication. Stephanie Rubel, a health scientist in the CDC’s Injury Center who leads the program, said that when the DEA visited Bockoff’s office, “a healthcare professional was onsite in case any patients arrived for their appointments.”

Rubel, in a statement sent via the CDC’s press office, said everyone from the county health department to Medicare providers were alerted about the DEA’s action against Bockoff. But Rubel also noted that the CDC program “does not provide direct assistance to patients affected by a disruption, including referrals or medical care.” In fact, the only help that patients like Elliott received was a flier with a list of local emergency rooms they could visit if—or when—they started experiencing withdrawal.

“Any loss of life due to suicide is one too many,” Rubel said. “This case is heartbreaking and emblematic of the trauma, pain and danger many patients face when these disruptions occur and is why ORRP [Opioid Rapid Response Program] has been developed to help prepare state and local jurisdictions to respond when disruptions in care occur.”

Dr. Stefan Kertesz, a professor at the University of Alabama at Birmingham Heersink School of Medicine, had been acquainted with Danny since 2018, meeting him in another moment of crisis. Danny’s doctor at the time had just been arrested by the DEA, and Kertesz, who conducts research and advocates on behalf of chronic pain patients, stepped in to help. It was difficult, Kertesz told VICE News, because “the doses he was on were orders of magnitude higher than most doctors are familiar with.”

Danny ultimately found another doctor but was forced to change once more before landing with Bockoff in Beverly Hills. Kertesz cautioned that he was not familiar with the details of Bockoff’s case, but said the doctor was known for treating “opioid refugees” who’d been turned away from other physicians. Danny and his wife would fly from Georgia to Los Angeles for appointments, and other patients with unique circumstances came from around the country.

Bockoff had practiced medicine in California for 53 years with no record of disciplinary action or complaints with the state medical board, according to the Pain News Network, which reported the DEA searched the doctor’s office about a year ago. The DEA took, but eventually returned, some patient records. 

Asked about the DEA’s handling of the Bockoff case, Kertesz replied: “Honestly, it seemed to me like bombing a village. It could be they think they're getting the bad guy, but it's not a precision munition. Whoever is launching the bomb has to consider the collateral damage.” 

Clinical research on chronic pain patients is complicated, Kertesz said, but “a lengthy series of studies confirm that there is a strong association between opioid reduction or stoppage and suicide.” While reducing opioid intake can be helpful for some people, he said, Danny and other longtime users with medical needs should not be forced to go cold turkey.

“Even if you believe the doctors did something wrong, I can't find somebody who believes all those patients should die,” Kertesz said. “And if we agree they shouldn't all die, then why would we act in such a way that we know we're going to massively increase their risk of death?”

Another former Bockoff patient was Kristen Ogden’s husband Louis. Much like Gretchen Elliott, Ogden has supported her husband for years as he’s battled chronic pain caused by a rare condition similar to fibromyalgia. And like the Elliotts, the Ogdens have dealt with the fallout of DEA actions that triggered desperate searches for new doctors.

The Ogdens live in Virginia and had just landed in California for a doctor’s appointment when they got the news about the DEA’s visit to Bockoff’s office. They found the emergency room flier to be a slap in the face.

“They probably look at you as an addict and they recommend that you do whatever you can to get off these medications,” Ogden said. “They're not there to help us at all.”

Ogden is the co-founder of an advocacy group called Families for Intractable Pain Relief, and she started reaching out to her network, including other patients. She spoke to Danny by phone and described him as sounding “consumed by this dread of what he fully expected was going to be the next step for his life—months of untreated pain.”

Ogden said she’s personally called at least 10 doctors seeking treatment for her husband but to no avail. Other Bockoff patients are in the same boat, she said, and nobody she knows has been able to find another specialist willing to continue with a similar course of care.

Dr. Thomas Sachy of Gray, Georgia, was the first doctor to prescribe Danny fentanyl and remained his physician until the DEA raided his practice in 2018. Federal authorities have alleged Sachy had his office set up like a “trap house” with firearms on the premises. Sachy is charged with “issuing prescriptions not for a legitimate medical purpose and not in the usual course of professional practice.” Two employees and Sachy’s 84-year-old mother, who worked at his clinic, were also initially charged but their cases have since been dropped.

Sachy agreed to plead guilty in the case to avoid a possible life sentence but later withdrew the plea. He maintains his innocence. His trial is scheduled to start in January in federal court in Georgia. Wearing an electronic ankle monitor to track his location while out on bond, Sachy attended the Elliotts’ funeral service in Warner Robins, where he sat for an interview with VICE News.

Federal prosecutors have accused Sachy of prescribing opioids that contributed to the deaths of patients. Sachy in turn blames the DEA for the suicides of two patients who took their lives in the aftermath of the raid.

“My patients weren't young drug addicts off the street,” Sachy said. “They were middle-aged and older with health problems. And the thing about pain, chronic pain, and the anxiety and the suffering that comes with it, it wears you down.”

Similar to what happened with Bockoff, after the DEA visited Sachy’s office in Georgia, the only resource made available to patients was a list of local pain management facilities and resources for opioid withdrawal, including emergency rooms. Sachy scoffed at the idea of his patients visiting an ER for help: “They'd look at them like they were insane or criminals or both.”

“It's absolutely frustrating,” Sachy said. “It's absolutely heartbreaking. It sucks. It destroys everything you think a physician should do and be and should be able to accomplish. It’s all taken away. And it's just utter helplessness.”

Among the Elliott family and other pain patients, helplessness and anger remain common sentiments. Ogden said her husband and other chronic pain patients have spoken with an attorney about the possibility of a lawsuit against the DEA.

As a lawyer who worked for years in public office, Jim Elliott knows civil litigation against the government can be an uphill battle. He said the family is still deciding how to move forward in response to the deaths of Danny and Gretchen.

Jim emphasized that “it wasn't as if pain medication made Danny's life great.”

Fentanyl just made the pain bearable. And when that was taken away, Danny saw no future.

“He was taking a high level of pain medication but he wasn't an addict and he wasn't trying to get high or anything,” Jim Elliott said. “He was just trying to live a life. And they closed every door for people like that.”

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.



from Hacker News https://ift.tt/hVFWu4c

UK high street banks are quarantining from crypto

Last year, it was almost impossible to miss posters for cryptocurrencies plastered all over London’s public transportation network. Crypto was booming, and commuters were subjected to nearly 40,000 crypto advertisements from 13 different companies in the span of six months. These flyers seemed to captivate retail investors and investment bankers alike, with major institutions like Standard Chartered and JP Morgan overseeing major cryptocurrency investments from their London offices alongside a speculating market of some 2.3 million ordinary men and women. The missing link between the two, it seemed, were the high street banks. An agglomeration of building societies, multinational and challenger banks frequently criticised by UK regulators for setting interest rates too low and overdraft charges too high, this group seemed half opposed to these volatile new financial products and half seduced.  

What a difference a year makes. The onset of the crypto winter, combined with the fallout from the collapse of FTX, seems to have shut down any interest among UK high street banks in facilitating crypto trading or ownership among their retail customer base. While crypto giants like BlockFi and Grayscale Bitcoin fight for survival, Starling Bank has become the latest British retail bank to insulate itself from the sector, announcing last week that it would totally ban all cryptocurrency transactions, which it now describes as ‘high risk’. It follows on from a decision it made last summer to prohibit crypto-related transactions using its accounts, only to allow them to resume a week later.

Starling’s crypto ban comes on the heels of Santander’s decision to limit customer deposits to crypto exchanges to £1,000 per transaction, with a monthly limit of £3,000. These two banks join the ranks of other financial institutions that have banned or severely limited their exposure to crypto, including Halifax, Nationwide, HSBC, The Co-operative Bank, and Virgin Money. According to the personal finance comparison site Finder, roughly 47% of UK banks currently do not allow customers to conduct transactions with any crypto exchanges.

But the chaos left in the wake of FTX’s collapse aside, British banks will also be concerned by the increasing rate of crypto fraud reported in the country. New data obtained by the Financial Times reveals that losses stemming from crypto fraud reported to Action Fraud surged by more than 30% year-on-year between October 2021 and September 2022. This is despite a general decrease in overall fraud levels as consumers emerge from the pandemic, according to UK Finance.

Perhaps unsurprisingly, some crypto industry figures have reacted strongly to the news of UK banks banning or limiting transactions, describing British banks as a threat to the industry at large. The Twitter account for Sovryn, a decentralised finance provider, condemned the bans as hypocritical. “Banks do not meddle in any other ‘high risk’ activities - they’ll happily let you purchase tobacco, alcohol, or prescription drugs,” it tweeted. “Where’s the logic?”

Su Carpenter, director of operations at CryptoUK, believes that, while the need for banks to limit their exposure to potential frauds and scams is understandable, an outright ban on crypto transactions is disproportionate. “There are more effective methods that could be introduced to balance the need to protect potentially vulnerable customers whilst still allowing individuals the right to choose how and where they invest their own money, whether in crypto or other investment options,” she says. “More robust transaction monitoring and the desire to work collaboratively with the crypto and digital assets sector would seem like a more logical way to better understand and mitigate potential risks.”

Carpenter also said that successful crypto scams remain a fraction of the total losses sustained from fraud across the UK, and that there are “highly effective analytics and monitoring tools available to better understand the nature of transactions” that would be a “more appropriate response” to the problem of fraud than institutions embarking on an outright crypto ban.

But it’s not like banks have a rule book to refer back to when making these decisions, explains Molly White, the software engineer behind ‘Web3 is going great,’ a running list of all the scams, collapses and scandals currently bedevilling the cryptoverse. “If the banks are overwhelmingly seeing issues in the crypto industry, it doesn’t really surprise me that they’ve chosen to take such a broad approach,” she says. 

White is also sceptical about solutions purporting to show which crypto companies represent a risk for banks and their customers. “It’s hard to individually predict which companies might later become a problem because, if banks knew a project was fraudulent, then people wouldn’t really be putting money into it,” she says. 

As financial institutions and the crypto industry continue to grapple over the right amount of regulation, others in the private sector have begun questioning the longevity and relevance of the industry itself after back-to-back scandals. Earlier this month in The Economist, the FTX collapse was described as a ‘catastrophic blow to crypto’s reputation and aspirations’ while a Reuters podcast similarly declared that the incident ‘consigns crypto to fringes of finance’. 

White is less convinced, believing that cycles of collapse and reinvention are in the very nature of the crypto industry. “We saw the collapse of Mt. Gox years ago and some might have said that was the death knell for crypto – but of course it wasn’t,” she says, referring to the Japan-based Bitcoin exchange which filed for bankruptcy in 2014. “The crypto industry finds ways to sort of reinvent itself every couple of years with increasing veneers of legitimacy. So, I’d be surprised if that didn’t happen again unless there’s some sort of external factor involved - like regulatory change.”

Homepage image by Thomas Krych/SOPA Images/LightRocket via Getty Images.



from Hacker News https://ift.tt/7DQuZeI

The Fake Snow Leopard: Photomontage Spread Around the World

from Hacker News https://ift.tt/qebPiE1

Tuesday, November 29, 2022

The Evolution of Mathematical Software

By Jack J. Dongarra
Communications of the ACM, December 2022, Vol. 65 No. 12, Pages 66-72
10.1145/3554977
code and colored bars on a computer display

Credit: WhiteMocca

Over four decades have passed since the concept of computational modeling and simulation as a new branch of scientific methodology—to be used alongside theory and experimentation—was first introduced. In that time, computational modeling and simulation has embodied the enthusiasm and sense of importance that people in our community feel for the work they are doing. Yet, when we try to assess how much progress we have made and where things stand along the developmental path for this new "third pillar of science," recalling some history about the development of the other pillars can help keep things in perspective. For example, we can trace the systematic use of experiment back to Galileo in the early 17th century. Yet for all the incredible successes it enjoyed over its first three centuries, the experimental method arguably did not fully mature until the elements of good experimental design and practice were finally analyzed and described in detail in the first half of the 20th century. In that light, it seems clear that while computational science has had many remarkable successes, it is still at a very early stage in its growth.

Many of those who want to hasten that growth believe the most progressive steps in that direction require much more community focus on the vital core of computational science: software and the mathematical models and algorithms it encodes. Of course, the general and widespread obsession with hardware is understandable, especially given exponential increases in processor performance, the constant evolution of processor architectures and supercomputer designs, and the natural fascination that people have for big, fast machines. I am not exactly immune to it. But when it comes to championing computational modeling and simulation as a new part of the scientific method, the complex software "ecosystem" that accompanies it must be at the forefront.


from Hacker News https://ift.tt/zBRjXs3

Ungox – Mark Karpeles' new venture

from Hacker News https://www.ungox.com

Neuromancer: Miles Teller Eyed for New Apple+ Sci-Fi Series

Apple is gearing up a new sci-fi series, Neuromancer, for its streaming service and wants Top Gun: Maverick star Miles Teller to lead the project.

Apple TV+ is slowly becoming one of the most interesting streaming services on the market. While it doesn’t have as much content as Netflix or Disney+, the streamer stands out with high-quality series and films led by some of the biggest stars in Hollywood. Its beloved comedy series Ted Lasso has secured multiple big Emmy wins over its last two seasons, and the Apple TV+ original film CODA became the first film from a streaming service to win the Best Picture Oscar. Apple is looking at another great year as the highly anticipated third season of Ted Lasso and the Steven Spielberg-produced Masters of the Air limited series are both set to hit the streamer in 2023.

Disney+’s focus lies on Marvel Studios and Star Wars content, but the streamer is also looking to bring back an old property with its Escape To Witch Mountain TV series, which we exclusively revealed to be in development. Simultaneously, HBO Max is about to release the highly anticipated The Last of Us series, as well as Succession Season 4, and has just renewed its Emmy-winning satire The White Lotus for a third season. But Apple keeps its momentum: it is currently casting another big-budget series, and we can reveal some exclusive first information about the project.

FROM JETS TO HACKING

Neuromancer

We at The Illuminerdi can exclusively reveal that Apple TV+ is looking to adapt the acclaimed book trilogy Neuromancer as a long-form TV series. The book is considered one of the earliest and best-known works in the cyberpunk genre. Set in a futuristic Japan, the show will follow the hacker Case, who, after breaking out of prison, agrees to do one last job that brings him into contact with a powerful AI.

The streamer has offered Top Gun: Maverick star Miles Teller the lead role of Case, a hacker, antihero, and drug addict. If he agrees to Apple’s offer, the actor is looking at a one-season deal.

Teller is best known for the box-office smash hit Top Gun: Maverick, starred in acclaimed films like Whiplash and War Dogs, and set foot in the superhero genre with 2015’s Fantastic Four. He was also the lead of last year’s Paramount Plus original limited series The Offer. He is set to star in the next film from Doctor Strange and The Black Phone director Scott Derrickson, alongside Anya Taylor-Joy.

Neuromancer Wants More

Miles Teller in Top Gun Maverick

Neuromancer is also currently casting the female lead, Molly. They are looking for an actress in her 30s or 40s who is physically fit. Molly is a mercenary who was recruited by the same person as Case. The character is supposed to resemble Trinity from the Matrix films. It is intended that Molly will carry over as the lead for potential Seasons 2 and 3.

Lastly, Neuromancer is casting Linda Lee, Case’s love interest. They are looking for an actress in her 20s or 30s for a recurring role.

Graham Roland will serve as a writer, producer, and showrunner of Neuromancer. He is best known for his work on the hit show Lost and Amazon Prime Video’s Jack Ryan. Author William Gibson will also serve as an executive producer. We can also reveal that at least one episode will be directed by J.D. Dillard. His newest film, Devotion, a war drama based on a true story and starring Jonathan Majors (Lovecraft Country) and Glen Powell (Top Gun: Maverick), is currently playing in theaters. Besides that, he is best known for his work on The Outsider.

Neuromancer

Neuromancer is looking to start production in the summer of 2023. What do you all think about the Neuromancer news? Do you want Miles Teller to sign on for the project? Have you read the novel? Are you excited about this show? Let’s discuss everything in the comments down below and on our social media.



from Hacker News https://ift.tt/fUG7V1H

Creating a handwritten TrueType font in Linux (2018)

Mar 21, 2018 Art Inkscape Linux

I always wanted to create a font out of my handwritten letters and in this small tutorial I will show you the way I have done it with the help of FontForge, Inkscape and GIMP.

A handwritten TrueType font

Writing some letters

Start by writing the alphabet in lower and upper case, plus numerals and some special characters. Graph paper, for example, works well for this job. Afterwards, scan the paper; the result should look similar to mine.

Scanned handwritten letters

Preparing the scan

Let us open the scan in GIMP and use the Brightness and Contrast tool under the Colors menu. Increase the contrast until the letters have a nice shape. The handwritten scan should look similar to mine: mostly black and white should be visible.

Scanned handwritten letters with high contrast

From raster graphics to vector graphics with Inkscape

Copy the whole high-contrast scan from GIMP to your clipboard and open Inkscape. Paste the scan from the clipboard into Inkscape. With the scan object selected, open the Trace Bitmap tool under the Path menu.

Use Grays under Multiple scans and set the number of Scans to 2. Press the OK button afterwards and close the tool once it has finished.

Preparing the handwritten vector graphic

The result should be an object group with layers. Ungroup the object group and remove the background layer. Your vector graphic should look similar to my SVG image in Inkscape.

Handwritten font SVG

From SVG to a TrueType font with FontForge

First of all start FontForge and press the New button to create a new font.

FontForge new font

The following steps have to be repeated for every character in your font. I will show you the steps using the character A as an example.

Extract character from SVG path

Duplicate the character layer, for example by pressing Ctrl + D. With one of the character layers selected, use the Edit paths by Nodes tool (or just press F2) and remove every node and path that is not part of the A character. The following image shows all nodes except the A selected before deleting them.

Inkscape Edit path of handwritten font

Simplify the character path

The next step is to simplify the path of the character. The tool is in the menu under Path and Simplify, or you can just press Ctrl + L. The simplified character should look similar to mine afterwards.

Inkscape Simplify a handwritten character

Paste the character path to FontForge

Copy the character path in Inkscape, and let us open the character A in our empty FontForge font with a double click. Now paste the path of the character into FontForge. The result should look similar to mine.

FontForge Handwritten character

Scale the character in FontForge with stable aspect ratio

Please select the whole path of the character with Ctrl + A and use the Scale the Selection tool from the toolbar. Use the scale tool with the Shift key pressed to keep the aspect ratio of the path. Move the character to the zero line (the baseline) and scale it until the character fits the top boundary.

FontForge Handwritten character scaled

Afterwards you have to move the right boundary until it fits your character. Depending on your style, you should keep more or less space between the boundary and the character. Close the character window and repeat the last steps for every character, including the space character.

Export TrueType font with FontForge

Your font should look similar to mine afterwards.

FontForge Handwritten font

Please make sure to save your font as an SFD file, and after all that you can export it as a TrueType font via the menu item File and Generate Fonts....

Your handwritten font as a web font

There are other font formats, but normally every modern browser understands TrueType fonts. The following small example loads my font and shows how to use it.

<style>
@font-face {
    font-family: MrPoopybutthole;
    src: url(path_to/mrpoopybutthole.ttf);
}
</style>
<p style="font-family: MrPoopybutthole;">MrPoopybutthole</p>




from Hacker News https://ift.tt/iS8Xwmt

UK ditches ban on 'legal but harmful' online content in favour of free speech

LONDON, Nov 28 (Reuters) - Britain will not force tech giants to remove content that is "legal but harmful" from their platforms after campaigners and lawmakers raised concerns that the move could curtail free speech, the government said on Monday.

Online safety laws would instead focus on the protection of children and on ensuring companies removed content that was illegal or prohibited in their terms of service, it said, adding that it would not specify what legal content should be censored.

Platform owners, such as Facebook-owner Meta and Twitter, would be banned from removing or restricting user-generated content, or suspending or banning users, where there is no breach of their terms of service or the law, it said.

The government had previously said social media companies could be fined up to 10% of turnover or 18 million pounds ($22 million) if they failed to stamp out harmful content such as abuse even if it fell below the criminal threshold, while senior managers could also face criminal action.

The proposed legislation, which had already been beset by delays and rows before the latest version, would remove state influence on how private companies managed legal speech, the government said.

It would also avoid the risk of platforms taking down legitimate posts to avoid sanctions.

Digital Secretary Michelle Donelan said she aimed to stop unregulated social media platforms damaging children.

"I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people," she said. "It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views."

Britain, like the European Union and other countries, has been grappling with the problem of legislating to protect users, and in particular children, from harmful user-generated content on social media platforms without damaging free speech.

The revised Online Safety Bill, which returns to parliament next month, puts the onus on tech companies to take down material in breach of their own terms of service and to enforce their user age limits to stop children circumventing authentication methods, the government said.

If users were likely to encounter controversial content such as the glorification of eating disorders, racism, anti-Semitism or misogyny not meeting the criminal threshold, the platform would have to offer tools to help adult users avoid it, it said.

Only if platforms failed to uphold their own rules or remove criminal content could a fine of up to 10% of annual turnover apply.

Britain said late on Saturday that a new criminal offence of assisting or encouraging self-harm online would be included in the bill.

($1 = 0.8317 pounds)

Reporting by Paul Sandle; Editing by Alex Richardson



from Hacker News https://ift.tt/SWpDfgK

Ask HN: Can I own an IP address and take it with me across providers?

from Hacker News https://ift.tt/Cmapnko

Can you game core allocation on Apple Silicon?

If you use an Apple silicon Mac, you’ll be aware that it uses its Efficiency (E) and Performance (P) cores differently, and you may have seen how identical tasks running exclusively on its E cores generally take longer than those running mostly on its P cores. For over a year I’ve been trying to get a better understanding of what underlies this, and how macOS decides which core type to run threads on.

While it’s easy to become overwhelmed with detail, the broad rules appear based on the Quality of Service (QoS) assigned by apps to the threads they create:

  • Threads with the lowest QoS, ‘background’ or 9, or lower, are normally run exclusively on the E cores.
  • Threads with higher QoS, which range up to ‘user interactive’ at 33, can be run on either core type, but are normally run on P cores when they’re available.

The QoS set by an app is moderated by macOS, whose internal QoS metrics are more subtle and flexible.

This is supported by the following comment in the source code of sched_amp_common.c in the Darwin kernel:
“The default AMP scheduler policy is to run utility and by [that should I think be ‘bg’ for ‘background’] threads on E-Cores only. Run-time policy adjustment unlocks ability of utility and bg to threads to be scheduled based on run-time conditions.”
Neither the developer nor user appears to have access to any facility to adjust run-time policy, though, and force background threads to be run on P cores to any significant extent.

Methods

Investigating this isn’t easy. I have relied primarily on measuring the performance of test threads, with the support of sampling measurements using powermetrics. For visualisation, I have shown CPU History windows from Activity Monitor, while documenting the shortcomings and misrepresentations they contain.

This article extends those methods to look specifically at differences seen between E and P cores, and at how this changes in lightweight virtualisation.

My test threads are coded in assembly language, and consist of tight loops that only access registers. Although those used here consist of floating-point calculations, I’ve shown that they behave the same as others using only integer instructions, others including NEON instructions, and using Accelerate calls.

These aren’t intended to simulate normal working threads, but to estimate a core’s maximum instruction throughput with instructions that don’t rely on memory or disk access, or other out-of-core features that could constrain execution. Before we can start to study more complex threads, we need to understand what happens in simpler cases.

For these tests, I show results obtained from running:

  • Tight floating-point loops in my app AsmAttic, run natively in macOS 13.0.1 on a Mac Studio M1 Max, and in a four-core macOS 13.0.1 virtual machine using my virtualiser Viable.
  • Test compressions and decompressions of a 1 GB file using my app Cormorant, native and in a VM.

For AsmAttic tests, I ran 1-4 threads each running the same number of loops of test code, measuring total time taken using the Mach clock. These were run at a background QoS, and at the maximum ‘userInteractive’ setting.

Results

As shown previously, there’s a strong linear relationship between the number of threads (T) and the rate of loop execution (R, loops/s) on the host at high QoS, which was essentially identical to that in the VM. However, in the VM the relationship was almost identical at low QoS too. Regression equations obtained were:

  • Host QoS 33: R = (1.411 x 10^7) + (1.4784 x 10^8).T
  • VM QoS 33: R = (1.3628 x 10^7) + (1.4717 x 10^8).T
  • VM QoS 9: R = (1.2246 x 10^7) + (1.4696 x 10^8).T

Thus, in those three conditions, each thread was executed at approximately 15 x 10^7 loops/s. These are shown in the graph below, where the upper solid line represents the first of those three regressions, and the three sets of points are very closely clustered along that.

Chart: loop execution rate against number of threads, on the host and in the VM.

The lower broken line shows matching results when running the same test natively on the host at a QoS of 9, which could hardly be more different. That series shows the following:

  • a single thread ran at 4.8 x 10^7 loops/s, one third of the rate of a single thread at high QoS, completing in 20.8 s;
  • two threads ran at 20 x 10^7 loops/s, just over four times the rate of a single thread at low QoS, completing in 9.9 s;
  • three threads completed in 30.8 s, close to the sum of a single and two threads, 30.7 s;
  • four threads completed in 19.9 s, close to twice that of two threads, 19.8 s.

Separate measurements using powermetrics demonstrate that, when running a single thread in this test, the two E cores ran at a total of 100% active residency and a frequency of 972 MHz, but running two threads (at double the total active residency) their frequency is 2064 MHz, 2.1 times the lower frequency.

Explanation

When run natively on the host at high QoS, test threads were run exclusively on the P cores, as they were when run in the VM regardless of the QoS, at roughly three times the speed of a single thread on the E cores. Because threads with high QoS can still be run on E cores, that doesn’t mean that high QoS threads will only be run on P cores, though: I have previously shown how, with increasing numbers of high QoS threads, some will be run on E cores, and macOS can migrate threads from P to E cores as it sees fit.

When run natively on the host at low QoS, test threads were run exclusively on the E cores. However, the frequency of E cores is adjusted according to the number of threads being run on them. When a single low QoS thread is being run on the two E cores in an M1 Pro or Max chip, both E cores run at a frequency of 972 MHz. When two low QoS threads are being run, frequency is increased to maximum, 2064 MHz. This enables the E cores to complete test threads in half the time.

When more than two threads of low QoS are run on two E cores, they are despatched so that two threads are run concurrently at 2064 MHz. When the number of threads running on the two E cores falls to one, core frequency is reduced to 972 MHz again to complete that single remaining thread.

More complex threads

For tests using core-bound code, the above appears to hold good when extraneous tasks are light and don’t affect the availability of cores. The situation also becomes more complex for code in which out-of-core resources may limit performance. This is illustrated by results from test compression and decompression using Apple Archive, called in the API from Swift.

Time required to compress and decompress a test 1 GB file when running native on the host Mac was greatly influenced by QoS:

  • At a QoS of 33, compression took 0.56 s, and decompression 0.27 s.
  • At a QoS of 9, compression took 4.4 s, and decompression 0.72 s.

Thus compression took nearly 8 times longer at low QoS, and decompression only 2.7 times as long.

When run in a VM, both compression and decompression took 1.3-1.5 seconds regardless of QoS. That was significantly faster than native compression at low QoS, but slower than native decompression at low QoS, and slower than both at high QoS. Uniformity of time taken in the VM suggests that rate was limited by another factor, in this case almost certainly the speed of writing the output file to the VM’s virtual disk.

Summary of conclusions

  • When running natively on Apple silicon chips, macOS allocates threads assigned QoS of ‘background’ (9) and below to be run on the E cores. While macOS may be able to adjust that at run-time, neither the developer nor user appears able to change that.
  • Threads assigned higher QoS can be run on either core type, but when available P cores are normally preferred.
  • Threads containing the vCPUs of a lightweight macOS Virtual Machine are normally run at higher QoS, and are therefore normally preferentially run on P cores when available. Each vCPU is normally run as a single thread on the host.
  • When run in a VM, QoS in the guest isn’t used to determine core type allocation, which is subsumed to that of the higher QoS used by the host.
  • Thus core-intensive threads normally run at low QoS can see significant performance improvements when run in a VM. However, threads whose performance is constrained by out-of-core factors, such as disk writes, may not show improvement in performance when run in a VM.
  • Substantial differences in performance resulting from QoS settings are likely to vanish when run in a VM. This can result in anomalous behaviour in VMs, where threads with low QoS may run as if they had high QoS.
  • You can game the core type allocation system by running tasks in a VM.
  • Despite the substantial differences in performance of the E cores depending on the number of threads they’re running, trying to game their frequency behaviour is unlikely to prove successful.

I’m very grateful to Saagar Jha for his criticism and pointers, even though I’m sure he still considers this to be grossly oversimplified.



from Hacker News https://ift.tt/pTdKebg

Incident report: Some GOV.UK URLs blocked by deceptive site warning

A picture reading GOV.UK Incident Report

What happened

On Friday 30 September the GOV.UK Team became aware that some users of GOV.UK were being prevented from accessing files hosted on GOV.UK. This was because some browsers were blocking the files with a message saying: “Deceptive site ahead”. The issue affected supplementary resources for government publications, such as PDF files.

As soon as we became aware of this problem we started our incident process. This coordinates investigation into the problem and communication across the organisation. We were able to establish that this identification of a deceptive site was incorrect and that it was related to a system misconfiguration. We resolved this misconfiguration within 2 hours of declaring the incident, which allowed users to access files again.

What users saw

Users could browse www.gov.uk as normal, but if they followed certain links, they would see a “Deceptive site ahead” warning in their browser. This type of warning occurs when web browsers block access to a website that may be pretending to be another website. The purpose of a deceptive website is typically malicious, potentially acting as a phishing scam or tricking users into downloading software intended to harm their computers (such as malware or ransomware).

Screenshot of error message saying "Deceptive site ahead. Attackers on assets-origin.production.govuk.digital may trick you into doing something dangerous like installing software or revealing your personal information (for example, passwords, phone numbers or credit cards)."

A lot of pages on GOV.UK contain lists of supporting documents, which can be in a variety of different file formats. Taking the Household Support Fund: guidance for local councils guidance, for example, we can see attachment types including HTML, PDF and MS Excel Spreadsheet.

Screenshot of section of the Household Support Fund: guidance for local councils page showing the different formats

Users attempting to view an attachment that renders in their browser, such as a PDF, would see the browser security warning. They could still download the file by right-clicking the link and saving it. 

The warning displayed the domain name of the site that was blocked: "assets-origin.production.govuk.digital". This is an internal domain that GOV.UK uses and is not available to the public. It seemed odd that we were only seeing the warning when trying to view certain file types, and that the files could still be downloaded using the right-click workaround.

Cause of the problem 

There were 2 factors that contributed to the cause of this incident. These were:

  • an internal GOV.UK domain name being added to a list of unsafe sites that web browsers use
  • a misconfiguration that led to an internal domain name being used in the process to deliver file downloads on GOV.UK.

Unsafe site list

Most major web browsers make use of the Safe Browsing API provided by Google. This tool provides a list of websites that have been detected as serving deceptive or malicious content. Web browsers then block access when a user attempts to view these sites.

As the team behind a government website, we greatly value the Safe Browsing tool, as it helps keep users safe. Websites attempting to deceive users by appearing to be a government website are unfortunately common and pose numerous risks to users.

We understand that the Safe Browsing tool became aware of an internal domain we were testing on the public internet. Because the software we were testing was a pre-release version that looks very similar to GOV.UK, the domain was added to the list. The site was hosted on a subdomain of “govuk.digital”; however, we found that it wasn’t just that site that was flagged as potentially deceptive. Instead, the whole “govuk.digital” domain was flagged. This meant the problem affected not just the site being tested, but any of our other internal tools that use the “govuk.digital” domain.

Earlier misconfiguration

It was a surprise to us that having the “govuk.digital” domain marked as deceptive was affecting members of the public. The domain is intended only for internal use, and no internal tooling should be exposed when files are served to users.

We discovered that some configuration changes a few weeks earlier had caused a modest problem that had gone undetected, which then became a severe problem once our internal domain was marked as deceptive.

When users view PDFs or images from GOV.UK in their browser, they are also served a small icon called a favicon, which is displayed in the browser tab and bookmarks. Browsers automatically request it when viewing a file on a site. The earlier configuration changes had caused a request for this icon to be redirected to an internal domain name, “assets-origin.production.govuk.digital”. This domain name is not served on the public internet, which meant requests for this icon failed outside our internal network.

The consequence of this problem was very modest for users: it didn’t affect viewing the requested file; it just meant that the icon in the browser didn’t show. Our monitoring had not captured this problem, so the issue lay undiscovered following the configuration change.

Once the internal domain was marked as deceptive, this modest problem became significant. When users requested a PDF in their web browser, their browser would also attempt to request the icon from the blocked domain. The browser would then automatically block all other requests so that the user would not be served the PDF or image they requested. 

How we fixed it

Once we’d identified the code change that introduced the issue, we reached out to the team that authored the change, to check that undoing it wouldn’t cause additional issues. We then checked on a test environment to make sure that reverting the change would fix the issue as expected. Finally, we applied the revert to our production environment.

This resolved the issue for our users, who could now access all attachments as normal. However, we still had some internal tooling that relied on the govuk.digital domain, which was still being blocked by the browser.

We followed Google’s well-documented process to request a review of our domain. Within 24 hours, the domain was successfully removed from the Safe Browsing block list, meaning we now had full access to our tooling.

Next steps 

This incident was a reminder of how important it is that GOV.UK can be trusted. As people working on GOV.UK, we take a lot of pride in the trust people place in GOV.UK, and we work hard to maintain and develop it. The incident also reminded us that trust is held not just by users but also by the systems that operate the infrastructure of the internet, and that we risk that trust by taking actions they find surprising.

In our incident review, we considered what steps could be taken to avoid a recurrence of an incident like this.

We've now iterated the request headers we set at the CDN layer, to be stricter in terms of the headers we trust at the application level. We've also reviewed our policy and guidelines around usage of internal domain names, setting clearer guidance for teams. Finally, we will improve our monitoring capabilities to catch issues of this nature should they occur again in future.



from Hacker News https://ift.tt/3W4PEeJ

The Transistor of 2047: Expert Predictions

The 100th anniversary of the invention of the transistor will happen in 2047. What will transistors be like then? Will they even be the critical computing element they are today? IEEE Spectrum asked experts from around the world for their predictions.


What will transistors be like in 2047?

Expect transistors to be even more varied than they are now, says one expert. Just as processors have evolved from CPUs to include GPUs, network processors, AI accelerators, and other specialized computing chips, transistors will evolve to fit a variety of purposes. “Device technology will become application domain–specific in the same way that computing architecture has become application domain–specific,” says H.-S. Philip Wong, an IEEE Fellow, professor of electrical engineering at Stanford University, and former vice president of corporate research at TSMC.

Despite the variety, the fundamental operating principle—the field effect that switches transistors on and off—will likely remain the same, suggests Suman Datta, an IEEE Fellow, professor of electrical and computer engineering at Georgia Tech, and director of the multi-university nanotech research center ASCENT. This device will likely have minimum critical dimensions of 1 nanometer or less, enabling device densities of 10 trillion per square centimeter, says Tsu-Jae King Liu, an IEEE Fellow, dean of the college of engineering at the University of California, Berkeley, and a member of Intel’s board of directors.

"It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale"—Sri Samavedam

Experts seem to agree that the transistor of 2047 will need new materials and probably a stacked or 3D architecture, expanding on the planned complementary field-effect transistor (CFET, or 3D-stacked CMOS). [For more on the CFET, see "Taking Moore's Law to New Heights."] And the transistor channel, which now runs parallel to the plane of the silicon, may need to become vertical in order to continue to increase in density, says Datta.

AMD senior fellow Richard Schultz suggests that the main aim in developing these new devices will be power. “The focus will be on reducing power and the need for advanced cooling solutions,” he says. “Significant focus on devices that work at lower voltages is required.”

Will transistors still be the heart of most computing in 25 years?

It’s hard to imagine a world where computing is not done with transistors, but, of course, vacuum tubes were once the digital switch of choice. Startup funding for quantum computing, which does not directly rely on transistors, reached US $1.4 billion in 2021, according to McKinsey & Co.

But advances in quantum computing won’t happen fast enough to challenge the transistor by 2047, experts in electron devices say. “Transistors will remain the most important computing element,” says Sayeef Salahuddin, an IEEE Fellow and professor of electrical engineering and computer science at the University of California, Berkeley. “Currently, even with an ideal quantum computer, the potential areas of application seem to be rather limited compared to classical computers.”

Sri Samavedam, senior vice president of CMOS technologies at the European chip R&D center Imec, agrees. “Transistors will still be very important computing elements for a majority of the general-purpose compute applications,” says Samavedam. “One cannot ignore the efficiencies realized from decades of continuous optimization of transistors.”

Has the transistor of 2047 already been invented?

Twenty-five years is a long time, but in the world of semiconductor R&D, it’s not that long. “In this industry, it usually takes about 20 years from [demonstrating a concept] to introduction into manufacturing,” says Samavedam. “It is safe to assume that the transistor or switch architectures of 2047 have already been demonstrated on a lab scale” even if the materials involved won’t be exactly the same. King Liu, who demonstrated the modern FinFET about 25 years ago with colleagues at Berkeley, agrees.

But the idea that the transistor of 2047 is already sitting in a lab somewhere isn’t universally shared. Salahuddin, for one, doesn’t think it’s been invented yet. “But just like the FinFET in the 1990s, it is possible to make a reasonable prediction for the geometric structure” of future transistors, he says.

AMD’s Schultz says you can glimpse this structure in proposed 3D-stacked devices made of 2D semiconductors or carbon-based semiconductors. “Device materials that have not yet been invented could also be in scope in this time frame,” he adds.

Will silicon still be the active part of most transistors in 2047?

Experts say that the heart of most devices, the transistor channel region, will still be silicon, or possibly silicon-germanium—which is already making inroads—or germanium. But in 2047 many chips may use semiconductors that are considered exotic today. These could include oxide semiconductors like indium gallium zinc oxide; 2D semiconductors, such as the metal dichalcogenide tungsten disulfide; and one-dimensional semiconductors, such as carbon nanotubes. Or even “others yet to be invented,” says Imec’s Samavedam.

"Transistors will remain the most important computing element"—Sayeef Salahuddin

Silicon-based chips may be integrated in the same package with chips that rely on newer materials, just as processor makers are today integrating chips using different silicon manufacturing technologies into the same package, notes IEEE Fellow Gabriel Loh, a senior fellow at AMD.

Which semiconductor material is at the heart of the device may not even be the central issue in 2047. “The choice of channel material will essentially be dictated by which material is the most compatible with many other materials that form other parts of the device,” says Salahuddin. And we know a lot about integrating materials with silicon.

In 2047, where will transistors be common where they are not found today?

Everywhere. No, seriously. Experts really do expect some amount of intelligence and sensing to creep into every aspect of our lives. That means devices will be attached to our bodies and implanted inside them; embedded in all kinds of infrastructure, including roads, walls, and houses; woven into our clothing; stuck to our food; swaying in the breeze in grain fields; watching just about every step in every supply chain; and doing many other things in places nobody has thought of yet.

Transistors will be “everywhere that needs computation, command and control, communications, data collection, storage and analysis, intelligence, sensing and actuation, interaction with humans, or an entrance portal to the virtual and mixed reality world,” sums up Stanford’s Wong.

This article appears in the December 2022 print issue as “The Transistor of 2047.”



from Hacker News https://ift.tt/oSWL7JD

Monday, November 28, 2022

Low Latency Optimization: Understanding Pages (Part 1)

Introduction

Latency is often a crucial factor in algorithmic trading. At HRT, a lot of effort goes into minimizing the latency of our trading stack. Low latency optimizations can be arcane, but fortunately there are a lot of very good guides and documents to get started.

One important aspect that is not often discussed in depth is the role of huge pages and the translation lookaside buffer (TLB). In this series of posts, we’ll explain what they are, why they matter, and how they can be used. We’ll be focusing on the Linux operating system running on 64-bit x86 hardware, but most points apply to other architectures as well. We’ve tried to strike a balance, providing accurate information without digging into the minutiae.

This series of posts is relatively technical, and requires some high-level understanding of operating systems (OS) concepts like memory management, as well as some hardware details such as the CPU caches. In the first post, we will explain the benefits of huge pages. In the second post, we will explain how they can be used in a production environment.

Memory Management 101

The hardware and operating system deal with memory in chunks. These chunks are called pages. For example, when memory is allocated or swapped by the operating system, it’s done in units of pages.

On the 64-bit x86 architecture, the default page size is 4 kibibytes (KiB). But 64-bit x86 CPUs also support two other page sizes, which Linux calls huge pages: 2MiB and 1GiB [1]. For simplicity’s sake, this series of posts is focused on 2MiB pages. 1GiB pages can also be helpful, but they’re so big that use cases for them tend to be more specialized.

[1] 1GiB pages are sometimes called gigantic pages as well.
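
As a quick way of checking these numbers on a Linux machine, the base page size can be queried with sysconf() and the default huge page size read from /proc/meminfo. A minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        // Base page size: 4096 bytes by default on x86-64.
        long page = sysconf(_SC_PAGESIZE);
        printf("base page size: %ld bytes\n", page);

        // The kernel reports the default huge page size in /proc/meminfo,
        // e.g. "Hugepagesize:       2048 kB".
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        while (f && fgets(line, sizeof line, f)) {
            if (strncmp(line, "Hugepagesize:", 13) == 0) {
                printf("%s", line);
                break;
            }
        }
        if (f) fclose(f);
        return 0;
    }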

A quick primer on address translation 

When regular programs run, they use virtual addresses to access memory. These addresses are usually only valid within the current process. The hardware and the operating system cooperate to map these onto actual physical addresses in physical memory (RAM). This translation is done per page (you can probably already see why the size of the page matters).

Because programs only see virtual addresses, on every memory access the hardware must translate the virtual address visible to the program into a physical RAM address (if the virtual address is indeed backed by physical memory). A memory access here means any load or store of data or instructions by the processor, regardless of whether it is cached.

These translations are stored by the operating system in a data structure called the page table, which is also understood by the hardware. For each virtual page backed by real memory, an entry in the page tables contains the corresponding physical address. The page tables are usually unique to each process running on the machine.
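
To make the hierarchy concrete, here is a small sketch of how the hardware splits a 48-bit virtual address into four 9-bit page table indices plus a 12-bit page offset, as used for 4KiB pages with 4-level paging (the variable names are just for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t vaddr = 0x00007f1234567abcULL;  // an arbitrary user-space address

        // With 4KiB pages and 4-level paging, bits 47..12 are split into
        // four 9-bit indices, one per page table level, and bits 11..0
        // are the offset within the page.
        uint64_t l4  = (vaddr >> 39) & 0x1ff;  // top-level index (PML4 on x86-64)
        uint64_t l3  = (vaddr >> 30) & 0x1ff;
        uint64_t l2  = (vaddr >> 21) & 0x1ff;
        uint64_t l1  = (vaddr >> 12) & 0x1ff;
        uint64_t off = vaddr & 0xfff;

        printf("indices: %llu %llu %llu %llu, offset: %llu\n",
               (unsigned long long)l4, (unsigned long long)l3,
               (unsigned long long)l2, (unsigned long long)l1,
               (unsigned long long)off);

        // For a 2MiB huge page the walk stops one level higher, and
        // bits 20..0 become the offset within the huge page.
        return 0;
    }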

Why accessing the page tables might significantly increase latency

Unless the program’s allocator and/or operating system are set up to use huge pages, memory will be backed by 4KiB pages. The page tables on x86 use multiple hierarchical levels. Therefore, looking up the physical address of a 4KiB page in the page tables requires at least 3 dependent memory loads [2].

[2] 4 loads if 5-level paging is supported by the CPU and enabled in the Linux kernel.

The CPU caches will be used to try to fulfill these loads (just as for any regular memory access). But let’s imagine that all of these loads are uncached and need to come from memory. Using 70ns as the memory latency, our memory access already incurs a 70*3 = 210 nanosecond latency, and we have not even tried to fetch the data yet!

Enter the translation lookaside buffer

CPU designers are well aware of this problem and have come up with a set of optimizations to reduce the latency of address translation. The specific optimization we are focusing on in this post is the translation lookaside buffer (TLB).

The TLB is a hardware cache for address translation information. It contains an up-to-date copy of a number of recently accessed entries in the page tables (ideally all entries in the page tables for the current process). Just like accessing the CPU caches is faster than accessing memory, looking up entries in the TLB is much faster than searching in the page tables [3]. When the CPU finds the translation it is looking for in the TLB, it’s called a TLB hit. If it does not, it’s a TLB miss.

But, just like the regular CPU caches, the TLB size is limited. For many memory-hungry processes, the entire page table’s information will not fit in the TLB.

[3] Depending on the state of the cache, the lookup can be anywhere from 3x to ~80x faster, though some of this latency could be hidden by speculative page walks.

How big is the TLB?

The TLB structure is not completely straightforward; we’ll approximate it as a set of entries in the hardware. A decent rule of thumb is that recent server x86 CPUs come with a TLB of about 1500-2000 entries per core (cpuid can, for example, be used to display this information for your CPU).

Therefore, for processes that use 4KiB pages, the TLB can cache translations to cover 2000 (the number of entries in the TLB) * 4KiB (the page size) bytes, i.e. ~8MiB worth of memory. This is dramatically smaller than the working set of many programs. Even the CPU cache is usually considerably larger!

Now, let’s say we are using huge pages instead — suddenly our TLB can contain the translation information for 2000*2MiB = ~4GiB. That is much, much better.  

Other benefits of huge pages

One huge page covers 512 times more memory than a 4KiB page. That means the number of entries in the page tables is also 512 times smaller for the same working set than if we were using regular pages. This not only greatly reduces the amount of memory used by page tables, but also makes the tables themselves easier to cache.
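
As a rough illustration, assuming the usual 8-byte entries on x86-64: mapping 4GiB of memory with 4KiB pages requires about a million last-level page table entries, roughly 8MiB of page tables, whereas the same 4GiB mapped with 2MiB pages needs only 2,048 entries, about 16KiB.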

Additionally, the page table format for 2MiB pages is simpler than for regular pages (the walk stops one level higher in the hierarchy), so a lookup requires one fewer memory access.

Therefore, even if a program gets a TLB miss for an address backed by a huge page, the page table lookup will be measurably (or even dramatically) faster than it would have been for a regular page.

A quick-and-dirty benchmark

No post about optimization would be complete without a completely artificial benchmark 😀. We wrote a simple program that allocates a 32GiB array of double-precision numbers. It then sums 130 million randomly chosen doubles from this array (full source code is available here). On the first run, the program generates a random list of indices into the array and stores them in a file. Subsequent runs read the file, so memory accesses are the same during each run.
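
HRT’s full benchmark is linked above; as a rough idea of its shape, here is a much simplified C sketch. The array size, the fixed random seed, and the use of madvise(MADV_HUGEPAGE) to request transparent huge pages are illustrative choices, not details of HRT’s code:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void) {
        // A deliberately smaller working set than the 32GiB used in the post.
        size_t n = (size_t)1 << 28;               // 2^28 doubles = 2GiB
        size_t bytes = n * sizeof(double);

        double *a = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a == MAP_FAILED) { perror("mmap"); return 1; }

        // Ask the kernel to back this range with transparent huge pages.
        madvise(a, bytes, MADV_HUGEPAGE);

        // Linear initialization (the best-case access pattern for the hardware).
        for (size_t i = 0; i < n; i++) a[i] = (double)i;

        // Random accesses: with 4KiB pages this is where TLB misses dominate.
        srand(42);
        double sum = 0.0;
        for (size_t i = 0; i < 130000000UL; i++) {
            size_t idx = (((size_t)rand() << 31) ^ (size_t)rand()) % n;
            sum += a[idx];
        }
        printf("sum = %f\n", sum);

        munmap(a, bytes);
        return 0;
    }

Timing the two loops separately, with and without the madvise() call (on a system where transparent huge pages are set to madvise mode), gives a rough sense of the effect described below.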

We ran this program on an otherwise idle Intel Alder Lake machine. The initialization part of the program ran 40% faster when using huge pages. The array is initialized linearly, which is the best-case scenario for the hardware, so the speedup is not dramatic. However, when doing the random accesses to sum the doubles, the runtime decreases by a factor of 4.5. Note that the number of seconds for a run can vary significantly with small changes in the program or the use of different compilers. However, the performance improvement for huge pages remains quite clear.

When you should not use huge pages

Huge pages should be thought of as an optimization. Just like any other optimization, they may or may not apply to your workload. Benchmarking is important to determine whether it’s worth investing time in setting them up. In the second post of this series, we’ll detail how they can be used and list some substantial caveats.

Conclusion

On every memory access for code or data, the hardware translates virtual addresses into physical addresses. Using huge pages measurably speeds up this translation.

Huge pages also allow the TLB to cover a lot of memory. Ideally, we’d want our entire working set to be translatable by the TLB without ever going to the page tables. This reduces latency of memory access, and also frees some room in the CPU caches (which no longer need to contain as many cached page table entries).

See you in the next post!

If you find this content interesting and would like to know more about HRT, check out our website or consider applying to join our team!

Further reading

What every programmer should know about memory



from Hacker News https://ift.tt/LdmCAfv