Monday, January 31, 2022

Home Affairs singles out Meta as most reluctant to stop online abuse


The Department of Home Affairs has called for more oversight of social media algorithms, and of how online platforms roll out encryption, as potential mechanisms for preventing online abuse.

Those calls were made by Home Affairs representatives on Tuesday afternoon when they appeared before the Select Committee on Social Media and Online Safety. The committee is currently probing the practices major technology companies use to curb toxic online behaviour.

The committee's probe was approved by the federal government at the end of last year with the intention of building on the proposed social media legislation to "unmask trolls".

Home Affairs digital and technology policy head Brendan Dowling on Tuesday said his department has become increasingly concerned about the rollout of encryption on online platforms. In expressing these concerns, Dowling said his department was not anti-encryption and acknowledged the cybersecurity and privacy benefits of the technology, but noted the rollout of encryption on online platforms has not been done with the intention of prioritising the safety of users.

"I think we're seeing platforms adopt the idea of safety by design, but that continues to be a concern where safety seems to be an after feature or an afterthought to the design of platforms," Dowling told the committee.

"One of our most real and immediate concerns is that encryption is being rolled out without the associated consideration of safety features. To use an example, there are mechanisms to ensure that you can identify known child abuse material in any encrypted environment.

"There are technical ways to achieve the identification of that deeply troubling material, but what we're seeing is platforms are looking to roll out further encryption to deal with privacy issues or security issues without regard to how they're going to prioritise public safety, child safety, and assistance to law enforcement in those environments so that's one example of where we see the innovation being ahead of the safety considerations."

In Home Affairs' submission to the committee, the department specifically called out Meta as being "frequently the most reluctant to work with government" when it comes to promoting a safe online environment, adopting a safety-by-design approach, and taking adequate proactive measures to prevent online harms.

"Digital platforms continue to be manipulated by malicious actors, and those seeking to do harm are able to exploit their technologies faster than industry can develop new safety features," Home Affairs wrote in its submission.

Home Affairs' remarks stand in stark contrast to those made by Meta when it appeared before the same committee a fortnight ago. At that hearing, the company's ANZ policy director Mia Garlick dismissed whistleblower Frances Haugen's claims that the company prioritises profit over safety as "categorically not true".

"Safety is at the core of our business," Garlick said at the time.

The committee's findings for its social media probe are set to be released later this month.




from Latest Topic for ZDNet in... https://ift.tt/luM9ats7j

A Transistor for Sound Points Toward Whole New Electronics

Potential future transistors that consume far less energy than current devices may rely on exotic materials called "topological insulators," in which electricity flows across only surfaces and edges, with virtually no dissipation of energy. In research that may help pave the way for such electronic topological transistors, scientists at Harvard have now invented and simulated the first acoustic topological transistors, which operate with sound waves instead of electrons.

Topology is the branch of mathematics that explores the nature of shapes independent of deformation. For instance, an object shaped like a doughnut can be deformed into the shape of a mug, so that the doughnut's hole becomes the hole in the cup's handle. However, the object couldn't lose the hole without changing into a fundamentally different shape.

Employing insights from topology, researchers developed the first electronic topological insulators in 2007. Electrons zipping along the edges or surfaces of these materials are "topologically protected," meaning that the patterns in which the electrons flow will stay unchanged in the face of any disturbances they might encounter, a discovery that helped win the Nobel Prize in Physics in 2016. Scientists later designed photonic topological insulators, in which light is similarly protected.

However, creating electronic topological transistors in which the dissipationless flow of electrons can get switched on and off in topological materials requires dealing with complicated quantum mechanics. By using acoustic topological insulators, in which sound waves can experience topological protection, scientists were able to sidestep this complexity to create acoustic topological transistors.

Still, designing an acoustic topological transistor wasn't easy. "We knew our approach to topological logic could work, but we still needed to find a viable selection of materials where it actually did work," says study lead author Harris Pirie, currently at the University of Oxford. "We took a fairly brute force approach—there was one summer where we were running calculations on about 20 computers at the same time to test thousands of different materials and designs."

The scientists found many designs that very nearly worked, but each always seemed to be compromised in some way—for example, "the device was too large to be practical," Pirie says. "Then one day, we finally found a design that met all the constraints, our eureka moment! After that, it was just a matter of designing the ancillary components—the heat converter, the expanding base plate—to make it all work."

The design consists of a honeycomb lattice of steel pillars anchored to a base plate, all sealed in an airtight box. The plate is made of a material that expands greatly when heated.

The lattice of the device has slightly larger pillars on one side and slightly smaller pillars on the other. These differences in the size and spacing of the pillars govern the lattice's topology, which in turn influences whether sound waves can flow through a set of pillars or not. For instance, at 20 degrees Celsius, ultrasound cannot pass through the device, but at 90 degrees C, it can zip along the edge between the sides. In essence, heat can switch this device from one state to another, much as electricity does so with conventional transistors.

The scientists also designed a second device that converts ultrasound waves into heat. When both devices are coupled together, they form an acoustic transistor that can control the state of another identical transistor, just as electricity flowing in a conventional transistor can toggle other transistors.

The researchers noted these acoustic topological transistors are scalable. This means the same design could also work for the gigahertz frequencies commonly employed in circuitry that is potentially useful for processing quantum information, Pirie says.

"More generally, the control of topologically protected acoustic transport has applications in a number of important fields, including efficient acoustic-noise reduction, one-way acoustic propagation, ultrasound imaging, echolocation, acoustic cloaking, and acoustic communications," Pirie says.

The design principles used to develop acoustic topological transistors could be adapted for use in photonic devices in a fairly straightforward manner, "at least in principle, because the acoustic wave equation mathematically maps onto its photonic counterpart," Pirie says. Meaning: The physics of sound waves and light waves are similar enough that the lessons of a topological transistor of one variety easily translate to a topological transistor of the other kind.

However, "this mapping doesn't exist in electronics," Pirie says, which makes it more challenging to develop an electronic topological transistor from this work. However, "it's still likely we could follow the same general scheme in electronics—we just have to find the right materials to use," he notes.

The scientists detailed their findings online earlier this month in the journal Physical Review Letters.




from Hacker News https://ift.tt/3dBEabQyn

Containers are not just for Kubernetes

In the last few years, the word Container (and maybe even Docker) has become somewhat synonymous with Kubernetes. This was of course unintentional — Kubernetes has attained the kind of mindshare very few technology trends in recent times have. This write-up is not to criticize Kubernetes, which I think is a brilliant technology. Instead, I want to make an argument that containers can do so much more even without Kubernetes.

Why Docker?

Let us think about the problems that Docker was created to solve:

  1. Remove all the dependency nightmares that cause production environments to be invariably different from development environments.

  2. Efficiency of packaging and deployment — both in terms of size and time.

  3. Improved developer productivity — especially the inner loop of Edit -> Build -> Debug.

All excellent goals to achieve, and largely accomplished.

What went wrong?

Along came the Kubernetes wave. It wasn’t meant to solve any of the above problems. It was supposed to provide efficient utilization of compute resources while improving reliability and availability. So containers and Kubernetes serve different purposes. And let us accept it: Kubernetes is still complex.

Unfortunately, somehow the concepts of Kubernetes and containers got attached to each other. As a result, for many developers the mental bar for using containers went up, because they thought they needed to learn Kubernetes just to use containers.

How is cloud PaaS helping?

That notion is slowly disappearing, thanks in large part to the public cloud providers, who are beginning to offer container support in many contexts that don’t require a user to work with Kubernetes directly. In fact, now that many Platform-as-a-Service (PaaS) products support bring-your-own-container-image, PaaS is able to shed its reputation as a super restrictive environment that can handle only basic scenarios through a bring-your-code mechanism. You can package pretty much anything in your container image, and if that container works on your machine, chances are it will work just fine in the cloud PaaS environment.

At DigitalOcean, we recently announced bring-your-own-container-image support for our PaaS product — App Platform. App Platform actually runs on Kubernetes and is entirely based on containers. Even before this feature was announced, it supported building container images from a Dockerfile (in addition to the regular code- and framework-based mechanism). The ability to bring pre-built container images provides even more flexibility.

Some time ago, Google Cloud introduced Cloud Run, its serverless container platform, optimized primarily for container-based workflows. It did this even though it already had an event-driven serverless compute product (Cloud Functions), which was mostly for code-based workflows.

AWS, for the longest time, tried to keep the concept of PaaS (Lambda in its case) separate from the concept of containers. Its managed Kubernetes service obviously supported containers, but it resisted adding container support to Lambda for a long time. That changed last year, when AWS finally announced that Lambda would support containers in addition to code.

In my previous write-up on the topic of PaaS, I had made a simplifying assumption that serverless is just a variant of PaaS. As you can see, I am continuing with that assumption here as well.

If we combine the benefits of Docker discussed earlier with the benefits of a fully managed PaaS (as described in the above linked article), we will soon realize that Docker and containers may actually be a better fit with PaaS environments than with Kubernetes, at least for most developers and use cases.



from Hacker News https://ift.tt/9O2pzowQf

How to Mislead with Facts

Verified facts can be used to support erroneous conclusions. Here is how we can put an end to that.

Fact-checking has become popularized as the definitive process for certifying truth in the media. This has occurred in response to the proliferation of a wide variety of internet subcultures, often based largely upon misinformation. Propaganda and bad faith communication are all too common, making the checking of facts an important part of sensemaking.

While fact-checking is necessary, it is often not enough to provide the whole picture. Under current conditions of escalating culture and information war, facts themselves have become weapons.[1] Neither propaganda nor bad faith communication require the speaking of falsehoods.[2] It is often more effective to mislead and misinform through a strategic use of verified facts. The ability to critique and correct for the misuse of facts in public culture is an essential component of the democratic way of life.

Unfortunately, today it is standard practice for both institutions and individuals from all sectors of society to offer strategically cherry-picked and decontextualized facts,[3] set within a predetermined emotional or ethical frame.[4] This way of using facts is an effective tool to bring some people towards previously unappealing conclusions. It also provides rhetorical ammunition to those already predisposed to drawing these conclusions. While honestly passing the scrutiny of the fact-checkers, such an approach is nevertheless far from entirely truthful.

Verified facts are collected as ammunition for culture war, rather than for the sake of gaining a comprehensive understanding.

In today’s so-called “post-truth” digital media landscapes, the practice of weaponizing facts has become widespread, microtargeted, and optimized for psychological impact. The normalization of mishandling facts threatens to undermine people’s sense of living in a shared reality. For some, it goes so far as to undermine the idea that reality can be known at all.

Democratic forms of government are now being undermined by the mishandling and misrepresentation of “facts.” Stopping our descent into a “fact-blind culture” requires a new approach to the way we pay attention to and talk about “the facts.” For those seeking to improve the state of the epistemic commons, and address 21st-century challenges to sensemaking, there is no way forward that does not involve fundamental upgrades to how “facts” are handled in public discourse.[5]

There is a growing body of literature on fact-checking as a media practice.[6][7][8][9] The fields of epistemology and the philosophy of science now have sub-branches seeking to address the crisis concerning “facts” in public culture. A thriving international movement of fact-checking is leading to the establishment of many new organizations aimed at certifying the truth. The details of these efforts can be found elsewhere.[10]

Despite often earnest effort, the recent growth of fact-checking is not making the situation obviously better. Some argue that more fact-checking is in fact making things worse. How can that be?

The answer is that fact-checking—the verification of specific claims—does nothing to address the three primary ways in which facts can be used to mislead (see the box below). Because fact-checking offers official verification, it permits easier use of facts to mislead and misinform. This sounds counterintuitive. But the more accepted a fact is, the greater its effect when it is made part of a misleading campaign.

[Box 1: the three primary ways facts can be used to mislead. Image created by Adaptive Cultures]

A misleading campaign of facts runs according to some combination of the three primary ways outlined in the box above. Information campaigns that are factually truthful but nevertheless misleading are the stuff of classic propaganda, as we have documented in our recent series on the problem of modern propaganda.[11] For decades there has been cumulative innovation in the industry of public relations.[12] Techniques for misleading with facts have been continually and scientifically advanced.

Facts become weapons for use in politically charged discourses in which winning is more important than accurately representing larger and more complex truths.

Today, the strategic misuse of facts is becoming a common practice employed by everyday citizens on social media. Many people post to their social media feeds only those facts they endorse, which support their existing beliefs and ideologies. Verified facts are collected as ammunition for culture war, rather than for the sake of gaining a comprehensive understanding. Microtargeting then caters to these preferences, ensuring that there is a steady supply of cherry-picked facts on offer. The resultant filter bubbles and algorithmic radicalization have been discussed in our related paper on 21st-century information warfare.[13]

The algorithmic radicalization prevalent on social media does not require “fake news.” Because facts can be used to mislead, extreme polarization and ideological identity capture can occur when individuals engage with information that is factual. This is possible when facts are taken out of context, cherry-picked, and emotionally loaded. Facts become weapons for use in politically charged discourses in which winning is more important than accurately representing larger and more complex truths. This debases the usefulness of “facts” and fact-stating discourses, which is to debase a necessary component of adequate public sensemaking.[14]

But what would happen if we decided to slow down and think about the facts together? What if we really wanted to understand what was going on in a way that accounted for all the facts and their various frames and interpretations?

The rivalrous “checking” of facts must give way to a more collaborative mutual understanding of facts. With a focus on education, this approach requires that individuals seek earnestly to evaluate the complexity of factual claims. Working together, individuals engage in a collaborative process to understand the implied significance and meaning of the facts in question, including all the associated complexity and nuance. There are four essential ways of understanding facts (see Box 2). Understanding facts requires a process that transcends but includes the familiar process of fact-checking, adding considerations of context, representativeness, and framing.

Interpreting a fact involves values and judgments not determined by the fact itself.

[Box 2: the four essential ways of understanding facts. Image created by Adaptive Cultures]

Beyond simply verifying a fact, it must be placed in context and positioned relative to all other closely relevant facts. This includes gathering facts about the methods used to generate the fact in question. Verifying one fact requires verifying many others, while working towards presenting as comprehensive a network of related truths as possible. The emotional impact of any given fact is always complex. Interpreting a fact involves values and judgments not determined by the fact itself. Verification of factuality is only the beginning of a larger process of meaning-making, which involves considerations that cannot be reduced to specific debates about the “facts.”

Our task is to create new processes for determining what counts as a shared, socially meaningful, mutually understood “truth.”

Misleading with facts can only be done when attention is not paid to all four ways involved in the comprehensive understanding of facts. Fact-checking as currently practiced typically only focuses on one of the four. Educational efforts aimed at improving public discourse must consider more than how to detect deceptions and lies. There is a great deal more to understanding a fact than knowing if it is true. And there is a great deal more to understanding complex realities than agreeing on a set of facts.

The point of this article is not that fact-checking is bad, but that it is necessary yet partial. As it stands, it is inadequate as a response to information war—but this does not mean it should be abandoned. The future of our civil society and public sphere depends upon drastically upgrading current approaches to dealing with “facts.” Our task is to create new processes for determining what counts as a shared, socially meaningful, mutually understood “truth.” Obviously, this requires more than making sure that every fact is checked.

It is possible to expand our approaches to dealing with facts in public discourse in ways that include more complexity, nuance, and perspective-taking. A start would be to have fact-checking sites and discussions informed by the models offered above, instead of being constrained to only “checking.” Until such steps are taken to improve public culture, it will remain as easy to mislead with facts as it is to manipulate through deception—perhaps even easier.

The stakes are high when it comes to the future of “facts.” As has been made clear: the mishandling of facts eventually breaks public sensemaking, and the breaking of public sensemaking eventually breaks society. The clock is ticking. As more and more “facts” pile up, our culture nevertheless gets farther and farther away from reality. Information warfare is now systematically and rapidly undermining the possibility of coherent public fact-stating discourses. This leads to a situation where political and public relations campaigns begin to operate more explicitly and self-consciously outside the truth. Of course, “facts” remain important—especially if they are officially verified—because they can be used as ammunition. But the larger, more complex truth is lost, accepted as a casualty of culture war.

This kind of cynical, post-truth culture is antithetical to democratic ways of life. But the solution is not to create centralized “Truth Committees.” These would serve as the official legitimators of censorship, becoming the ultimate authorities on shared social reality. Open societies are defined, in part, by the free flow of reliable facts through public culture. They are different thereby from societies that route information through narrow channels and give over individual judgment to the dictates of authorities. Responsibility for the integrity of public fact-stating discourses should be distributed throughout civil society. The movement around the advancing field of fact-checking should not seek to consolidate power, but to distribute it.

There is no technical “fix” or simple solution for improving the overall tenor and complexity of public communication about facts. There are, however, possibilities for digital technologies to enable educational initiatives of profound depth at massive scale. The same technologies that are now being used to mislead us with facts can be used to help us piece all the facts together and place them in the right context. The now crucial nexus of digital technologies, education, and politics can be reconfigured to allow for widespread learning and mutual understanding. Even though these facts are clear, there is, as always, the question of what we choose to do with them.



from Hacker News https://ift.tt/hXyDK0Nbk

Please Make a Dumb Car

Today’s cars are dumb where they should be smart, and smart where they should be dumb. Enough already. Make a car that’s pretty much all dumb and watch it sell — because what automakers are giving people is so bad, they’ll pay more to have less of it.

Cars now are like budget smartphones with wheels: loaded with bloatware, unintuitive and slow to operate. Carmakers have always struggled with user interfaces, but until recently the biggest problem we had was “too many knobs.” How I long for those days!

The proliferation of touchscreens and LCDs has made every car feel like a karaoke booth. Animations show reclaimed energy from braking, the speedometer changes color as you approach the limit, the fan speed and direction is under three menus. And besides being non-functional, these interfaces are even ugly! The type, the layouts, and animations scream “designed by committee and approved by someone who doesn’t have to use it.”

Not to mention the privacy and security concerns. I was dubious the first time I saw a GPS in a car, my mom’s old RX300, about 20 years ago. “Yeah… that’s how they get you,” I thought. And now, Teslas with missed payments drive themselves to be impounded. Welcome to the future — your car is a narc now!

The final indignity is that these features are being sold as upscale, not downmarket, options. Screens are so cheap that you can buy a few million and use them everywhere, for everything, and tell buyers “enjoy the next generation of mobility!” But in reality it’s a cost-saving measure that cuts down on part numbers and lets your dashboard team kick the can down the road as often as they want. You know this for sure because high-end models are going back to knobs and dials for that “premium feel.”

So here’s what I would like: a dumb car. This is what I think that looks like.

Dare to be stupid

First of all: no screens whatsoever. This is for a couple reasons, both practical and aesthetic.

Practically speaking, nearly all of what these screens do is already performed by smartphones. There's no need for a deeply outdated, laggy, manufacturer-branded Spotify or Apple Music app; your phone does it perfectly already. Navigation, similarly, is handled perfectly by the phone. Both of these, I need hardly add, already work fine with voice commands, too.

Not having GPS or data (or hidden microphones or cameras) also makes your vehicle feel more private, obviously. Sure, they can still get your phone, but at least they’ll need to put a GPS package on your undercarriage like the old days if they want to track your movements beyond that.

Illustration of a dashboard of an older car.

Image Credits: Bryce Durbin / TechCrunch

For media, an aux input does it all. Doubles as a charging cable, and you could easily swap it out for different and new devices. Include a bit of smart cable routing and your phone can conveniently be mounted in a number of places around the cockpit — not that you should be looking at it or touching it (use your words). If you want Bluetooth, I’ve got a dongle for you. The only thing the car should have is a volume dial, maybe a three-button basic playback control cluster on the wheel.

As for the climate controls on those big center LCDs, a couple knobs will do it. No one really believes these “zone” things work, right? No car is big enough to have zones in it. A blue-to-red dial, blower select, and A/C and recirculate toggles get it done just fine.

In the instrument cluster, we can have ordinary needle gauges. Speed, fuel, oil, temperature, and the usual idiot lights: check engine, low tire pressure, etc.

Aesthetically, the digital versions of these have always bothered me. Drivers are meant to be focused on the road, but these clusters often have distracting, bright information that’s constantly changing. The difference between 69 and 70 on a gauge is an eighth of an inch, just like the difference between 67 and 68, and 68 and 69. That continuous, predictable variation is intuitive and precise enough for pretty much any driving purpose. On a digital display the numbers are blinky and big, constantly drawing your eye as they dip from 71 to 69, numbers that look completely different and you can’t really check out of the corner of your eye.

Keep it simple, keep it safe

Losing the media and navigation means we can do without a lot of the computation capability that goes into a modern car, but we don’t want to go without it entirely. There are safety features introduced in the last few years that ought to be included on every new car, smart or dumb. Traction control, blind spot and lane exit warnings, and even automatic emergency braking require a certain amount of CPU power and they should get it, because they save lives. Backup cameras are one thing people may not want to go without — but you’d be surprised how informative a basic proximity beeper is.

The engine itself is also far more computerized than in the old days. Unlike the computerization of the cabin, however, this has many positive effects, such as improved mileage, lower emissions, better reliability, and easier diagnosis for servicing. The exact level of electronics required for safe, responsive pedals and steering are probably a matter of some debate, but we can leave that to the experts.

I’m tempted to ask for manual window knobs and door locks, but that would put us over the line into affectation (if indeed we have not already left that line far behind). We’re not trying to recreate vintage cars but to make a modern one stripped of superfluous technology. Power seat adjustment, though, that’s a luxury even today. Use the lever.

Note that nothing I’ve proposed is specific to gas-powered cars; electric vehicles are just as prone to these bad decisions as the rest. This isn’t about nostalgia but rather abandoning a pernicious yet universally followed design philosophy. (…Okay, it is a little about nostalgia, but only a little.)

Of course what I’m describing, despite its seeming simplicity, probably amounts to something like a luxury vehicle, in that it’s not aiming at minimizing cost. Nearly every existing car line is designed with the “latest” tech in mind and to do away with that is a major departure from existing molds, assembly work, QA, and so on. Plus while I think the concept would attract many, it still wouldn’t outsell much. It’s a niche vehicle for sure, and the price would reflect that.

Still, all I want is a car that isn’t as overbearing as all the rest of the devices I already own, sending me notifications, dinging, reporting errors, asking permissions, needing updates — my god! Leaving aside the whole spurious “back in my day” argument, there simply isn’t much point to these features now, certainly not enough to justify their prominence or poor quality. Let’s see what it’s like to make a car that focuses on letting the driver drive, and accommodating rather than trying to replace the supercomputers we all carry around in our pockets.



from Hacker News https://ift.tt/BhpAgeiPI

Sunday, January 30, 2022

What is the inverse of a circle?

What is the Inverse of a Circle?

In a word, this:

In this post we'll try to make sense of what we're seeing, and try to understand what it means to invert a circle!


Inverse of a Real Number

For any number a we think of the inverse a^{-1} as whichever number yields:

\tag{1.0} a a^{-1} = 1

For real numbers this is easy:

\tag{1.1} a^{-1} = \frac{1}{a}

Qualitatively, the inverse of a big number is a small number and vice versa. There are two special numbers, a = 1 and a = -1, which are their own inverses. We call these fixed points under inversion.

There are two more special points at a = 0 and a = \infin, such that:

\tag{1.2} \frac{1}{0} = \infin

and

\tag{1.3} \frac{1}{\infin} = 0

Now, \infin is not actually a real number and you can sometimes get into trouble treating it like one, but for our purposes today we can just pretend that it is.

Inverse of a Complex Number

A complex number is usually written as \text{p} = a + bi. It has a length given by:

\tag{1.4} \|\text{p}\| = \sqrt{a^2 + b^2}

And an angle \theta_{\text{p}} given by:

\tag{1.5} \theta_{\text{p}} = \tan^{-1}\left(\frac{b}{a}\right)

Its inverse is defined just like before:

\tag{1.6} \text{p}^{-1} = \frac{1}{a+bi}

But having a complex number in the denominator of a fraction is hard to reason about. How can we pull that i into the numerator so we can make sense of it?

A typical choice is to multiply both the numerator and denominator by the complex conjugate, a-bi:

\begin{aligned} \text{p}^{-1} &= \frac{1}{a + bi} \\ &= \frac{1}{a + bi} \cdot \frac{a-bi}{a-bi} \\ &= \frac{a-bi}{a^2 -abi + abi - b^2 i^2} \\ &= \frac{a-bi}{a^2 + b^2} \end{aligned}

\tag{1.7} \text{p}^{-1} = \textcolor{orange}{\frac{1}{a^2 + b^2}} \textcolor{ff00ff}{(a - bi)}

These components separately control the length and angle of \text{p}^{-1}:

The \textcolor{orange}{\frac{1}{a^2 + b^2}} component means the new length is the inverse of the old length, just as with real numbers.

The \textcolor{ff00ff}{a - bi} component means the direction is the same, but mirrored over the real axis.
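
As a quick numeric check (a worked example of my own), take \text{p} = 2 + i, so a = 2, b = 1, and a^2 + b^2 = 5:

\text{p}^{-1} = \frac{1}{5}(2 - i) = 0.4 - 0.2i

Multiplying back, \text{p} \cdot \text{p}^{-1} = \frac{(2+i)(2-i)}{5} = \frac{5}{5} = 1, as required. The length has gone from \sqrt{5} to \frac{1}{\sqrt{5}}, and the point has been mirrored from above the real axis to below it.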

Now that we have some idea of what to expect, we can play around with a complex number inverter to try and build some intuition:

Can you visually confirm that big numbers become small and that every number is mirrored over the real axis?

One cool property of complex inversion is that every point inside the unit circle gets inverted outside of the circle and vice versa, but every point on the unit circle stays on the unit circle.

Notice the two fixed points at 1 + 0i and -1 + 0i. It makes sense that fixed points can only occur on the real axis, since inversion mirrors over the real axis.

As a hint of things to come: What happens if you move your cursor along the vertical line with real value 0.25? What path does the inverse trace out?

Conformal Mappings

To appreciate inversion of a circle we need to understand a little bit about conformal mappings.

For our purposes, a mapping \Mu is a function that turns one complex number into another:

For example:

\Mu_{double}(a+bi) = 2a + 2bi

or

\Mu_{mirror}(a+bi) = a - bi

There is also a convenient shorthand for mappings, \mapsto, which is pronounced "maps to" and is used like this:

a + bi \mapsto a^2 + 3bi

Which can also be used to define a mapping or illustrate specific properties of a particular mapping.

Conformal mappings have a special simplifying property: they locally preserve angles.

Locally preserving angles means that while the large-scale structure may change, the small-scale structure stays exactly the same but for scaling and rotation. That means any two lines which intersect at \theta before the mapping will still intersect at \theta after the mapping.

This also implies that any two lines which are parallel before the mapping are still parallel after the mapping, at least locally. Even though the mapping may warp straight lines into curves, if you zoom in close enough to any small neighboring line segments they are still locally parallel.

Said another way: Tiny squares before the mapping will still be tiny squares after the mapping, though they may be shifted, scaled, and rotated. Squares will not warp into rectangles or diamonds.

From this follows an equivalent definition of conformal: circles before the mapping remain circles after the mapping, up to scale and translation. The sizes and locations of the circles may change, but circles never get squished into ovals. These definitions will all be important later.

To test our understanding let's examine a common mapping that transforms points on the Earth onto points in the complex plane, the Mercator Projection.

We call it a projection because we're specifically mapping from a sphere to a plane, but it's just a particular kind of mapping.

If the longitude is represented as \theta and latitude is represented as \phi, then the Mercator Projection is:

\tag{1.8} M(\theta, \phi) = R\theta + R \ln\left[\tan{\left(\frac{\pi}{4} + \frac{\phi}{2} \right)} \right]i

Is this mapping conformal? We can run a series of tests:

  1. Lines of constant Latitude (like the Equator, Tropic of Capricorn, etc) are all mutually parallel on the surface of the Earth. So too, in this projection. \checkmark
  2. Lines of constant Longitude (like the Prime Meridian) are all mutually parallel on the surface of the Earth. Again, this is also true in this projection. \checkmark
  3. Where Latitude and Longitude lines intersect, they do so at 90\degree angles. This is also true after projection. \checkmark

From these tests we can deduce that the projection is conformal!
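
The same conclusion can be reached analytically (a short check of my own, using the formula above). On the globe, a small eastward step covers a distance R\cos\phi \, d\theta and a small northward step covers R \, d\phi. On the map, those same steps cover R \, d\theta and, since

\frac{d}{d\phi} \ln\left[\tan\left(\frac{\pi}{4} + \frac{\phi}{2}\right)\right] = \sec\phi

a distance R\sec\phi \, d\phi. Both directions get stretched by the same local factor \sec\phi, which is exactly what preserving angles requires.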

There is another projection called the Stereographic Projection which is more useful for inverting circles:

The Stereographic Projection unwraps the Earth onto the 2D plane by placing the South Pole at the origin and stretching out the North Pole to r = \infin according to:

\tag{1.9} S(\theta, \phi) = 2R\tan{\left(\frac{\phi}{2}\right)} (\cos(\theta) + \sin(\theta) i)

Is the Stereographic projection conformal?

  1. Lines of constant Latitude are mutually parallel: \checkmark
  2. Lines of constant Longitude are all mutually parallel: \checkmark
  3. Lines of Latitude and Longitude are perpendicular: \checkmark

So yes, this projection is also conformal.

A consequence of conformality is that any circle on the sphere of the Earth maps to a circle on the 2D plane. This is trivially true of the Equator, but how about the Prime Meridian, which has become an infinite line?

Well, the Prime Meridian passes through the North Pole which maps to the point at \infin. That means the Prime Meridian, when projected onto the complex plane, must be a circle which includes the point \infin, which means it must be a circle of radius \infin, which looks like a line!

We can also look at the Stereographic projection in the opposite direction: Let's wrap the entire complex plane onto a sphere such that the South Pole of the sphere touches the origin and the North Pole represents all the points that are infinitely far away from the origin.

If we're careful, we can do this in just such a way that the unit circle in the complex plane maps to the sphere's equator. This object is called the Riemann Sphere:

If you flip back and forth between the Riemann Sphere and the Stereographic projection image you may be able to convince yourself that they are in fact two descriptions of the same mapping.

Qualitatively: All the small complex numbers (magnitude <1) are in the Southern hemisphere and all the big complex numbers (magnitude >1) are in the Northern hemisphere.

The surface of the Riemann Sphere contains the entire complex plane so it is reasonable for us to ask: What does complex inversion look like on this surface?

The answer is easy to visualize:

A point \text{p} on the complex plane is first mirrored across the equator to \text{p}'. This transforms the magnitude from something small to something large.

Then \text{p}' is mirrored across the real meridian, finally arriving at \text{p}^{-1}.

These two mirroring operations correspond exactly with the two components of complex inversion, derived above:

\tag{1.7} \text{p}^{-1} = \textcolor{orange}{\frac{1}{a^2 + b^2}} \textcolor{ff00ff}{(a - bi)}

With this new picture in mind, try to visually walk through the known special cases from earlier:

  • Fixed points at 1 and -1
  • 0 \mapsto \infin
  • i \mapsto -i

Now that we've got some 3D intuition for complex inversion on the Riemann Sphere, we're ready to tackle the full problem.

Inverse of a Circle

Given that the Stereographic Projection is conformal, we know that a circle on the complex plane maps to a circle on the Riemann Sphere and vice versa.

Reflection across the unit circle preserves the shape of the circle, it just moves it from one hemisphere to the other.

By that same logic, mirroring across the real meridian also preserves the shape of the circle.

Finally, mapping that circle back to the complex plane is performed using the inverse of the Stereographic Projection, which must also be conformal.

In short:

The inverse of a circle on the complex plane must be another circle on the complex plane.

Qualitatively: Small circles close to the origin invert to big circles far from the origin and vice versa. Circles above the real line invert to circles below the real line and vice versa.
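
To make that concrete (a hand-computed example of my own): the circle of radius 1 centered at 3 sits well away from the origin, and its nearest and farthest points, 2 and 4, invert to \frac{1}{2} and \frac{1}{4}. Since the image must again be a circle, and the picture stays symmetric about the real axis, the image is the circle of radius \frac{1}{8} centered at \frac{3}{8}: a distant circle has collapsed into a small one huddled near the origin.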

The symmetry of the Riemann Sphere now lets us identify some special cases:

  1. A circle centered on 1 or -1 is its own inverse
  2. A circle centered on i inverts to a circle with the same radius, centered on -i

Points 1 and 2 are easily confirmed by simulation, but now we have a rich visual understanding of why they occur:

But wait, placing the cursor at +1 does not create a circle which is its own inverse! We need to travel out to something like +1.1 for the inverse to match. What gives?

The issue here is that a circle centered on +1 on the Riemann Sphere does not translate to a circle centered on +1 in the complex plane:

When projected onto the complex plane, the circle pictured must contain +1 and its left side must fall between 0 and 1, but its right side could extend very far out into extremely large numbers. The larger the radius of the circle, the larger the offset between the centers.

This is a subtle point that is worth rehashing. The exercise we did above where we reflected a circle across the equator (unit circle) and then again across the real meridian (real axis), all of that is correct both for the points on the circumference of the circle and for the midpoint of the circle once it lives on the Riemann Sphere. We just need to remember that while the Stereographic Projection preserves the shapes of circles, it does warp the space inside them such that their midpoints are not usually preserved!

Now consider a circle that touches the origin:

A circle that touches the origin inverts to a circle that passes through \infin!

In order for a circle to pass through \infin, it must stretch to the edge of the complex plane. It must be a circle of infinite radius, which looks like a straight line!

We saw this phenomenon as a special case earlier with the Prime Meridian and the Stereographic Projection, but here it appears as a general rule: any circle that touches the origin will invert to a line, not just meridians.

In this case we know that line passes through +1 and that it approaches \infin in a manner parallel to the imaginary axis, so this straight line must be a vertical line that passes through +1. Scroll back up to the simulation to confirm this for yourself!
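
A little algebra confirms it (my own check, assuming the circle in question is the one of radius \frac{1}{2} centered at \frac{1}{2}, which touches the origin and passes through +1). Writing its points as \text{p} = \frac{1}{2}(1 + e^{i\theta}) gives \|\text{p}\|^2 = \frac{1 + \cos\theta}{2}, so

\text{p}^{-1} = \frac{\frac{1}{2}(1 + \cos\theta) - \frac{1}{2}\sin\theta \, i}{\frac{1}{2}(1 + \cos\theta)} = 1 - \tan\left(\frac{\theta}{2}\right) i

The real part is always exactly 1, while the imaginary part sweeps through every real value: a vertical line through +1.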

Let's put a bow on this topic by identifying a few more fixed points, or should we call them fixed circles?

  • The unit circle inverts to itself
  • The real axis inverts to itself
  • Any circle centered on 1 or -1 (on the Riemann Sphere, not the complex plane) inverts to itself. The imaginary axis is therefore its own inverse

So inversion of a circle has one infinite family of fixed circles: those parallel to the imaginary axis, and then two special outliers which happen to also be fixed.

Enriching the Structure of a Circle

Let's say that our input circle is spinning clockwise.

We can then declare that all points locally on the right hand side of the circumference are inside the circle, so we'll paint them orange.

On the complex plane we now have filled-in circles with arrows to indicate direction:

On the Riemann Sphere it will look like a dome:

Playing around, we can see that circles centered on 0 are flipped inside out! Another way to say this is that their spin direction is reversed.

The filled-in unit circle is no longer the equator, it is the entire Southern Hemisphere on the Riemann Sphere. Under inversion it becomes the entire Northern Hemisphere! So this circle is no longer its own inverse.

The real axis is no longer the real meridian, it is the entire hemisphere closer to us in the diagram, which inverts to the entire hemisphere far from us. This circle too, is no longer a special "fixed circle".

But the imaginary axis is still its own inverse! The half-plane to the right of the imaginary axis forms the Eastern Hemisphere on the Riemann Sphere, which inverts to itself.

By adding structure to the circle, the inversion function has lost two of its fixed circles!

Rotations on the Riemann Sphere

We've been visualizing complex inversion as two reflections but we can also view it as a single 180\degree rotation about the [-1, 1] axis:

Go ahead and verify a few identities to convince yourself that they all work out identically:

  • Fixed points at 1 and -1
  • 0 \mapsto \infin
  • i \mapsto -i
  • imaginary axis \mapsto imaginary axis
  • unit circle \mapsto unit circle, but not if it is filled in
  • real axis \mapsto real axis, but not if it is filled in

Looking at it this way provides new insight. Rotation about the [-1, 1] axis could also be called rotation in a plane perpendicular to the [-1, 1] axis. One such plane passes through the entire imaginary axis.

So in a colloquial sense, complex inversion is a 180 \degree rotation in the imaginary plane. Any circle parallel to that plane (just look on the Riemann Sphere) must invert to itself because circles are invariant to rotation. Circles perpendicular to that plane will only invert to themselves if they lack handedness.

We've also stumbled on a tantalizing new question! What happens if we were to rotate about the [-i, i] axis, aka in the real plane?

Well, large numbers still become small numbers and vice versa, but it's like we're reflecting over the imaginary axis instead of the real axis.

This implies a different definition of complex inversion! Before, we were using:

\tag{1.7} \text{p}^{-1}_{\textcolor{green}{imaginary}} = \textcolor{orange}{\frac{1}{a^2 + b^2}} \textcolor{ff00ff}{(a - bi)}

But this new rotation corresponds to:

\tag{1.10} \text{p}^{-1}_{\textcolor{blue}{real}} = \textcolor{orange}{\frac{1}{a^2 + b^2}} \textcolor{ff00ff}{(-a + bi)}

Which is not an operation that I've seen before! It models a Bizarro-World complex plane where the special properties of the imaginary axis have been taken away and instead granted to the real axis.

From this we can simulate a different but still perfectly reasonable definition of the inverse of a circle!

We can see in the simulation and on the Riemann Sphere that for this operation:

  • The fixed points are \pm i
  • The fixed circle is the real axis and any other circle centered on \pm i
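
Plugging into equation 1.10 confirms both of these (a quick check of my own): for \text{p} = i we have a = 0, b = 1, so \text{p}^{-1}_{\textcolor{blue}{real}} = \frac{1}{1}(-0 + i) = i, and likewise -i \mapsto -i. Meanwhile \text{p} = 1 gives \frac{1}{1}(-1 + 0i) = -1, so the old fixed points at \pm 1 now swap with each other instead of staying put.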

That brings us to our last contender, a rotation about the [0, \infin] axis, or a rotation in the plane defined by the unit circle:

Can you see what this operation does to complex numbers? Big numbers stay big and small numbers stay small. This type of complex inversion is only changing the direction of complex numbers.

On the complex plane, all we're doing is rotating 180 \degree around the origin!

  • Our fixed points are 0 and \infin
  • Our fixed circle is the unit circle and any other circle centered on the origin

By inspection we can write out this inversion formula as:

\tag{1.11} \text{p}^{-1}_{\textcolor{red}{unit}} = \textcolor{ff00ff}{-a - bi}

The three definitions that we've come to all look unique but now we know that they are actually three members of the same family: They are all 180 \degree rotations of the Riemann Sphere!

Digging Deeper

I hope this post conveys a little bit of what math can feel like when you just go exploring. It's not so much about equations and manipulation of symbols as it is about asking "what if" and then following through.

Speaking of, here are some ways you can dive deeper:

What does it mean to rotate by 90 \degree on the Riemann Sphere? Is this a familiar operation viewed in a new light, or something new? Can you derive an explicit formula for it in all three planes?

What does it mean to rotate the Riemann Sphere by any arbitrary \phi? Can you derive an explicit formula, perhaps making use of the complex exponential re^{i\theta}?

Can you generate one of the inversion formulas above using only the other two? What implications does this have?

On the Riemann Sphere, what happens if you reflect an odd number of times instead of an even number? What are the fixed points and circles of such an operation? What about with filled circles?

Are there any squares which invert to themselves? Does it matter if you just use the four corners of the square vs every point along its edges?

The field of math we've been playing in is called Complex Analysis. You can also learn more by picking up a book, playing around online, or just watching some videos on YouTube!

I'm on Twitter at @mferraro89 and you can shoot me an email at mattferraro.dev@gmail.com



from Hacker News https://ift.tt/Dbrkt5aCE

Examples of common false beliefs in mathematics

The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are

(i) a bounded entire function is constant;
(ii) $\sin z$ is a bounded function;
(iii) $\sin z$ is defined and analytic everywhere on $\mathbb{C}$;
(iv) $\sin z$ is not a constant function.

Obviously, it is (ii) that is false. I think probably many people visualize the extension of $\sin z$ to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.

A second example is the statement that an open dense subset $U$ of $\mathbb{R}$ must be the whole of $\mathbb{R}$. The "proof" of this statement is that every point $x$ is arbitrarily close to a point $u$ in $U$, so when you put a small neighbourhood about $u$ it must contain $x$.
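
(A concrete counterexample makes the gap visible: $U = \mathbb{R}\setminus\{0\}$ is open and dense, yet it misses $0$. For $x = 0$ and a nearby $u \in U$, the largest neighbourhood of $u$ contained in $U$ has radius $|u|$, so it can never reach back to $x$; the "proof" silently assumes the neighbourhood can stay large as $u$ approaches $x$.)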

Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.



from Hacker News https://ift.tt/M4EZJAOPt

Zero-copy network transmission with io_uring


By Jonathan Corbet
December 30, 2021

When the goal is to push bits over the network as fast as the hardware can go, any overhead hurts. The cost of copying data to be transmitted from user space into the kernel can be especially painful; it adds latency, takes valuable CPU time, and can be hard on cache performance. So it is unsurprising that the developers working with io_uring, which is all about performance, have turned their attention to zero-copy network transmission. This patch set from Pavel Begunkov, now in its second revision, looks to be significantly faster than the MSG_ZEROCOPY option supported by current kernels.

As a reminder: io_uring is a relatively new API for asynchronous I/O (and related operations); it was first merged less than three years ago. User space sets up a pair of circular buffers shared with the kernel; the first buffer is used to submit operations to the kernel, while the second receives the results when operations complete. A suitably busy process that keeps the submission ring full can perform an indefinite number of operations without needing to make any system calls, which clearly improves performance. Io_uring also implements the concept of "fixed" buffers and files; these are held open, mapped, and ready for I/O within the kernel, saving the setup and teardown overhead that is otherwise incurred by every operation. It all adds up to a significantly faster way for I/O-intensive applications to work.

One thing that io_uring still does not have is zero-copy networking, even though the networking subsystem supports zero-copy operation via the MSG_ZEROCOPY socket option. In theory, adding that support is simply a matter of wiring up the integration between the two subsystems. In practice, naturally, there are a few more details to deal with.

A zero-copy networking implementation must have a way to inform applications when any given operation is truly complete; the application cannot reuse a buffer containing data to be transmitted if the kernel is still working on it. There is a subtle point that is relevant here: the completion of a send() call (for example) does not imply that the associated buffer is no longer in use. The operation "completes" when the data has been accepted into the networking subsystem for transmission; the higher layers may well be done with it, but the buffer itself may still be sitting in a network interface's transmission queue. A zero-copy operation is only truly done with its data buffers when the hardware has done its work — and, for many protocols, when the remote peer has acknowledged receipt of the data. That can happen long after the operation that initiated the transfer has completed.

So there needs to be a mechanism by which the kernel can tell applications that a given buffer can be reused. MSG_ZEROCOPY handles this by returning notifications via the error queue associated with the socket — a bit awkward, but it works. Io_uring, instead, already has a completion-notification mechanism in place, so the "really complete" notifications fit in naturally. But there are still a few complications resulting from the need to accurately tell an application which buffers can be reused.

An application doing zero-copy networking with io_uring will start by registering at least one completion context, using the IORING_REGISTER_TX_CTX registration operation. The context itself is a simple structure:

    struct io_uring_tx_ctx_register {
        __u64 tag;
    };

The tag is a caller-chosen value used to identify this particular context in future zero-copy operations on the associated ring. There can be a maximum of 1024 contexts associated with the ring; user space should register them all with a single IORING_REGISTER_TX_CTX operation, passing the structures as an array. An attempt to register a second set of contexts will fail unless an intervening IORING_UNREGISTER_TX_CTX operation has been done to remove the first set.

Zero-copy writes are initiated with the new IORING_OP_SENDZC operation. As usual, a set of buffers is passed to be written out to the socket (which must also be provided, obviously). Additionally, each zero-copy write must have a context associated with it, stored in the submission queue entry's user_data field. The context is specified as an index into the array of contexts that was registered previously (not as the tag associated with the context). These writes will use the kernel's zero-copy mechanism when possible and will "complete" in the usual way, with the usual result in the completion ring, perhaps while the supplied buffers are still in use.

To know that the kernel is done with the buffers, the application must wait for the second notification informing it of that fact. Those notifications are not (by default) sent for every zero-copy operation that is submitted; instead, they are batched into "generations". Each completion context has a sequence number that starts at zero. Multiple operations can be associated with each generation; the notification for that generation is sent once all of the associated operations have truly completed.

It is up to user space to tell the kernel when to move on to a new generation; that is done by setting the IORING_SENDZC_FLUSH flag in a zero-copy write request. The flag itself lives in the ioprio field of the submission queue entry. The presence of this flag indicates that the request being submitted is the last of the current generation; the next request will begin the new generation. Thus, if a separate done-with-the-buffers notification is needed for each write request, IORING_SENDZC_FLUSH should be set on every request.

When a given generation completes, the notification will show up in the completion ring. The user_data field will contain the context tag, while the res field will hold the generation number. Once the notification arrives, the application will be able to safely reuse the buffers associated with that generation.
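
Putting the pieces together, a minimal sketch of how an application might drive this interface could look like the following. It is illustrative only: IORING_REGISTER_TX_CTX, IORING_OP_SENDZC, and IORING_SENDZC_FLUSH exist only in the patch set described here, so the numeric values below are invented placeholders, and liburing is used merely for the ring plumbing.

    /* Sketch of the proposed interface only -- not a released kernel ABI.
     * The three constants below exist only in the patch set; their values
     * here are placeholders. */
    #include <liburing.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define IORING_REGISTER_TX_CTX  24         /* placeholder value */
    #define IORING_OP_SENDZC        40         /* placeholder value */
    #define IORING_SENDZC_FLUSH     (1U << 0)  /* placeholder value */

    struct io_uring_tx_ctx_register {
        __u64 tag;
    };

    static void zc_send(struct io_uring *ring, int sock, const void *buf, size_t len)
    {
        /* Register a single completion context, identified by a caller-chosen tag. */
        struct io_uring_tx_ctx_register ctx = { .tag = 0xc0ffee };
        syscall(__NR_io_uring_register, ring->ring_fd,
                IORING_REGISTER_TX_CTX, &ctx, 1);

        /* Queue a zero-copy send bound to context index 0; the FLUSH flag
         * closes out the current generation so this request gets its own
         * buffer-reuse notification. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_SENDZC;
        sqe->fd = sock;
        sqe->addr = (unsigned long) buf;
        sqe->len = len;
        sqe->user_data = 0;                  /* index into the registered contexts */
        sqe->ioprio = IORING_SENDZC_FLUSH;   /* end of this generation */
        io_uring_submit(ring);

        /* Two completions will eventually arrive: the ordinary send completion,
         * then a notification whose user_data holds the tag (0xc0ffee) and whose
         * res holds the generation number.  Only after the latter may buf be
         * reused. */
    }

An application streaming many buffers would normally skip the flush flag on most requests, letting several sends share one generation and amortizing the notifications.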

The end result seems to be quite good; benchmarks included in the cover letter suggest that io_uring's zero-copy operations can perform more than 200% better than MSG_ZEROCOPY. Much of that improvement likely comes from the ability to use fixed buffers and files with io_uring, cutting out much of the per-operation overhead. Most applications won't see that kind of improvement, of course; they are not so heavily dominated by the cost of network transmission. If your business is providing the world with cat videos, though, zero-copy networking with io_uring is likely to be appealing.

For now, the new zero-copy operations are meticulously undocumented. Begunkov has posted a test application that can be read to see how the new interface is meant to be used. There have not been many comments on this version (the second) of this series. Perhaps that will change after the holidays, but it seems likely that this work is getting close to ready for inclusion.





from Hacker News https://ift.tt/Z0IXcN36l

Chip shortages due to lack of investment in right fabs

Semiconductor shortage issues will continue through the first half of 2022 as the industry attempts to build up inventory to normal levels, according to research firm IDC.

It cites limited investment in mature process technology as one reason, with many vital components for the automotive industry and other sectors manufactured using these older processes.

The semiconductor market continues to experience uneven shortages and tight supply, IDC noted, and the research firm highlights that one of the key supply constraints for semiconductors has been in mature process nodes.

By mature process nodes, it means older manufacturing processes at 40nm and above, which are used for low-cost production of automotive semiconductors and other chips such as LCD drivers and power management controllers. These are not as glamorous as the latest high-density CPU chips, but nevertheless are required to build a complete system in many cases.

In its forthcoming report, Semiconductor Manufacturing and Foundry Services Market and Technology Assessment, IDC estimates that 67 per cent of semiconductors were produced using these mature process nodes during 2021, rather than leading-edge process nodes, which it defines as 16nm or below.

But while leading edge manufacturing makes up just 15 per cent of semiconductor wafers by volume, it accounts for 44 per cent of the total revenue. Capital investment in the foundry market has therefore tended to focus on bleeding edge, while mature process technology manufacturing has received only limited investment.

However, one chip manufacturer that has apparently been bucking the trend and making good money from these older process nodes is Texas Instruments, which saw its revenue for the fourth quarter of 2021 grow by 19 per cent compared to the same quarter a year ago.

One effect is that the automotive market continues to be impacted as chips move up the value chain, restricting the supply to automobile manufacturers. This drives carmakers to allocate their semiconductor supply to higher-value vehicles first, which IDC says raised the average selling price of vehicles during 2021.

The good news is that this situation should start to improve this year, according to IDC's Nina Turner, research manager for its Enabling Technologies and Semiconductor team.

"Automotive semiconductors will continue to be a limiting constraint on the automotive market through the first half of 2022, but barring any unforeseen shutdowns or semiconductor manufacturing issues, supply should gradually improve through the second half of the year," she said.

IDC predicts that foundry capacity will continue to grow in the Asia-Pacific region, such that by 2025, it expects South Korea and China will have increased their share of wafer manufacturing capacity to 19 per cent and 15 per cent of the market, up from 16 per cent and 12 per cent respectively.

Despite this, IDC said it expected Taiwan to increase its own share of the foundry market to 68 per cent by 2025, edging up slightly from 67 per cent in 2020, due to the investment and success of TSMC and the other Taiwanese foundry service suppliers.

The overall foundry market is forecast to grow at a five-year compound annual growth rate of 12 per cent between 2020 and 2025.

Those new fabs and their much-needed semiconductor manufacturing capacity will thus come too late to make any difference in 2022, IDC warned. Foundry companies are slowly adding what capacity they can, but improvement will be incremental for the rest of this year, accelerating only in 2023 and beyond. ®



from Hacker News https://ift.tt/jYUopkHun

Rents are up 40 percent in some cities, forcing millions to move

Kiara Age moved in less than a year ago and now it’s time to move again: Rent on her two-bedroom apartment in Henderson, Nev., is rising 23 percent to nearly $1,600 a month, making it impossibly out of reach for the single mother.

Age makes $15 an hour working from home as a medical biller while also caring for her 1-year-old son, because she can’t afford child care. By the time she pays rent — which takes up more than half of her salary — and buys groceries, there’s little left over.

“I am trying to figure out what I can do,” said Age, 32, who also has an 8-year-old daughter. “Rent is so high that I can’t afford anything.”

Rental prices across the country have been rising for months, but lately the increases have been sharper and more widespread, forcing millions of Americans to reassess their living situations.


Average rents rose 14 percent last year, to $1,877 a month, with cities like Austin, New York and Miami notching increases of as much as 40 percent, according to real estate firm Redfin. And Americans expect rents will continue to rise — by about 10 percent this year — according to a report released this month by the Federal Reserve Bank of New York. At the same time, many local rent freezes and eviction moratoriums have already expired.

“Rents really shot up in the second half of 2021,” said Daryl Fairweather, chief economist at Redfin. “The pandemic was kind of a pause on the economy and now that things are reopening, inflation is picking up, rents are going up and people are realizing they don’t have as much disposable income as they might have thought they had.”

Higher rent prices are also expected to be a key driver of inflation in coming months. Housing costs make up a third of the U.S. consumer price index, which is calculated based on the going rate of home rentals. But economists say there is a lag of 9 to 12 months before rising rents show up in inflation measures. As a result, even if inflation were to subside for all other components of the consumer price index, rising rents alone could keep inflation levels elevated through the year, said Frank Nothaft, chief economist at real estate data firm CoreLogic.


While the Federal Reserve’s likely interest rate increases are expected to slow soaring housing costs — already mortgage rates have been trending higher, which tends to cool the real estate market — the restraint on rental prices is expected to be much less direct and take longer to filter through.

In the meantime, the Biden administration has begun reallocating unused funds from its $46.5 billion Emergency Rental Assistance program to help residents with rent and utility payments in cities such as Washington, D.C., Houston and San Diego. President Biden has also vowed to add nearly 100,000 affordable homes over the next three years by providing low-cost funding to qualifying developers, and encouraging states and local governments to reduce zoning and financing rules for affordable housing.

The pandemic has exacerbated inequalities in many parts of life, and housing is no different. Homeowners benefited from rock-bottom interest rates and surging home prices, while renters have faced surging costs with little reprieve. And unlike markups in other categories — such as food or gas, where prices can waver in both directions — economists say annual leases and long-term mortgages make it unlikely that housing costs will come back down quickly once they rise.


Eleven million households, or 1 in 4 renters, spend more than half of their monthly income on rent, according to an analysis of 2018 census data by Harvard University’s Joint Center for Housing Studies, though experts say that figure is likely even higher now.

“The fact is, for too many Americans, housing is unaffordable,” said Dennis Shea, executive director of the J. Ronald Terwilliger Center for Housing Policy at the Bipartisan Policy Center. “We have an inadequate supply of homes — for both rent and for sale — and of course the lowest income families are being hit hardest.”

In interviews with renters around the country, many said their monthly payments had recently risen or were set to go up in the coming weeks. Multiple people said that despite local rent freezes, their management companies had found ways to increase monthly dues by tacking on new “amenity fees” or charging for services like trash collection that had previously been included.

Many said they began looking for other rental options, only to find that everything around them had gone up in price, too. Some said they’re considering relocating altogether — from Austin to Richmond, or New York City to Dover, Del.

Aleksei Valentín and his husband, both doctoral students, recently downsized from a one-bedroom apartment in Frederick, Md., to a studio in a neighboring county in search of lower rents. Their previous building repeatedly hiked amenity fees during the pandemic, Valentín said, adding about $200 in extra charges to their monthly rent of $1,290. That apartment is now on the market for $1,600.


“First the fees started going up, and then we started getting notices that if we didn’t put our trash out exactly like they liked, we’d have to start paying fines,” said Valentín, 39. “The more people moved out, the more the amenity fees went up.”

There were other problems too but the final straw, he said, came when the company removed the dumpster closest to his apartment. Valentín, who is in a wheelchair, said he had to travel a quarter-mile to dispose of his trash. Now he and his husband pay $1,400 a month and said they were deliberate about moving to a county with strong tenant rights laws.

Like many millennials, the 30-somethings say home-ownership feels increasingly out of reach. They have student loan debt and little in savings. Even buying a tiny house “feels like a pipe dream,” Valentín said.


“Prices are so high, inflation is astronomical, and the house my parents bought for $30,000 in the late ’70s is worth over half a million today,” he added. “How can [we] enter into that market without intergenerational wealth? It’s impossible.”

The share of first-time home buyers has dropped to its lowest level in eight years, according to the National Association of Realtors. The group estimates that nearly 1 million renters were priced out of the housing market last year because of rising real estate prices and increased competition from wealthier, all-cash buyers.

Homes were in short supply even before the pandemic, particularly for low- and middle-income families. In 2020, the market added just 65,000 entry-level homes, those smaller than 1,400 square feet, compared with roughly 400,000 a year in the late 1970s, according to federally chartered mortgage investor Freddie Mac. As a result, more families are renting longer.

Desiree Tizon, a 29-year-old social media manager in Miami, was shocked to find that rent for her two-bedroom apartment on the waterfront is jumping 39 percent to nearly $3,700 a month in March. She’s not sure whether she and her boyfriend will move or stay, but said the higher cost cuts into any hope of saving up for a down payment.


“I began looking around Miami, and it turns out all rent prices are up a ton,” she said. “We’re not finding anything reasonable or even comparable to our unit. Maybe we’ll find a place that’s $200 cheaper but is it worth the hassle of moving all of our things?”

She and her boyfriend, who works in tech, both got higher-paying jobs during the pandemic. They could technically afford the extra $1,000 a month, she said, though it doesn’t feel right on principle.

“If we had enough money for a down payment, we could put that into a mortgage,” she said. “But now with higher rent, we’re stuck in this rat race for that much longer.”

Early in the pandemic, Americans, especially those who could work from home, moved from cities to suburbs or bought second vacation homes in search of more space. Low interest rates and booming demand lifted housing prices to record highs. Apartments, meanwhile, were left to grapple with mounting vacancies and began offering perks like free parking, $500 gift cards, even Peloton bikes and TVs, in exchange for long-term leases.

But as the economy reopened and people headed back to cities, apartments began filling back up. Now, with home prices and mortgage rates both on the upswing, rentals are in high demand again, giving landlords room to start asking for more money, said Fairweather of Redfin.


In Austin, Shadow LeMere is moving her belongings to storage and preparing to live in her car until she can find an affordable apartment. Rent on the two-bedroom she shares with her wife is set to rise 43 percent, from $1,500 a month to about $2,200, in early February.

She and her wife are planning to drive to Georgia to drop off their cat, Ziggy, with a friend. After that, they’ll make their way up the East Coast in their Kia Soul until they find a cheaper place to settle down.

“We were pretty much existing on fumes before this,” the 49-year-old said. “And now it’s gotten so much worse.”

LeMere, who used to work as an accountant, receives $900 a month in disability benefits. Her wife, Brandi Kalinowski, spent a decade as an auditor for a national grocery chain but quit last week in anticipation of the move.


The apartment building where they’ve lived for seven years typically raises rents by about 6 percent a year, LeMere said. But this year’s massive jump has left many residents in dire situations. It’s become common, she said, to see piles of people’s belongings in the parking lot for nonpayment of rent.

“We can’t afford to live here anymore,” she said. “Not in Austin, not in Texas. We’re going to nomad it for a bit and see where we end up.”

Andrew Van Dam contributed to this report.



from Hacker News https://ift.tt/SDm1XFE6w

Fantastic Symbols and Where to Find Them – Part 2

This is a blog post series. If you haven’t read Part 1, we recommend you do so first!

In the first blog post, we learned about the fantastic symbols (debug symbols), how the symbolization process works and lastly, how to find the symbolic names of addresses in a compiled binary.

The actual location of the symbolic information depends on the programming language implementation the program is written in. We can categorize the programming language implementations into three groups: compiled languages (with or without a runtime), interpreted languages, and JIT-compiled languages.

In this post, we will continue our journey to find fantastic symbols. And we will look into where to find them for the other types of programming language implementations.

JIT-compiled language implementations

Examples of JIT-compiled languages include Java, .NET, Erlang, JavaScript (Node.js) and many others.

Just-In-Time compiled languages compile the source code into bytecode, which is then compiled into machine code at runtime, often using direct feedback from the runtime to guide compiler optimizations on the fly.

Because functions are compiled on the fly, there is no pre-built, discoverable symbol table in any object file. Instead, the symbol table is created on the fly. The symbol mappings (location to symbol) are usually stored in the memory of the runtime or virtual machine and used to render human-readable stack traces when needed, e.g. when an exception occurs.

The good thing is that most runtimes can provide supplemental symbol mappings for their just-in-time compiled code in a format that Linux's perf tool understands.

perf defines an interface for resolving the symbols of code dynamically generated by a JIT compiler. These files can usually be found at /tmp/perf-$PID.map, where $PID is the process ID of the runtime process.

Runtimes usually don't emit these symbol mappings by default. You might need to change a configuration setting, run the virtual machine with a specific flag or environment variable, or run an additional program to obtain them. For example, the JVM needs an agent, called perf-map-agent, to produce supplemental symbol mapping files.

Let's look at an example perf map file for Node.js. The runtimes out there output this file in more or less the same format. More or less!

To generate a similar file for Node.js, we need to run node with --perf-basic-prof option.



node --perf-basic-prof your-app.js

This will create a map file at /tmp/perf-<pid>.map that looks like this:


3ef414c0 398 RegExp:[{(]
3ef418a0 398 RegExp:[})]
59ed4102 26 LazyCompile:~REPLServer.self.writer repl.js:514
59ed44ea 146 LazyCompile:~inspect internal/util/inspect.js:152
59ed4e4a 148 LazyCompile:~formatValue internal/util/inspect.js:456
59ed558a 25f LazyCompile:~formatPrimitive internal/util/inspect.js:768
59ed5d62 35 LazyCompile:~formatNumber internal/util/inspect.js:761
59ed5fca 5d LazyCompile:~stylizeWithColor internal/util/inspect.js:267
4edd2e52 65 LazyCompile:~Domain.exit domain.js:284
4edd30ea 14b LazyCompile:~lastIndexOf native array.js:618
4edd3522 35 LazyCompile:~online internal/repl.js:157
4edd37f2 ec LazyCompile:~setTimeout timers.js:388
4edd3cca b0 LazyCompile:~Timeout internal/timers.js:55
4edd40ba 55 LazyCompile:~initAsyncResource internal/timers.js:45
4edd42da f LazyCompile:~exports.active timers.js:151
4edd457a cb LazyCompile:~insert timers.js:167
4edd4962 50 LazyCompile:~TimersList timers.js:195
4edd4cea 37 LazyCompile:~append internal/linkedlist.js:29
4edd4f12 35 LazyCompile:~remove internal/linkedlist.js:15
4edd5132 d LazyCompile:~isEmpty internal/linkedlist.js:44
4edd529a 21 LazyCompile:~ok assert.js:345
4edd555a 68 LazyCompile:~innerOk assert.js:317
4edd59a2 27 LazyCompile:~processTimers timers.js:220
4edd5d9a 197 LazyCompile:~listOnTimeout timers.js:226
4edd6352 15 LazyCompile:~peek internal/linkedlist.js:9
4edd66ca a1 LazyCompile:~tryOnTimeout timers.js:292
4edd6a02 86 LazyCompile:~ontimeout timers.js:429
4edd7132 d7 LazyCompile:~process.kill internal/process/per_thread.js:173

Each line has START, SIZE and symbolname fields, separated with spaces. START and SIZE are hex numbers without 0x. symbolname is the rest of the line, so it could contain special characters.
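
To make that line format concrete, here is a tiny, illustrative resolver in C; it is not Parca's actual code, just a linear scan over a map file looking for the entry that covers a given address.

    /* Minimal sketch: resolve one address against a perf map file.
     * Illustrative only; loads nothing into memory and simply scans
     * the file line by line. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Print the symbol covering `addr`, if any, from a perf map file. */
    static void resolve(const char *map_path, unsigned long long addr)
    {
        FILE *f = fopen(map_path, "r");
        char line[4096];

        if (!f) {
            perror("fopen");
            return;
        }
        while (fgets(line, sizeof(line), f)) {
            unsigned long long start, size;
            int consumed;

            /* START and SIZE are hex without the 0x prefix; the symbol
             * name is the rest of the line and may contain spaces. */
            if (sscanf(line, "%llx %llx %n", &start, &size, &consumed) < 2)
                continue;
            if (addr >= start && addr < start + size) {
                line[strcspn(line, "\n")] = '\0';
                printf("0x%llx -> %s\n", addr, line + consumed);
                break;
            }
        }
        fclose(f);
    }

    int main(int argc, char **argv)
    {
        if (argc == 3)
            resolve(argv[1], strtoull(argv[2], NULL, 16));
        return 0;
    }

Invoked as, say, resolve /tmp/perf-12345.map 59ed44ea (a made-up PID, with an address taken from the listing above), it would print the LazyCompile:~inspect entry.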

With the help of this mapping file, we have everything we need to symbolize the addresses in the stack trace. Of course, as always, this is just an oversimplification.

For example, these mappings might change as the runtime decides to recompile the bytecode. So we need to keep an eye on these files and track the changes so that each address is resolved against its most recent mapping.

Each runtime and virtual machine has its peculiarities that we need to adapt. But those are out of the scope of this post.

Interpreted language implementations

Examples of interpreted languages include Python, Ruby, and again many others. There are also languages that commonly use interpretation as a stage before JIT compilation, e.g. Java. Symbolization for this stage of execution works the same way as for interpreted languages.

Interpreted language runtimes do not compile the program to machine code. Instead, interpreters and virtual machines parse and execute the source code using their own evaluation routines, or run bytecode on their own virtual processor. So they have their own ways of executing functions and managing stacks.

If you observe (profile or debug) these runtimes using something like perf, you will see symbols for the runtime. However, you won't see the language-level context you might be expecting.

Moreover, the interpreter itself is probably written in a lower-level language like C or C++. When you inspect the object file of the runtime/interpreter, the symbol table you find describes the internals of the interpreter, not the symbols of the source code it runs.

Finding the symbols for our runtime

The runtime symbols are useful because they allow you to see the internal routines of the interpreter, e.g. how much time your program spends on garbage collection. And it's most likely that the stack traces you see in a debugger or profiler will contain calls into the internals of the runtime, so these symbols are also helpful for debugging.

[Figure: Node stack trace]

Most runtimes are compiled in production mode and most likely lack debug symbols in their release binaries. You might need to compile your runtime in debug mode yourself to actually have them in the resulting binary. Some runtimes, such as Node.js, already ship them in their production distributions.

Lastly, to completely resolve the stack traces of the runtime, we might need to obtain debug information for the linked libraries. If you remember from the first blog post, debuginfo files can help us here. Debuginfo files for software packages are available through the package managers of Linux distributions: for a package called mypackage there usually exists a mypackage-dbgsym, mypackage-dbg or mypackage-debuginfo package. There are also public servers that serve debug information. So we need to find the debuginfo files for the runtime we are using and for all of the linked libraries.

Finding the symbols for our target program

The symbols we look for in our own program are likely stored in an in-memory table specific to the runtime. For example, in Python, the symbol mappings can be accessed using symtable.

As a result, you need to craft a specific routine for each interpreter runtime (and in some cases, for each version of that runtime) to obtain symbol information. Educated eyes might have already noticed that this is not an easy undertaking, considering the sheer number of interpreted languages out there. For example, rbspy, a very well-known Ruby profiler, generates code for reading the internal structs of each version of the Ruby runtime.

If you were to write a general-purpose profiler, like us, you would need to write a special subroutine in your profiler for each runtime that you want to support.

Again, don't worry, we got you covered

The good news is we've got you covered. If you are using Parca Agent, we already do the heavy lifting to symbolize captured stack traces for you. And we keep extending our support for different languages and runtimes. For example, Parca already has support for parsing the perf JIT interface to resolve symbols for collected stack traces.

Check Parca out and let us know what you think on our Discord channel.

Further reading



from Hacker News https://ift.tt/pOBUx95DA

Blueberry Earth or “what if the entire Earth was replaced with blueberries?“



from Hacker News https://ift.tt/BGmd39NjK

Topos Theory in a Nutshell

John Baez

October 10, 2021

Okay, you wanna know what a topos is? First I'll give you a hand-wavy vague explanation, then an actual definition, then a few consequences of this definition, and then some examples. Finally I'll tell you some more things to read.

I'll warn you: it takes a lot of work to learn enough topos theory to really use it to solve problems. Thus, when you're getting started the main reason to learn about it should not be to quickly solve some specific problems, but to broaden your horizons and break out of the box that traditional mathematics, based on set theory, imposes on your thinking.

1. Hand-Wavy Vague Explanation

Around 1963, Bill Lawvere decided to figure out new foundations for mathematics, based on category theory. His idea was to figure out what was so great about sets, strictly from the category-theoretic point of view. This is an interesting project, since category theory is all about objects and morphisms. For the category of sets, this means sets and functions. Of course, the usual axioms for set theory are all about sets and membership. Thus analyzing set theory from the category-theoretic viewpoint forces a radical change of viewpoint, which downplays membership and emphasizes functions.

In the spring of 1966 Lawvere encountered the work of Alexander Grothendieck, who had invented a concept of "topos" in his work on algebraic geometry. The word "topos" means "place" in Greek. In algebraic geometry we are often interested not just in whether or not something is true, but in where it is true. For example, given two functions on a space, where are they equal? Grothendieck thought about this very hard and invented his concept of topos, which is roughly a category that serves as a place in which one can do mathematics.

Ultimately, this led to a concept of truth that has a very general notion of "space" built into it!

By 1971, Lawvere and Myles Tierney had taken Grothendieck's original concept of topos — now called a "Grothendieck topos" — generalized and distilled it, and come up with the concept of topos I'll talk about here. This is sometimes called an "elementary topos", to distinguish it from Grothendieck's notion... but often it's just called a topos, and that's what I'll do.

So what is a topos?

A topos is a category with certain extra properties that make it a lot like the category of sets. There are many different topoi; you can do a lot of the same mathematics in all of them; but there are also many differences between them. For example, the axiom of choice need not hold in a topos, and the law of the excluded middle ("either P or not(P)") need not hold. The reason is that truth is not a yes-or-no affair: instead, we keep track of "how" true statements are, or more precisely where they are true. Some but not all topoi contain a "natural numbers object", which plays the role of the natural numbers.

But enough hand-waving. Let's see precisely what a topos is.

2. Definition

There are various equivalent definitions of a topos, some more terse than others. Here is a rather inefficient one:

A topos is a category with:

A) finite limits and colimits,

B) exponentials,

C) a subobject classifier.

It's not too long! But it could be made even shorter: we don't need to mention colimits, since that follows from the rest.

3. Some Consequences of the Definition

Unfortunately, if you don't know some category theory, the above definition will be mysterious and will require a further sequence of definitions to bring it back to the basic concepts of category theory — object, morphism, composition, identity. Instead of doing all that, let me say a bit about what these items A)-C) amount to in the category of sets:

A) says that there are:

  • an initial object (an object like the empty set)
  • a terminal object (an object like a set with one element)
  • binary coproducts (something like the disjoint union of two sets)
  • binary products (something like the Cartesian product of two sets)
  • equalizers (something like the subset of X consisting of all elements x such that f(x) = g(x), where f,g: X → Y)
  • coequalizers (something like the quotient set of X where two elements f(y) and g(y) are identified, where f,g: Y → X)

In fact A) is equivalent to all this stuff. However, I should emphasize that A) says all this in an elegant unified way; it's a theorem that this elegant way is the same as all the crud I just listed.

B) says that for any objects X and Y, there is an object Y^X, called an "exponential", which acts like "the set of functions from X to Y".

C) says that there is an object called the "subobject classifier", Ω, which acts like the two-element set {0,1}. In the category of sets we use {0,1} as our set of "truth values". In particular, functions f: X → {0,1} are secretly the same as subsets of X. Similarly, for any object X in a topos, morphisms f: X → Ω are secretly the same as "subobjects" of X.
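
If you'd like to see that "secretly the same" stated precisely, here is the standard textbook definition, sketched in LaTeX (it assumes amsmath and is in no way specific to this page): a subobject classifier is a map true: 1 → Ω such that every monomorphism is the pullback of true along a unique characteristic map.

    % Standard universal property of the subobject classifier (textbook
    % material; nothing here is specific to this exposition).
    % Every monomorphism m: S >-> X is the pullback of "true" along a
    % unique characteristic map chi_m, giving a natural bijection
    % between subobjects of X and maps X -> Omega.
    \[
      \begin{array}{ccc}
        S & \longrightarrow & 1 \\
        m \big\downarrow \;\; & & \;\; \big\downarrow \mathrm{true} \\
        X & \xrightarrow{\;\chi_m\;} & \Omega
      \end{array}
      \qquad\qquad
      \mathrm{Sub}(X) \;\cong\; \mathrm{hom}(X, \Omega)
    \]

In the category of sets, Ω = {0,1}, true picks out 1, and χ_m is just the characteristic function of the subset S of X.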

Learning more about all these concepts is probably the best use of your time if you want to learn a little bit of topos theory. Even if you can't remember what a topos is, these concepts can help you become a stronger mathematician or mathematical physicist!

4. Examples

Suppose you're an old fuddy-duddy. Then you want to work in the topos Set, where the objects are sets and the morphisms are functions.

Suppose you know the symmetry group of the universe, G. And suppose you only want to work with sets on which this symmetry group acts, and functions which are compatible with this group action. Then you want to work in the topos G-Set.

Suppose you have a topological space X that you really like. Then you might want to work in the topos of presheaves on X, or the topos of sheaves on X. Sheaves are important in twistor theory and other applications of algebraic geometry and topology to physics.

Generalizing the last two examples, you might prefer to work in the topos of presheaves on an arbitrary category C, also known as hom(C^op, Set).

For example, if C = Δ (the category of nonempty finite totally ordered sets), a presheaf on Δ is a simplicial set. Algebraic topologists love to work with these, and physicists need more and more algebraic topology these days, so as we grow up, eventually it pays to learn how to do algebraic topology using the category of simplicial sets, hom(Δ^op, Set).

Or, you might like to work in the topos of sheaves on a topological space — or even on a "site", which is a category equipped with something like a topology. These ideas were invented by Grothendieck as part of his strategy for proving the Weil conjectures. In fact, this is how topos theory got started. And the power of these ideas continues to grow. For example, in 2002, Vladimir Voevodsky won the Fields medal for cracking a famous problem called Milnor's Conjecture with the help of "simplicial sheaves". These are like simplicial sets, but with sets replaced by sheaves on a site. Again, they form a topos. Zounds!

But if all this sounds too terrifying, never mind - there are also examples with a more "foundational" flavor:

Suppose you're a finitist and you only want to work with finite sets. Then you want to work in the topos of finite sets, and functions between those. In topos theory, the infinite is not "built in" — it's an extra feature you can add if you want.

Suppose you're a constructivist and you only want to work with "effectively constructible" sets and "effectively computable" functions. Then you want to work in the "effective topos" developed by Martin Hyland.

Suppose you like doing calculus with infinitesimals, the way physicists do all the time — but you want to do it rigorously. Then you want to work in the "smooth topos" developed by Lawvere and Anders Kock.

Or suppose you're very concerned with the time of day, and you want to work with time-dependent sets and time-dependent functions between them. Then there's a topos for you — I don't know a spiffy name for it, but it exists: an object gives you a set S(t) for each time t, and a morphism gives you a function f(t): S(t) → T(t) for each time t. This too gives a topos!

People often go a bit further, and work with a topos where an object S consists of a set S(t) for each time t and a function S_{t,t'} : S(t) → S(t') whenever t ≤ t'. The idea here is that as time goes by you can learn new elements are in your set, or learn that two elements are equal, so you get a map from S(t) to S(t'). I'll let you guess what the morphisms should be in this topos.

There are lots of topoi! You can hand-craft them to your specific needs.

5. For more

If you want to learn more about topos theory, this could be the easiest place to start:
  • F. William Lawvere and Steve Schanuel, Conceptual Mathematics: A First Introduction to Categories, Cambridge U. Press, Cambridge, 1997.

It may seem almost childish at first, but it gradually creeps up on you. Schanuel has told me that you must do the exercises — if you don't, at some point the book will suddenly switch from being too easy to being way too hard! If you stick with it, by the end you will have all the basic concepts from topos theory under your belt, almost subconsciously.

After that, try this one:

  • F. William Lawvere and Robert Rosebrugh, Sets for Mathematics, Cambridge U. Press, Cambridge, 2003.

This is a great introduction to category theory via the topos of sets: it describes ordinary set theory in topos-theoretic terms, making it clear which axioms will be dropped when we go to more general topoi, and why. It goes a lot further than the previous book, and you need some more sophistication to follow it, but it's still written for the beginner.

I got a lot out of the following book:

Don't be scared by the title: it starts at the beginning and explains categories before going on to topoi and their relation to logic. In fact, many toposophers complain that it's not substantial enough — it shows how topoi illuminate concepts from logic, but it doesn't show you can do lots of cool stuff with topoi. But for beginners, this is probably just fine!

When you want to dig deeper, try this:

It's a "short expository text for readers who are confident in basic category theory but know little or nothing about toposes". Then try this:
  • Saunders Mac Lane and Ieke Moerdijk, Sheaves in Geometry and Logic: a First Introduction to Topos Theory, Springer, New York, 1992.
or this:
  • Colin McLarty, Elementary Categories, Elementary Toposes, Clarendon Press, Oxford, 1995.

Mac Lane and Moerdijk's book is both broad and deep, explaining in great detail how topos theory unifies logic and geometry. McLarty's is shorter, more laconic, and focused strongly on logic. If you want to hear McLarty talk in a less technical way about the big picture of topos theory, I highly recommend this:

He complains about the misperception, common among logicians, that topos theory arose as an attempt to generalize set theory:
Category theory arose from a complicated array of practical problems in topology. Topos theory arose from Grothendieck's work in geometry, Tierney's interest in topology and Lawvere's interest in the foundations of physics. The two subjects are typical in this regard. An important mathematical concept will rarely arise from generalizing one earlier concept. More often it will arise from attempts to unify, explain, or deal with a mass of earlier concepts and problems. It becomes important because it makes things easier, so that an accurate historical treatment would begin at the hardest point. I will sketch a more accurate history of categories and toposes and show some ways the common sense history obscures their content and especially obscures categorical foundations for mathematics. Yet I doubt the more accurate history will help beginners learn category theory. I conclude with a more broadly falsified history that could help introduce the subject.

And after that... well, let's not rush this! For example, this classic is now available for free online:

but it's advanced enough to make any beginner run away screaming! I love it now, but that took years.

These books are bound to have a similar effect:

  • Peter Johnstone, Topos Theory, London Mathematical Society Monographs 10, Academic Press, 1977.
  • Peter Johnstone, Sketches of an Elephant: a Topos Theory Compendium, Oxford U. Press, Oxford. Volume 1, comprising Part A: Toposes as Categories, and Part B: 2-categorical Aspects of Topos Theory, 720 pages, published in 2002. Volume 2, comprising Part C: Toposes as Spaces, and Part D: Toposes as Theories, 880 pages, published in 2002. Volume 3, comprising Part E: Homotopy and Cohomology, and Part F: Toposes as Mathematical Universes, in preparation.

... but once you get deeper into topos theory, you'll see they contain a massive hoard of wisdom. I'm trying to read them now. McLarty has said that you can tell you really understand topoi if you can follow Johnstone's classic Topos Theory. It's long been the key text on the subject, but as a referee of his new trilogy wrote, it was "far too hard to read, and not for the faint-hearted". His Sketches of an Elephant spend more time explaining things, but they're so packed with detailed information that nobody unfamiliar with topos theory would have a chance of seeing the forest for the trees. Also, they assume a fair amount of category theory. But they're great!


Mathematics is not the rigid and rigidity-producing schema that the layman thinks it is; rather, in it we find ourselves at that meeting point of constraint and freedom that is the very essence of human nature. — Hermann Weyl

© 2021 John Baez
baez@math.removethis.ucr.andthis.edu



from Hacker News https://ift.tt/KQYcZtb9x