Thursday, March 31, 2022

Seen from space, the longest conveyor belt is in Morocco

Morocco is home to the longest conveyor belt in the world. The mechanical device is used to transport phosphate rocks mined in Bou Craa, one of the largest phosphate mines in the world.

The small town lies south-east of the city of Laayoune and is inhabited almost exclusively by employees of Phosboucraa, a subsidiary of phosphate giant OCP.

The Bou Craa belt is a major tool in phosphate mining and production in the region. The winding system of interlinked belts is used to transport phosphate ore from a mining operation in Bou Craa to Laayoune beach, where it can be shipped worldwide, Atlas Obscura wrote.

Instead of trucking the phosphate ore almost 102 kilometers (about 63 miles) to the coast, the belt does the job.

A belt seen from space

This extra-long conveyance system «carries 2,000 metric tons of rock per hour», NASA wrote in 2018. According to the US agency, this belt has «often attracted astronaut attention in this otherwise almost featureless landscape».

Indeed, on its way to Laayoune, wind blows the lighter particles of white ore off the belt. This creates a «bold ivory streak along the length of the conveyance system», Atlas Obscura wrote. This bold ivory streak can be seen all the way from space, according to NASA satellite images.

In 2000, true-color satellite images by NASA showed the belt from space. «In both images, a conspicuous straight line runs from the center of the mining operations toward the northwest. As of 2008, this conveyor belt system was the world’s longest», NASA wrote.

Phosphate mining in the region started in the 1960s, and the Bou Craa operation began growing steadily in 1974. Other NASA satellite images from 2018 show that the area around the mine has grown significantly over the past five decades.



from Hacker News https://ift.tt/196wPIb

Australia's SkyGuardian drones shot down by spicy cybers

Image: Matt Cardy/Getty Images

The Australian government has cancelled the SkyGuardian armed drone program for the Royal Australian Air Force. The funding is being redirected to the newly-announced REDSPICE cybersecurity and intelligence program.

REDSPICE, the Resilience, Effects, Defence, Space, Intelligence, Cyber and Enablers program, is a flagship component of the federal Budget announced on Tuesday.

The program aims to double the staffing levels of the Australian Signals Directorate (ASD) over the next four years, creating some 1,900 new jobs. The total program budget is AU$9.9 billion over the next decade, boosting both offensive and defensive cyber capabilities.

"This is the biggest ever investment in Australia's cyber preparedness," said Treasurer Josh Frydenberg.

However, in Senate Estimates on Friday, defence officials confirmed that little of this is new money.

Of the AU$9.9 billion total, only AU$4.2 billion is budgeted to be spent over the four-year forward estimates period through to 2025–2026. And of that amount, only around AU$588.5 million is new funding.

A big chunk of the existing funding will come from the now-cancelled project AIR 7003, a planned AU$1.3 billion program to develop an armed remotely piloted aircraft system.

In November 2019, the government had confirmed that defence's preferred platform was the General Atomics MQ-9B SkyGuardian, a variant of the Predator B drone known in the UK as the Protector.

AIR 7003 had been scheduled for government consideration in the current 2021-22 financial year.

According to Asia Pacific Defence Reporter, General Atomics had proposed developing a multi-national service hub in Adelaide.

"The company has probably spent around $30 million on the project over a decade and is unlikely to recover a single cent," wrote editor Kym Bergmann.

"The scant information available indicates that Defence Minister Peter Dutton has asked the Department to identify projects that need to be cancelled to free up funds to hire more personnel, particularly in support of the cyber security announcement."

According to defence officials, around AU$10 million had been spent on AIR 7003 before its cancellation.

The remainder of REDSPICE funding comes from other cancelled projects. This includes about AU$3 billion of "both unapproved and approved" funding which had been allocated to the now-cancelled Attack-class submarines, the SEA 1000 Future Submarine Program, and around AU$236 million for "an ICT remediation project around modernisation and mobility".

Funds also come from previously planned ASD projects which have now become part of REDSPICE.

Witnesses before Estimates on Friday morning were unable to shed any light on where the name REDSPICE came from.


from Latest Topic for ZDNet in... https://ift.tt/Dkqo2LA

Spin – WebAssembly Framework

Introducing Spin

We are pleased to announce our new WebAssembly framework, Spin. Spin is a foundational piece of the Fermyon Platform. It is also a great way to get started writing WebAssembly for the cloud.

What is a WebAssembly Framework?

We think of WebAssembly primarily as a compile target. Pick a language, write your code, and compile it to Wasm. But what kinds of code does one write in WebAssembly?

The original way to run a WebAssembly module was in the browser. For that reason, early WebAssembly effort was focused on optimizing performance-intensive code to be executed on a web page or client-side web app.

WebAssembly has now moved beyond the browser. Some platforms, like the Envoy proxy, allow you to write plugins in Wasm. Command line runtimes like Wasmtime and WAMR run Wasm binaries on the command line, allowing developers to write a single CLI application that can run on Windows, macOS, and Linux (regardless of the underlying architecture).

Here at Fermyon, we are most excited about the prospect of writing microservices and server-side web applications in WebAssembly. We gave a preview of this when we built Wagi. But with Spin, we’re taking things to a new level. More specifically, Spin offers a framework for building apps.

What do we mean when we talk about a “framework”? A framework provides a set of features and conventions that assist a developer in reaching their desired goal faster and with less work. Ruby on Rails and Python Django are two good examples.

Spin is a framework for web apps, microservices, and other server-like applications. It provides the interfaces for writing WebAssembly modules that can do things like answer HTTP requests. One unique thing about Spin is that it is a multi-language framework. Rust and Go both have robust support in Spin, but you can also write Python, Ruby, AssemblyScript, Grain, C/C++, and other languages.

We are excited to already be using Spin in production. The Spin docs are (appropriately enough) running on Spin. That website is powered by the Bartholomew CMS system and is running on an HA Nomad cluster.

Spin is a foundational new technology that sets the pace for what we at Fermyon are building.

Spin is a Foundation

We have talked about the way we build applications in Spin. Part of the reason that Spin provides a framework is that, by doing so, we can take advantage of some of the compelling features of WebAssembly. And in so doing, we can create serverless-style programs with many benefits.

Over the last few years, our discussions with developers have turned up some common themes:

  • Ease of development is very important
  • Function-as-a-Service systems like Lambda are nice to develop, but frustrating to operate
  • Being locked into a proprietary platform is no fun
  • Developers are not operators, and shouldn’t have to solve operations problems

As we built Spin, these ideas were foremost in our minds. We set out to build something that delivered on the promises of serverless, but had all the virtues of local development and modern frameworks. We’re building based on standards and open source tooling, and we’re working hard to please both developers and platform operators (including DevOps).

Beyond that, Spin has given us a chance to rethink microservices. When we built Wagi, our goal was to make something that worked with WebAssembly as it was in 2020. WebAssembly technology has gone through two highly productive years of development since then. Spin takes advantage of the new directions taken with WebAssembly, like components, WIT, and improvements to WASI (we’ll cover these things later in the post). Doing so has led us to build a framework that makes it easy for developers to achieve top-of-mind goals (like security) without having to spend countless cycles maintaining the low-level code.

Rather than explain further, let’s just look at an example. Here’s what a Spin “Hello World” looks like written in Rust:

use anyhow::Result;
use spin_sdk::http::{Request, Response};
use spin_sdk::http_component;

// Note: the imports above are assumed from the Spin Rust SDK.
#[http_component]
fn hello_world(_req: Request) -> Result<Response> {
    Ok(Response::builder()
        .status(200)
        .body(Some("Hello, Fermyon!".into()))?)
}

Rust coder or not, most developers are familiar with the HTTP request/response model used here. Writing a Spin module is not much different in Go:

func main() {
 spin.HandleRequest(func(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintln(w, "Hello, Fermyon!")
 })
}

Again, we just use the common request/response model that is idiomatic for Go.

Most important, though, is what we don’t have to do. There is no requirement that you create a web server to handle requests, start long-lived listeners, or manage a pool of network connections. A Spin app can be as simple as a function. Of course, you can also add your preferred libraries to build out far more sophisticated apps… but at no time do you have to manage all of the low-level server duties like setting up SSL, handling interrupts, or wiring up sockets and ports. Such duties are delegated to Spin itself.

Following the model of stateless microservices, a Spin application can be bootstrapped, executed, and shut down again in milliseconds (or even nanoseconds if you are careful). And that means you don’t have idle WebAssembly modules hanging around consuming memory and processor space while waiting for inbound requests. Because Spin is fast, it can also be light on resources.

When you’re ready to see for yourself, Spin provides a simple spin up command to locally run your code.

Fermyon currently only supports a top-level HTTP request/response model for a few languages. We’re adding more. But you don’t need to wait to use Spin to run AssemblyScript, Python, Ruby, Grain, or others. Spin ships with a 100% compatible Wagi implementation. As long as your chosen language supports the WASI specification for files and environment variables, you can use it to write Spin apps.
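
To make the Wagi model concrete, here is a rough sketch of what such a handler can look like, written in Python. It leans only on the CGI conventions that Wagi follows (request metadata arrives in environment variables, and the response is written to standard output); it is not a Spin SDK example, and the variable names are illustrative.

import os
import sys

def main() -> None:
    # Wagi follows CGI conventions: request details come in as environment variables.
    path = os.environ.get("PATH_INFO", "/")
    query = os.environ.get("QUERY_STRING", "")

    # Headers first, then a blank line, then the response body, all on stdout.
    sys.stdout.write("Content-Type: text/plain\n\n")
    sys.stdout.write(f"Hello, Fermyon! You asked for {path}")
    if query:
        sys.stdout.write(f" (query: {query})")
    sys.stdout.write("\n")

if __name__ == "__main__":
    main()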

Not Just HTTP

For supported languages like Rust and Go, Spin supports more than just HTTP responders. Here’s an example that listens on a Redis channel:

use anyhow::Result;
use bytes::Bytes;
use spin_sdk::redis_component;
use std::str::from_utf8;

/// A simple Spin Redis component.
#[redis_component]
fn on_message(message: Bytes) -> Result<()> {
    println!("{}", from_utf8(&message)?);
    Ok(())
}

In this case, the Spin component listens for a message, and then prints the message. Every time a new message is pushed onto the relevant channel, the WebAssembly component is started, and the on_message function is executed. Again, this whole process happens in milliseconds. Spin is fast!

HTTP and Redis are the first two responders for Spin, and more are on the way. Jump into the discussion on Discord or peruse the issue queue to see what else is in the works.

If you’re ready to dive into some practical coding, you can walk through a tutorial on building a URL shortener with Spin.

Where is Fermyon Going with Spin?

Spin is our framework and our execution environment, and it already has some key features that we will make use of.

We are hard at work on linking up to storage (like key/value, object storage, and RDBMS) as well as on improved support for a variety of languages. And we are working closely with the Bytecode Alliance to continue building standards and reference implementations.

Our goal is to build an excellent development platform for the next generation of microservices and web apps, achieving many of the goals that serverless computing has long pursued.

Getting Involved

As with the rest of Fermyon’s open source projects, Spin is licensed under the Apache 2.0 license. The source is hosted on GitHub. We’re happy to receive issues and PRs in the project. We’re working hard to make our documentation top-grade. If you have any trouble, please file an issue and help us improve. We want Spin to be easy!

If you’re interested in chatting, join the Fermyon Discord server. There’s a dedicated Spin channel, and you can also stay on top of other Fermyon news.

If you want to be the first to know what we’re up to, click the 👋 Get Updates button in the top-right corner or follow us on Twitter at @FermyonTech.



from Hacker News https://ift.tt/QOxiLnv

OpenBB wants to be an open source challenger to Bloomberg Terminal

Anyone who has worked in the financial services sector will at least be aware of Bloomberg Terminal, a research, data and analytics platform used to garner real-time insights on the financial markets. Bloomberg Terminal has emerged as something of an industry standard, used by more than 300,000 people at just about every major financial and investment-related corporation globally — but it costs north of $20,000 per user each year to license, a fee that is prohibitively high for many organizations.

This is something that OpenBB has set out to tackle, by democratizing an industry that has been “dominated by monopolistic and proprietary incumbents” for the past four decades — and it’s doing so with an entirely open source approach.

After launching initially last year as an open source investment research terminal called Gamestonk Terminal, the founding team, Didier Lopes, Artem Veremey, and James Maslek, were approached by OSS Capital to make an investment and build a commercial company on top of the terminal. And so OpenBB is formally launching this week with $8.5 million in funding from OSS Capital, with contributions from notable angel investors including early Google backer Ram Shriram, entrepreneur and investor Naval Ravikant, and Elad Gil.

Open for business


The newly named OpenBB Terminal is very much an alpha-stage product, one that’s aimed at the more technically minded. It’s pitched as a “Python-based integrated environment for investment research,” allowing any trader to access data science and machine learning smarts to unpack raw, unrefined data.

In its initial guise, OpenBB is deployed via a command line interface (CLI), though plans are afoot to build a proper GUI for regular users. The platform gleans its investment data from publicly available sources as well as others that require an API key — these include Alpha Vantage, Financial Modeling Prep, Finnhub, Reddit, Twitter, Coinbase, the SEC, and many more.

OpenBB leans on machine learning across myriad use cases. For example, it can look at Apple’s share price over the past week, grab news headlines via one of Finnhub’s APIs, derive sentiment from each headline using natural language processing (NLP), and then correlate the impact of that news on Apple’s share price.

OpenBB: Natural language processing (NLP) is used to correlate the impact of news headlines on companies’ share prices
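
As a rough illustration of that kind of workflow (this is not OpenBB’s own code, and it does not call Finnhub), the sketch below scores a handful of headlines with an off-the-shelf sentiment analyzer and compares them against next-day price moves. The headlines and prices are hypothetical placeholders, and it assumes pandas plus NLTK with the vader_lexicon data installed.

import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

# Hypothetical placeholder data standing in for Finnhub headlines and market prices.
headlines = pd.DataFrame({
    "date": pd.to_datetime(["2022-03-28", "2022-03-29", "2022-03-30"]),
    "headline": [
        "Apple announces record quarter",
        "Supply chain worries weigh on Apple",
        "Analysts upgrade Apple stock",
    ],
})
prices = pd.Series(
    [175.0, 179.0, 178.0, 175.0],
    index=pd.to_datetime(["2022-03-28", "2022-03-29", "2022-03-30", "2022-03-31"]),
    name="close",
)

# Score each headline, average per day, and compare with the next day's return.
sia = SentimentIntensityAnalyzer()
headlines["sentiment"] = headlines["headline"].map(lambda h: sia.polarity_scores(h)["compound"])
daily_sentiment = headlines.groupby("date")["sentiment"].mean()
next_day_return = prices.pct_change().shift(-1)
print(daily_sentiment.corr(next_day_return.reindex(daily_sentiment.index)))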

Elsewhere, OpenBB can leverage deep learning to predict stock price movement using historical data, though in reality the model can be applied to just about anything, including economic data, crypto, and more. The company plans to double down on these predictive smarts.

“The idea in the future is that we don’t rely just on the past historical data to train the model, but we use further data available in our platform,” Lopes told VentureBeat. “For example, building powerful models that use share price, news, sentiment on social media, insider trading… anything, really.”

Industry standard

While Bloomberg Terminal is the industry standard for countless financial organizations, there are other alternatives on the market, such as Refinitiv Eikon and Factset. But OpenBB hopes that its open source credentials, and foundations in Python, will position it to win over many new users — flexibility is the name of the game.

“By being open source, affordable, and highly customizable due to the usage of Python, we differentiate from these platforms as well as tailor to the specific needs of small-to-medium-sized institutions,” Lopes said. “The advantage we have over competitors is our open source nature when it comes to incorporating external data sources.”

Indeed, being open source means that the broader community can add their own flavors to the OpenBB mix — by way of example, one contributor who was interested in the foreign currency exchange market (Forex) added an Oanda integration to the project.

Given that the entire source code is available for anyone to modify, companies can create their own version of the terminal with customizations that suit their niche use-cases. If they want to remove all the clutter and work purely with one type of asset, they can create a sort of light-weight version of the terminal with a much narrower focus on Forex, or cryptocurrency, for example.

But who is the actual intended end-user, exactly? In truth, it could be anyone from regional investment banks and hedge funds to venture capitalists, family offices, and mutual funds. The product isn’t quite at that stage yet, though — that is where the initial seed capital enters the fray. It’s all about building the product into something that could serve a potentially large market.

“In the long term, we would also be able to target companies like Morgan Stanley, JP Morgan, Blackrock, Vanguard, UBS, Goldman Sachs, Deutsche Bank, and similar,” Lopes explained. “[But] we fully understand that this is not possible right now.”

There is no escaping the pervasiveness of Bloomberg Terminal, and it’s clear that it’s not going to be knocked off its perch any time soon — but that isn’t the direct goal of OpenBB.

“Being a product that has been around for more than 40 years, it [Bloomberg Terminal] has become a staple for many of the larger institutions,” Lopes conceded. “OpenBB realizes that it can’t directly compete with this industry standard. One of the big caveats of the Bloomberg Terminal is that the costs are relatively high for a small-to-medium sized institution, which is an area which we can capitalize on.”

OpenBB is also looking to differentiate in areas such as portfolio optimization and attribution (reports), and tailor itself more to the needs of smaller institutions. Moreover, it also aims to target different asset classes that may not be covered so well on alternative platforms — this may include cryptocurrencies, NFTs, fintech lending services, and so on.

“Digital assets is a niche area that isn’t covered extensively — for example, providing insights on movements within this industry, but also more advanced areas like valuation of loans to a farmer in Africa,” Lopes said. “These are topics that we can quite easily differentiate when we notice there is a lot of interest in this area. That is one of our advantages by being open source and developing in Python.”

And then there is academia too, an arena where OpenBB could thrive — teachers could use the terminal to explain market movements to students using real data, or PhD students could develop their thesis to build products or features that can be accessed by anyone around the world. And this could all work to OpenBB’s benefit too.

“Given our product being free and open source, we can easily reach academia, which allows us to stay at the vanguard of innovation since students and researchers can develop new features to further strengthen OpenBB Terminal capabilities,” Lopes said.

Terminal ascent

For now, OpenBB Terminal will be an entirely free proposition, but with the weight of a commercial business behind it and $8.5 million in the bank, there will be a concerted push to monetize it. Some ideas currently under consideration include building a “slick 21st century UI,” as well as developing a software-as-a-service (SaaS) model, where OpenBB serves up the computational power to run machine learning models on vast amounts of data.

OpenBB is also exploring ways to build bridges between data sources and investors. For example, an investor probably wouldn’t want to pay for raw data from a given data source, but if OpenBB Terminal could extract insights from that data using machine learning or data science techniques and deliver it with context — this is something that an individual or organization may wish to pay for.

It is still early days for OpenBB, but the early traction it gained last year in its initial form suggests there is a real demand — and that is why OSS Capital is betting on Lopes and Co.

“The investment research industry has been dominated by monopolistic and proprietary incumbents since the 1980s, and it has taken until now for someone to develop an open source, democratized platform for the current and next generation of market makers, traders and equities professionals,” OSS Capital founder Joseph Jacks said. “OpenBB is the right idea, at the right time.”




from Hacker News https://ift.tt/UidV0xH

Infinite Mac: An Instant-Booting Quadra in the Browser

tl;dr

I’ve extended James Friend’s in-browser Basilisk II port to create a full-featured classic 68K Mac in your browser. You can see it in action at system7.app or macos8.app. For a taste, see also the accompanying screencast.

Backstory

It’s a golden age of emulation. Between increasing CPU power, WebAssembly, and retrocomputing being so popular that The New York Times is covering it, it’s never been easier to relive your 80s/90s/2000s nostalgia. Projects like v86 make it easy to run your chosen old operating system in the browser. My heritage being of the classic Mac line, I was curious what the easiest-to-use emulation option was in the modern era. I had earlier experimented with Basilisk II, which worked well enough, but it was rather annoying to set up, as far as gathering a ROM, a boot image, messing with configuration files, etc. As far as I could tell, that was still the state of the art, at least if you were targeting late-era 68K Mac emulation.

Some research into browser-based alternatives uncovered a few options.

However, none of these setups replicated the true feel of using a computer in the 90s. They’re great for quickly launching a single program and playing around with it, but they don’t have any persistence, any way of getting data in or out, or support for running multiple programs at once. macintosh.js comes closest to that — it packages James’s Basilisk II port with a large (~600MB) disk image and provides a way of sharing files with the host. However, it’s an Electron app, and it feels wrong to download a ~250MB binary and dedicate 1 CPU core to running something that was meant to be in a browser.

I wondered what it would take to extend the Basilisk II support to have a macintosh.js-like experience in the browser, and ideally go beyond it.

Streaming Storage and Startup Time

The first thing that I looked into was reducing the time spent downloading the disk image that the emulator uses. There was some low-hanging fruit, like actually compressing it (ideally with Brotli), and dropping some unused data from it. However, it seemed like this goal was fundamentally incompatible with the other goal of putting as much software as possible onto it — the more software there was, the bigger the required disk image.

At this point I switched my approach to downloading pieces of the disk image on demand, instead of all upfront. After some false starts, I settled on an approach where the disk image is broken up into fixed-size, content-addressed 256K chunks. Filesystem requests from Emscripten are intercepted, and when they involve a chunk that has not been loaded yet, they are sent off to a service worker that loads the chunk over the network. Manually chunking (as opposed to HTTP range requests) allows each chunk to be Brotli-compressed (ranges technically support compression too, but it’s lacking in the real world). Using content addressing makes the large number of identical chunks from the empty portion of the disk map to the same URL. There is also basic prefetching support, so that sequential reads are less likely to be blocked on the network.
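
For a sense of what the build-time half of that scheme involves, here is a small Python sketch (not the project’s actual code) that splits an image into 256K chunks, names each one by a hash of its contents so identical chunks collapse into a single file, Brotli-compresses them, and writes a manifest mapping chunk index to digest. It assumes the brotli package, and the file paths are placeholders.

import hashlib
import json
from pathlib import Path

import brotli  # assumes the `brotli` package is installed

CHUNK_SIZE = 256 * 1024

def chunk_image(image_path: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    data = Path(image_path).read_bytes()
    manifest = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        manifest.append(digest)  # position in this list is the chunk index
        chunk_file = out / f"{digest}.br"
        if not chunk_file.exists():  # identical (e.g. all-zero) chunks dedupe here
            chunk_file.write_bytes(brotli.compress(chunk))
    (out / "manifest.json").write_text(json.dumps({"chunkSize": CHUNK_SIZE, "chunks": manifest}))

chunk_image("System7.dsk", "chunks")  # placeholder paths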

Along with some old fashioned web optimizations, this makes the emulator show the Mac’s boot screen in a second, and be fully booted in 3 seconds, even with a cold HTTP cache.

Building Disk Images, or Docker 1995-style

I wanted to have a sustainable and repeatable way of building a disk image with lots of Mac software installed. While I could just boot the native version of Basilisk II and manually copy things over, if I made any mistakes, or wanted to repeat the process with a different base OS, I would have to repeat everything, which would be tedious and error-prone. What I effectively wanted was a Dockerfile I could use to build a disk image out of a base OS and a set of programs. Though I didn’t go quite that far, I did end up with something that is quite flexible:

  1. A bare OS image is parsed using machfs (which can read and write the HFS disk format)
  2. Software that’s been preserved by the Internet Archive as disk images can be copied into it, by reading those images with machfs and merging them in
  3. Software that’s available as Stuffit archives or similar is decompressed with the unar and lsar utilities from XADMaster and copied into the image (the Macintosh Garden is a good source for these archives).
  4. Software that’s only available as installers is installed by hand, and then the results of that are extracted into a zip file that can be also copied into the image.

I wanted to have a full-fidelity approach to the disk image creation, so I had to extend both machfs and XADMaster to preserve and copy Finder metadata like icon positions and timestamps. There was definitely some cognitive dissonance in dealing with late 80s structures in Python 3 and TypeScript.

Interacting With The Outside World

Basilisk II supports mounting a directory from the “host” into the Mac (via the ExtFS module). In this case the host is the pseudo-POSIX file system that Emscripten creates, which has an API. It thus seemed possible to handle files being dragged into the emulator by reading them on the browser side, sending the contents over to the worker where the emulator runs, and creating them in a “Downloads” folder. That worked out well, especially once I switched to a custom lazy file implementation and fixed encoding issues.

To get files out, the reverse process can be used, where files in a special “Uploads” folder are watched, and when new ones appear, the contents are sent to the browser (as a single zip file in the case of directories).

Persistence

While Emscripten has an IDBFS mode where changes to the filesystem are persisted via IndexedDB, it’s not a good fit for the emulator, since it relies on there being an event loop, which is not the case in the emulator worker. Instead I used an approach similar to uploading to send the contents of a third ExtFS “Saved” directory, which can then be persisted using IndexedDB on the browser side.

Performance

The emulator using 100% of the CPU seems like a fundamental limitation — it’s simulating another CPU, and there’s always another instruction for it to run. However, Basilisk II is working at a slightly higher level, and it knows when the Mac is idle (waiting for user input), and allows the host to intercept this and yield execution. I made that work in the browser-based version by using Atomics to wait until either there was user input or a screen refresh was required, which dropped CPU utilization significantly. A previous blog post has more details, including the hoops required to get it working in Safari (which are thankfully not required with Safari 15.2).

The bulk of the remaining time was spent updating the screen, so I made some optimizations there to do less per-pixel manipulation, avoid some copies altogether, and not send the screen contents when they haven’t changed since the last frame.

The outcome of all this is that the emulator idles at ~13% of the CPU, which makes it much less disruptive to be left in the background.

Odds and Ends

There were a bunch more polish changes to improve the experience: making it responsive to bigger and smaller screens, handling touch events so that it’s usable on an iPad (though double-taps are still tricky), fixing the scaling to preserve crispness, handling other color modes, better keyboard mapping, and much more.

There is a ton more work to be done, but I figured MARCHintosh was as good a time as any to take a break and share this with the world. Enjoy!



from Hacker News https://ift.tt/2j5kXTK

About the security content of iOS 15.4.1 and iPadOS 15.4.1


This document describes the security content of iOS 15.4.1 and iPadOS 15.4.1.

About Apple security updates

For our customers' protection, Apple doesn't disclose, discuss, or confirm security issues until an investigation has occurred and patches or releases are available. Recent releases are listed on the Apple security updates page.

Apple security documents reference vulnerabilities by CVE-ID when possible.

For more information about security, see the Apple Product Security page.

iOS 15.4.1 and iPadOS 15.4.1

Released March 31, 2022

AppleAVD

Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation)

Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited.

Description: An out-of-bounds write issue was addressed with improved bounds checking.

CVE-2022-22675: an anonymous researcher

Information about products not manufactured by Apple, or independent websites not controlled or tested by Apple, is provided without recommendation or endorsement. Apple assumes no responsibility with regard to the selection, performance, or use of third-party websites or products. Apple makes no representations regarding third-party website accuracy or reliability. Contact the vendor for additional information.




from Hacker News https://ift.tt/RDCBKbN

Billy Wilder: The Art of Screenwriting (1996)

In her study at home in North Bennington, 2018. Interview still frame courtesy of Stephanie Black.


Jamaica Kincaid was born Elaine Potter Richardson on Antigua in 1949. When she was sixteen, her family interrupted her education, sending her to work as a nanny in New York. In time, she put herself on another path. She went from the New School in Manhattan to Franconia College in New Hampshire, and worked at Magnum Photos and at the teen magazine Ingenue. In the mid-’70s, she began to write for The Village Voice, but it was at The New Yorker, where she became a regular columnist for the Talk of the Town section, that everything changed for her. Her early fiction, much of which also appeared in that magazine, was collected in At the Bottom of the River (1983), a book that, like her Talk stories, announced her themes, her style, the uncanny purity of her prose. She has published the novels Annie John (1985), Lucy (1990), The Autobiography of My Mother (1996), Mr. Potter (2002), and See Now Then (2013). A children’s book, Annie, Gwen, Lilly, Pam and Tulip, came out in 1986. Aside from the collected Talk Stories (2001), her nonfiction works include A Small Place (1988), a reckoning with the colonial legacy on Antigua; My Brother (1997), a memoir of the tragedy of AIDS in her family; and two books on gardening, My Garden (Book) (1999) and Among Flowers: A Walk in the Himalaya (2005).

Kincaid divides her time between Cambridge, Massachusetts, where she is a professor of African American studies at Harvard University, and Bennington, Vermont, where her large brown clapboard house with yellow window trim is shielded by trees. She has two children from her marriage to the composer Allen Shawn, the son of the former New Yorker editor William Shawn, and in the living room she displays on a table—proudly, apologetically—productions from the arts-and-crafts camps and classes that her son and daughter attended over the years. The study where she writes is a sunroom surrounded on three sides by windows. The terrace that starts at the back door ends in a border of stones; the lawn, planted with thousands of daffodils, slopes down to a thickly shaded creek. Nearby are a vegetable garden caged against wildlife and a cottage in which lives Trevor, her bearded young assistant. Over some twenty years, Kincaid has made what my partner, the poet James Fenton, calls a “plantsman’s garden,” full of rare species. Her hundreds of plants are layered into a composition of informal design, expressive of her refined aesthetic and untroubled eccentricity. She has plants that move her because of how they look or how they behave, or because of their histories.

This conversation began at a public event at the 92nd Street Y in 2013, and was picked up again in her Vermont kitchen eight years later, in the summer of 2021, when the social restrictions of the pandemic had, for a time, eased. Jamaica Kincaid is a generous host. She cooks with flair. Her big, broad-frame glasses evoke the Italian movie stars of the sixties. The years have gone by, but she is still tall. Her voice is as musical as ever, high-pitched, the Anglo-Caribbean lilt beguiling. She is a presence; everything begins to happen when she talks. In person and on the page, Kincaid’s is a literary voice. She is alive to the advantage in the irony that her literary heritage had not predicted her, exalted, brave, free.

INTERVIEWER

Why did your family send you to America? Wasn’t London still a capital of empire in the mid-’60s, the cultural center of the Commonwealth?

JAMAICA KINCAID

If they’d known anyone in London, they would have sent me there. But they didn’t have any long-term plan in mind. The idea wasn’t that I would establish myself and then have the rest of my family join me. I was simply sent away to support them. My father—my stepfather—had gotten ill, and my parents had three boy children. The arrival of my youngest brother had plunged us into a kind of poverty we’d never known. It used to be a tradition in agricultural families that you’d sacrifice the eldest child. I remember the darkness of being sent away—sheer misery of a kind that I didn’t know existed. Until then homesickness was something I only knew from books. I think I first came across it in one of the Brontës.

INTERVIEWER

So there wasn’t any excitement in it?

KINCAID

Not at all, because I was going as a servant. I remember walking in the hot sun to one of the American bases in Antigua—past the crazy house, as we called the lunatic asylum, and the dead house, where the bodies of people who died in the hospital were put until they were collected by the undertaker—to be interviewed by an American soldier’s wife. I was very bitter about it because I had before me what seemed to be a successful future. I might have gone to the University of the West Indies. I would have gotten a scholarship. It seemed cruel even to other people because I was known as what we called a “bright child.” No, there wasn’t any cause for celebration, though my mother did make me a new dress and see me off to the airport.

INTERVIEWER

Homesickness—this kind of interrupted love—is a big element in your work.

KINCAID

Well, perhaps, but I never really felt I belonged even in Antigua, even when I was little. My mother came from Dominica, and the thing about those little islands is that people from one island or the other don’t like each other. She was an outsider in Antigua, and she looked different. She was part Carib Indian, and they used to call her the Red Woman.

I suppose that my work is always mourning something, the loss of a paradise—not the thing that comes after you die, but the thing that you had before. I often think of the time before my brothers were born—and this might sound very childish, but I don’t care—as this paradise of my mother and me always being together. There were times when my mother and I would go swimming and she would disappear for a second, and I would imagine the depths just rolling over her, that she’d go deeper and deeper and I’d never see her again . . . And then she would pop up somewhere else. Those memories are a constant source of some strange pleasure for me.

I was pulled out of school to take care of my youngest brother while my mother went to work, and when she realized I hadn’t been looking after him properly, that I had been reading instead, she gathered all the books I had stolen from the library over the years and burned them. You can probably tell from my writing that I’m obsessed with notions of justice and injustice—those things that are wrong that can never be made right.

Nowadays if I were to be homesick it would be for Vermont, which is strange. But perhaps it makes sense—I grew up in a place where I saw the sea every day and, near the end of my life, I’m living in a place where the water has run out.

INTERVIEWER

Did Lucy come out of a feeling that you needed to put your arrival to America in its place somehow—to examine it, or to leave it behind?

KINCAID

Not so much to put anything in its place as to give an account of what had happened to me. Lucy is about the making of a person. You can see in it the sentimentality of Jane Eyre. A sense of, I’m all alone in the world, and I have integrity. You might want this, but I will do that. Lucy stops sending her salary home, and I did stop sending mine. I still have the clothes I bought at Bonwit Teller. I was the best-dressed nanny you ever saw.

INTERVIEWER

Were you refashioning yourself? 

KINCAID

I loved dressing up and going out. You might say that was the influence of my mother. By the time my youngest brother was born her life had collapsed on her, but she was a very elegant woman when I was young. I used to be ashamed to be seen with her because she was so sexy—men of all ages would stop her and talk to her. I remember she wore her hair in a French roll, and she wore what they called a hobble skirt.

After I moved to New York, I modeled for people like Steven Meisel. I clearly had one of those eating problems, but I didn’t know what they were. I didn’t know that there was anything about me that had a name, that could be diagnosed. I ended up smoking Lucky Strikes, just because I liked the way it looked, the gesture. For some reason, I decided to cut off my hair and bleach it blond. I dressed in old clothes, thrift-shop clothes.

I styled myself to look like no one else. And I also knew I didn’t want to write like anyone else. When I started writing Talk pieces at The New Yorker, I tried to get away from the anonymous “we” they used. They had very good writers, but they were these old, stout white men. I hated the we. I had such contempt for a certain kind of writing, which I would now call “white writing.” It was so dull and mannered.



from Hacker News https://ift.tt/nVkP8zH

Phenethylamines I Have Known and Loved

# SUBSTANCE CHEMICAL NAME
1 AEM alpha-Ethyl-3,4,5-trimethoxy-PEA
2 AL 4-Allyloxy-3,5-dimethoxy-PEA
3 ALEPH 4-Methylthio-2,5-dimethoxy-A
4 ALEPH-2 4-Ethylthio-2,5-dimethoxy-A
5 ALEPH-4 4-Isopropylthio-2,5-dimethoxy-A
6 ALEPH-6 4-Phenylthio-2,5-dimethoxy-A
7 ALEPH-7 4-Propylthio-2,5-dimethoxy-A
8 ARIADNE 2,5-Dimethoxy-alpha-ethyl-4-methyl-PEA
9 ASB 3,4-Diethoxy-5-methoxy-PEA
10 B 4-Butoxy-3,5-dimethoxy-PEA
11 BEATRICE 2,5-Dimethoxy-4,N-dimethyl-A
12 BIS-TOM 2,5-Bismethylthio-4-methyl-A
13 BOB 4-Bromo-2,5,beta-trimethoxy-PEA
14 BOD 2,5,beta-Trimethoxy-4-methyl-PEA
15 BOH beta-Methoxy-3,4-methylenedioxy-PEA
16 BOHD 2,5-Dimethoxy-beta-hydroxy-4-methyl-PEA
17 BOM 3,4,5,beta-Tetramethoxy-PEA
18 4-Br-3,5-DMA 4-Bromo-3,5-dimethoxy-A
19 2-Br-4,5-MDA 2-Bromo-4,5-methylenedioxy-A
20 2C-B 4-Bromo-2,5-dimethoxy-PEA
21 3C-BZ 4-Benzyloxy-3,5-dimethoxy-A
22 2C-C 4-Chloro-2,5-dimethoxy-PEA
23 2C-D 4-Methyl-2,5-dimethoxy-PEA
24 2C-E 4-Ethyl-2,5-dimethoxy-PEA
25 3C-E 4-Ethoxy-3,5-dimethoxy-A
26 2C-F 4-Fluoro-2,5-dimethoxy-PEA
27 2C-G 3,4-Dimethyl-2,5-dimethoxy-PEA
28 2C-G-3 3,4-Trimethylene-2,5-dimethoxy-PEA
29 2C-G-4 3,4-Tetramethylene-2,5-dimethoxy-PEA
30 2C-G-5 3,4-Norbornyl-2,5-dimethoxy-PEA
31 2C-G-N 1,4-Dimethoxynaphthyl-2-ethylamine
32 2C-H 2,5-Dimethoxy-PEA
33 2C-I 4-Iodo-2,5-dimethoxy-PEA
34 2C-N 4-Nitro-2,5-dimethoxy-PEA
35 2C-O-4 4-Isopropoxy-2,5-dimethoxy-PEA
36 2C-P 4-Propyl-2,5-dimethoxy-PEA
37 CPM 4-Cyclopropylmethoxy-3,5-dimethoxy-PEA
38 2C-SE 4-Methylseleno-2,5-dimethoxy-PEA
39 2C-T 4-Methylthio-2,5-dimethoxy-PEA
40 2C-T-2 4-Ethylthio-2,5-dimethoxy-PEA
41 2C-T-4 4-Isopropylthio-2,5-dimethoxy-PEA
42 psi-2C-T-4 4-Isopropylthio-2,6-dimethoxy-PEA
43 2C-T-7 4-Propylthio-2,5-dimethoxy-PEA
44 2C-T-8 4-Cyclopropylmethylthio-2,5-dimethoxy-PEA
45 2C-T-9 4-(t)-Butylthio-2,5-dimethoxy-PEA
46 2C-T-13 4-(2-Methoxyethylthio)-2,5-dimethoxy-PEA
47 2C-T-15 4-Cyclopropylthio-2,5-dimethoxy-PEA
48 2C-T-17 4-(s)-Butylthio-2,5-dimethoxy-PEA
49 2C-T-21 4-(2-Fluoroethylthio)-2,5-dimethoxy-PEA
50 4-D 4-Trideuteromethyl-3,5-dimethoxy-PEA
51 beta-D beta,beta-Dideutero-3,4,5-trimethoxy-PEA
52 DESOXY 4-Methyl-3,5-Dimethoxy-PEA
53 2,4-DMA 2,4-Dimethoxy-A
54 2,5-DMA 2,5-Dimethoxy-A
55 3,4-DMA 3,4-Dimethoxy-A
56 DMCPA 2-(2,5-Dimethoxy-4-methylphenyl)-cyclopropylamine
57 DME 3,4-Dimethoxy-beta-hydroxy-PEA
58 DMMDA 2,5-Dimethoxy-3,4-methylenedioxy-A
59 DMMDA-2 2,3-Dimethoxy-4,5-methylenedioxy-A
60 DMPEA 3,4-Dimethoxy-PEA
61 DOAM 4-Amyl-2,5-dimethoxy-A
62 DOB 4-Bromo-2,5-dimethoxy-A
63 DOBU 4-Butyl-2,5-dimethoxy-A
64 DOC 4-Chloro-2,5-dimethoxy-A
65 DOEF 4-(2-Fluoroethyl)-2,5-dimethoxy-A
66 DOET 4-Ethyl-2,5-dimethoxy-A
67 DOI 4-Iodo-2,5-dimethoxy-A
68 DOM (STP) 4-Methyl-2,5-dimethoxy-A
69 psi-DOM 4-Methyl-2,6-dimethoxy-A
70 DON 4-Nitro-2,5-dimethoxy-A
71 DOPR 4-Propyl-2,5-dimethoxy-A
72 E 4-Ethoxy-3,5-dimethoxy-PEA
73 EEE 2,4,5-Triethoxy-A
74 EEM 2,4-Diethoxy-5-methoxy-A
75 EME 2,5-Diethoxy-4-methoxy-A
76 EMM 2-Ethoxy-4,5-dimethoxy-A
77 ETHYL-J N,alpha-diethyl-3,4-methylenedioxy-PEA
78 ETHYL-K N-Ethyl-alpha-propyl-3,4-methylenedioxy-PEA
79 F-2 Benzofuran-2-methyl-5-methoxy-6-(2-aminopropane)
80 F-22 Benzofuran-2,2-dimethyl-5-methoxy-6-(2-aminopropane)
81 FLEA N-Hydroxy-N-methyl-3,4-methylenedioxy-A
82 G-3 3,4-Trimethylene-2,5-dimethoxy-A
83 G-4 3,4-Tetramethylene-2,5-dimethoxy-A
84 G-5 3,4-Norbornyl-2,5-dimethoxy-A
85 GANESHA 3,4-Dimethyl-2,5-dimethoxy-A
86 G-N 1,4-Dimethoxynaphthyl-2-isopropylamine
87 HOT-2 2,5-Dimethoxy-N-hydroxy-4-ethylthio-PEA
88 HOT-7 2,5-Dimethoxy-N-hydroxy-4-(n)-propylthio-PEA
89 HOT-17 2,5-Dimethoxy-N-hydroxy-4-(s)-butylthio-PEA
90 IDNNA 2,5-Dimethoxy-N,N-dimethyl-4-iodo-A
91 IM 2,3,4-Trimethoxy-PEA
92 IP 3,5-Dimethoxy-4-isopropoxy-PEA
93 IRIS 5-Ethoxy-2-methoxy-4-methyl-A
94 J alpha-Ethyl-3,4-methylenedioxy-PEA
95 LOPHOPHINE 3-Methoxy-4,5-methylenedioxy-PEA
96 M 3,4,5-Trimethoxy-PEA
97 4-MA 4-Methoxy-A
98 MADAM-6 2,N-Dimethyl-4,5-methylenedioxy-A
99 MAL 3,5-Dimethoxy-4-methallyloxy-PEA
100 MDA 3,4-Methylenedioxy-A
101 MDAL N-Allyl-3,4-methylenedioxy-A
102 MDBU N-Butyl-3,4-methylenedioxy-A
103 MDBZ N-Benzyl-3,4-methylenedioxy-A
104 MDCPM N-Cyclopropylmethyl-3,4-methylenedioxy-A
105 MDDM N,N-Dimethyl-3,4-methylenedioxy-A
106 MDE N-Ethyl-3,4-methylenedioxy-A
107 MDHOET N-(2-Hydroxyethyl)-3,4-methylenedioxy-A
108 MDIP N-Isopropyl-3,4-methylenedioxy-A
109 MDMA N-Methyl-3,4-methylenedioxy-A
110 MDMC N-Methyl-3,4-ethylenedioxy-A
111 MDMEO N-Methoxy-3,4-methylenedioxy-A
112 MDMEOET N-(2-Methoxyethyl)-3,4-methylenedioxy-A
113 MDMP alpha,alpha,N-Trimethyl-3,4-methylenedioxy-PEA
114 MDOH N-Hydroxy-3,4-methylenedioxy-A
115 MDPEA 3,4-Methylenedioxy-PEA
116 MDPH alpha,alpha-Dimethyl-3,4-methylenedioxy-PEA
117 MDPL N-Propargyl-3,4-methylenedioxy-A
118 MDPR N-Propyl-3,4-methylenedioxy-A
119 ME 3,4-Dimethoxy-5-ethoxy-PEA
120 MEDA 3-Methoxy-4,5-ethylenedioxy-A [Erowid corrected]
121 MEE 2-Methoxy-4,5-diethoxy-A
122 MEM 2,5-Dimethoxy-4-ethoxy-A
123 MEPEA 3-Methoxy-4-ethoxy-PEA
124 META-DOB 5-Bromo-2,4-dimethoxy-A
125 META-DOT 5-Methylthio-2,4-dimethoxy-A
126 METHYL-DMA N-Methyl-2,5-dimethoxy-A
127 METHYL-DOB 4-Bromo-2,5-dimethoxy-N-methyl-A
128 METHYL-J N-Methyl-alpha-ethyl-3,4-methylenedioxy-PEA
129 METHYL-K N-Methyl-alpha-propyl-3,4-methylenedioxy-PEA
130 METHYL-MA N-Methyl-4-methoxy-A
131 METHYL-MMDA-2 N-Methyl-2-methoxy-4,5-methylenedioxy-A
132 MMDA 3-Methoxy-4,5-methylenedioxy-A
133 MMDA-2 2-Methoxy-4,5-methylenedioxy-A
134 MMDA-3a 2-Methoxy-3,4-methylenedioxy-A
135 MMDA-3b 4-Methoxy-2,3-methylenedioxy-A
136 MME 2,4-Dimethoxy-5-ethoxy-A
137 MP 3,4-Dimethoxy-5-propoxy-PEA
138 MPM 2,5-Dimethoxy-4-propoxy-A
139 ORTHO-DOT 2-Methylthio-4,5-dimethoxy-A
140 P 3,5-Dimethoxy-4-propoxy-PEA
141 PE 3,5-Dimethoxy-4-phenethyloxy-PEA
142 PEA PEA
143 PROPYNYL 4-Propynyloxy-3,5-dimethoxy-PEA
144 SB 3,5-Diethoxy-4-methoxy-PEA
145 TA 2,3,4,5-Tetramethoxy-A
146 3-TASB 4-Ethoxy-3-ethylthio-5-methoxy-PEA
147 4-TASB 3-Ethoxy-4-ethylthio-5-methoxy-PEA
148 5-TASB 3,4-Diethoxy-5-methylthio-PEA
149 TB 4-Thiobutoxy-3,5-dimethoxy-PEA
150 3-TE 4-Ethoxy-5-methoxy-3-methylthio-PEA
151 4-TE 3,5-Dimethoxy-4-ethylthio-PEA
152 2-TIM 2-Methylthio-3,4-dimethoxy-PEA
153 3-TIM 3-Methylthio-2,4-dimethoxy-PEA
154 4-TIM 4-Methylthio-2,3-dimethoxy-PEA
155 3-TM 3-Methylthio-4,5-dimethoxy-PEA
156 4-TM 4-Methylthio-3,5-dimethoxy-PEA
157 TMA 3,4,5-Trimethoxy-A
158 TMA-2 2,4,5-Trimethoxy-A
159 TMA-3 2,3,4-Trimethoxy-A
160 TMA-4 2,3,5-Trimethoxy-A
161 TMA-5 2,3,6-Trimethoxy-A
162 TMA-6 2,4,6-Trimethoxy-A
163 3-TME 4,5-Dimethoxy-3-ethylthio-PEA
164 4-TME 3-Ethoxy-5-methoxy-4-methylthio-PEA
165 5-TME 3-Ethoxy-4-methoxy-5-methylthio-PEA
166 2T-MMDA-3a 2-Methylthio-3,4-methylenedioxy-A
167 4T-MMDA-2 4,5-Thiomethyleneoxy-2-methoxy-A
168 TMPEA 2,4,5-Trimethoxy-PEA
169 2-TOET 4-Ethyl-5-methoxy-2-methylthio-A
170 5-TOET 4-Ethyl-2-methoxy-5-methylthio-A
171 2-TOM 5-Methoxy-4-methyl-2-methylthio-A
172 5-TOM 2-Methoxy-4-methyl-5-methylthio-A
173 TOMSO 2-Methoxy-4-methyl-5-methylsulfinyl-A
174 TP 4-Propylthio-3,5-dimethoxy-PEA
175 TRIS 3,4,5-Triethoxy-PEA
176 3-TSB 3-Ethoxy-5-ethylthio-4-methoxy-PEA
177 4-TSB 3,5-Diethoxy-4-methylthio-PEA
178 3-T-TRIS 4,5-Diethoxy-3-ethylthio-PEA
179 4-T-TRIS 3,5-Diethoxy-4-ethylthio-PEA


from Hacker News https://ift.tt/gl14PMN

Math Poetry (2015)

(Note from many years later)

For whatever reason, when I first started 3blue1brown, I also found myself jotting down silly math poems on the side. Math is often most naturally expressed with symbols or pictures, and words are a more tricky medium, so something about the added constraint of making those words adhere to certain meters or rhyming structures felt like a fun puzzle, albeit an unabashedly cheesy one.

After I made the first couple of 3blue1brown videos, and put together a website with some vague sense that that's what I was supposed to do, I tossed up the poems to that site while I was at it. Even if they seem a bit strange now, I still find a certain charm in leaving them here as a remnant of that early-2015 mindset associated with trying something wacky and throwing it up online, which was the same mindset which led to the channel in the first place.

Moser's Circle Problem

Take two points on a circle,
and draw a line straight through.
The space that was encircled
is divided into two.

To these points add a third one,
which gives us two more chords.
The space through which these lines run
has been fissured into four.

Continue with a fourth point,
and three more lines drawn straight.
The new number of disjoint
regions sums, in all, to eight.

A fifth point and its four lines
support this pattern gleaned.
Counting sections, one divines
that there are now sixteen.

This pattern here of doubling
does seem a sturdy one.
But one more step is troubling
as the sixth gives thirty-one.

Primes

The primes,
through times,

mystified
those who pried.

One fact answers why
they’re simple yet sly:

Layers of abstraction yield
complex forms when pierced and peeled.

Addition lies under multiplication,
defining him as repeated summation.

Then he defines primes as the atoms of integers,
for when multiplied, they give numbers their signatures.

But when we breach the layer between these two distinct operations,
asking about how primes add and subtract, there are endless frustrations.

Even innocuous questions, “what are all their sums?”, or “how often do they differ by two?”,
stump everyone who has ever lived, with progress made only quite recently by just a few.

However, to recruit for and progress math we need to have such questions which can be phrased simply and remain unsolved.
What child does not hear such conjectures and dream, if only for a moment, that they will be the one to see them resolved?

For otherwise the once vigorous curiosity of a child towards math’s patterns, as they grow older, tends to grow tame,
just as the rhyme and rhythm of the primes seems to fade as numbers grow, though in both the underlying patterns remain the same.

All Numbers are Interesting

Is any number that we count banal,
and lacking in a feature one might note?
Suppose some are, and gather up them all.
The smallest one has quite the cause to gloat.
It is precisely how far one must count
before the numbers get uninteresting.
But this would be a curious amount,
which contradicts how we defined the thing!
So every number has something to show,
including ones whom we will never know.

Let’s see if this joke proof applies as well
to real numbers; do they all stand out?
Suppose some don’t, and find one who rebels
by being meaningful beyond a doubt.
Before it was the smallest who amused,
but real sets might lack a minimum.
Will there be a special one to choose
from subsets of the whole continuum?
It isn’t clear where such a choice comes from,
yet some treat “choice” like it’s an axiom.

Invention vs Discovery

A lurking question, old as Greece, does ask:
Is math invention or discovery?
The answer’s both, but here the harder task
is knowing how it switches. So you see,
most truths are found with constructs, which in turn
are built from truths like chickens and their eggs.
One needs to know, should this fact raise concern,
that constructs can be daft; just logic’s dregs.
The truths discovered tell how to define
the structures that make math and world align.

Pythagoras’s theorem about lengths,
is one such truth that he (and others) found.
Observed, discovered, and through mental strength,
it was then proved with pictures that astound.
Much later mathematicians, minds alit,
would formalize, as sets, both “length” and “space”.
These constructs were defined so that by writ
Pythagoras’s truth remains the case.
That is, this “theorem” now seems just defined.
As such, one might then lose their peace of mind.

So in this world where "space" is formalized,
do all those pretty proofs now lose their charm?
Of course not! We all ought to realize
how “length” could change its meaning without harm
to mathematical consistency
in all the theory giving facts of space.
However, then these “facts” would hardly be
related to the nature we embrace.
And hence, a truth has told how to define
a structure that makes math and world align.

2Ï€

Fixed poorly in notation with that two,
you shine so loud that you deserve a name.
Late though we are, to make a change it's true,
We can extol you ‘til you have pi's fame.
One might object, "Conventions matter not!
Great formulae cast truths transcending names."
I've noticed, though, how language molds my thoughts;
the natural terms make heart and head the same.
So lose the two inside your autograph,
then guide our thoughts without your "better" half.

Wonders math imparts become so neat
when phrased with you, and pi remains off-screen.
Sine and exp both cycle to your beat.
Jive with Fourier, and forms are clean.
“Wait! Area of circles”, pi would say,
“sticks oddly to one half when tau’s preferred.”
More to you then! For write it in this way,
then links to triangles can be inferred.
Nix pi, then all within geometry
shines clean and clear, as if by poetry.

1-1+1-1+...

When one takes one from one
plus one from one plus one
and on and on but ends
anon then starts again,
then some sums sum to one,
to zero other ones.
One wonders who'd have won
had stopping not been done;
had he summed every bit
until the infinite.

Lest you should think that such
less well-known sums are much
ado about nonsense
I do give these two cents:
The universe has got
an answer which is not
what most would first surmise,
it is a compromise,
and though it seems a laugh
the universe gives “half”.

Defining Math

In math, we strive and stickle with a pride,
to wash away all ambiguity.
And yet, definers cannot all decide
what “math” itself means. What an irony!
Perhaps a science, solving mysteries,
but happy to pursue what seems absurd.
Perhaps a brother of philosophies,
but with a rigid rigour in each word.
Computer science might be suitable,
but most of math is not computable.

Perhaps it is a language of its own,
which, as to english, has the most false friends.
Perhaps religion, where the truth alone
is worshiped, giving means as well as ends.
Perhaps a sister of creative arts,
but one arousing reason, not a sense.
Indeed it shares a number of its parts
with almost every study in some sense.
But it’s an orphan, floating in abstract,
and it alone encloses what’s exact

Euler's Formula

Famously
start with e,
raise to π
with an i.
we've been taught
by a lot
that you've got
minus one.

Can we glean
what it means?
For such words
are absurd.
How to treat
the repeat
of a feat
Ï€i times?

This is bound
to confound
'til your mind
redefines
these amounts
one can't count
which surmount
our friend e.

Numbers act
as abstract
functions which
slide the rich
2d space
in its place
with a grace
when they sum.

Multiplied,
they don’t slide,
acting a
second way.
They rotate,
and dilate,
but keep straight
that same plane.

Now what we
write as e
to the x
won’t perplex
when you know
it’s for show
that “x" goes
up and right.

It does not,
as you thought,
repeat e
product e.
It functions
with gumption
on functions
of the plane.

It turns slides
side to side
into growths
and shrinks both.
Up and downs
come around
as turns round,
which is key!

This is why
Ï€ times i,
which slides north
is brought forth
and returned,
we have learned,
as a turn
halfway round.

Minus one,
matched by none,
turns this way,
hence we’re done.

Ode To Infinity

Philosophers have claimed you are not "real",
but you are no less true than 3, in fact.
For what is "three"? Some thinking will reveal,
a term that points to triplets in abstract.
And so are you, a pointer in this light,
but aimed instead at families without size.
From here it took great Cantor's mental sight
to see these sets disjoin before his eyes.
For that's the tricky thing about you, friend,
what you are is something without end!

Aside from all the masks you wear as sets,
you permeate all math like pi and e.
Without you series, sequences and nets,
would vanish, as would all geometry.
To cook the real numbers all it takes,
is you and all the fractions to react.
When added as a simple point you make
both arguments and spaces more compact.
When patterns, lists and odes drag on and on,
just say "infinity", and woes are gone.

Groups

"What is a group?"
“A symmetry of things.”

“Of things from where?”
“Oh, those from any place.”

“But where in math?”
“In math? A vector space.”

“And what is that?”
“It’s what a field brings.”

“And what are fields?”
“They’re symmetries of groups.”

“A group’s own group?”
“Why yes, when both are one.”

“So now we’re back?”
“Indeed, where we’d begun.”

“So where to start?”
“Where you start any loop.”

“And where is that?”
“From anywhere within.”

“And if you're out?”
“Then why would you begin?”

Abstraction

The crux of thought,
computer science,
language, and how math arose,
abstraction brought
a key alliance
syncing facts who seem opposed.

The first is how
our minds are tuned to
focus on at most one thought.
Though well endowed,
they’re not immune to
puzzlement when overwrought.

But secondly
the facts of nature
interweave dependently,
so how do we
conceive this nature
in its multiplicity?

We first extract
a common trait
of things too complex to be known,
then to “abstract”,
reformulate
this trait as something in its own.

As such it’s pulled
from all distractions,
single, hence conceivable.
But when minds hold
this dense abstraction,
They enclose a system whole.

For “things” and their “traits”,
though separately named,
do more than relate:
They’re one and the same.



from Hacker News https://ift.tt/0AMIK65

Wednesday, March 30, 2022

Redis Stack


Extend Redis with modern data models and processing engines.

Redis Stack is an extension of Redis that adds modern data models and processing engines to provide a complete developer experience.

In addition to all of the features of OSS Redis, Redis Stack supports:

  • Queryable JSON documents
  • Full-text search
  • Time series data (ingestion & querying)
  • Graph data models with the Cypher query language
  • Probabilistic data structures

Getting started

To get started with Redis Stack, see the Getting Started guide.

If you want to learn more about the vision for Redis Stack, read on.

Why Redis Stack?

Redis Stack was created to let developers build real-time applications with a back-end data platform that can reliably process requests in under a millisecond. Redis Stack does this by extending Redis with modern data models and data processing tools (Document, Graph, Search, and Time Series).

Redis Stack unifies and simplifies the developer experience of the leading Redis modules and the capabilities they provide. Redis Stack bundles five Redis modules: RedisJSON, RediSearch, RedisGraph, RedisTimeSeries, and RedisBloom.

Clients

Several Redis client libraries support Redis Stack. These include redis-py, node_redis, and Jedis. In addition, four higher-level object mapping libraries also support Redis Stack: Redis OM .NET, Redis OM Node, Redis OM Python, Redis OM Spring.
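For a concrete feel, here is a minimal sketch using redis-py. It assumes redis-py 4.x and a Redis Stack server listening on localhost:6379; the key names and values are purely illustrative.

    import redis

    # Assumes a local Redis Stack instance on the default port.
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Queryable JSON documents (RedisJSON)
    r.json().set("user:1", "$", {"name": "Ada", "visits": 3})
    r.json().numincrby("user:1", "$.visits", 1)
    print(r.json().get("user:1", "$.name"))   # JSONPath queries return a list, e.g. ["Ada"]

    # Time series ingestion and querying (RedisTimeSeries)
    r.ts().add("temp:office", "*", 21.5)      # "*" asks the server to timestamp the sample
    print(r.ts().get("temp:office"))          # latest (timestamp, value) sample

The search, graph, and probabilistic data structure commands are exposed through the same client in a similar command-group style.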

RedisInsight

Redis Stack also includes RedisInsight, a visualization tool for understanding and optimizing Redis data.

Redis Stack license

Redis Stack is made up of several components, licensed as follows:





from Hacker News https://ift.tt/gDow9nh

The Roads Not Taken

2022 Mar 29

The Ethereum protocol development community has made a lot of decisions in the early stages of Ethereum that have had a large impact on the project's trajectory. In some cases, Ethereum developers made conscious decisions to improve on places where we thought that Bitcoin erred. In other places, we were creating something new entirely, and we simply had to come up with something to fill in a blank - but there were many somethings to choose from. And in still other places, we had a tradeoff between something more complex and something simpler. Sometimes, we chose the simpler thing, but sometimes, we chose the more complex thing too.

This post will look at some of these forks-in-the-road as I remember them. Many of these features were seriously discussed within core development circles; others were barely considered at all but perhaps really should have been. But even still, it's worth looking at what a different Ethereum might have looked like, and what we can learn from this going forward.

Should we have gone with a much simpler version of proof of stake?

The Gasper proof of stake that Ethereum is very soon going to merge to is a complex system, but a very powerful system. Some of its properties include:

  • Very strong single-block confirmations - as soon as a transaction gets included in a block, usually within a few seconds that block gets solidified to the point that it cannot be reverted unless either a large fraction of nodes are dishonest or there is extreme network latency.
  • Economic finality - once a block gets finalized, it cannot be reverted without the attacker having to lose millions of ETH to being slashed.
  • Very predictable rewards - validators reliably earn rewards every epoch (6.4 minutes; see the short calculation after this list), reducing incentives to pool.
  • Support for very high validator count - unlike most other chains with the above features, the Ethereum beacon chain supports hundreds of thousands of validators (eg. Tendermint offers even faster finality than Ethereum, but it only supports a few hundred validators)
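For the curious, that 6.4-minute epoch is just slot arithmetic, assuming the beacon chain's mainnet parameters of 32 slots per epoch and 12-second slots:

    32 \times 12\,\text{s} = 384\,\text{s} = 6.4\ \text{minutes}.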

But making a system that has these properties is hard. It took years of research, years of failed experiments, and generally took a huge amount of effort. And the final output was pretty complex.



If our researchers did not have to worry so much about consensus and had more brain cycles to spare, then maybe, just maybe, rollups could have been invented in 2016. This brings us to a question: should we really have had such high standards for our proof of stake, when even a much simpler and weaker version of proof of stake would have been a large improvement over the proof of work status quo?

Many have the misconception that proof of stake is inherently complex, but in reality there are plenty of proof of stake algorithms that are almost as simple as Nakamoto PoW. NXT proof of stake existed since 2013 and would have been a natural candidate; it had issues but those issues could easily have been patched, and we could have had a reasonably well-working proof of stake from 2017, or even from the beginning. The reason why Gasper is more complex than these algorithms is simply that it tries to accomplish much more than they do. But if we had been more modest at the beginning, we could have focused on achieving a more limited set of objectives first.

Proof of stake from the beginning would in my opinion have been a mistake; PoW was helpful in expanding the initial issuance distribution and making Ethereum accessible, as well as encouraging a hobbyist community. But switching to a simpler proof of stake in 2017, or even 2020, could have led to much less environmental damage (and anti-crypto mentality as a result of environmental damage) and a lot more research talent being free to think about scaling. Would we have had to spend a lot of resources on making a better proof of stake eventually? Yes. But it's increasingly looking like we'll end up doing that anyway.

The de-complexification of sharding

Ethereum sharding has been on a very consistent trajectory of becoming less and less complex since the ideas started being worked on in 2014. First, we had complex sharding with built-in execution and cross-shard transactions. Then, we simplified the protocol by moving more responsibilities to the user (eg. in a cross-shard transaction, the user would have to separately pay for gas on both shards). Then, we switched to the rollup-centric roadmap where, from the protocol's point of view, shards are just blobs of data. Finally, with danksharding, the shard fee markets are merged into one, and the final design just looks like a non-sharded chain but where some data availability sampling magic happens behind the scenes to make sharded verification happen.

[Diagrams: sharding as designed in 2015 vs. sharding in 2022]


But what if we had gone the opposite path? Well, there actually are Ethereum researchers who heavily explored a much more sophisticated sharding system: shards would be chains, there would be fork choice rules where child chains depend on parent chains, cross-shard messages would get routed by the protocol, validators would be rotated between shards, and even applications would get automatically load-balanced between shards!

The problem with that approach: those forms of sharding are largely just ideas and mathematical models, whereas Danksharding is a complete and almost-ready-for-implementation spec. Hence, given Ethereum's circumstances and constraints, the simplification and de-ambitionization of sharding was, in my opinion, absolutely the right move. That said, the more ambitious research also has a very important role to play: it identifies promising research directions, even the very complex ideas often have "reasonably simple" versions of those ideas that still provide a lot of benefits, and there's a good chance that it will significantly influence Ethereum's protocol development (or even layer-2 protocols) over the years to come.

More or less features in the EVM?

Realistically, the specification of the EVM was basically, with the exception of security auditing, viable for launch by mid-2014. However, over the next few months we continued actively exploring new features that we felt might be really important for a decentralized application blockchain. Some did not go in, others did.

  • We considered adding a POST opcode, but decided against it. The POST opcode would have made an asynchronous call, that would get executed after the rest of the transaction finishes.
  • We considered adding an ALARM opcode, but decided against it. ALARM would have functioned like POST, except executing the asynchronous call in some future block, allowing contracts to schedule operations.
  • We added logs, which allow contracts to output records that do not touch the state, but could be interpreted by dapp interfaces and wallets. Notably, we also considered making ETH transfers emit a log, but decided against it - the rationale being that "people will soon switch to smart contract wallets anyway".
  • We considered expanding SSTORE to support byte arrays, but decided against it, because of concerns about complexity and safety.
  • We added precompiles, contracts which execute specialized cryptographic operations with native implementations at a much cheaper gas cost than can be done in the EVM.
  • In the months right after launch, state rent was considered again and again, but was never included. It was just too complicated. Today, there are much better state expiry schemes being actively explored, though stateless verification and proposer/builder separation mean that it is now a much lower priority.

Looking at this today, most of the decisions to not add more features have proven to be very good decisions. There was no obvious reason to add a POST opcode. An ALARM opcode is actually very difficult to implement safely: what happens if everyone in blocks 1...99999 sets an ALARM to execute a lot of code at block 100000? Will that block take hours to process? Will some scheduled operations get pushed back to later blocks? But if that happens, then what guarantees is ALARM even preserving? SSTORE for byte arrays is difficult to do safely, and would have greatly expanded worst-case witness sizes.

The state rent issue is more challenging: had we actually implemented some kind of state rent from day 1, we would not have had a smart contract ecosystem evolve around a normalized assumption of persistent state. Ethereum would have been harder to build for, but it could have been more scalable and sustainable. At the same time, the state expiry schemes we had back then really were much worse than what we have now. Sometimes, good ideas just take years to arrive at and there is no better way around that.

Alternative paths for LOG

LOG could have been done differently in two different ways:

  1. We could have made ETH transfers auto-issue a LOG. This would have saved a lot of effort and software bug issues for exchanges and many other users, and would have accelerated everyone's reliance on logs, which would ironically have helped smart contract wallet adoption.
  2. We could have not bothered with a LOG opcode at all, and instead made it an ERC: there would be a standard contract that has a function submitLog and uses the technique from the Ethereum deposit contract to compute a Merkle root of all logs in that block. Either EIP-2929 or block-scoped storage (equivalent to TSTORE but cleared after the block) would have made this cheap.

We strongly considered (1), but rejected it. The main reason was simplicity: it's easier for logs to just come from the LOG opcode. We also (very wrongly!) expected most users to quickly migrate to smart contract wallets, which could have logged transfers explicitly using the opcode.

Option (2) was not considered, but in retrospect it was always an option. The main downside of (2) would have been the lack of a Bloom filter mechanism for quickly scanning for logs. But as it turns out, the Bloom filter mechanism is too slow to be user-friendly for dapps anyway, and so these days more and more people use TheGraph for querying.
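To make option (2) concrete, here is a rough, hypothetical sketch in Python of what a submitLog-style accumulator would do on-chain: collect one leaf per log during the block, then commit to all of them with a single Merkle root. The simple pairwise tree below stands in for the deposit contract's more gas-efficient incremental construction, and every name in it is illustrative rather than part of any real standard.

    import hashlib

    def _h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    class BlockLogAccumulator:
        """Hypothetical sketch of "LOG as an ERC": contracts call
        submit_log() during a block, and a single Merkle root per
        block commits to every log."""

        def __init__(self) -> None:
            self.leaves: list[bytes] = []

        def submit_log(self, emitter: bytes, payload: bytes) -> None:
            # One leaf per log: hash of the emitting address plus the payload.
            self.leaves.append(_h(emitter + payload))

        def block_root(self) -> bytes:
            # Fold the leaves pairwise into a single root, duplicating the
            # last node whenever a level has odd length.
            if not self.leaves:
                return _h(b"")
            level = list(self.leaves)
            while len(level) > 1:
                if len(level) % 2 == 1:
                    level.append(level[-1])
                level = [_h(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

    acc = BlockLogAccumulator()
    acc.submit_log(b"\x01" * 20, b"Transfer(alice, bob, 10)")
    acc.submit_log(b"\x02" * 20, b"Approval(carol, dave, 5)")
    print(acc.block_root().hex())

A client could then verify any individual log against that per-block root with an ordinary Merkle proof.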

On the whole, it seems very possible that either one of these approaches would have been superior to the status quo. Keeping LOG outside the protocol would have kept things simpler, but if it was inside the protocol auto-logging all ETH transfers would have made it more useful.

Today, I would probably favor the eventual abolition of the LOG opcode from the EVM.

What if the EVM was something totally different?

There were two natural very different paths that the EVM could have taken:

  1. Make the EVM be a higher-level language, with built-in constructs for variables, if-statements, loops, etc.
  2. Make the EVM be a copy of some existing VM (LLVM, WASM, etc)

The first path was never really considered. The attraction of this path is that it could have made compilers simpler, and allowed more developers to code in EVM directly. It could have also made ZK-EVM constructions simpler. The weakness of the path is that it would have made EVM code structurally more complicated: instead of being a simple list of opcodes in a row, it would have been a more complicated data structure that would have had to be stored somehow. That said, there was a missed opportunity for a best-of-both-worlds: some EVM changes could have given us a lot of those benefits while keeping the basic EVM structure roughly as is: ban dynamic jumps and add some opcodes designed to support subroutines (see also: EIP-2315), allow memory access only on 32-byte word boundaries, etc.

The second path was suggested many times, and rejected many times. The usual argument for it is that it would allow programs to compile from existing languages (C, Rust, etc) into the EVM. The argument against has always been that given Ethereum's unique constraints it would not actually provide any benefits:

  • Existing compilers from high-level languages tend to not care about total code size, whereas blockchain code must optimize heavily to cut down every byte of code size
  • We need multiple implementations of the VM with a hard requirement that two implementations never process the same code differently. Security-auditing and verifying this on code that we did not write would be much harder.
  • If the VM specification changes, Ethereum would have to either always update along with it or fall more and more out-of-sync.

Hence, there probably was never a viable path for the EVM that's radically different from what we have today, though there are lots of smaller details (jumps, 64 vs 256 bit, etc) that could have led to much better outcomes if they were done differently.

Should the ETH supply have been distributed differently?

The current ETH supply is approximately represented by this chart from Etherscan:

About half of the ETH that exists today was sold in an open public ether sale, where anyone could send BTC to a standardized bitcoin address, and the initial ETH supply distribution was computed by an open-source script that scans the Bitcoin blockchain for transactions going to that address. Most of the remainder was mined. The slice at the bottom, the 12M ETH marked "other", was the "premine" - a piece distributed between the Ethereum Foundation and ~100 early contributors to the Ethereum protocol.

There are two main criticisms of this process:

  • The premine, as well as the fact that the Ethereum Foundation received the sale funds, is not credibly neutral. A few recipient addresses were hand-picked through a closed process, and the Ethereum Foundation had to be trusted to not take out loans to recycle funds received during the sale back into the sale to give itself more ETH (we did not, and no one seriously claims that we have, but even the requirement to be trusted at all offends some).
  • The premine over-rewarded very early contributors, and left too little for later contributors. 75% of the premine went to rewarding contributors for their work before launch, and post-launch the Ethereum Foundation only had 3 million ETH left. Within 6 months, the need to sell to financially survive decreased that to around 1 million ETH.

In a way, the problems were related: the desire to minimize perceptions of centralization contributed to a smaller premine, and a smaller premine was exhausted more quickly.

This is not the only way that things could have been done. Zcash has a different approach: a constant 20% of the block reward goes to a set of recipients hard-coded in the protocol, and the set of recipients gets re-negotiated every 4 years (so far this has happened once). This would have been much more sustainable, but it would have been much more heavily criticized as centralized (the Zcash community seems to be more openly okay with more technocratic leadership than the Ethereum community).

One possible alternative path would be something similar to the "DAO from day 1" route popular among some defi projects today. Here is a possible strawman proposal:

  • We agree that for 2 years, a block reward of 2 ETH per block goes into a dev fund.
  • Anyone who purchases ETH in the ether sale could specify a vote for their preferred distribution of the dev fund (eg. "1 ETH per block to the Ethereum Foundation, 0.4 ETH to the Consensys research team, 0.2 ETH to Vlad Zamfir...")
  • Recipients that got voted for get a share from the dev fund equal to the median of everyone's votes, scaled so that the total equals 2 ETH per block (median is to prevent self-dealing: if you vote for yourself you get nothing unless you get at least half of other purchasers to mention you)

The sale could be run by a legal entity that promises to distribute the bitcoin received during the sale along the same ratios as the ETH dev fund (or burned, if we really wanted to make bitcoiners happy). This probably would have led to the Ethereum Foundation getting a lot of funding, non-EF groups also getting a lot of funding (leading to more ecosystem decentralization), all without breaking credible neutrality one single bit. The main downside is of course that coin voting really sucks, but pragmatically we could have realized that 2014 was still an early and idealistic time and the most serious downsides of coin voting would only start coming into play long after the sale ends.
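As a rough illustration of the median-and-rescale rule in the strawman above, here is a hypothetical sketch in Python. The recipient names and vote amounts are made up, and votes are unweighted for simplicity; a real version would presumably weight them by purchase size and run on-chain.

    from statistics import median

    DEV_FUND_PER_BLOCK = 2.0  # ETH per block earmarked for the dev fund

    def dev_fund_shares(votes: list[dict[str, float]]) -> dict[str, float]:
        """Each sale participant submits a mapping of recipient ->
        desired ETH per block. Recipients a participant does not
        mention count as a vote of 0, which is what blunts self-dealing."""
        recipients = {name for vote in votes for name in vote}
        medians = {name: median(vote.get(name, 0.0) for vote in votes)
                   for name in recipients}
        total = sum(medians.values())
        if total == 0:
            return {name: 0.0 for name in recipients}
        # Rescale the medians so the shares sum to 2 ETH per block.
        return {name: DEV_FUND_PER_BLOCK * amount / total
                for name, amount in medians.items()}

    votes = [
        {"Ethereum Foundation": 1.0, "Consensys research": 0.4},
        {"Ethereum Foundation": 1.2, "Vlad Zamfir": 0.2},
        {"Ethereum Foundation": 0.8, "Consensys research": 0.5},
    ]
    print(dev_fund_shares(votes))

In this toy run a recipient mentioned by only one of the three purchasers gets a median of zero, which is exactly the anti-self-dealing property described above.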

Would this have been a better idea and set a better precedent? Maybe! Though realistically even if the dev fund had been fully credibly neutral, the people who yell about Ethereum's premine today may well have just started yelling twice as hard about the DAO fork instead.

What can we learn from all this?

In general, it sometimes feels to me like Ethereum's biggest challenges come from balancing between two visions - a pure and simple blockchain that values safety and simplicity, and a highly performant and functional platform for building advanced applications. Many of the examples above are just aspects of this: do we have fewer features and be more Bitcoin-like, or more features and be more developer-friendly? Do we worry a lot about making development funding credibly neutral and be more Bitcoin-like, or do we just worry first and foremost about making sure devs are rewarded enough to make Ethereum great?

My personal dream is to try to achieve both visions at the same time - a base layer where the specification becomes smaller each year than the year before it, and a powerful developer-friendly advanced application ecosystem centered around layer-2 protocols. That said, getting to such an ideal world takes a long time, and a more explicit realization that it would take time and we need to think about the roadmap step-by-step would have probably helped us a lot.

Today, there are a lot of things we cannot change, but there are many things that we still can, and there is still a path solidly open to improving both functionality and simplicity. Sometimes the path is a winding one: we need to add some more complexity first to enable sharding, which in turn enables lots of layer-2 scalability on top. That said, reducing complexity is possible, and Ethereum's history has already demonstrated this:

  • EIP-150 made the call stack depth limit no longer relevant, reducing security worries for contract developers.
  • EIP-161 made the concept of an "empty account" as something separate from an account whose fields are zero no longer exist.
  • EIP-3529 removed part of the refund mechanism and made gas tokens no longer viable.

Ideas in the pipeline, like Verkle trees, reduce complexity even further. But the question of how to balance the two visions better in the future is one that we should start more actively thinking about.



from Hacker News https://ift.tt/MGNQHBs