Analysis Intel is ending its Optane line of persistent memory, and the move is more disastrous for the industry than it appears on the surface.
The influence of ideas from the late 1960s and early 1970s is now so pervasive that almost nobody can imagine anything else, and the best ideas from the following generation are mostly forgotten.
Optane was a radical, transformative technology, but because of this legacy view, this technical debt, few in the industry realized just how radical it was. And so it bombed.
To get to the heart of this, let's step back for a long moment and ask, what is the primary function of a computer file?
The first computers didn't have file systems. The giant machines of the 1940s and 1950s, built from tens of thousands of thermionic valves, only had a few words of memory. At first, programs were entered by physically wiring them into the computer by hand: only the data was in memory. The program ran, and printed out some results.
As capacities grew, we arrived at the von Neumann architecture, in which the computer program is stored alongside the data in the same memory. In some early machines, that "memory" was magnetic storage: a spinning drum.
To get a program into that memory, it was read off paper: punched cards, or paper tape. When computer memories got big enough to store several programs at once, operating systems appeared: programs that managed other programs.
Still no file systems, though. There was RAM and there was I/O: printers, terminals, card readers and so on, but all the storage directly accessible to the computer was memory. In the 1960s, memory often meant magnetic core storage, which had one great advantage that's sometimes forgotten now: When you turned the computer off, whatever was in core store stayed there. Turn the computer back on, and its last program was still there.
Around this time, the first hard disk drives started to appear: expensive, relatively slow, but huge compared to working memory. The early operating systems were given another job: the problem of managing that vast secondary storage. Indexing its contents, finding those sections that were wanted and loading them into working memory.
Two levels of storage
Once operating systems started managing disk drives, a distinction appeared: primary and secondary storage. Both directly accessible to the computer, not loaded and unloaded by a human operator like reels of paper tape or decks of punched cards. Primary storage appears right in the processor's memory map, and every individual word is directly readable or writable.
Secondary storage is a bigger, much slower pool that the processor can't see directly. It can only be reached by requesting whole blocks from, or sending whole blocks to, another device, a disk controller, which fetches the contents of the specified blocks from that big pool of storage, or places them into it.
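To make that distinction concrete, here is a minimal C sketch, not from the original article: a byte of primary storage is one pointer dereference away, while secondary storage can only be reached by pulling whole sectors through a controller. The device path is just a placeholder, and reading a real block device normally needs root.

    /* Illustrative only: byte-addressable primary storage versus
       block-oriented secondary storage. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Primary storage: every byte is in the address space, so one
           ordinary load or store reaches it. */
        unsigned char *ram = malloc(4096);
        if (ram == NULL) return 1;
        ram[42] = 0xA5;
        printf("byte 42 in RAM: 0x%02X\n", ram[42]);

        /* Secondary storage: the CPU never sees individual bytes; it asks
           the controller for whole sectors. "/dev/sda" is a placeholder;
           any block device or disk image would do. */
        int fd = open("/dev/sda", O_RDONLY);
        if (fd >= 0) {
            unsigned char sector[512];
            if (pread(fd, sector, sizeof sector, 0) == (ssize_t)sizeof sector)
                printf("byte 42 of sector 0: 0x%02X\n", sector[42]);
            close(fd);
        }

        free(ram);
        return 0;
    }

Even to inspect a single byte, the second path has to move a whole sector through the controller; on a 1960s mainframe, that request went out to a separate cabinet.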
This split continued down into the eight-bit microcomputers of the 1970s and 1980s. The author fondly remembers attaching a ZX Microdrive to his 48K ZX Spectrum. Suddenly, the Spectrum had secondary storage. The Spectrum's Z80 CPU had a 64kB memory map, of which a quarter was ROM. Each Microdrive cartridge, even though it held just 100kB or so, could store about twice the machine's entire usable memory. So there had to be a level of indirection: it was impossible to load the whole cartridge's contents into memory.
It wouldn't fit. So cartridges had an index, and then named blocks containing BASIC code, or machine code, or screen images, or data files.
Ever since the microcomputer era, we have kept calling primary storage "RAM" and secondary storage "disks" or "drives", even though in many modern end-user computers it's all just different types of electronics, with no moving parts or separate media.
You start the computer by loading an OS from "disk" into RAM. Then, when you want to use a program, the OS loads it from "disk" into RAM, and then that program probably loads some data from disk into RAM. Even if it's a Chromebook and it doesn't have any other local apps, its single app loads data from another computer over the internet, which loads it from disk into RAM and then sends it to the laptop.
Since UNIX was first written in 1969, this has become a mantra: "Everything is a file." Unix-like OSes use the file system for all kinds of things that aren't files: access to the machine is governed by metadata on files, I/O devices are accessed as if they were files, you can play sounds by "copying" them to a sound device, and so on. Since UNIX V8 in 1984, there's even a fake file system, called /proc, that displays information about the memory and processes of the running system by generating pretend files that users and programs can read, and in some cases write.
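To see how far the metaphor stretches, here's a tiny C example (my own illustration, not anything from the Unix sources): /proc/self/status isn't stored anywhere, yet ordinary file I/O happily reads it, with the kernel conjuring the contents on demand.

    /* Read a "pretend" file from /proc with ordinary file I/O. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/status", "r");   /* generated on the fly */
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        char line[256];
        while (fgets(line, sizeof line, f) != NULL)
            fputs(line, stdout);        /* process name, PID, memory use... */
        fclose(f);
        return 0;
    }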
Files are a powerful metaphor, one that has proved versatile to a degree unimaginable in 1969, when Unix was written on a minicomputer with a maximum of 64k words of memory and no sound, graphics, or networking. Files are ubiquitous now.
But files, and file systems, were only a crutch.
The concept of the "computer file" was invented because memory was too expensive, and bulk storage too big and too slow, to share one address space. The only way to attach millions of words of storage to a 1960s mainframe was a disk drive the size of a filing cabinet: far too much storage to fit into the computer's memory map.
So instead, mainframe companies designed disk controllers, and built a form of database into the OS. Imagine, for instance, a payroll program, maybe only a few thousand words in size, that could handle a file covering tens of thousands of employees by working in tiny chunks: read a row from the personnel file, and a row from the salaries file, compute a result, and write a row to the paycheck file, then repeat. The OS checks the indexes and converts this into instructions to the disk controller: "here, fetch block 47 from track 52, head 12, sector 34, and block 57 from track 4, head 7, sector 65… now, write 74.32 into this block…"
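A rough C sketch of that record-at-a-time style, with invented file names and record layouts: the program's working set is a few dozen bytes, however many employees the files hold, because the OS and the disk controller shuffle the blocks underneath.

    /* Hypothetical chunk-at-a-time payroll pass; the file names and
       record layouts are made up for illustration. */
    #include <stdio.h>

    struct person   { long id; char name[32]; };
    struct salary   { long id; double annual; };
    struct paycheck { long id; double amount; };

    int main(void)
    {
        FILE *people = fopen("personnel.dat", "rb");
        FILE *pay    = fopen("salaries.dat", "rb");
        FILE *out    = fopen("paychecks.dat", "wb");
        if (people == NULL || pay == NULL || out == NULL) return 1;

        struct person p;
        struct salary s;
        /* Read one record from each input, write one computed record out,
           repeat: only a handful of bytes in memory at any moment. */
        while (fread(&p, sizeof p, 1, people) == 1 &&
               fread(&s, sizeof s, 1, pay) == 1) {
            struct paycheck c = { p.id, s.annual / 12.0 };
            fwrite(&c, sizeof c, 1, out);
        }

        fclose(people); fclose(pay); fclose(out);
        return 0;
    }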
SSDs appeared in the 1990s, and by the first decade of this century they were getting affordable. SSDs replace magnetic storage with electronic storage, but it's still secondary storage. SSDs pretend to be disk drives: the computer talks to a disk controller, and sends and receives sectors, and the drive converts them, shuffling around blocks of storage which can only be erased in chunks, typically a megabyte or more, to emulate hard-disk-style functionality that writes 512-byte sectors.
The trouble is, flash memory has to be accessed this way. It's too slow to be mapped directly into the computer's memory, and it's impossible to rewrite flash byte by byte. To modify a byte in a block of flash, the rest of the contents of that block must be copied elsewhere, and then the whole block wiped. This is not how computers' memory controllers work.
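Here's a toy simulation of that cost, written for illustration rather than taken from any real flash translation layer: to change one byte, the firmware has to copy out everything else in the erase block, wipe the block, and program the lot back.

    /* Toy model of a flash erase block. Changing one byte means copying
       the block aside, erasing it, and programming it all back. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE (128 * 1024)   /* a modest erase block, for illustration */

    static unsigned char flash_block[BLOCK_SIZE];

    static void flash_erase(unsigned char *block)
    {
        memset(block, 0xFF, BLOCK_SIZE);   /* erasing sets every bit to 1 */
    }

    static void flash_program(unsigned char *block, const unsigned char *data)
    {
        memcpy(block, data, BLOCK_SIZE);   /* programming can only clear bits */
    }

    /* "Rewrite" a single byte in place. */
    static void flash_modify_byte(size_t offset, unsigned char value)
    {
        static unsigned char scratch[BLOCK_SIZE];
        memcpy(scratch, flash_block, BLOCK_SIZE);   /* save the other 131,071 bytes */
        scratch[offset] = value;
        flash_erase(flash_block);                   /* wipe the whole block */
        flash_program(flash_block, scratch);        /* write everything back */
    }

    int main(void)
    {
        flash_erase(flash_block);
        flash_modify_byte(42, 0xA5);   /* one byte changed, whole block cycled */
        printf("byte 42 = 0x%02X\n", flash_block[42]);
        return 0;
    }

A DRAM controller, by contrast, just writes the byte.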
The future was here… but it's gone
Optane made it possible to eliminate that. Like core store, it was working memory: primary storage. Optane kit was as big and as cheap as disk drives. It shipped in the hundreds-of-gigabytes range, the same sort of size as a modest SSD, but it could be fitted directly into a motherboard's DIMM slots. Every byte appeared right there in the processor's memory map, and every byte could be rewritten directly. No shuffling blocks around to erase them, like flash. And it supported millions of write cycles, rather than tens of thousands.
Many hundreds of gigs, even terabytes, of dynamic non-volatile storage, thousands of times faster and thousands of times more robust than flash memory. Not secondary storage on the other side of a disk controller, but right there in the memory map.
Not infinitely rewritable, no. So your computer needs some RAM as well, for holding variables and fast-changing data. But instead of "loading" programs from "disk" into "RAM" every time you want to use them, a program loads once, and then it's there in memory forever, no matter if there's a power cut, no matter if you turn your computer off for a week's holiday. Turn it back on, and all your apps are still right there in memory.
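For the curious, this is roughly what the programming model looked like, sketched here with PMDK's libpmem, a real library for persistent memory, though the path and the counter are invented: a variable lives in persistent memory, is updated with an ordinary store, and is made durable with a flush rather than saved out to a file.

    /* Minimal sketch of byte-addressable persistence using libpmem (PMDK).
       The file path and the counter are invented; build with -lpmem. */
    #include <libpmem.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map a region of persistent memory straight into the address space,
           creating it if it doesn't exist yet. */
        uint64_t *counter = pmem_map_file("/mnt/pmem/counter", 4096,
                                          PMEM_FILE_CREATE, 0600,
                                          &mapped_len, &is_pmem);
        if (counter == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* An ordinary store: no read(), no write(), no file format. */
        *counter += 1;

        /* Flush the CPU caches so the update survives a power cut. */
        if (is_pmem)
            pmem_persist(counter, sizeof *counter);
        else
            pmem_msync(counter, sizeof *counter);

        printf("this program has run %llu times\n",
               (unsigned long long)*counter);

        pmem_unmap(counter, mapped_len);
        return 0;
    }

Run it, pull the plug, run it again: the count keeps climbing, because the counter never left memory.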
No more installing OSes, no more booting up. No more apps. The OS sits in memory all the time, and so do your apps. And if you have a terabyte or two of nonvolatile memory in your computer, what do you need SSDs for? It's all just memory. One small section is fast and infinitely rewritable, but its contents disappear when the power goes. The other 95 per cent holds its contents forever.
Sure, if the box is a server, you can have some spinning disks so you can manage petabytes of data. Data centers need that, but very few personal computers do.
Linux, of course, supported this. This particular vulture wrote the documentation for how to use it on a prominent enterprise distro. But Linux being Linux, everything must be a file, so it supported Optane by partitioning it and formatting it with a filesystem. Using primary storage to emulate secondary storage, in software.
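To be fair, Linux did offer a direct path via filesystem DAX, though you still reach it through a file. Here's a rough sketch, assuming a pmem region already formatted and mounted with the dax option at a placeholder mount point, and a kernel new enough to offer MAP_SYNC.

    /* Sketch of the Linux filesystem-DAX route: map a file from a
       DAX-mounted filesystem with MAP_SYNC so stores go straight to
       persistent memory. The mount point is a placeholder; needs a
       recent kernel and _GNU_SOURCE for MAP_SYNC / MAP_SHARED_VALIDATE. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/pmem/state.bin", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, 4096) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        char *state = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (state == MAP_FAILED) {
            perror("mmap");   /* fails if the filesystem isn't DAX-capable */
            return 1;
        }

        strcpy(state, "still here after the power goes");   /* ordinary store */
        msync(state, 4096, MS_SYNC);                         /* make it durable */

        munmap(state, 4096);
        close(fd);
        return 0;
    }

But note what's still there: a partition, a filesystem, a path, a file. The disk-era plumbing survives even when the disk doesn't.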
No current mainstream OS understands the concept of a computer that has only primary storage, no secondary storage at all, with that storage split between a small volatile section and a large nonvolatile one. It's hard even to describe it to people familiar with how current computers work. I have tried.
How do you find a program to run if there are no directories? How do you save stuff, if there's nowhere to save it to? How do you compile code, when there is no way to #include one file into another because there are no files, and where does the resulting binary go?
There are ideas out there for how to do this. The Reg wrote about one of them 13 years ago. There is also Twizzler, a research project investigating how to make such a system look enough like Unix for existing software to use it. When boffins at HP Labs built a working memristor, HP got very excited and came up with some big plans… but it takes a long time to bring a new technology to the mass market, and eventually, HP gave up.
But Intel made it work, produced this stuff, put it on the market… and not enough people were interested, and now it is giving up, too.
The future was here, but when viewed through the blurry scratched old lenses of 1960s minicomputer OS design, well – if everything is a file, this Optane was just a sort of really fast disk drive, right?
No, it wasn't. It was the biggest step forward since the minicomputer. But we blew it.
Goodbye, Optane. We hardly knew you. ®