Thursday, December 28, 2023

40% of US electricity is now emissions-free


Just before the holiday break, the US Energy Information Administration released data on the country's electrical generation. Because of delays in reporting, the monthly data runs through October, so it doesn't provide a complete picture of the changes we've seen in 2023. But some of the trends now seem locked in for the year: wind and solar are likely to be in a dead heat with coal, and all carbon-emissions-free sources combined will account for roughly 40 percent of US electricity production.

Tracking trends

Having data through October necessarily provides an incomplete picture of 2023. There are several factors that can cause the later months of the year to differ from the earlier ones. Some forms of generation are seasonal—notably solar, which has its highest production over the summer months. Weather can also play a role, as unusually high demand for heating in the winter months could potentially require that older fossil fuel plants be brought online. It also influences production from hydroelectric plants, creating lots of year-to-year variation.

Finally, everything's taking place against a backdrop of booming construction of solar and natural gas. So, it's entirely possible that we will have built enough new solar over the course of the year to offset the seasonal decline at the end of the year.

Let's look at the year-to-date data to get a sense of the trends and where things stand. We'll then check the monthly data for October to see if any of those trends show indications of reversing.

The most important takeaway is that energy use is largely flat. Overall electricity production year-to-date is down by just over one percent from 2022, though demand was higher this October compared to last year. This is in keeping with a general trend of flat-to-declining electricity use as greater efficiency is offsetting factors like population growth and expanding electrification.

That's important because it means that any newly added capacity will displace the use of existing facilities. And, at the moment, that displacement is happening to coal.

Can’t hide the decline

At this point last year, coal had produced nearly 20 percent of the electricity in the US. This year, it's down to 16.2 percent, and only accounts for 15.5 percent of October's production. Wind and solar combined are presently at 16 percent of year-to-date production, meaning they're likely to be in a dead heat with coal this year and easily surpass it next year.

Year-to-date, wind is largely unchanged since 2022, accounting for about 10 percent of total generation, and it's up to over 11 percent in the October data, so that's unlikely to change much by the end of the year. Solar has seen a significant change, going from five to six percent of the total electricity production (this figure includes both utility-scale generation and the EIA's estimate of residential production). And it's largely unchanged in October alone, suggesting that new construction is offsetting some of the seasonal decline.

Coal is being squeezed out by natural gas, with an assist from renewables. Credit: Eric Bangeman/Ars Technica

Hydroelectric production has dropped by about six percent since last year, causing it to slip from 6.1 percent to 5.8 percent of the total production. Depending on the next couple of months, that may allow solar to pass hydro on the list of renewables.

Combined, the three major renewables account for about 22 percent of year-to-date electricity generation, up about half a percentage point since last year. They're up by even more in the October data, placing them well ahead of both nuclear and coal.

Nuclear itself is largely unchanged, allowing it to pass coal thanks to the latter's decline. Its output has been boosted by a new, 1.1 gigawatt reactor that came online this year (a second at the same site, Vogtle in Georgia, is set to start commercial production at any moment). But that's likely to be the end of new nuclear capacity for this decade; the challenge will be keeping existing plants open despite their age and high costs.

If we combine nuclear and renewables under the umbrella of carbon-free generation, then that's up by nearly a percentage point since 2022 and is likely to surpass 40 percent for the first time.

The only thing keeping carbon-free power from growing faster is natural gas, which is the fastest-growing source of generation at the moment, going from 40 percent of the year-to-date total in 2022 to 43.3 percent this year. (It's actually slightly below that level in the October data.) The explosive growth of natural gas in the US has been a big environmental win, since it creates the least particulate pollution of all the fossil fuels, as well as the lowest carbon emissions per unit of electricity. But its use will need to start dropping soon if the US is to meet its climate goals, so it will be critical to see whether its growth flatlines over the next few years.

Outside of natural gas, however, all the trends in US generation are good, especially considering that the rise of renewable production would have seemed like an impossibility a decade ago. Unfortunately, the pace is currently too slow for the US to have a net-zero electric grid by the end of the decade.



from Hacker News https://ift.tt/ws1d056

Linux is the only OS to support diagonal PC monitor mode

Here's a fun tidbit — Linux is the only OS to support a diagonal monitor mode, which you can customize to any tilt of your liking. Latching onto this possibility, a Linux developer who grew dissatisfied with the extreme choices offered by the cultural norms of landscape or portrait monitor usage is championing diagonal mode computing. Melbourne-based xssfox asserts that the “perfect rotation” for software development is 22° (h/t Daniel Feldman).

Many PC enthusiasts have strong preferences for monitor setups. Some prefer ultrawides and curved screens, and others seek out squarer aspect ratios with flat screens. Multiple monitors are popular among power users, too. But what if you have an ultrawide and find the landscape or portrait choices too extreme? Xssfox was in this very situation and decided to use her nicely adjustable stand and the Linux xrandr (x resize and rotate) tool to try and find the ultimate screen rotation angle for software development purposes, which you can see if you expand the below tweet.

Xssfox devised a consistent method to appraise various screen rotations, working through the staid old landscape and portrait modes, before deploying xrandr to test rotations like the slightly skewed 1° and an indecisive 45°. These produced mixed results of questionable benefits, so the search for the Goldilocks solution continued.

It turns out that a 22° tilt to the left (expand tweet above to see) was the sweet spot for xssfox. This rotation delivered the best working screen space on what looks like a 32:9 aspect ratio monitor from Dell. “So this here, I think, is the best monitor orientation for software development,” the developer commented. “It provides the longest line lengths and no longer need to worry about that pesky 80-column limit.”

If you have a monitor with the same aspect ratio, the 22° angle might work well for you, too. However, people with other non-conventional monitor rotation needs can use xssfox’s JavaScript calculator to generate the xrandr command for given inputs. People who own the almost perfectly square LG DualUp 28MQ780 might be tempted to try ‘diamond mode,’ for example.
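
For the curious, the transform matrix itself is easy to compute. Below is a minimal Python sketch that generates an xrandr command for an arbitrary angle; the output name (DP-1) is a placeholder, and a real setup also needs the frame-buffer sizing and translation adjustments that xssfox’s calculator handles:

import math

def xrandr_rotate_cmd(output, degrees):
    # xrandr --transform takes a 3x3 matrix in row-major order;
    # for a pure rotation the last row stays 0,0,1.
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    matrix = ",".join(f"{v:.4f}" for v in (c, -s, 0, s, c, 0, 0, 0, 1))
    return f"xrandr --output {output} --transform {matrix}"

print(xrandr_rotate_cmd("DP-1", 22))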

We note that Windows users with AMD and Nvidia drivers are currently shackled to applying screen rotations in 90° steps. macOS users apparently face the same restrictions.



from Hacker News https://ift.tt/rkovDj1

Autorize – Authorization enforcement detection extension for Burp Suite

Autorize

Autorize is an automatic authorization enforcement detection extension for Burp Suite. It was written in Python by Barak Tawily, an application security expert. Autorize was designed to help security testers by performing automatic authorization tests. As of the latest release, Autorize also performs automatic authentication tests.


Installation

  1. Download Burp Suite (obviously): http://portswigger.net/burp/download.html
  2. Download Jython standalone JAR: http://www.jython.org/download.html
  3. Open Burp -> Extender -> Options -> Python Environment -> Select File -> Choose the Jython standalone JAR
  4. Install Autorize from the BApp Store or follow these steps:
  5. Download Autorize source code: git clone git@github.com:Quitten/Autorize.git
  6. Open Burp -> Extender -> Extensions -> Add -> Choose Autorize.py file.
  7. See the Autorize tab and enjoy automatic authorization detection :)

User Guide - How to use?

  1. After installation, the Autorize tab will be added to Burp.
  2. Open the configuration tab (Autorize -> Configuration).
  3. Get your low-privileged user authorization token header (Cookie / Authorization) and copy it into the textbox containing the text "Insert injected header here". Note: Headers inserted here will be replaced if present or added if not.
  4. Uncheck "Check unauthenticated" if the authentication test is not required. (This test sends each request without any cookies, to check for authentication enforcement in addition to the authorization enforcement tested with the low-privileged user's cookies.)
  5. Check "Intercept requests from Repeater" to also intercept the requests that are sent through the Repeater.
  6. Click on "Intercept is off" to start intercepting the traffic in order to allow Autorize to check for authorization enforcement.
  7. Open a browser and configure the proxy settings so the traffic will be passed to Burp.
  8. Browse to the application you want to test with a high privileged user.
  9. The Autorize table will show you the request's URL and enforcement status.
  10. It is possible to click on a specific URL and see the original/modified/unauthenticated request/response in order to investigate the differences.

Authorization Enforcement Status

There are 3 enforcement statuses:

  1. Bypassed! - Red color

  2. Enforced! - Green color

  3. Is enforced??? (please configure enforcement detector) - Yellow color

The first 2 statuses are clear, so I won't elaborate on them.

The 3rd status means that Autorize cannot determine if authorization is enforced or not, and so Autorize will ask you to configure a filter in the enforcement detector tabs. There are two different enforcement detector tabs, one for the detection of the enforcement of low-privileged requests and one for the detection of the enforcement of unauthenticated requests.

The enforcement detector filters will allow Autorize to detect authentication and authorization enforcement in the response of the server by content length or string (literal string or regex) in the message body, headers or in the full request.

For example, if a request's enforcement status is detected as "Authorization enforced??? (please configure enforcement detector)", you can investigate the modified/original/unauthenticated responses and see that the modified response body includes the string "You are not authorized to perform action". You can then add a filter with the fingerprint value "You are not authorized to perform action", and Autorize will look for this fingerprint and automatically detect that authorization is enforced. The same can be done by defining a content-length filter or a fingerprint in the headers.
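
As a rough illustration of that logic, the classification boils down to something like the following (a hypothetical Python helper, not Autorize's actual code):

def enforcement_status(modified_body, fingerprints):
    # A configured fingerprint (e.g. "You are not authorized to
    # perform action") in the modified response means the server
    # enforced authorization against the low-privileged user.
    if any(fp in modified_body for fp in fingerprints):
        return "Enforced!"
    # With no matching fingerprint, Autorize cannot decide on its
    # own and asks you to configure the enforcement detector.
    return "Is enforced??? (please configure enforcement detector)"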

Interception Filters

The interception filters allow you to configure which domains should be intercepted by the Autorize plugin. You can filter by blacklist, whitelist, regex, or items in Burp's scope, to avoid intercepting unnecessary domains and keep your work organized.

Example of interception filters (note that there is a default filter to avoid scripts and images):

Authors



from Hacker News https://ift.tt/wTn2iDq

Wednesday, December 27, 2023

Suggestions: A simple human-readable format for suggesting changes to text files

Motivation

Many word processors have built-in change management. Authors can suggest changes and add comments, then an editor can accept or reject them.


People who write documents using text-file-based formats like TeX or markdown have a problem: text files don’t have a concept of changes. This makes it harder to collaborate in teams. To get change management, they can:

  • Use an online editor, losing the flexibility of simple text files;
  • Use a version control system like git, which is complex and technical.

Suggestions files are a standard for suggesting changes to plain text. They let authors collaborate, suggest and review changes. They don’t require any special software, and they can be used on any kind of text file. You just edit the file as usual, and follow some simple rules.

File format

Making suggestions

To suggest new text to add to a file, enclose it in ++[ and ]++ tags like this:

The original text, ++[your addition,]++ 
and more text.

To suggest a deletion from a file, enclose it in --[ and ]-- tags like this:

The original text, --[text to delete,]-- 
and more text.

To make a comment, enclose it in %%[ and ]%%:

%%[Is this clearer? @stephen]%%

You can sign the comment with a @handle as the last word.

Reviewing suggestions

To review suggestions:

  • To accept a suggested addition, delete the ++[ and matching ]++, leaving everything between them.
  • To accept a suggested deletion, delete everything between --[ and ]-- inclusive.

Rejecting suggestions is just the other way round:

  • To reject an addition, delete everything between ++[ and ]++ inclusive.
  • To reject a deletion, delete the --[ and matching ]--.

You can also delete comments. Typically, you will have to do this before using the text file for another purpose.

If a tag (++[, ]++, --[, ]--, %%[ or ]%%) is on its own on a line, treat the subsequent newline as part of the tag and delete it:

A paragraph of text.
++[
A new line.
]++
The paragraph continues.

becomes

A paragraph of text.
A new line.
The paragraph continues.

if the addition is accepted, or

A paragraph of text.
The paragraph continues.

if it is rejected.
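
To make the reviewing rules concrete, here is a minimal Python sketch of accept-all and reject-all. It assumes unnested suggestions; nested changes, signature handles and the newline rule above need a real parser such as suggs:

import re

ADD = re.compile(r"\+\+\[(.*?)\]\+\+", re.DOTALL)
DEL = re.compile(r"--\[(.*?)\]--", re.DOTALL)
COM = re.compile(r"%%\[.*?\]%%", re.DOTALL)

def accept_all(text):
    # Keep suggested additions, apply deletions, drop comments.
    text = ADD.sub(r"\1", text)
    text = DEL.sub("", text)
    return COM.sub("", text)

def reject_all(text):
    # Drop suggested additions, restore deleted text, drop comments.
    text = ADD.sub("", text)
    text = DEL.sub(r"\1", text)
    return COM.sub("", text)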

Multiple authors and nested suggestions

If multiple authors are working on a document, you may want to sign your suggested changes. Do that by putting your handle at the end of the change, just like for a comment. The handle must start with @ and must be the last word:

And God said, 
%%[first try! @wycliffe]%%
--[Light be made, 
and the light was made. @tyndale]-- 
++[Let there be lyghte 
and there was lyghte. @tyndale]++
++[Let there be light: 
and there was light. @kjv]++

You can nest suggestions within each other:

Last night I dreamt I went to Manderley
++[, the famous ++[Cornish @editor]++ 
seaside resort, @daphne ]++ again.

You can’t nest changes within comments (it would be too confusing). If you want to add to a comment, just write inside it with your handle. It’s only a comment anyway.

The rules for reviewing nested suggestions are the same as above. You may need to adjudicate between different alternatives. Obviously, if you accept someone’s deletion, any other suggestions inside it will be deleted and become irrelevant.

There is a command line tool suggs for working with suggestions files.

The purpose of suggs is to let you automate parts of the editing process. For example, you can edit a file, save a new version, then use suggs to create a suggestions file. Or you can take someone else’s suggestions file and quickly accept or reject all the changes. Lastly, suggs can display suggested changes in extra-readable formats, like colorized text or TeX.

Download it here:

Or get the source on github.

Usage

Print a suggestions file with additions, deletions and comments shown in color:

suggs colorize file.txt

Print file.txt with all suggestions accepted:

suggs new file.txt

Print file.txt with all suggestions rejected:

suggs old file.txt

Accept or reject all changes in-place, writing the result back to file.txt:

suggs accept file.txt
suggs reject file.txt

Create a suggestions file from the difference between old.txt and new.txt:

suggs diff old.txt new.txt

Print file.txt with changes highlighted as a TeX file:

suggs tex file.txt

Why not just use a diff file?

diff is a command that prints the difference between two text files. It’s widely used in the computing world. But diffs are designed for computers and code, not humans and text:

  • Diff output makes no sense without the original file. You can’t read changes in their original context. A suggestions file shows additions and deletions in context; it can be sent as an email attachment, read and understood.
  • Using and applying diffs requires command line tools. This is hard for non-technical authors. Suggestions files don’t require any command line tools, but you can use one if you like.
  • Diffs are typically line oriented. This makes them hard to read when only a word or phrase has changed.
  • You can’t put comments and authorship in a diff file.
  • A diff file only shows one set of changes. A suggestions file can show changes by multiple authors, including nested changes.

If you have a comment or suggestion, file an issue.

TeX tip

If you write comments like

%%[
% My comment here.
% ]%%

then TeX will also treat them as comments.



from Hacker News https://suggestions.ink

Tuesday, December 26, 2023

Clanging

Clanging (or clang associations) is a symptom of mental disorders, primarily found in patients with schizophrenia and bipolar disorder.[1] This symptom is also referred to as association chaining, and sometimes, glossomania.

Steuber defines it as "repeating chains of words that are associated semantically or phonetically with no relevant context".[2] This may include compulsive rhyming or alliteration without apparent logical connection between words.

Clanging refers specifically to behavior that is situationally inappropriate. While a poet rhyming is not evidence of mental illness, disorganized speech that impedes the patient's ability to communicate is a disorder in itself, often seen in schizophrenia.[3]

Example

This can be seen in a section of a 1974 transcript of a patient with schizophrenia:

We are all felines. Siamese cat balls. They stand out. I had a cat, a manx, still around here somewhere. You’ll know him when you see him. His name is GI Joe; he’s black and white. I have a goldfish too, like a clown. Happy Halloween down. Down.[4]

The speaker makes semantic chain associations on the topic of cats, moving to the colour of her cat, which (via either the topic of colours/patterns or the topic of pets) leads her to jump from her goldfish to the associated clown, a point she gets to via the word clownfish. The patient also exhibits a pattern of rhyming and associative clanging: clown to Halloween (presumably an associative clang) to down.

This example highlights how the speaker is distracted by the sound or meaning of their own words, and leads themselves off the topic, sentence by sentence. In essence, it is a form of derailment driven by self-monitoring.[5]

As a type of Formal Thought Disorder

Formal Thought Disorder (FTD) is a syndrome with several different symptoms, leading to thought, language and communication problems, and is a core feature of schizophrenia.[6]

Thought disorders are measured using the Thought, Language and Communication Scale (TLC) developed by Andreasen in 1986.[6] It measures tendencies toward 18 subtypes of formal thought disorder (with strong inter-coder reliability), including clanging as a type of FTD.

The TLC scale for FTD subtypes remains the standard and most inclusive measure, so clanging is officially recognised as a type of FTD.[2]

There has been much debate about whether FTDs are a symptom of thought or of language, yet the basis for FTD analysis is the verbal behaviour of patients. Whether the abnormal speech of individuals with schizophrenia results from abnormal neurology, abnormal thought, or abnormal linguistic processes, researchers do agree that people with schizophrenia have abnormal language.[2]

Occurrences in mental disorders

Clanging is associated with the irregular thinking apparent in psychotic mental illnesses (e.g. mania and schizophrenia).[7]

In schizophrenia

Formal Thought Disorders are one of five characteristic symptoms of schizophrenia according to the DSM-IV-TR.[1] FTD symptoms such as glossomania are correlated with schizophrenia spectrum disorders, and with a family history of schizophrenia.[1] In an analysis of speech in patients with schizophrenia compared to controls, Steuber found that glossomania (association chaining) is a characteristic of speech in the patients with schizophrenia, although the difference from normal controls did not reach statistical significance.[2]

In mania/bipolar disorder

Gustav Aschaffenburg found that manic individuals generated these "clang-associations" roughly 10–50 times more than non-manic individuals.[8] Aschaffenburg also found that the frequency of these associations increased for all individuals as they became more fatigued.[9]

Andreasen found that, when comparing Formal Thought Disorder symptoms between people with schizophrenia and people with mania, there was a greater reported incidence of clang associations in people with mania.[6]

In depression

Research reviewed by Steuber found no significant difference in glossomania occurrence between patients with schizophrenia and patients with depression.[2]

Disagreements in the literature

Being a niche area of symptoms of mental disorders, there have been disagreements over the definition of clanging, and over how it may or may not fall under the subset of Formal Thought Disorder symptoms in schizophrenia. Steuber argues that although it is an FTD, it should come under the umbrella of the subtype 'distractibility'.[2]

Moreover, due to limited research there have been discrepancies in the definition of clanging used. An alternative definition is: “word selection based on phonemic relatedness, rather than semantic meaning; frequently manifest as rhyming”. Here, the semantic association chains are not included as part of the definition given at the start[2] – even though that is the more widely used definition of clanging and glossomania (where the terms are used interchangeably).

Biological factors

Attempts to understand such language impairments and FTDs have taken a biological approach.

Candidate genes for such vulnerability to schizophrenia are FOXP2 (which is linked to a familial language disorder and autism) and dysbindin 1.[1] This distal explanation not only fails to explain clanging specifically, but also leaves out environmental influences on the development of schizophrenia. Moreover, if a person does develop schizophrenia, it is not guaranteed that they will have the symptom of clanging.

Sass and Pienkos suggest that a more nuanced understanding of the structural (neural) changes that occur in a sufferer's brain may be needed to understand the disorder in the first place.[10] However, more research is required not only into the causes of such symptoms, but also into how they work.


References

  1. ^ a b c d Radanovic, Marcia; Sousa, Rafael T. de; Valiengo, L.; Gattaz, Wagner Farid; Forlenza, Orestes Vicente (18 December 2012). "Formal Thought Disorder and language impairment in schizophrenia". Arquivos de Neuro-Psiquiatria. 71 (1): 55–60. doi:10.1590/S0004-282X2012005000015. PMID 23249974.
  2. ^ a b c d e f g Steuber 2011, p. .
  3. ^ Covington, Michael A.; He, Congzhou; Brown, Cati; Naçi, Lorina; McClain, Jonathan T.; Fjordbak, Bess Sirmon; Semple, James; Brown, John (September 2005). "Schizophrenia and the structure of language: The linguist's view". Schizophrenia Research. 77 (1): 85–98. doi:10.1016/j.schres.2005.01.016. PMID 16005388. S2CID 7206375.
  4. ^ Chaika, Elaine (July 1974). "A linguist looks at 'schizophrenic' language". Brain and Language. 1 (3): 257–276. doi:10.1016/0093-934X(74)90040-6.
  5. ^ Covington, Michael A.; He, Congzhou; Brown, Cati; Naçi, Lorina; McClain, Jonathan T.; Fjordbak, Bess Sirmon; Semple, James; Brown, John (September 2005). "Schizophrenia and the structure of language: The linguist's view". Schizophrenia Research. 77 (1): 85–98. doi:10.1016/j.schres.2005.01.016. PMID 16005388. S2CID 7206375.
  6. ^ a b c Andreasen, Nancy C.; Grove, William M. (1986). "Thought, language, and communication in schizophrenia: diagnosis and prognosis". Schizophrenia Bulletin. 12 (3): 348–359. doi:10.1093/schbul/12.3.348. PMID 3764356.
  7. ^ Peralta, Victor; Cuesta, Manuel J.; de Leon, Jose (March 1992). "Formal thought disorder in schizophrenia: A factor analytic study". Comprehensive Psychiatry. 33 (2): 105–110. doi:10.1016/0010-440X(92)90005-B. PMID 1544294.
  8. ^ Kraepelin, Emil (1921). Manic-depressive insanity and paranoia. Edinburgh: E. & S. Livingstone. p. 32. ISBN 978-0-405-07441-7. OCLC 1027792347.
  9. ^ Spitzer, Manfred (1999). "Semantic Networks". The Mind within the Net. doi:10.7551/mitpress/4632.003.0015. ISBN 978-0-262-28416-5. S2CID 242159639.
  10. ^ Sass, Louis; Pienkos, Elizabeth (September 2015). "Beyond words: linguistic experience in melancholia, mania, and schizophrenia". Phenomenology and the Cognitive Sciences. 14 (3): 475–495. doi:10.1007/s11097-013-9340-0. S2CID 254947008.




from Hacker News https://ift.tt/QATsD1C

Nintendo Switch's iGPU: Maxwell Nerfed Edition

Graphics performance is vital for any console chip. Nintendo selected Nvidia’s Tegra X1 for their Switch handheld console. Tegra X1 is designed to maximize graphics performance in a limited power envelope, making it a natural choice for a console. And naturally for a Nvidia designed SoC, the Tegra X1 leverages the company’s Maxwell graphics architecture.

From the Tegra X1 Series Embedded Datasheet

Maxwell is better known for serving in Nvidia’s GTX 900 series discrete GPUs. There, it provided excellent performance and power efficiency. But Maxwell was primarily designed to serve in discrete GPUs with substantial area and power budgets. To fit Tegra X1’s low power requirements, Maxwell had to adapt to fit into a smaller power envelope.

Today, we’ll be running a few microbenchmarks on Nvidia’s Tegra X1, as implemented in the Nintendo Switch. We’re using Nemes’s Vulkan microbenchmark because the Tegra X1 does not support OpenCL. I also couldn’t get CUDA working on the platform.

Overview

Tegra X1 implements two Maxwell Streaming Multiprocessors, or SMs. SMs are basic building blocks in Nvidia’s GPUs and roughly analogous to CPU cores. As a Maxwell derivative, the Tegra X1’s SMs feature a familiar four scheduler partitions each capable of executing a 32-wide vector (warp) per cycle.

Tegra X1’s Maxwell is a bit different from the typical desktop variant. Shared memory, which is a fast software managed scratchpad, sees its capacity cut from 96 KB to 64 KB. Lower end Maxwell parts like the GM107 used in the GTX 750 Ti also have 64 KB of Shared Memory in their SMs, so there’s a chance Tegra could be using the GM107 Maxwell flavor. But L1 cache size is cut in half too, from 24 KB to 12 KB per two SM sub partitions (SMSPs). I don’t know if GM107 uses the smaller cache size. But even if it does, Tegra Maxwell sets itself apart with packed FP16 execution[4], which can double floating point throughput (subject to terms and conditions).

Besides having less fast storage in each SM, Nintendo has chosen to run the iGPU at a low 768 MHz. For comparison, the EVGA GTX 980 Ti also tested here boosts at up to 1328 MHz, and typically runs above 1200 MHz. This high clock speed is shared with GM107, which averages around 1140 MHz.

Tegra X1’s datasheet suggests the iGPU can run at 1 GHz

Tegra X1’s low iGPU clock could be unique to the Nintendo Switch. Nvidia’s datasheet states the GPU should be capable of 1024 FP16 GFLOPS at reasonable temperatures. Working backwards, 1024 FP16 GFLOPS would be achieved with each of the iGPU’s 256 lanes working on two packed operands and performing two operations (a fused multiply add) on them at 1 GHz. However, I don’t have access to any other Tegra X1 platforms. Therefore, the rest of this article will evaluate Tegra X1 as implemented in the Switch, including the low clocks set by Nintendo.
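
That arithmetic is easy to sanity-check with a quick back-of-the-envelope in Python, using the 768 MHz Switch clock from earlier:

lanes = 256          # 2 SMs x 128 FP32 lanes per SM
packed_fp16 = 2      # two FP16 operands per 32-bit register
ops_per_fma = 2      # a fused multiply-add counts as two operations
for clock_ghz in (1.0, 0.768):
    print(clock_ghz, lanes * packed_fp16 * ops_per_fma * clock_ghz)
# 1024.0 GFLOPS at 1 GHz, ~786.4 GFLOPS at the Switch's 768 MHz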

Cache and Memory Latency

Tegra X1’s iGPU sees high latency throughout its memory hierarchy due to low clocks. If we focus on clock cycle counts, both the GTX 980 Ti and Tegra’s iGPU have about 110 cycles of L1 cache latency. Even though Tegra X1 has a smaller L1 cache running at lower clocks, Nvidia was unable to make the pipeline shorter. L2 cache latency is approximately 166 cycles on Tegra X1 compared to the GTX 980 Ti’s 257 cycles. Tegra taking fewer cycles to access its 256 KB L2 makes sense because the intra-GPU interconnect is smaller. But it’s not really a victory because the desktop part’s higher clock speed puts it ahead in absolute terms. Finally, VRAM latency is very high at over 400 ns.

Data gathered using Vulkan, with Nemes’s test suite

Intel’s Gen 9 (Skylake) integrated graphics provides an interesting comparison. Skylake’s GT2 graphics are found across a wide range of parts, and the HD 630 variant in later Skylake-derived generations is similar. While not designed primarily for gaming, it can be pressed into service by gamers without a lot of disposable income.

Intel has an interesting scheme for GPU caching. Global memory accesses from compute kernels go straight to an iGPU wide cache, which Intel calls a L3. To reduce confusion, I’ll use “L3” to refer to the iGPU’s private cache, and LLC to refer to the i5-6600K’s 6 MB of cache shared by the CPU and iGPU. The HD 530’s L3 has 768 KB of physical capacity[2], but only part of it is allocated as cache for the shader array. Since I ran this test with the iGPU driving a display, 384 KB is available for caching. Despite having more caching capacity than Tegra X1’s L2, Intel’s L3 achieves lower latency.

AMD’s Raphael iGPU is a fun comparison. I don’t think a lot of people are gaming on Zen 4 iGPUs, but it is a minimum size RDNA 2 implementation. Like the Switch’s iGPU, Raphael’s iGPU has 256 KB of last level cache. But advances in GPU architecture and process nodes let Raphael’s iGPU clock to 2.2 GHz, giving it a massive latency lead.

Cache and Memory Bandwidth

GPUs tend to be bandwidth hungry. The Switch is notably lacking in this area especially when the SMs have to pull data from the 256 KB L2, which provides 46.1 GB/s. At 768 MHz, this is just above 60 bytes per cycle, so Tegra X1 could just have a single 64B/cycle L2 slice. If so, it’s the smallest and lowest bandwidth L2 configuration possible on a Maxwell GPU.

Not all 192 KB of storage in each of Intel’s L3 slices is usable as cache

Intel’s HD 530 runs at higher clock speeds and has a quad-banked L3. Each L3 bank can deliver 64B/cycle[3], but L3 bandwidth is actually limited by the shader array. Each of the HD 530’s three subslices can consume 64B/cycle, for 192B/cycle total. The HD 530 isn’t a big GPU, but it does have a larger and higher bandwidth cache. As we get out of the small GPU-private caches, HD 530 can achieve higher bandwidth from the i5-6600K’s shared last level cache. The Tegra X1 drops out into DRAM sooner.

In main memory, Tegra X1 turns in a relatively better performance. Unlike the CPU which couldn’t get even 10 GB/s from DRAM, the iGPU can utilize most of the LPDDR4 setup’s available bandwidth. It’s still not as good as desktop DDR4, but now the HD 530 only has a 23% advantage.

Raphael’s iGPU has a massive bandwidth advantage over Tegra X1 throughout the memory hierarchy. RDNA 2 is designed to deliver very high bandwidth and even a minimal implementation is a force to be reckoned with. High clock speeds and 128 byte/cycle L2 slices give Raphael’s iGPU a high cache bandwidth to compute ratio. At larger test sizes, the 7950X3D’s dual channel DDR5-5600 setup shows what modern DRAM setups are capable of. The Switch gets left in the dust.

What happens if we compare the Switch’s Maxwell implementation to desktop Maxwell?

The Switch cannot compare to a desktop with a discrete GPU, enough said.

Compute Throughput

Tegra X1 uses Maxwell SMs similar to those found in desktop GPUs. Each SM has four scheduler partitions, each with a nominally 32-wide execution unit[4]. Nvidia uses 32-wide vectors or warps, so each partition can generally execute one instruction per cycle. Rarer operations like integer multiplies or FP inverse square roots execute at quarter rate.

The Switch enjoys throughput comparable to Intel’s HD 530 for most basic operations. It’s also comparable for special operations like inverse square roots. Intel pulls ahead for integer multiplication performance, though that’s not likely to make a difference for games.

As mentioned earlier, Tegra X1’s Maxwell gets hardware FP16 support. Two FP16 values can be packed into the lower and upper halves of a 32-bit register. If the compiler can pull that off, FP16 can execute at double rate. Unfortunately, Nvidia’s compiler wasn’t able to do FP16 packing. AMD and Intel do enjoy double rate FP16 execution. AMD’s FP16 execution scheme works the same way and also requires packing, so it’s a bit weird that Nvidia misses out.

However, we can verify the Switch’s increased FP16 throughput with vkpeak. Vkpeak focuses on peak throughput with fused multiply add operations, and can achieve higher FP16 throughput when using 4-wide vectors.

Vkpeak counts a fused multiply add as two operations

Even with higher FP16 throughput, the Switch falls behind Intel and AMD’s basic desktop integrated GPUs. Tegra X1 does give a good account of itself with 16-bit integer operations. However I expect games to stick with FP32 or FP16, with 32-bit integers used for addressing and control flow.

Vulkan Compute Performance (VkFFT)

VkFFT uses the Vulkan API to compute Fast Fourier Transforms. Here, we’re looking at the first set of subtests (VkFFT FFT + iFFT C2C benchmark 1D batched in single precision). The first few subtests appear very memory bandwidth bound on the RX 6900 XT, and I expect similar behavior on these smaller GPUs.

Intel’s lead in subtests 3 through 11 likely comes from a memory bandwidth advantage. HD 530’s DDR4-2133 isn’t great by modern standards, but a 128-bit memory bus is better than the 64-bit LPDDR4 memory bus on the Switch.

VkFFT outputs estimated bandwidth figures alongside scores. Some of the later subtests may not be bandwidth bound, as the bandwidth figures are far below theoretical. But Intel’s HD 530 still pulls ahead, likely thanks to its higher compute throughput.

CPU to GPU Uplink Performance

Integrated GPUs typically can’t compete with larger discrete cards in compute performance or memory bandwidth. But they can compare well in terms of how fast the CPU and GPU can communicate because discrete cards are constrained by a relatively low bandwidth PCIe interface.

The Nintendo Switch’s Tegra X1 enjoys decent bandwidth between the CPU and GPU memory spaces, and is likely held back by how fast the CPU can access memory. However, it loses in absolute terms to Nvidia’s GTX 980 Ti. Against the Tegra X1’s limited memory bandwidth, a 16x PCIe 3.0 interface can still compare well. Intel’s HD 530 turns in a similar performance when using the copy engine. But moving data with compute shaders provides a nice uplift, giving Intel the edge against the Switch.

Final Words

Tegra X1 shows the challenge of building a GPU architecture that scales across a wide range of power targets. Maxwell was built for big GPUs and has a large basic building block. Maxwell implementations can only be scaled 128 lanes at a time. Contrast that with Intel’s iGPU architecture, which can scale 8 lanes at a time by varying the number of scheduler partitions within a subslice. The equivalent in Nvidia’s world would be changing the number of scheduler partitions in a SM, changing GPU size 32 lanes at a time. Of course, Maxwell can’t do that. Adjusting GPU size 128 lanes at a time is totally fine when your GPU has over 2K lanes. With a GPU that’s just 256 lanes wide, Nvidia has their hands tied in how closely they can fit their targets.

On the Switch, Nintendo likely thought Nvidia’s default targets were a bit too high on the power and performance curve. The Switch runs Tegra X1’s iGPU at 768 MHz even though Nvidia’s documents suggest 1000 MHz should be typical. I wonder if the Switch would do better at 1000 MHz with a hypothetical 192 lane Maxwell implementation. Higher GPU clocks would improve performance for fixed function graphics blocks like rasterizers and render output units, even if theoretical compute throughput is similar. A smaller, faster clocked GPU would also require lower occupancy to exercise all of its execution units, though that’s unlikely to be a major issue because the Switch’s iGPU is so small already.

In terms of absolute performance, the Switch delivers a decent amount of graphics performance within a low power envelope. However, seeing a bog standard desktop iGPU win over a graphics-oriented console chip is eye opening. Even more eye opening is that developers are able to get recent AAA games ported to the Switch. Intel’s ubiquitous Skylake GT2 iGPU is often derided as being inadequate for serious gaming. Internet sentiment tends to accept leaving GPUs like the HD 530 behind in pursuit of better effects achieved on higher end cards.

Nintendo’s Switch shows this doesn’t have to be the case. If developers can deliver playable experiences on the Switch, they likely can do so with a HD 530 too. No doubt such optimizations require effort but carrying them out may be worth the reward of making PC gaming more accessible. Younger audiences in particular may not have the disposable income necessary to purchase current discrete cards, especially as GPU price increases at every market segment have outpaced inflation.

If you like our articles and journalism, and you want to support us in our endeavors, then consider heading over to our Patreon or our PayPal if you want to toss a few bucks our way. If you would like to talk with the Chips and Cheese staff and the people behind the scenes, then consider joining our Discord.

References

  1. Data Sheet Nvidia Tegra X1 Series Processors, Maxwell GPU + ARM v8
  2. Programmer’s Reference Manual, For the 2015 – 2016 Intel Core™ Processors, Celeron™ Processors and Pentium™ Processors based on the “Skylake” Platform, Volume 4: Configurations
  3. The Compute Architecture of Intel Processor Graphics Gen9
  4. Whitepaper Nvidia Tegra X1, Nvidia’s New Mobile Superchip


from Hacker News https://ift.tt/8cFuwsA

Monday, December 25, 2023

CrayZee Eighty

Ever wished that you had a Cray 1 Supercomputer? Ever wondered if an RC2014 backplane could wrap around a cylinder? Ever thought about how many retweets a Z80 drawing a Mandelbrot fractal could get? Ever had an idea that’s so daft, the only way to exorcise it is to do it? If so, would you like to Seymore…

Like most ideas in Lockdown, things started with a throwaway comment on Twitter and quickly escalated to laser cutting a toilet roll. I blame Shirley Knott.

So, as a practical joke, the homage to the powerful Cray 1 (and also the less powerful Rolodex) worked surprisingly well. This inevitably led to the question of making it work for real.

Taking some measurements from the toilet roll, I laser cut a simple jig to hold twelve 40-pin sockets around 270 degrees, with the intention of soldering wire from pin to pin in situ. This quickly demonstrated that it just wasn't practical to get the soldering iron into such a tight area.

Another jig was made to hold the sockets at an even distance, using brass wire to connect them up, with the intention of bending them around afterwards. It quickly became apparent that this wasn't going to work either.

Luckily OSHPark offer a flex PCB option. I’ve been aware of this for a while, and wanted to try it, but there hadn’t been anything suitable within the RC2014 ecosystem. (Well, there have been requests for a Floppy Module, but I don’t think anybody actually wants a module which is floppy!). At $10 per square inch, it isn’t cheap, but, after a bit of KiCad work, the smallest 12 slot RC2014 backplane was ordered.

Soldering through hole components on to flex PCB is not easy, and 480 solder joints generate a lot of heat which will warp the plastic if it is not done carefully in a controlled manner. The Flex PCB was designed to fit the existing jigs, and when soldered up, it fitted perfectly!

Using the jig dimensions, I was able to 3D print a couple of end caps which held the slots in place and made things much more solid. I filled it with a bunch of spare modules and tested out if the backplane itself worked…

Houston, we have a problem! Nothing came up when I plugged in an FTDI cable :-(

A few hours were wasted going down different rabbit holes chasing too many red herrings. The modules I’d put together essentially made up a RC2014 Zed, and were picked from some of my non-current module archive. What I’d forgotten about is that old versions of RomWBW which are built for use with a DS1302 RTC Module will hang for about 2 minutes on startup if the RTC cannot be found. So, in fact, it was all working perfectly, I just had to wait a little while after plugging in!

A quick upgrade to RomWBW v3.0.1 overcomes this problem, and should have been done right at the start!

To make things more Cray-like, I redesigned the end caps to be open at the top and bottom, and extended the lower one to support a laser cut skirt. One day this will house an IDE hard drive, but for now, it's just there to mimic the bench seat on the Cray 1.

The irony is not lost on me that the Pi Zero, which is only used to generate HDMI from serial data, is several orders of magnitude more powerful than the Cray 1, which is, itself, way more powerful than the Z80 which is calling all the shots!

There are no plans to release this as a product at this stage. The price would be too high to justify for a kit which really is not very practical at all.



from Hacker News https://ift.tt/SPhfQe1

Building a decentralized name system on top of IRC

The requirements

Over the past few years I’ve been slowly building a peer-to-peer networking library called ‘P2PD.’ The library is designed to make it easier to build peer-to-peer applications. But if I’m to build a library for software that’s supposed to be architecturally resilient – it makes sense for the library not to require centralization itself. My goal is that if I disappear the library should keep working without me having to manage things. The only way to make this possible is to design the system in such a way that it can utilize pre-existing public infrastructure. And it would be even better if that infrastructure were already serving multiple purposes to exclude the possibility of infrastructure disappearing. I know this is a little abstract but bear with me.

The problem

As an example: In P2PD there’s a need to be able to exchange meta-data signaling messages to peers. So given the requirements above – what protocols and infrastructure do you choose? Well, there are public servers that offer ‘pub-sub.’ MQTT is widely used and it’s used by enough devices that it’s unlikely to disappear. So if I have a list of stable, high-quality, MQTT servers, and build this into an address format to contact a node, then I don’t have to host these servers myself. I can disappear and it keeps working. Now, why am I saying all this? Because there was a piece missing: the DNS part.

DNS is the thing that makes the Internet user-friendly. But think about this: while there seems to be public infrastructure for all kinds of services on the Internet – nothing exists for something as simple as a permissioned key-value store. We can say that DNS is a paid, permissioned, key-value store, and that it’s good enough for the Internet today. But DNS has a few key problems: (1) there are no widespread standards to programmatically register domains, and (2) DNS costs money to use (which I argue is not suitable for all uses.)

What alternatives are there to DNS? Well, there is Opennic – it’s actually great but again it’s a fork of the existing DNS system (and so has no standardized registration system.) There is the ‘Ethereum Name System’ – but it needs money to use (and we’ve learned the hard way that blockchains aren’t the right solution for everything.) There was no protocol for programmatic and free registration of names…

IRC as a DNS system

IRC is an ancient protocol used by multiple large networks. The protocol is popular for chat but flexible enough that all kinds of services have been built on top of it. A protocol based on IRC won’t easily die due to its long history as a chat protocol (there are IRC servers so old they’ve been online for decades.) Such infrastructure can be considered public, ubiquitous, open, and has withstood the test of time. Crucially: it has all the features needed to build a permissioned key-value store on.

The idea sounded promising to me in theory but before I went overboard I needed to verify if it would work. What I mean by that is would it be possible for a program to register a channel without the need for a human to verify their account? I knew that email registration was required on many popular IRC networks but I wasn’t sure if any were truly ‘open.’ So I manually tested about 20 different servers and eventually found one that allowed for account registration without email verification. This validated my theory that IRC could be used to build a name system. But how many of these servers were there?

To answer this question I needed to code a specialized scanner. The tool should test an IRC server by attempting to register an account and then register a channel. I rapidly wrote this code and used the server I had already found to test it. Next I needed a list of IRC servers to scan. A big list. I ended up compiling every IRC server list from all major IRC clients by looking at their source code. The scanner could rapidly test servers as it was written to be asynchronous. I was able to find around 11 servers, a prevalence of roughly 1 server in every 20 tested. Of these 11 servers, 5 were dual-stack and supported SSL.

I decided to reduce my list to only the servers that were dual-stack (supporting both IPv4 and IPv6) and offered SSL. The requirement for SSL was due to the fact that IRC is a plaintext protocol and I needed to send a password over the connection; at least with SSL the passwords would be encrypted.

How it should work

For my design I wanted names to be stored across IRC servers as channels. It should be possible for the software to keep working if some of the servers go down for short periods. I wanted registration to succeed if it worked on a minimum number of servers, and name lookups to get by with checking only a subset of servers (until enough results implied that registration must have succeeded). So while registration may require, say, 5 of 7 channel registrations to succeed, lookup should only need 3 successes (2 is the failure allowance; finding failures + 1 results implies that registration must have succeeded for that name.)

Registering a name:
– Given N IRC servers
– At least M names must be registered (minimum)
– Where M = N – (N * 0.4)
– Measures are taken to make this atomic (don’t register any if M names can’t be registered)

Looking up a name:
– A name may be registered on any of the IRC servers.
– Given that there can be up to failures = (N * 0.4)
– Finding failures + 1 must imply a name was successfully registered.
– Names are grouped by an owner’s ECDSA public key.
– Lookups succeed when result no per pub key >= failures + 1.
– A lookup isn’t guaranteed and may fail if all servers are tested.
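
A minimal sketch of those thresholds in Python (using the 40% failure allowance above):

def thresholds(n):
    # Up to 40% of the n IRC servers are allowed to fail.
    failures = int(n * 0.4)

    # Registration must succeed on at least m servers.
    m = n - failures

    # For lookups, failures + 1 consistent results imply the name
    # must have met the registration minimum.
    lookup_needed = failures + 1
    return m, lookup_needed

print(thresholds(7))  # (5, 3), matching the example above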

Password management with a single seed

The first decision to make was how to manage passwords. If I were to generate passwords for every account, the software would need to have a horrible amount of state… Probably with a database, with a schema, with just the right queries, with just the right everything, and all the code to manage that. To mitigate that I opted to use a cryptographic seed to deterministically generate passwords.


import base64, hashlib

def irc_password(irc_server_domain, seed):
    # Derive a deterministic, per-server password from one seed.
    msg = IRC_PREFIX + ":pass:" + irc_server_domain + seed
    return base64.b64encode(
        hashlib.sha3_256(msg.encode()).digest()
    ).decode()

I chose SHA3 to avoid length-extension attacks; Base64 encoding was used as a way to avoid breaking plaintext IRC protocol messages. The prefix portion is just a unique number that helped me out with testing. Otherwise, I couldn’t register accounts without generating a new seed. Using the IRC servers domain results in a unique password per server. The inclusion of the “:pass:” portion is used to differentiate the construct from other similar hash-based derivations (later username, nick, and email.) These fields are the same but use a more restrictive encoding to be compatible with the IRC protocol. The encoding for them is base36 (0-9A-Z).

Channel names for ‘domains’

At first my design to convert names to channel names looked like this:


def name_to_chan(dns_name):
    # base36_encode and hash160 are helpers from the author's library.
    return "#" + base36_encode(hash160(dns_name))

The reason for using hash160 is that there’s quite a small limit on the length of a channel name. I have found it can be reliably assumed to be at least 32 bytes. Hash160 returns 20 bytes that need to be encoded in base36 to work as a channel name. That increases the size by reducing the symbols that can be used per byte. This encoding scheme allows the digest to be stored in the channel name. But there is one more problem. Suppose you want to register a name and an attacker is watching the channel list. The moment they see your channel name they can quickly register the same name on the other IRC servers, potentially hijacking it.

The fix I came up with is inspired by so-called ‘decentralized exchanges’. In order to prevent race conditions in name registration: names will be uniquely determined by a time-lock encrypted nonce. The registering party can “save-up” the time-locks in advance so that every channel name can be registered at once. The channel names will further be derived per server so that attackers need to decrypt the initial time-lock to determine the masked name for registration on other servers: by which time the channel names will have already been registered by the names rightful owner. The construct is as follows.


def get_chan_name(name, tld, irc_server, optional_pw=""):
    msg = f"{name} {tld} {optional_pw} {irc_server}"
    time_lock = argon2(
        msg,
        salt="some custom salt",
        time_cost=2,           # Iterations.
        memory_cost=1024 * 2,  # KB needed.
        parallelism=1          # Threads needed.
    )

    return "#" + base36_encode(
        hash160(
            sha3_256(msg + time_lock)
        )
    )

Argon2 is a nice hash algorithm as you can adjust memory usage, time cost, and threading. The final output of Argon2 becomes the time-lock. Here sha3_256 is used again but this time inside hash160. The reason for this is stacking hash functions can help reduce the chances of finding collisions (you would need to find a collision in both functions.) Note the inclusion of the specific IRC server hosting the ‘name’ and the introduction of a TLD. The user can be encouraged to use anything for the TLD so this becomes a quasi-password.

If an attacker wanted to generate rainbow tables they would need the TLD used and Argon2 would massively increase the cost of generating such a table. The introduction of an optional password field allows for names to be masked with a password. Attackers would then need to know the password in addition to the TLD and name.

Channel topics to store values

IRC channel names translate the names in this system while topics store their values. Since this is a ‘threshold’ design where a sub-set of servers may fail the topic format needs to account for that. I initially chose a scheme that contained a version, ECDSA public key, record signature, and record portion. The good thing about the topic field is the characters it can support are quite extensive. In my tests every server supported unicode topics. However, I decided to use a more restrictive encoding scheme (using only the full range of characters on a standard keyboard) to ensure that binary data would be safely encoded across servers.


ecdsa_priv = sha3_256(f"{name}{tld}{pw}{seed}")
ecdsa_keys = ECDSA.generate(ecdsa_priv)
topic = " ".join([
    "p2pd.net/irc",
    base92_encode(ecdsa_keys.pub),
    base92_encode(ecdsa_keys.sign(record)),
    base92_encode(record),
])

Names are owned by public keys. Their ECDSA private key is built from a hash. Since it’s possible that a resulting key will be invalid, the private key is hashed until the result is a valid key. Shortly after this I remembered an old trick that allows the ECDSA public key to be recovered from a signature if you have the message. The recovery returns multiple candidate keys that need to be checked. The original public key can be determined by creating a list of public keys for each topic component and choosing the one that passes the validity threshold (of course, keys still need to be able to verify signatures.) Thus, I was able to shorten the topic portion to three fields.


def encrypt(msg, otp):
    # Stretch the pad by chained hashing until it covers msg.
    while len(otp) < len(msg):
        otp += sha3_256(otp).digest()

    # XOR the message bytes with the pad bytes.
    return bytes(m ^ o for m, o in zip(msg, otp))

record = encrypt(record, time_lock)

Above is a simple algorithm for a one-time pad that uses hashing to do key-stretching. It is used as a way to encrypt the record portion of the topic and signature portions without increasing the message size. The fields are encrypted with encrypt(field, sha3_256(“{field_name}:” + time_lock)) so that each field uses a different pad. The reason the signature portion is encrypted is that names across servers could be correlated by observing which names share the same public key (like mentioned above.) The security guarantee for this is at least tied to the time-lock provided by Argon2 across the name meta-data.

Handling expiry

When nicknames and channels are registered on an IRC server they usually expire if they’re not used. The duration that nicknames and channels stay active is generally between 12 and 60 days. This means that being able to monitor how close accounts and channels are to expiry is important. I’ve created a basic function that automatically handles expiry and refreshes everything. The refresh code also attempts to register names on servers that had previously failed (where servers may have temporarily been down.)

In IRC there are a few interesting mechanisms that might help prevent channels from being lost. (1) IRC supports ‘successors’ that gain control over a channel if the owner’s account expires. (2) Another option to help prevent channel expiry would be to utilize bots. This would be very easy to make low-trust. Naturally, names and accounts are automatically ‘refreshed’ when they’re used.

Mitigating disruption

I didn’t want to create something that may negatively impact IRC service. So my software takes a number of measures:

  1. The channels registered are set to private to avoid flooding the channel listing.
  2. Channel names are uniquely determined by an algorithm that requires CPU work to be done from clients to map names to values. This sets a limit on channel registrations and is required by the protocol for security.
  3. Channels will expire naturally due to chanserv optimizations and the software doesn’t aggressively attempt to refresh channels (refreshing nicknames and channels has to be done manually by the user.)
  4. The design means that lookups for names can be load-balanced across servers.

My software currently has few users so speculation about possible disruption remains theoretical. But if it should become a problem in the future I’d be happy to work with IRC operators to minimize abuse.

Usage for the software

That’s really all I have to say for now. I worked hard to build a prototype over the last month and have a basic library that can be used now. Details of the software can be seen at https://p2pd.readthedocs.io/en/latest/built/irc_kvs.html I’m also looking for a new role so if anyone is interested in working with someone who is reasonably skilled and can think outside the box hit me up.

My resume is here: https://github.com/robertsdotpm/resume/blob/main/resume.pdf

Merry Haxmas!



from Hacker News https://ift.tt/QiOhJnV

Saturday, December 23, 2023

Looking into the Stadia Controller Bluetooth Mode Website

With the end of Google's Stadia platform on January 18, 2023, Google published a website allowing people to "Switch the Stadia Controller to Bluetooth mode".

This seems pretty cool, but there are two points listed under "Important things to know" which I didn't like:

  • Switching is permanent
    Once you switch your controller to Bluetooth mode, you can’t change it back to use Wi-Fi on Stadia. You can still play wired with USB in Bluetooth mode.
  • Available until December 31, 2023

    You can switch to Bluetooth mode, check the controller mode, and check for Bluetooth updates until Dec 31, 2023.

While permanent switching is not a huge issue, since Stadia isn't available anymore and the Bluetooth mode is far more useful, I still wanted to have the option to switch back.
Since the Stadia Controller's WiFi approach is rather unique, I didn't want to just disable it and lose the option to look into it later.

 

But only one year to update the firmware and then you're stuck in "Wi-Fi mode" forever? I guess Google really wants to forget about Stadia forever, and get rid of the site after a year.

 

So I started looking into the switching process on the site, to try and avoid those limitations. I also reverse engineered some parts of the binaries hosted on the site, more about that later.


Analyzing the Bluetooth mode website

The JavaScript used by the site is minified, which means we don't get function and variable names. That doesn't stop us from seeing what it does and analyzing the packets with Wireshark, though.

Note that most of the flashing process seems to be standard NXP stuff, and only contains some minor adjustments by Google. 


The site uses WebUSB and WebHID to communicate with the controller. It filters for several different Vendor and Product ID combinations, to determine the state/mode the controller is currently in.


The switcher loads several files from the data endpoint, which we'll take a look at in more detail later. From taking a rough look at the files and the logs in the JS, the "Bluetooth mode switcher" actually flashes a firmware update to the controller. So from now on I'll be referring to this as "flashing the Bluetooth firmware" and the site as "flashing tool/site".

 

The site starts by checking the firmware revision and battery percentage while the controller is in the normal, powered-on mode, which is referred to as "OEM Mode".

 

OEM Mode

While in OEM mode, after plugging in the controller to the PC without holding down any buttons, the site communicates with the controller using WebUSB.


It starts by checking the first two bytes of the serial number from the USB string descriptor. There are some prefixes which are not allowed to be flashed. The serial prefix is also used to determine if this controller is a development controller (dvt) or a production controller (pvt).


It then retrieves the current firmware revisions using USB control request 0x81.

Firmware revisions less than 0x4E3E0 are referred to as gotham, while all later revisions are called bruce. gotham being the old Wi-Fi firmware, while bruce is the new Bluetooth firmware.


After that the battery percentage is requested using control request 0x83 and retrieved with request 0x84. This value is used to check if the controller has enough charge (more than 10%) to perform the flashing process.
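As a rough illustration, the two checks boil down to something like this (the constant names are mine; the 0x4E3E0 split and the ~10% threshold come from the site's behaviour as described above):

BRUCE_MIN_REVISION = 0x4E3E0   # below this: gotham (Wi-Fi firmware)
MIN_BATTERY_PERCENT = 10       # the site refuses to flash below this

def firmware_family(revision: int) -> str:
    return "gotham" if revision < BRUCE_MIN_REVISION else "bruce"

def charge_sufficient(battery_percent: int) -> bool:
    return battery_percent > MIN_BATTERY_PERCENT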


After all that info has been retrieved, the site asks us to unplug the controller and turn it off.

 

Bootloader

The site now wants the user to hold down the Options button, while plugging the controller back in. This will enter the Bootloader.

Not much to say about this mode. The site asks us to press Options + Assistant + A + Y while in the Bootloader, which will enter the SDP Mode.

 

SDP Mode

SDP (Serial Download Protocol) Mode allows sending several low-level commands to the controller.

The flasher uses WebHID to send and receive commands.
It starts by uploading a signed Flashloader binary (restricted_ivt_flashloader.bin) into the controller's memory (@0x20000000) using the SDP WRITE_FILE command.

It then jumps to the uploaded Flashloader binary (@0x20000400) using a JUMP_ADDRESS command.

The controller is now running the Flashloader.
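For a sense of what those two commands look like on the wire, here is a minimal Python sketch following NXP's published i.MX SDP framing (a 16-byte command structure sent over HID; the opcodes below are the standard NXP values, assumed rather than verified against the site's JavaScript):

import struct

def sdp_command(cmd_type: int, address: int, count: int = 0) -> bytes:
    # u16 type, u32 address, u8 format, u32 count, u32 data, u8 reserved
    return struct.pack(">HIBIIB", cmd_type, address, 0, count, 0, 0)

flashloader = open("restricted_ivt_flashloader.bin", "rb").read()
write_file   = sdp_command(0x0404, 0x20000000, len(flashloader))  # WRITE_FILE
jump_address = sdp_command(0x0B0B, 0x20000400)                    # JUMP_ADDRESS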


 Flashloader

The Flashloader is a bit more advanced than the previous modes. It can also receive and send several commands via USB, and the flasher site once again uses WebHID to send and receive those commands.

Google seems to have chosen a restricted version of this Flashloader though, since only a few commands actually used by the flasher are available.

Also, only a few small memory regions are allowed to be read and written using the WriteMemory and ReadMemory commands.


The Flashloader is used to actually write the new firmware into the controller's flash storage.

 

Detecting the MCU Type

The site starts by detecting the MCU type, reading from 0x400D8260. There are two supported types (106XA0 and 106XA1); if the detected type doesn't match one of them, it throws an error.

 

Detecting the Flash Type

Since different Stadia Controller models seem to have different flash storage types, the exact chip is now detected. The flash type detection takes a bit of an interesting approach.

To communicate with the flash storage a FlexSPI configuration block needs to be loaded and applied. To determine the flash type, the site retrieves the device ID from the flash. It starts by uploading a special configuration block for determining this ID (flashloader_fcb_get_vendor_id.bin) into memory (@0x00002000), and applies this configuration using the ConfigureMemory command.

This configuration block contains some sane values for the different flash chips, and also contains a lookup table (LUT) with different FlexSPI sequences which will be sent to the flash chip.

For the get_vendor_id configuration, the first sequence in the LUT, usually used for reading from flash, has been replaced with a Read Manufacture ID / Device ID command.

Now comes the interesting part: The site now directly configures the FlexSPI registers using ReadMemory/WriteMemory Flashloader commands via USB.

It configures the FlexSPI FIFO and sends the Read Device ID command from the LUT sequence.

It then retrieves the result from the first RX FIFO Data Register.

It seems like writing to and reading from those few FlexSPI registers is explicitly allowed in the flashloader.


Setting up the Flash Storage

Now that the flash type is known, the site can load the proper configuration block for that chip.

There are two supported flash types (Giga-16m and Winbond-16m).
To set up the Winbond chip, an entire flash configuration block (flashloader_fcb_w25q128jw.bin) is loaded and applied.
For the Giga chip, the flash is automatically configured by the Flashloader based on a simple configuration value (0xC0000206).
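A sketch of how the device ID read earlier might map to those two setups (the JEDEC manufacturer IDs below, 0xEF for Winbond and 0xC8 for GigaDevice, are standard values I'm assuming rather than ones taken from the site):

GIGA_CONFIG_VALUE = 0xC0000206  # simple config value from the post

def flash_setup(manufacturer_id: int):
    if manufacturer_id == 0xEF:    # Winbond W25Q128JW: full FCB file
        return ("fcb", "flashloader_fcb_w25q128jw.bin")
    if manufacturer_id == 0xC8:    # GigaDevice: simple config value
        return ("config-value", GIGA_CONFIG_VALUE)
    raise ValueError("unsupported flash type")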

 

Flashing the Firmware

Now that everything is ready the actual firmware flashing can begin.

After clearing GPR Flags 4-6, the site loads the signed target firmware image (<bruce/gotham>_<dvt/pvt>_a_<dev/stage/prod>_signed.bin) and parses some build info values from it.
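The naming scheme is regular enough to sketch in code (the parameter names are mine):

def firmware_filename(family: str, hw: str, channel: str) -> str:
    # family: bruce/gotham, hw: dvt/pvt, channel: dev/stage/prod
    return f"{family}_{hw}_a_{channel}_signed.bin"

So, for example, firmware_filename("bruce", "pvt", "prod") gives bruce_pvt_a_prod_signed.bin.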

It also determines where in the flash the firmware should be flashed to. To flash data the site sends a FlashEraseRegion command to erase and unlock the flash, followed by a WriteMemory command to write to the flash mapped in memory @0x60040000.

The IVT (Image Vector Table) is now flashed to @0x60001000 (only if the image contains one), and the actual firmware application gets flashed to the proper slot location (Application A / Application B).


Cleaning up

Now that the firmware is flashed, GPR6 is set to the proper application slot and a Reset command is issued to restart the controller.

And that's basically it, the controller is now running the newly flashed firmware.


Dumping the old Firmware

As mentioned in the beginning, it is not possible to revert to the old Wi-Fi firmware using the Stadia mode switching site, once the new Bluetooth firmware has been flashed.

While the site does seem to technically support flashing the old Wi-Fi firmware, and also has references to the firmware files required for it, all those files lead to a 404 and can't be downloaded.

So to preserve the old Firmware I had to dump it from the controller itself.

 

I tried to read from the flash memory region while in the Flashloader, which only resulted in errors. It seems like reading from flash is not allowed by the restricted Flashloader.

 

But I had another idea...

Remember that we have direct access to some of the FlexSPI registers, which are used to determine the flash type?

 

Instead of applying the get_vendor_id configuration block and sending the Read Device ID command, I tried applying the proper flash configuration and sending a Read Data command over the registers.

That surprisingly did work without any issues. I could now issue FlexSPI read commands via USB and dump the flash.

 

Since only reading the first register of the RX FIFO Data Registers is allowed by the restricted Flashloader, I had to dump the flash 4 bytes at a time, which took several hours.
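The dump loop itself is conceptually simple; here is a toy model of it (flexspi_read_word is a stub standing in for issuing a FlexSPI Read Data command over USB and reading the first RX FIFO register):

FLASH_SIZE = 16 * 1024 * 1024  # 16 MiB parts (Giga-16m / Winbond-16m)

def flexspi_read_word(offset: int) -> bytes:
    raise NotImplementedError("issue Read Data via the FlexSPI registers here")

def dump_flash(path: str) -> None:
    with open(path, "wb") as out:
        # 4 bytes per USB round trip -> ~4.2 million reads for 16 MiB,
        # which is why the full dump took hours.
        for offset in range(0, FLASH_SIZE, 4):
            out.write(flexspi_read_word(offset))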

At the end I had a full dump of the Stadia controller flash though!

Finishing up

During testing I started reimplementing parts of the site in Python as a tool I called stadiatool, which also allowed me to mess around with the Flashloader commands.

After dumping the flash, I extended the tool to allow flashing the firmware as well.

Note that this was a pretty quick project which is why the code might seem rushed.

You can find the GitHub repo here.


That's it for now, I might take a look at analyzing the firmwares themselves next.


Special thanks to cmplx for some help while analyzing this and for listening to my random ideas!




from Hacker News https://ift.tt/cvu6XlE

Kids with chattier parents are more talkative, may have bigger vocabulary

We couldn’t extract the content of this article. Here is the URL so you can access it:
https://www.science.org/content/article/kids-chattier-parents-are-more-talkative-may-have-bigger-vocabulary



from Hacker News https://ift.tt/NHRGqbJ

When the Power Macintosh Ran NetWare (Featuring Wormhole and Cyberpunk)

This entry and the software we'll demonstrate is in large part thanks to an anonymous Apple developer who was part of the NetWare team. Thank you!

Ah, Novell NetWare, the network operating system of the 1990s. Nothing was quite like it. Until Windows NT muscled in on its action near the end of the decade, if you were sharing lots of files between lots of PCs, NetWare was in there somewhere. My earliest memory of NetWare was stealing, er, liberating a copy of Borland Turbo Pascal from a campus NetWare 3.x server around 1993-4, which, because God has a sense of humour, was later the University I ended up working for.

Why, the very mention of NetWare almost certainly caused those of you familiar with it to get an instant mental image of MONITOR.NLM, like this as shown in Bochs:

But Novell wanted NetWare servers to be more than just PCs (and the PC ecosystem to be more than just Microsoft), and in an attempt to gain footholds elsewhere the company accumulated some strange bedfellows. HP, Sun and Data General were on board, and IBM signed on in grander form, but surely the most unexpected company Novell tried to court was ... Apple.

Yes, that screenshot is a real Power Macintosh 6100 (actually a Performa 6116CD) in the Floodgap lab running exactly what you think it's running. As a matter of fact, that Apple logo superimposed on the '90s Novell "teeth" I led off with was a real resource image that came from it.

No, I don't mean Macintoshes accessing NetWare servers as clients: I mean Macs as NetWare servers themselves. As proof, we'll take an entire tour of Power Macintosh NetWare on the 6116CD and try to boot it on the Apple Network Server, its actual intended target. NetWare on the Mac really existed as part of the same bizarro universe that ported the Macintosh Finder to Novell DR-DOS — meaning it's time for yet another weird Apple story during Apple's weirdest days.

For as associated with PCs and DOS as NetWare ended up being, the first version of what would become NetWare didn't even run on x86. Novell was originally founded as Novell Data Systems, Inc., in Orem, Utah in 1980. They initially sold their own Z80-based CP/M hardware, which wasn't very lucrative as market competition grew, and as a means of distinguishing their offerings Novell management hit on the idea of networking the machines together. For this work the company contracted with SuperSet Software in 1981, founded by a group of Brigham Young University alumni (notably Drew Major who went on to become Novell's chief scientist). They developed S-Net, short for "ShareNet," a Motorola 68000-based server running CP/M-68K able to share data between CP/M clients. As CP/M marketshare started to shrink, S-Net was retrofitted to allow it to also serve MS-DOS PC clients and became the first version of NetWare in 1983.

This box shown here was actual "68B" server hardware Novell sold in the mid-1980s. In early networks client nodes were connected in a star arrangement, moving to more complex topologies as networks increased in size, but primary communication with the server both then and later was via the NetWare Core Protocol (NCP). Those nodes that were directly wired to an S-Net box communicated via RS-422, but larger networks transported NCP over the IPX and SPX protocols derived from Xerox Network Systems' IDP and SPP, which are roughly analogous to IP and TCP. (In 1991 Novell added TCP/IP support directly.) NCP supported file and print services, but unlike many contemporary server products, NetWare shared at the file level rather than the disk level. Instead of a client having to pull blocks from a remote volume to find a particular file or folder, a client simply asked the server directly for the resource, and the server did the lookup locally. Transactions were thus faster and accordingly more granular. Applications like NetWare's simple electronic mail facility were layered on top of the basic infrastructure.

To broaden the market further Novell ported NetWare away from their custom 68000 hardware to 8086 PCs in 1984, becoming NetWare 86 (the 68000 version duly became "NetWare 68"), and then Advanced NetWare 68 and 86 in 1985 which could support more than one server. The PC versions employed a custom NetWare File System (NWFS) format for NetWare partitions that allowed larger capacities and faster access than DOS FAT. (In fact, NWFS was a substantially modified form of FAT.) They were generally hardware-agnostic and any PC with sufficient memory would suffice if NetWare drivers existed for its storage and network cards. As NetWare was the best known network operating system at the time, most enterprise hardware companies supported it. A fault-tolerant version was also provided for those applications requiring high reliability. After the 80286 became available, in 1986 Novell released Advanced NetWare/286 (becoming NetWare 2.x), supporting up to 16MB of RAM in '286 protected mode and disks as large as 256MB (compared to the contemporary DOS limit of 32MB).

Novell also developed a VAX-specific NetWare 2 for VMS in 1988 which transparently interchanged files and allowed PCs to access VMS printers. Unlike the PC version, where NetWare assumed control of the entire machine, VMS NetWare ran on top of VMS. We'll come back to this momentarily.

Meanwhile, NetWare/386 (i.e., 3.x) in 1990 expanded PC NetWare further: part of its performance dominance came from aggressive caching in which the NWFS volume(s)' entire directories were kept in RAM, but this also meant the 16MB RAM limit in 2.x constrained how large its volumes could be. Taking advantage of 80386 32-bit protected mode for even more memory, 3.x's upgraded NWFS could also support drives as large as 1TB and files up to 4GB in size, in addition to introducing a much more efficient routing protocol for large networks. Additionally, it no longer cold-booted directly into the operating system, relying on MS-DOS to serve as a bootloader and repository for the kernel and critical system files, after which the NetWare kernel would take over as usual.

NetWare 3.x also introduced the concept of the NetWare Loadable Module, or NLM. In previous versions of NetWare, the kernel was monolithic, and changing options or drivers required relinking the kernel and restarting it (though this was also true of many other contemporary operating systems). In 3.x, drivers, system extensions and even applications could now be loaded and unloaded dynamically while the server was running, including third-party tools. However, NLMs still ran within the kernel's address space. Although 2.x and 3.x supported protected mode, NetWare largely used protected mode merely as a means to access more memory, and for maximum performance the entirety of the operating system and all NLMs loaded within it ran cooperatively multitasked at ring 0 (supervisor).

This architecture made NetWare much faster than other server operating systems, but at a technical cost. While it was possible to load NLMs into separate protection domains from the kernel, this was infrequently done in practice because cross-domain calls would then incur RPC (i.e., remote procedure call) overhead, and doing so didn't ameliorate all the possible pitfalls regardless. As there was no support for virtual memory, everything had to fit in physical RAM and there was little to prevent a long-running NLM with leaks from exhausting available space. Likewise, a malfunctioning or badly-written NLM could simply hog the CPU and hamper performance — or worse, trigger the dreaded ABEND (ABnormal END) system crash and/or lock up the machine, just like any "regular" kernel bug could.

Parallel to canonical NetWare was a Novell initiative called Portable NetWare in 1989 (we talked about this briefly before when I went through the story of how the Apple Network Server ended up running AIX), a descendant of NetWare for VMS developed jointly on the same codebase that would become 3.x. Unlike regular NetWare where NetWare itself was the kernel — but like VMS NetWare — Portable NetWare ran NetWare services on top of some other host operating system, and the host OS provided the device drivers, file system and cache. The source code was high-level, architecture-independent and POSIX-compliant.

Novell announced twenty-three vendors planned to support Portable NetWare, starting with Prime Computer on their 80386-based EXL 816 server running SVR3 and NCR on its 68K-based Tower/32 series, and Novell also listed Data General, Nortel, Sun, TOPS, and Unisys as leading partners along with Acer, Altos, Cubix, Datapoint, Harris, Hewlett-Packard, Intel, MIPS Computer Systems and Zenith. Novell themselves claimed they were working on their own in-house port of Portable NetWare to IBM OS/400 and that VMS NetWare would officially become a Portable NetWare flavour too (after all, it already was, more or less). IBM and Novell inked a joint licensing deal in 1991, additionally bringing Portable NetWare to IBM AIX (named NWserver) and OS/2.

In 1992, Apple announced its own deal with Novell. Not only would Macintosh users get full access to NetWare services, but the Macintosh Finder would become the front end for NetWare and Novell's DR-DOS — from Novell's recent acquisition of Digital Research, a nice bit of symmetry with NetWare's CP/M origins (despite Apple suing Digital Research in 1985 for GEM ripping off the Mac). Apple also planned to port Apple Events, QuickDraw GX and other key libraries to DR-DOS, yielding a hybrid Mac-on-DOS system that both companies believed would enable them to jointly take on Microsoft. (Novell CEO Ray Noorda even conducted talks with Apple CEO John Sculley about merging the companies, though nothing would come of it.) This was the Star Trek project, going "where no Mac had gone before" (to Intel), and was even quietly supported by Intel themselves who provided '486 systems for the porting work. The prototype came together in just over three months, including its own functional port of QuickTime, but PC manufacturers were uninterested due to their restrictive royalty terms with Microsoft and industry figures openly derided it (Bill Gates himself called it "lipstick on a chicken"). The project was controversial internally as well, and after Sculley's departure incoming CEO Michael Spindler killed it in 1993 as a potential threat to the upcoming PowerPC transition.

That didn't mean, however, that Spindler wanted nothing to do with Novell or NetWare. Apple's server group still needed a server-grade operating system (which the classic Mac OS could not satisfy), and while PowerOpen — in the form of "new A/UX" — at least at the time remained the eventual future strategy, neither it nor Pink, the intended successor to System 7, were anywhere near ready. With that context in mind, a proven popular server operating system Apple could simply slap on top of MacOS was a thoroughly plausible fallback. In 1993 Apple started work on its own internal port of Portable NetWare to run on top of Mac OS, variously codenamed Wormhole and Deep Space Nine, which used some of IBM's work on AIX NWserver but was based on System 7. Wormhole was intended for the PowerPC 601-based Green Giant, i.e., what would become the Workgroup Server 9150, but it met a poor reception with testers who wanted a Unix-based platform instead. As such, upon its release in April 1994 the Workgroup Server 9150 ended up just running MacOS like the contemporary Starbucks WGS 6150 and 8150 systems (and every other Workgroup Server to that point, the AWS 95 notwithstanding).

In the meantime, initial industry interest in Portable NetWare was waning, the implementations that did exist were slower and didn't always take good advantage of their host environments, and the core PC version could not run on the RISC servers that then dominated enterprise IT. Against this backdrop NetWare 4.x also came out in 1993, introducing central directory management (NetWare Directory Services) based on X.500, but also doing away with the Portable concept. Instead, the NetWare kernel itself would become cross-platform to yield Processor Independent NetWare (PIN), starting with a port to the DEC Alpha and an HP-sponsored one for PA-RISC. Like PC NetWare, after running the bootloader PIN would take over completely, running directly on the metal and serving everything natively from an NWFS volume.

With the release of Starbucks and Green Giant in April 1994, simultaneously with Novell announcing support for IPX under OpenTransport, Spindler (alongside the long-promised development of Apple's "Unix PowerPC server") vowed at least one Apple system would support Processor Independent NetWare too. This project was codenamed Cyberpunk.

Cyberpunk got as far as actual CDs being pressed for it, and this is one of them, as provided by our anonymous Apple developer (marked Apple Registered Confidential, so I've blacked out the hand-inked CD number even though it almost certainly doesn't mean anything anymore). Despite the label reading ".1", the disc actually contains a demo dated July 20, 1994, "r[evision ]5," and the disc itself is dated August 1994 based on the etching on the inner ring. It contains a bootable Mac OS 7.1.2 (the first version of System 7 to support Power Macs), a disk utility for cloning and partitioning, a disk image for installation, the bootloader (a MacOS application), the NetWare kernel, documentation, and other sundry utilities and client tools. We'll look more into the CD contents a little later.

At the time of its development there was no self-hosted Power Mac compiler, so the source code was written on a Macintosh but actually compiled using IBM xlC on RS/6000 AIX. (Recall that 32-bit AIX and classic PowerPC MacOS share the same PowerOpen ABI because of their interrelated history, and early in the Power Mac's existence the Macintosh Programmer's Workshop [MPW] also directly supported the AIX XCOFF binary format.) Although documentation on the disc references the 9150, the version of Mac OS 7.1.2 on the disc predates the 9150 and isn't bootable on it, so a fair bit of its development actually occurred on Piltdown Man (a/k/a PDM, i.e., the 6100). However, the documentation also makes reference to Cold Fusion (the 8100), which was introduced at the same time as the 6100 and 7100, treating the 8150 as the "standard" demonstration system. As the Workgroup Servers 6150 and 8150 are otherwise hardware-identical with the hoi-polloi 6100 and 8100 and the 6100's various Performa rebadges, it works with them too.

That said, those machines were merely stopgaps for development. The real target was the Shiner prototype, that big enterprise server Sculley and then Spindler had long promised (viz., what would become the Apple Network Server), and the developer remembers Cyberpunk booting directly on early versions of the hardware. We'll look at that near the end. For now, let's first get it running on our local Performa 6116CD, the last and questionably mightiest of the Performa 6100 clones.

In general terms the Performa 611xCD series are just basic 8MB Power Macintosh 6100/60s with differing hard disk sizes and pack-in software bundles. The first Mac I ever personally owned was almost a Performa 6100 something-or-other but the seller wanted $200 for it (in 1999, about $350 in 2023) and I was a starving med student at the time, so I ended up with a cute little IIsi instead which I got for the price of "take it." Recapped, that IIsi is still in my collection and still works. This one I got from a seller about a decade-ish ago in the L.A. Antelope Valley who had pimped it out with a Sonnet G3 CPU card, a faster CD-ROM, a nice fat SCSI drive and 136MB of RAM. I nabbed it with the intention of it being the replacement AppleShare server when the NetBSD IIci bit the dust, but it didn't bite the dust and continues truckin' to this day, while my own hopped-up "SR-7100" became my preferred 601 system instead because it has more expansion options. I can't remember why I didn't take this to the Vintage Computer Festival consignment table when it was just sitting around, but boy am I glad I didn't, because getting this article off the ground might have been more difficult without it.

For the purposes of this demonstration, however, we're going to strip out all the previous owner's carefully added upgrades. I removed the hard disk for safekeeping and attached a BlueSCSI v1 to the external SCSI port so that I could backup the disk image when we were done (this turned out to be advantageous in other respects too). Naturally the Sonnet G3 had to come out of the PDS slot, and I also ended up having to replace the two 64MB RAM SIMMs with two 8MB ones for a reason we'll get to in a moment. I dug out an AAUI transceiver for the MACE (AMD AM79C950) Ethernet port and an HDI-45 to Mac DA-15 dongle for the Ariel II video, but my INOGENI VGA capture box only likes 60Hz video, so I had to tack on a Mac multisync adapter set to force a 640x480 VGA compatible display as well. We fire it up and insert the demo CD, holding down Command-Option-Shift-Delete to force alternate boot.

Happy Piltdown Man!

The "Welcome to Power Macintosh" message is unique to System 7.1.2 on PowerPC; no other version prior or since displayed it.

At the desktop, successfully having booted from the demo disc. The BlueSCSI volume is named Cyberpunk, but although BlueSCSI's compatibility has greatly improved, the disk utility on this CD reported an error and refused to initialize it. I ended up connecting it to my Mystic Colour Classic and running the trusty patched version of HD SC Setup 7.3.5 to initialize the disk for it first. At this point in installation the Cyberpunk volume is completely empty.

In the root folder of the demo CD is an original 7.1.2 System Folder which can boot the 6100, 7100 and 8100 (but not the 9150, tried and failed), various System installers (which we don't need for this), a basic Utilities folder with Disk First Aid and Disk Copy (not much use here), the actual NetWare Demo files and tools, and a copy of ClarisWorks 2.1 used for opening the files in the "Read me first" folder, which automatically opens to show its contents.

There are four documents on the disk specific to Cyberpunk, three of which are here: one that talks about how to set up the server, PC client and Mac client for Apple's demo ("Do not give this disk to non-Apple employees without specific permission of [redacted] (408-XXX-XXXX)"), one that actually contains the script for that demo (referencing the 9150, which this disk most certainly does not boot on), and one that talks briefly about the technical underpinnings and how the hard disk partitions work. Here we'll simply open the technical-partitions document which we will liberally refer to in this article.

Starting ClarisWorks, which was an Apple-site licensed copy.

I've redacted the author's name, but this person knows who they are. I converted these documents from ClarisWorks to PDF by loading the fonts onto my MDD G4, opening the documents there in ClarisWorks, printing each page as a TIFF using Print2Pict, and then binding the TIFFs together into a PDF with ImageMagick on my Linux workstation.

The meat is in the NetWare demo folder. Within that folder the "pc files" and "MAC CLIENT" folders contain their respective client software installs (as well as a Windows version of ClarisWorks), but I should parenthetically note that regular PC NetWare 4.x looks like an AppleShare server to Macs on the network and ordinarily no separate client software is necessary. (This functionality was sold separately prior to 3.12 and from 5.x on; 3.12 came with a miniature AppleShare server with five licenses, but 4.x has no such limitation.) However, Cyberpunk doesn't appear to implement this support, and it's not clear if the final version of Apple NetWare would have done so anyway, because it might have cannibalized Apple's own file server sales.

Instead, what this software actually enables is letting Macs access NetWare servers over NCP (via MacIPX), i.e., a true NetWare client implementation for MacOS, and that particular functionality was what was shown to users in the demo. It includes its own kit of software to install and a demo document, in PC ClarisWorks 1.0 format, for the demonstrators to "collabourate" on for the audience's benefit.

The demo document was the actual Apple-Novell joint press release (titled "Greased Lightning") dated April 25, 1994 from when the Workgroup Servers were first released. You'll notice there is no Apple logo: part of the demonstration was to plunk the rainbow Apple logo into it and show that the edit was reflected on the other machine. Because this was just for show, the text was not paginated out fully, which I've done in the wider image so you can read the entire contents and see I haven't made all of this up. (I've got a photograph later, too.)

While Spindler is quoted, you'll also notice Ray Noorda isn't. That's because by then he wasn't Novell CEO anymore. We'll loop back around to this at the very end when we finish our story.

The version of the Mac client on this CD calls itself MacNCP and is tagged as a pre-alpha, and includes its own system folder as an insurance policy "in case anything goes wrong" (per the setup document).

This pre-alpha is a bit different from the later production version; among other changes, it doesn't seem like the Bindery Chooser (a Desk Accessory similar to the regular Chooser allowing you to browse NetWare's network database) made it into the release. While this client did subsequently become a formally released product, its development costs didn't really translate into NetWare sales (Cyberpunk's subterranean existence notwithstanding), and Novell jettisoned it after only a couple versions to a third-party developer. Since we're mostly just interested in the server software, we'll open up the third folder.

This folder contains all the pieces needed to install the demonstration NetWare server package. Starting at the top and going clockwise, we have a copy of a bootable 7.1.2 System Folder (again, except on the 9150) with the blessed set of INITs and CDEVs to get it rolling, an application called "PDMLoader" and a file named "NWStart" which from their icons are obviously related, the PowerPC version of INSTALL.NLM (i.e., the NetWare Loadable Module used for installation and configuration) marked for "emergency use only," and two other files also obviously related based on their icons named "PartitionMgr" and "Demo7/20.nsa."

In the centre is an application named Apple NetWare Setup marked with the same icon as used for the regular HDSC Setup hard disk utility. We'll start there.

Both the icon and the main screen clearly show that Apple NetWare Setup is descended from HD SC Setup, and even claims the same general version number (7.3) — and likewise inherits its same refusal to work with disks that are not Apple-ROMmed. The BlueSCSI v1 is just alike enough to an Apple ROM Quantum Fireball that NetWare Setup will accept it as a valid target and partition it, but it will not initialize it, which is why we did that step separately. Although I also tried with an OEM Seagate Hawk and Quantum Atlas II, NetWare Setup wouldn't even talk to them, just like unmodified HD SC Setup won't. The application should be hackable to allow third-party disks but there is no wfwr resource to change, meaning I'd likely have to dive into the CODE resources instead, so we'll proceed with the BlueSCSI since it works.

The ClarisWorks setup document says it requires at least a 160MB hard disk, so I created a 1.0GB disk image on the BlueSCSI and selected the largest "250Meg Mac + NetWare" pre-defined partition scheme (the setup document says use the "80Meg Mac + NetWare" option but this one works too). There is a bug here that it calls that scheme a "40Meg Mac" in the blurb box but it will indeed create a 250MB partition for MacOS and use the rest for NetWare, which is what we want. I didn't experiment with any custom partition sizes since I wasn't sure how this alpha demo would react to them.

Ordinarily, the partition scheme for a single one-volume Power Mac disk would consist of an Apple partition map (APM), various drivers and OS patches (as needed and described in the device's first sector), the actual HFS volume, and then any unallocated space (typically trivial). In Cyberpunk NetWare, however, the portion after the HFS volume starts with an XCOFF binary called NWstart, which we saw in the server install folder, and is in fact the NetWare kernel (more later). This seems to have been intended for Open Firmware-based systems (in this case Shiner) that would boot from it like a more typical "partition zero" loader, but this partition is unused on the supported NuBus Power Macs which don't have Open Firmware.

After NWstart comes the NetWare System Area, or NSA (not to be confused with those naughty little SIGINT spies), which only NetWare Setup knows how to create. The Mac sees this as one big partition in the APM, but NetWare divides it first into an INWDOS subpartition where the system files live (like C:\NWSERVER on PC NetWare), and then all the NetWare volumes follow in NWFS format, starting with SYS: (the primary). Those partitions will be created shortly; we'll make the Apple native partitions first.

Partitioning on the BlueSCSI is very fast, a matter of seconds. Once done, we can now quit the NetWare Setup utility ...

... and, after dragging that System Folder copy to the new Mac partition, we start up from the Cyberpunk volume.

The second step is to do the installation. With a NetWare 4 CD you'd run the installer from the disc, but we don't have that. What we do have is an image of the demo NSA, and now we'll drag that onto the PartitionMgr tool to copy it to disk.

Startup box, giving its full name as (uninterestingly) "Partition Manager" and reminding us that it is, in fact, Apple Confidential (not to be confused with the Owen Linzmayer book).

The partitioning document explains that PartitionMgr will scan the available disks for one with an available NSA partition type in the APM. There is only one such disk in the system, so of course we select that.

Given an NSA image and a valid target partition, two install modes are possible: if the INWDOS segment gets corrupted (a real risk with alpha software) or needs to be updated, you can just do that without whacking your NWFS volumes. However, we don't have any NetWare volumes to preserve and need to create them to continue, so we go the second route to (re)format the entire NSA.

PartitionMgr creates two subpartitions in the NSA, as shown in the upper pane of the window. The partition types are those that would be used in a PC master boot record. Type 0xbb refers to the INWDOS segment and seems to be unique to PIN; type 0x65 is the standard partition type for an NWFS/386 filesystem and represents SYS:.

After the NSA is formatted and divvied up, the files in INWDOS are listed ...

... and PartitionMgr terminates cleanly.

If we start PartitionMgr up without an image, it will let you access what's already present (as explained in its About box, which reminds us that it is — you guessed it — Apple Confidential).

The lower pane is a scrolling view of the files in INWDOS. Because not all the NLMs had been ported to PowerPC yet, and of course practically none of the PC drivers would be relevant, this is a much smaller set than you might find in a regular production NetWare 4 installation. The NSA image contains its own SERVER.MLS license file (for 250 connections: even I don't have 250 NetWare-capable computers), but old NetWare hounds will wonder why there's no INSTALL.NLM other than the separately available emergency one. We'll get to that part later on when we talk about FIXUP.NLM, which is present.

The NWStart menu is non-functional in this version, but the NSA menu does work, and allows you to reformat the NSA, install another image, or copy and retrieve files from INWDOS for spot changes (or singly installing additional NLMs). There is no analogous option to handle files on the NWFS volume(s) because you can of course directly access such files when the server is up.

At this point we're finally ready to boot Cyberpunk. For convenience and so I could run this later without the CD, I copied the last two pieces to the Mac partition (PDMLoader and NWstart).

You can just drag or double-click NWstart, but I'll start up PDMLoader separately first to show you around a little. The name is clearly an acronym for Piltdown Man (the 6100's code name), but the setup document, which proffers the WGS 8150 as the standard demonstration system, also uses it (i.e., there's no Cold Fusion Loader, and certainly no Carl Sagan Butt-Head Astronomer Lawyers Are Wimps Loader since there wasn't ever a WGS 7150). As the systems have very similar hardware and chipsets, and identical ROMs, what works for one works for the others (again, except the 9150, which is slightly different).

PDMLoader will ask which disk has the NSA we want.

After that, it will ask for the NetWare image we want to run. This is where NWstart comes in, which we previously mentioned is in fact the kernel.

NWstart is a 3.4MB XCOFF binary that /usr/bin/file identifies as "executable (RISC System/6000 V3.1) or obj module". Strings within the file give it a 1993 copyright date (by both Apple and Novell), and it is not stripped, so it still has debugging symbols showing it to be written at least partially in C++. Although there is no accompanying xSYM debugger file, which will shortly be relevant, it also carries an entire on-board debugger within it (QDB/601 version 1.0d4.1 (10/15/93), not to be confused with the Python debugger or the QNX debugger). The function symbols reference C++ types starting with obvious NetWare-specific names like NSI*, such as NSIConfiguration and NSIAlloc, and appear to contain the basic low-level code for memory management, Ariel II display and screen control (with pieces of an xterm termcap entry), CUDA and ADB keyboard support (actual string: ouch! what you interrupting me for man?), queues and synchronization, and file, volume and disk management.

In this version of Cyberpunk, the only supported way to bring up NWstart is through the loader, analogous to things like the MkLinux boot extension, since NuBus Power Macs can't boot either directly. On the other hand, NWstart has strings within it referencing AAPL,PPC6100 AAPL,PPC7100 AAPL,PPC8100 and AAPL,PPC9150 and even AAPL,PDM. These are Open Firmware identifiers that never appeared as such in any production version of those machines or their ROMs (the 6100, 7100, 8100 and label variants use the same ROM), and they may explain one of the filenames (FakeOpenBoot.cp) as glue code to present a synthetic Open Firmware device tree to the kernel on systems that don't have one. Indeed, the kernel also seems to look for Open Firmware paths like /chosen, /mmio and /cpu (specifically the timebase-frequency and clock-frequency properties) despite the fact that none of the systems here implement those either. Notably, no other model identifiers appear in this file — including AAPL,ShinerESB for the ANS, which does have Open Firmware, and wouldn't need such trickery.

The only credits string within it says Written by: Drew Major, Dale Neibaur, Kyle Powell, Howard Davis — who developed NetWare at Novell, not Apple. Many functions appear with "stubbed" messages and many strings still reference NetWare 386, so the work on the kernel was clearly unfinished.

If we click the Preferences button, we can select "targets" for the NetWare console and for the two, count 'em, two supported debugging options. We're going to run Cyberpunk on the 6100's built-in Ariel video (NuBus video cards are not supported), but it is also possible to redirect it to either the printer or modem port (19200bps 8-N-1, but make sure LocalTalk isn't on). QDB, the onboard debugger, can also run on the console simultaneously or be redirected to either of the serial ports as well (same speed).

The other supported debugger is R2Db, standing for "RISC Two-machine Debugger." Like regular MacsBug of the time it was a systemwide low-level debugger, but could also do source level debugging with an xSYM, which I mentioned we don't have. (As debug symbols are present in NWstart, it should be possible to generate it from the XCOFF binary in certain versions of MPW with something like makesym -o NWstart.xSYM NWstart but this exercise is left to the reader.) The name comes from the fact the interface doesn't run on the Power Mac: it runs on an attached 68K Mac running the R2Db client which came with contemporary builds of MPW, and was tethered to the Power Mac using a standard Mac printer cable (i.e., null modem) in any free serial port. On the Power Mac side a component called the PPC Debugger Nub communicates with the 68K Mac, accepting commands and reporting exceptions, breakpoints and machine state. According to our anonymous developer, much of the debugging work was done in R2Db.

QDB is more convenient, however, and for purposes of demonstration it is more than sufficient. We'll tick the box to enter QDB on startup and click Accept.

Finally, we select NWstart and click Select to start the kernel. Once the kernel successfully comes up, there's no returning to Mac OS without restarting the Power Mac.

At this point, I personally experienced two things that can go wrong with the boot process.

The first is memory size. Remember how I said that I needed to reduce the 136MB of RAM present to 24MB (i.e., 8MB soldered on the motherboard and two 8MB SIMMs)? The reason is that PDMLoader can't handle an address space that large, and I meekly admit the setup document even says so: the standardized system is a "Power Mac 8150 (8100/80) with 24 mb of memory - exactly". I figured this wasn't current information since it also happily accepted my 1.0GB disk image despite no version of Cyberpunk probably ever having run on a disk that large. Well, no. If you ignore their warning, you'll get this message, and PDMLoader will exit.

Assuming you have the right memory configuration, the Cyberpunk screen appears (this is a PICT resource that I extracted to show you earlier). The anonymous Apple developer commented that they had T-shirts of both this and Deep Space Nine. At the behest of Apple's attorneys (remember that Lawyers Are Wimps) the development team was obliged to contact Paramount to see if licensing was required for the DS9 tees; Paramount asked how many they planned to make, and laughed and said fine when they estimated around twenty-five. Notice the version number at the lower right (1.0.0d21).

The first boot stage puts small coloured dots at the upper left as a progress indicator. As the kernel bringup proceeds, a blue dash acts as a cursor, moving to the right leaving white dashes over the dots that have been completed. After a couple go by, we drop into the QDB debugger.

The breakpoint triggers in NSIConfiguration::InitializeQDB, a C++ method implemented expressly for this purpose. QDB is a basic debugging tool that operates at a relatively low level, but its main virtue is convenience, since it's ever-present and can run on the console without needing a sidecar serial terminal. Now that we know it's there and functions, we continue execution.

If you proceed through the bringup sequence successfully, you are rewarded with the message "in Loader PreludeStart" (which looks like it's part of the FakeOpenBoot), after which it jumps into the kernel via NSIStart.

The second way I found the boot can go wrong is if that message doesn't show up and the kernel hangs. If the blue boot cursor freezes before the last purple and yellow dots, and the PreludeStart message doesn't appear, the most likely cause is the bootloader doesn't like your hard disk. The setup document mentions something like this could happen with certain IBM-manufactured hard drives, though it also claimed it would do so after a whole bunch of messages are printed that we don't see here (presumably when the kernel has actually started), and that the issue was fixed for this alpha.

In my case, however, it happened with the BlueSCSI. My initial solution was to try those other non-Apple-ROMmed real SCSI drives, only to find out that NetWare Setup didn't like them and wouldn't partition them. The real fix, at least for the BlueSCSI v1, is to update the firmware: I suspect the change in v1.1-20220917 to improve compatibility with SCSI phase changes was responsible, but you should run at least v1.1-20231116, which is what we're using here. With that, bringup will complete and the kernel will begin execution.

We'll get lots of fun informational messages to speculate on, so pardon the surfeit of screen grabs to follow. I've tried to capture each of the messages it will display in at least one grab.

The first message gives the exact version of the kernel: "Novell NetWare Prototype v4.11 - Alpha 1.5" dated July 19, 1994, hot off the press on this July 20 disc. This version is notable because 4.11 was the first release of NetWare to support symmetric multiprocessing. While the PowerPC 601 is perfectly capable of SMP, and various non-Apple systems implemented it, no 601-based Power Mac ever shipped with multiple CPUs. Shiner, on the other hand, was intended to have an SMP option and prototypes were developed (hint to any ex-Apple employee that still has one: my working ANS 500 with AIX would love it and I'm happy to deal), so the kernel would have hopefully been ready as soon as the hardware was.

The font used is Monaco, in its original bitmapped monospace form, which gives Cyberpunk more of a Mac feel than Harpoon AIX on the ANS (which uses the default AIX LFT console "Ergonomic Medium" font) or MkLinux (which uses icky default PC VGA glyphs). It certainly goes a bit easier on my eyes than regular PC NetWare, that's for sure.

Loading device drivers for the 6100's NCR 53C94 SCSI controller and two devices, SCSI ID 1 (the BlueSCSI) and ID 3 (the CD, which was still mounted). The "restricted API call" warning occurs a lot in Cyberpunk. It's an alpha, after all.

Checking the SYS: volume.

Bringing up the time and setting the server name (NW_DEMO) and internal IPX network number.

Loading the network drivers (I don't know what the message about REAKONSTART means). The driver's filename APMAAMBI.LAN clumsily expands to "Apple MACE-AMIC Built-In" Ethernet (MACE is the on-board AMD Ethernet AM79C950 PHY; AMIC is the 343S0801 Apple Memory-mapped I/O Controller which handles most of the system's DMA and I/O).

Now into loading the NetWare components, starting with the Authentication Tool Box, NetWare 386 Policy Manager and Unicode Library. This library is quite possibly the first built-in support for Unicode on any Apple-developed operating system (Unicode was not officially supported until Mac OS 8.5 with the introduction of ATSUI).

Despite that, the system reports "code page 437" (i.e., DOS Latin). Next up is the loader for NetWare Directory Services, Time Synchronization and the User Utility.

But this is an Operating System From The Future. No matter what you set your Mac's clock to, when Cyberpunk boots it will always be Saturday, March 12, 2072 at 2:56:48pm Pacific Standard Time (for comparison, the classic Mac OS's time value overflows at 6 February 2040 at 06:28:15 UTC), so you needn't worry about the PRAM battery being out. We are duly warned that "[t]his is a PROTOTYPE version of a future NetWare product. It may not be DISCLOSED, REPRODUCED or DISTRIBUTED in any form."

And after the license information is displayed, the kernel is running, and we are at the command prompt. Welcome to NetWare ... on the Mac.

Now, it wouldn't be NetWare without running the monitor, which makes a good segue into talking about NLMs in Cyberpunk.

To this day Novell's Tech Center still has documentation on NetWare Development on the PowerPC due to the Cygnus port they sponsored, but this specific document is actually about Cyberpunk and even mentions the Workgroup Servers explicitly. Novell blessed three ways to compile NLMs. If you didn't have a Macintosh, you could create them either with the IBM cset C++ compiler (xlc under the hood; contemporary versions of straight-up xlc would also work, and were upwardly compatible) on RS/6000 AIX, or Cygnus' cross-compiled PowerPC gcc on Novell UnixWare, both of which will generate XCOFF objects. Otherwise, you compiled on the Mac using contemporary versions of MPW, which included Apple's PowerPC C compiler PPCC, and turned that object into an XCOFF with PPCLink.

Because you're not writing for MacOS, an SDK with headers and objects specifically for NetWare for PowerPC was made available, though this SDK is not on the Cyberpunk demo CD and appears to be lost. Regardless, once you had your XCOFF binary, you then fed it to the SDK-provided NLM linker (for MPW, this was facilitated with a script called NLMLink) along with two separate object files (the standard Mac PPCRuntime.o plus a NetWare-specific Prelude.o) and a definition file you create. This file would contain your XCOFF object's imports from the NetWare C library and other running NLMs, exports you want to make available to other NLMs, its copyright and version metadata, and how to invoke it (which routines to call to start and exit it, and whether it was "reentrant" or not, i.e., whether multiple processes could run from a single shared copy in RAM or required a copy per process).

Once linked, theoretically the NLM it produced could run directly on Cyberpunk, but officially to test and debug it you would "need the developer's version of NetWare for PowerPC from Novell. You can install it on any Power Macintosh computer." This version also appears to be lost.

With the load monitor command familiar to every former NetWare administrator, we start the "NetWare 386 [sic] Console Monitor."

The Monitor comes up in the usual colour scheme and even uses PC box-drawing and background characters. Pretty much all the typical vital statistics you'd see on a regular NetWare server are here, because hey, it's NetWare. Let's explore some of the guts.

First, what system modules are loaded? We scroll through the list:

This list includes both the Apple-specific hardware drivers (Ariel video and the ADB keyboard are handled within the kernel itself) and the NetWare-general modules, but it seems a slightly smaller set than my model NetWare 4.1 installation in Bochs, likely because not everything had been ported yet.

There is one hard disk, the "QUANTUM" being emulated by the BlueSCSI v1.

Within it NetWare only reports two partitions. That's because that's all it can see, despite knowing the entire virtual disk is 1GB; based on the partition blocks count, the two partitions are likely the 250MB Mac partition (subsuming all Apple partitions) and this larger one consuming almost all of the rest.

But despite all that space potentially available, the demo NSA gives us just 52MB for SYS:.

The MACE-AMIC LAN driver is set up for both IEEE 802.2 LLC and IEEE 802.3 Ethernet.

At least for 802.2, however, we can't see much more than that.

There are just two active connections, NW_DEMO.DEMO and NOT-LOGGED-IN.

The demo login appears to be active and operational, and we can get its status.

But we can't for the other connection, the one that's not logged in, because it's not logged in.

Cyberpunk NetWare also supports multiple virtual screens like regular NetWare. With the monitor running, there are three screens: one for the system console, one for NetWare Directory Services, and one currently for the monitor.

Directory Services seems to work, though I didn't test it much. But if you leave the monitor idle, something else works:

It's the NetWare Worm! It has to be NetWare if it has that! There is only one CPU, and the only Cyberpunk system that would have had two is Shiner with the unreleased SMP card, so only the red worm will appear. (You can see it in its simulated multiprocessor glory apparently with this replica screensaver, or, if you're jwz, this one for XScreenSaver.)

The NetWare Worm is an important part of the historical record of Cyberpunk, by the way.

(Photo credit, Joe Pugliese, Associated Press.) This picture is of Michael Spindler himself on the same day as that April 25, 1994 press release, speaking at Cupertino in front of three systems with a NetWare box conspicuously sitting on top. The left machine cannot clearly be seen and the rightmost one not at all, but the middle one is most likely a WGS 8150, which we already know is the "standard" Cyberpunk system. On its screen we see the obvious red worm moving about as proof of what it was running.

Well, that's enough of that.

Various bugs do crop up in odd places, such as recurrent BAD MESSAGE, er, messages in the console.

Also, even though almost every typical NetWare 4 command is listed in the help summary, some of them seem ... irrelevant.

For example, you can REMOVE DOS (like you would do on PC NetWare to prevent someone exiting to DOS), but this is one of the "stubbed" functions, and just pretends. Because you can't remove the Macintosh Toolbox — it's in ROM, silly!

Bringing down the server with DOWN. However, if you "Type EXIT to return to DOS.", it reboots the Mac; it does not shut down.

So we do so from the Finder (though the 6100 has an external power switch).

I did try running QDB over the printer port using a null-modem cable while playing with Cyberpunk, but the entirety of QDB's output from startup to shutdown looks like this:


NSIConfiguration::InitializeQDB

PC=00152D2C

### A programmed user breakpoint occured

### User String: Startup Break
R0  0014b448  R3 003ed9ec  R9  50f04000  R15 00000000  R21 00000000  R27 00000000
SP  0048dde0  R4 00000000  R10 00000020  R16 00000000  R22 00000000  R28 07070000
TOC 00479efc  R5 004706f0  R11 00479648  R17 00000000  R23 00000000  R29 01010000
              R6 0056d9a0  R12 00000000  R18 00000000  R24 00000000  R30 003ea768
CTR 0014b448  R7 0056d970  R13 00000000  R19 00000000  R25 00000000  R31 003f0be0
LR  0015d5a8  R8 50f04000  R14 87f00005  R20 00000000  R26 00000000
CR  24000000  --E- -G-- ---- ---- ---- ---- ---- ----  XER 00000000  SRR1 00022000

00152D2C:0FE00000    twi        TO_LT|TO_GT|TO_EQ|TO_LOWER|TO_HIGHER,0,0x0                 #'****'
qdb>bugon
DebugStr's to QDB are enabled.
qdb>g
found disk: offset = 544864 length = 1552288
found disk: offset = 544864 length = 1552288
-- show off --
state = 3
|->vportbdata: treq 1 byteack 1 tip 1
|->direction 0 [0 = cuda->system] shift register interrupt 0

PC=00152D2C

### A programmed user breakpoint occur???

which didn't seem too helpful. Even with debugging messages on, very little debugging information actually seems to be displayed in this alpha. To see more, I guess I'd need to set up R2Db at some point.

One thing you'll notice we didn't play with is INSTALL.NLM, the NetWare installer, and the one marked "emergency use only." We didn't need it anyway since PartitionMgr did the setup for us, but it is in fact present, hidden as FIXUP.NLM. The reason, according to the setup documentation, is that it had several fatal bugs and, as you'll see, retains some jarring DOS-isms.

Starting it up from the console, it knows it's not actually called "fixup."

Again, this is a fairly direct port of the PC NetWare original. I'm not going to make any changes here (there's really nothing to change anyway), but here are a couple points of interest, starting with the AUTOEXEC.NCF startup script:

The contents of the script unsurprisingly recapitulate the bootup sequence we saw. Interestingly, you'll note that (nonexistent?) "drive B:" is added to the search path.
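
We don't get to see the file's text directly here, but as a rough idea of its shape, a generic NetWare 4-style reconstruction might look like this. This is not the actual script: only the MACE-AMIC driver, the two frame types and the B: search path are attested, while the server name, net numbers and logical board names are placeholders.

file server name NW_DEMO
ipx internal net 1234ABCD
load MACE-AMIC frame=ETHERNET_802.2 name=MACE822
bind IPX to MACE822 net=CAFE0001
load MACE-AMIC frame=ETHERNET_802.3 name=MACE823
bind IPX to MACE823 net=CAFE0002
search add B: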

If you try to install other product options, you'll also be asked to insert something into that phantom B:\\ (sic). Although that drive letter on a PC usually indicates a second floppy, here it appears to refer to the CD. This text is pretty much verbatim from PC NetWare. We have no other product options to install, so we'll cancel here.

On the other hand, if we try to install a license file (unnecessary; we have a 250-seat license right here), it asks for the license floppy to go in drive A:.

Exiting the installer. There is an interesting message on the console saying that the installer didn't get all the stack space it wanted, which may partially explain why it was sometimes unstable.

So, on to the Shiner, the system Spindler had intended all along to run NetWare. Sadly, the Cyberpunk CD will not boot directly on the Apple Network Server because there is no compatible loader. Indeed, if you try to force a production ANS to boot the disc as if it were a Mac OS CD, you'll get this notorious message (sorry for the screen photos — Open Firmware 1.1.22 does not generate a video signal compatible with my capture box):

And there it will hang until you hit Control-Open Apple-Reset. This version of OF is also too dumb to boot XCOFFs directly from CD, though it can boot them from a floppy disk (that's how the Network Server Diagnostic Utility disk works) and it will boot them over Ethernet if you have a BOOTP and TFTP server around. Naturally I just happen to.

In theory (though this isn't very reliable even for XCOFFs that work), something like boot enet:,diags.xcf should boot an XCOFF binary called diags.xcf from the TFTP server, with the server's address provided over BOOTP. Here I've chosen to do it in manual steps so you can see in the screenshot what should happen: the binary loads, and then the XCOFF loader package in Open Firmware processes its sections and commences execution. This is a valid Network Server-compatible XCOFF that actually worked (occasionally), so I know this process does function, at least under ideal circumstances.
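
For reference, the manual version of that sequence at the Open Firmware prompt goes roughly like this (a sketch, with Open Firmware's responses approximated; load and go are standard OF user commands):

0 > load enet:,diags.xcf  ok
0 > go

load fetches the file over BOOTP/TFTP and hands it to the XCOFF loader package; go then begins execution at the entry point the loader recorded.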

Unfortunately, this method won't boot NWstart. While the binary loads and is accepted as a valid XCOFF, it dies with CLAIM failed while trying to process it. Though repeating this process sometimes works with old versions of Open Firmware, it didn't make any difference here.

That message often indicates that OF could not reserve or map RAM. The NetBSD FAQ notes that this may require adjusting the load-base environment variable, which is the address to which the binary is loaded. Like all Power Macs of this generation, the Apple Network Server defaults to (hex) 4000, but for Cyberpunk I would only be guessing the proper value.
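
For illustration, the NetBSD-style experiment would go something like this (output approximated; 600000 is merely the value NetBSD suggests for its own bootloader, and set-defaults will restore everything afterwards):

0 > printenv load-base
load-base           4000                4000
0 > setenv load-base 600000  ok
0 > boot enet:,NWstart

Any particular value would be a stab in the dark for Cyberpunk, though.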

And that's assuming it's actually the problem, of course, because another possibility is that NWstart is just too big to load directly and needs a Shiner bootloader of its own — which isn't on the disc. Alternatively, the Apple developer recalled that the build he saw running may have depended on support in pre-production ROMs that wasn't in released systems, such as its ability to boot Mac OS.

Nevertheless, we can prove that the Apple Network Server was meant to run Cyberpunk, and this proof persists even in production systems.

In the ANS' Open Firmware 1.1.22, among the other Forth words are three that preconfigure the Open Firmware environment variables, namely setenv-monitor, setenv-aix and ... setenv-netware. I've reproduced the last one below:

0 > see setenv-netware
: setenv-netware
  "false" "real-mode?" $setenv "ttya:19200" "input-device" $setenv
  "ttya:19200" "output-device" $setenv ?esb if
    "scsi-int/sd@2:0"
    else
    "scsi-int/sd@3:0"
    then
  "boot-device" $setenv
  ; ok
0 >

This doesn't seem to pop up in the word list until you try to boot something, but if you ask for it specifically, you'll see it was actually present all along. Besides marveling at the fact that this Forth word exists at all, notice that it checks to see if the unit is an ESB (this refers to the Shiner prototype; ?esb is invariably false on production ANSes) and sets the SCSI device and boot partition accordingly. However, it also seems to require that a "partition zero" bootloader be present, as all Open Firmware 1.x systems do, and that bootloader is not present on this CD.

Despite the presence of an NWstart partition on the Cyberpunk disk image, it doesn't seem to be in the right format: if I connect the BlueSCSI v1 to my ANS' external SCSI, the ANS does see it but won't boot from any partition on it. Furthermore, devices 2/3 generally refer to the first hard drive tray on the ANS, not the CD (0) or DAT/8mm tape (1). It's possible the ANS version of Cyberpunk was never committed to a bootable optical disc and only existed on a hard drive which has since been lost.

The other settings appear to be irrelevant to our failure, as real-mode? is false by default, and the AIX word also sets up the serial port even though AIX works fine over the on-board VGA. Indeed, if the console setting were the problem, you would expect the load to succeed and things to only go haywire after it starts.
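
Should you be tempted to run the word on a real ANS, note that invoking it genuinely rewrites your NVRAM variables; a sketch of what that should look like, with printenv's output approximated, and set-defaults afterwards to put everything back:

0 > setenv-netware  ok
0 > printenv boot-device
boot-device         scsi-int/sd@3:0
0 > set-defaults  ok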

Are there other ghosts of Cyberpunk in the ANS ROM? Let's dump it and find out!

I went ahead and set the Open Firmware console to ttya, which is inexplicably labeled port 2, and wired it up over null modem to the M1 MacBook Air which I set to never sleep. The ANS has a 4MB ROM and does not need to be specially mapped into memory; the command h# ffc00000 h# 00400000 dump will emit the entire contents as hex. After a couple hours of capturing the output from picocom, I had the complete transcript and wrote a quick and dirty Perl script to spit out a binary for analysis. The ROM in my unit has an MD5 of 676809c236138574282fa8416c6c5a6d, an Apple checksum of $962F6C13 and a major/minor version of $077D.4EFA.
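
The script itself is nothing special; a hypothetical reconstruction is below. The filenames and the exact shape of dump's output lines are assumptions, so adjust the regular expression to whatever your capture actually contains.

#!/usr/bin/perl -w
# Convert a captured Open Firmware "dump" transcript back into a binary.
# Assumes each data line looks something like
#   ffc00010: 4d 4f 53 54 ...
# i.e. an 8-digit hex address, then hex bytes, then possibly an ASCII gutter.
use strict;

open(my $in, '<', 'romdump.txt') or die "romdump.txt: $!\n";
open(my $out, '>', 'rom.bin') or die "rom.bin: $!\n";
binmode($out);

while (<$in>) {
	chomp;
	# only lines starting with an address carry data
	next unless /^([0-9a-f]{8}):?\s+(.*)$/i;
	my $rest = $2;
	$rest =~ s/\|.*$//;	# strip the ASCII gutter, if delimited
	my @bytes = ($rest =~ /\b([0-9a-f]{2})\b/gi);
	print $out pack('C*', map { hex } @bytes);
}
close($in);
close($out);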

Compared to the "Tsunami" Power Macintosh 9500, the Power Mac the ANS was based on, it should be of little surprise due to Shiner's tortured development schedule that they have similar ROMs. For example, the eight JPEG images — actually four images cut up into pieces — present in most PCI Mac ROMs are also present in the ANS. (Since I don't think these Easter egg images are well-known, I've reproduced them below in the order they appear, reassembled and enlarged. Iguana iguana powersurgius refers to both PowerSurge, the codename for the PCI Power Mac project, and Herman, a live iguana who lived with engineer Dave Evans and was the mascot for System 7.5.2. Herman and Dave are visible in the last picture; do you know the others in that image or the first one? Three of the four images were also part of a secret animation; see here and here.)

Instead, the ANS ROM is more notable for what it adds. The main things added in 1.1.22 over version 1.0.5 in most beige PCI Power Macs were an Open Firmware password protection mode (on AIX, this is synchronized with the root password), support for the added ANS hardware such as the different SCSI controller, the LCD panel (check out, among others, the word lcd-putc) and the state of the keyswitch, improved support for netbooting (though it's still hit or miss), a number of new debugging words, and of course support for AIX and the removal of Mac OS, even though some bits of the Toolbox persist. Incidentally, here is each and every message the LCD can display (you can identify these in the ROM because most of them are fixed-length):

InitVia_msg         ~
CudaSyncAck_msg     ~
InitCuda_msg        ~
Jumping To RAM Prog.~
Testing Parity DIMMs~
MainLBU Enet Setup  ~
Sounding Boot Beep  ~
Sizing RAM DIMMs    ~
ROM SIMM Data Access~
Allocating RAM DIMMs~
MainLBU NVRAM Setup ~
CPU Card Info Setup ~
L2 Cache SIMM Setup ~
Testing L2Cache SIMM~
Exit to CallOpenBoot~
CudaNotResponding!!!~
TURN REAR KEY C.CLKW~
PULL OUT MOTHERBOARD~
HIT TINY RED BUTTON,~
CLOSE BOX & RESTART ~
Copyright (C) 1994-6~
Apple Computer, Inc.~
All Rights Reserved ~
 ROM v.1.1.22(2CPUs)~
  ROM vers.1.1.22   ~
Key Sw. Service Mode~
Video ID Bad~
MainLBU Video Failed~
MainLBU 825#1 Failed~
MainLBU 825#2 Failed~
 Taking Jump Vector ~
MHz 604, ~
MHzBus~
KB Level 2 Cache~
L2 Cache Test Begins~
L2Cache Bad@~
 MB Parity RAM  ~
 Megabytes RAM  ~
ParityAddrAtAddrFail~
 Megabytes RAM  ~
RAM Test #1 Begins  ~
ROM SIMM Test Begins~
NVRAM Test Begins   ~
RAM Test #2 Begins  ~
LONG RAM Test Begins~
Drive Fan Failed!   ~
Processor Fan Failed~
Temperature Too Hot!~
Temperature Warning!~
Left Power Fail!    ~
Right Power Fail!   ~
Left Power Hot!     ~
Right Power Hot!    ~
L2 Cache SIMM Failed~
ROM SIMM FAILED!    ~
MainLBU NVRAM Failed~
at Address  ~
 RAM DIMM ~
 Failed ~
 Failed ~
MBSoldered~
RAM/ROM/NVRAM:PASSED~
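
Incidentally, if you want to scribble on the panel yourself from the Open Firmware prompt, lcd-putc presumably takes a single character on the stack like any other putc-style word; I'm guessing at its stack diagram, but I'd expect something along these lines to work:

0 > char N lcd-putc char W lcd-putc  ok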

Notice that the strings above show SMP support remained in production systems (consistent with it being a late cancellation), and there is also an ANS-specific message "guru meditation number 3" suggesting some Amigaphiles worked on Shiner. However, there doesn't seem to be any other specific support for NetWare, and the only place in the ROM where the string netware appears is in that particular Forth word. This leads me to conclude that whatever special support was required was either part of the prototype ROMs or never present in the firmware in the first place.

The fact that the string persists at all is a symptom of the mad scramble to get Shiner out the door, as Michael Spindler's vows for PIN caused substantial delays in development — and for no good reason, as Cyberpunk was ultimately destined never to see the light of day. Part of this was Apple's inability to service large customers effectively: although Apple planned to develop a robust enterprise support option for NetWare, even citing a 24-hour phone hotline, a two-day repair guarantee and a four-hour emergency response, internal sources indicated that the cash-strapped Apple was "wrestling" with how to actually provide such a service. The biggest reason, however, was Novell itself. In his quixotic quest to take on Microsoft, Ray Noorda had embarked on multiple acquisitions, such as UnixWare from AT&T and the Digital Research buyout, with the aborted Apple merger being just one of many such attempts. In March 1994 Noorda made his biggest move by buying fellow Utah company WordPerfect for the staggering price of $1.4 billion (over $2.8 billion in 2023 dollars).

The market was flabbergasted. While WordPerfect was still a significant force in word processing and had some presence in groupware, both its namesake product and its current sales were stagnating, and industry observers widely believed Noorda had grossly overpaid to enter a business segment Novell had no clear strategy to dominate. With Novell's stock plunging by nearly a quarter, Noorda was quietly forced out as CEO in April and replaced by Hewlett-Packard VP Robert Frankenberg, who was tasked with getting the company's finances and direction back on track.

Frankenberg, obviously, did not unilaterally end Novell's agreement with Apple over Cyberpunk, as that April 1994 press release I showed you earlier demonstrates. But by November, Frankenberg had cut loose WordPerfect's consumer multimedia software (selling most of the rest, including WordPerfect itself, to Corel in 1996), announced that DR-DOS would no longer be updated (selling it to Caldera in 1996), and ended the personal edition of UnixWare (selling UnixWare to the Santa Cruz Operation in 1995, but critically not Novell's Unix rights or copyrights, as determined in SCO v. Novell). He also cancelled further development of the Processor Independent NetWare concept, which was later abandoned completely for the forthcoming NetWare 5. Frankenberg made it clear PIN was now officially dead on PA-RISC, Alpha and any architecture with an annual shipping base of less than a million units, meaning it was effectively dead on just about everything else, and Novell subsequently determined they would port NetWare and UnixWare to non-Apple CHRP PowerPC systems themselves.

For Apple's part, the build we saw here was the only version of Cyberpunk known to have been demonstrated to users. Although Cupertino still publicly indicated support for a NetWare option as late as August 1995, Apple management eventually concluded NetWare was in decline and cancelled Cyberpunk outright in October. Novell's commissioned port of PowerPC NetWare was completed by Cygnus the same year, but it never ran on Power Macintosh hardware, saw little use overall, and was consequently the last version of NetWare ever released for any non-Intel platform. As for Shiner, Cyberpunk's intended target, Apple concentrated its corporate resources behind Harpoon AIX instead, and Shiner duly launched in 1996 as the Apple Network Server line, supporting AIX and nothing but.

NetWare's last release was 6.5 in 2003; it was subsequently integrated into Novell Open Enterprise Server and finally discontinued with Service Pack 8 in 2009. Today Novell's IP, along with the last vestiges of Cyberpunk and Wormhole, resides with OpenText after buyouts by the Attachmate Group in 2011, Micro Focus in 2014 and OpenText in 2023.


