Wednesday, May 31, 2023

Ancient lead pollution in a Roman harbor (2017)

Public toilets in the Roman port city of Ostia once had running water under the seats. Ostia is where the researchers took a soil core sample to analyze lead pollution from pipe runoff.

The ancient Roman plumbing system was a legendary achievement in civil engineering, bringing fresh water to urbanites from hundreds of kilometers away. Wealthy Romans had hot and cold running water, as well as a sewage system that whisked waste away. Then, about 2,200 years ago, the waterworks got an upgrade: the introduction of lead pipes (called fistulae in Latin) meant the entire system could be expanded dramatically. The city's infatuation with lead pipes led to the popular (and disputed) theory that Rome fell due to lead poisoning. Now, a new study reveals that the city's lead plumbing infrastructure was at its biggest and most complicated during the centuries leading up to the empire's peak.

Hugo Delile, an archaeologist with France's National Center for Scientific Research, worked with a team to analyze lead content in 12-meter soil cores taken from Rome's two harbors: the ancient Ostia (now 3km inland) and the artificially created Portus. In a recent paper for Proceedings of the National Academy of Sciences, the researchers explain how water gushing through Rome's pipes picked up lead particles. Runoff from Rome's plumbing system was dumped into the Tiber River, whose waters passed through both harbors. But the lead particles quickly sank in the less turbulent harbor waters, so Delile and his team hypothesized that depositional layers of lead in the soil cores would correlate to a more extensive network of lead pipes.

Put simply: more lead in a layer would mean more water flowing through lead pipes. Though this lead probably didn't harm ocean wildlife, it did leave a clear signature behind.

Dating the core sediments revealed a surprisingly detailed record of Rome's expansion over several centuries of development between 200 BCE and 250 CE. Examining the core from Ostia, the researchers found a sudden influx of lead in 200 BCE, when aqueducts made of stone and wood gave way to lead pipe. In later layers, the researchers found a mix of lead with different isotopic compositions. This suggests water was flowing into the harbor from a wide variety of lead pipes, crafted from leads of different ages and provenances.

The hexagonal Portus can be seen here, slightly above the mouth of the Tiber on the right.

The very existence of the pipe system was a sign of Rome's fantastic wealth and power. Most lead in Rome came from distant colonies in today's France, Germany, England, and Spain, which meant the Empire needed an extensive trade network to build out its water infrastructure. Plus, the cost of maintenance was huge. All pipes were recycled, but the city still had to repair underground leaks, check water source quality, and prevent the massive aqueducts from crumbling. In the first century CE, Roman water commissioner Julius Frontinus wrote a two-volume treatise for the emperor on the city's water system, including a discussion of how to prevent rampant water piracy, in which people would tap the aqueducts illegally for agricultural use—or just for drinking.

Because it was so expensive, the city's plumbing system is a good proxy for Rome's fortunes. In their soil core from Ostia, Delile and his team even discovered evidence of the Roman Empire's horrific civil wars during the first century BCE. As war sucked gold from the state's coffers, there was no money to build new aqueducts nor to repair existing ones.

Around that time, the researchers saw a dramatic decrease in the amount of lead-contaminated water in the Ostia harbor—in fact, it dropped by about 50 percent from previous levels. Write the researchers:

[This] provides the first evidence of the scale of the contemporaneous reduction in flows in Rome's lead pipe distribution system—of the order of 50%—resulting in decreased inputs of lead-contaminated water into the Tiber. [Augustus']... progressive defeat of his rivals during the 30s BCE allowed his future son-in-law, Agrippa, to take control of Rome's water supply by 33 BCE. Over the next 30 years, they repaired and extended the existing aqueduct and fistulae system, as well as built an unprecedented three new aqueducts, leading to renewed increase in [lead] pollution of the Tiber river.

Once the city had recovered from the hardships of the wars, the researchers saw a steady increase in lead over the years that span the Empire's height during the 1st and 2nd centuries CE. This was when Frontinus led an effort to rebuild the city's plumbing system while wealthy Romans added crazy water features to their homes. Second-century Emperor Hadrian reportedly had a fountain in his villa that flowed into a reservoir next to the dinner table; he would serve guests food from little boats floating on it.

Delile and his colleagues saw the slow decline of Rome in the Ostia soil core, too. There was a strong drop in lead after the mid-3rd century CE, when the researchers note "no more aqueducts were built, and maintenance was on a smaller scale." They add that this phase of "receding [lead] contamination corresponds to the apparent decline of [lead] and [silver] mining and of overall economic activity in the Roman Empire."

PNAS, 2017. DOI: 10.1073/pnas.1706334114



from Hacker News https://ift.tt/lWXPjif

Medieval Illustrations of Bonnacons

As with many mythical medieval creatures, the bonnacon was a composite: the head of a bull, the mane of a horse, and its horns were “bent inwards upon each other, as to be of no use for the purposes of combat”, writes Pliny. First described by Aristotle (as a possibly distinct animal called the bonasus), the bonnacon was resurrected in medieval bestiaries due to the influence of Pliny’s encyclopedic study of the ancient world. The Natural History locates the dungy bull in Paeonia — roughly today’s North Macedonia — but later writers elaborated its ethology, rehoming it in Asia.



from Hacker News https://ift.tt/49lW3sy

Shepherd's Oasis Statement on Rustconf and Introspection

On Friday, 26 May 2023, one of our collaborators — JeanHeyd “ThePhD” Meneide (AKA “Björkus Dorkus”) — was put in a position where he needed to consider withdrawing a talk related to our work on a possible future for compile-time reflection in the Rust Programming Language.

We agree with his assessment of the situation and support his withdrawal from RustConf 2023. We support JT’s resignation from the Rust Project and stand in solidarity with them. We carefully watched the events unfold over the (long) weekend, and into Tuesday, 30 May 2023.

Due to the actions of the Rust Project, we formally requested to withdraw from the Rust Foundation’s Grant program on the morning of Monday, 29 May 2023. The Rust Foundation has been nothing but courteous, forthcoming, and earnest in their communications, allowances, and offered help both before and during this time. However, our work is technical in nature and thus subject to the Rust Project.

We would like to make clear that this is only and solely because of the Rust Project, which is a separate entity that controls the technical space in and around the Rust Programming language and its flagship implementation rustc. It is intertwined with but not directly in control of RustConf or the Rust Foundation.

As the Rust Foundation has granted us leave from this project as of 20:19, Tuesday, 30 May 2023, Coordinated Universal Time (UTC), we will not publish a “Final Report” on or after Wednesday, 5 July 2023 (the end of the term of the grant). All work has been terminated as of the release of this statement. While we will still be available to consult on Rust code and related subject material, we will be withdrawing our involvement in all Rust Project-related matters.

We thank Leah Silber for her honest and direct communication on behalf of RustConf with our collaborators. We applaud her swift, clear action. We appreciate the Rust Foundation for their grace and empathy through this process; we understand how hard it is to be put in this kind of situation.

Finally.

We thank JeanHeyd for trying to get Compile-Time Reflection for Rust off the ground. His excitement for the work was unparalleled. However, we will be shifting gears. If you have feedback we would prefer it be published publicly. This is because we will not be touching this work at this point and have nothing to share about future plans regarding it, nor about its progress.

We encourage individuals to, as was the purpose with the report’s initial publication, publish and discuss their feedback publicly so the entire ecosystem may yet find the right approach.



from Hacker News https://ift.tt/4wNgHQh

What to Do About Aggressive Moose

What to Do About Aggressive Moose

While moose are generally perceived to be less dangerous than bears, more people in Alaska are injured by moose than by bears each year. Moose will usually flee when threatened, but under certain circumstances they can become aggressive. People can be hurt when moose charge, stomp, and kick to protect themselves or their young. Understanding a moose's body language when it is stressed can help you stay safe.

Do you know what to do when a moose charges? Fortunately, most moose charges are bluffs — warning you to stay back. But if a moose does charge, don't wait to find out if it's bluffing. Run and get behind something solid, like a tree, or retreat to a safe place, like inside a building or car.

Why are moose aggressive towards humans?

Moose are not normally aggressive; however, they can become aggressive when they are harassed by people, dogs, and traffic, or when hungry and tired, especially in winter when they must walk through deep snow. Sometimes people throw snowballs at moose or approach them too closely for safety. Dogs can surprise moose in backyards, and loose dogs may chase or bark at them. Moose view dogs as enemies and will sometimes go out of their way to kick at one, even if the dog is on a leash or in a fenced yard. Give moose an extremely wide berth if you have a dog with you and don't let your dog chase a moose. When moose are on a road, driveway, or trail or when they are lying under a deck or up against a house, they are often trying to rest. When people repeatedly approach them or chase them away, moose become stressed and agitated. Each moose has a different tolerance level, but if they are harassed enough, many moose will respond aggressively.

Moose cow and two calves

Are there other seasons when moose tend to be aggressive?

During the fall mating season in late September and October, termed the rut, bull moose may be aggressive toward humans. In late spring and summer, cow moose with young calves are very protective and will attack humans who come too close. If you see a calf on its own, be very careful because you may have walked between it and its mother — a very dangerous place to be.

Is it okay to feed moose?

No. It is illegal and dangerous. Moose that are fed by humans often become conditioned and will act aggressively when they are not fed as expected. A moose with a history of being fed may approach an unsuspecting person in hopes of receiving a hand-out. It may attack if it sees that the person has no food to offer. Don't feed moose, and ask your neighbors not to feed them. If your neighborhood moose is fed by humans, the chances that it will charge people, including children, increase. A moose with a history of unprovoked attacks will likely be shot by enforcement officers to protect public safety. By feeding a moose, people are therefore more likely contributing to its death than to its well-being.

Moose in brush

How do you know when a moose might attack?

The long hairs on its hump are raised, ears laid back (much like a dog or cat), and it may lick its lips (if you can see this, you are way too close). A moose that sees you and walks slowly towards you is not trying to be your friend; it may be looking for a hand-out or warning you to keep away. All of these are dangerous situations and you should back away. Look for the nearest tree, fence, building, car, or other obstruction to duck behind.

What if a moose is obstructing my way?

Is there another way around the moose? If not, be patient. The moose will move away in time. It may take half an hour or more, but it is usually worth waiting. Sometimes a loud noise or movement will startle a moose into moving, but moose that are used to people are not easily chased away. If you have to get by, try to keep a large tree, snow berm, vehicle, building, or fence between you and the moose. Don't approach a moose if its only escape route is in your direction, and always leave yourself one or more escape routes. As a last resort, pepper spray will often move them, or at least provide some protection if they charge. Frequently, unsuspecting dogs are let out in their backyards when lighting is poor resulting in a surprised moose and a surprised dog. Turn outside lights on and scan your yard before blindly releasing your four-legged friend into the darkness.

What if a moose charges?

Many charges are "bluff" charges, warning you to stay back and keep your distance. However, you need to take them seriously. Even a calf, which weighs 300 or 400 pounds by its first winter, can cause serious injury. When a moose charges it often kicks forward with its front hooves. Unlike with bears or even dogs, it is usually a good idea to run from a moose because they won't chase you very far. Get behind something solid; you can run around a tree faster than a moose. If it knocks you down, a moose may continue running or start stomping and kicking with all four feet. Curl up in a ball, protect your head with your hands, and hold still. Don't move or try to get up until the moose moves a safe distance away or it may renew its attack.

Are kids safe around moose?

Yes, usually. The problem is both kids and moose are somewhat unpredictable. Young kids will take cues from adults: if you take a chance, they might also. Keep kids away from moose. If a moose is hanging out at a school bus stop, ask the driver to pick the kids up one or two blocks away. Establish a parent patrol to wait at the bus stop with the kids. If your kids walk to school, show them another way to walk if they see a moose on their normal route. If you know a moose is in your neighborhood, kids should avoid walking through the woods where it is dark and there is no easy escape should the moose be encountered.

For information on staying safe around moose, especially for young Alaskans, see Wildlife Safety.

Aggressive Moose Video

A 12-minute video on safety around urban moose, produced by ADF&G and middle school students at Mirror Lake School. The target audience is third through fifth graders.




from Hacker News https://ift.tt/hG9uySq

Tuesday, May 30, 2023

South Australia passes law to ban “disruptive” protests

A union boss has blasted South Australia’s government after the state passed laws to ramp up fines for disruptive protests following a mammoth upper house debate.

The new measures were rushed through the state’s lower house last week with both the Labor government and the Liberal opposition supporting the changes.

After a debate of more than 14 hours, the upper house passed the laws on Wednesday morning, after proposed Greens amendments, including a reasonableness test and an expiry date, failed.

An SA Best amendment did pass, ridding the bill of a reckless intention clause.

The changes increase maximum fines from $750 to $50,000 along with potential jail time.

They were prompted by three days of action by members of Extinction Rebellion earlier this month, including a woman who abseiled over a city bridge, forcing a main road to be closed for about 90 minutes.

Premier Peter Malinauskas defended the changes on Wednesday, saying the government received legal advice confirming the new penalties were commensurate with other penalties for similar offences.

“There has been no change to protest laws in South Australia,” the premier claimed on ABC Radio.

“One of the things that I have found rather disconcerting around some of the commentary on this piece of legislation is that somehow, it curtails or diminishes people’s right to protest, which is simply not true.”

The premier described the laws as quite a modest change and said if lawyers were right in asserting the courts could apply them outside of their intended scope, then that had always been the case.

He rejected the suggestion more people would be arrested and thrown in prison because of the laws.

A woman abseils over a bridge in Adelaide during an Extinction Rebellion protest on 18 May. Photograph: Extinction Rebellion South Australia

“All the protests that have happened in the past, including many that I’ve participated in, have never resulted in an issuing of a fine,” Malinauskas said.

“The majority of people who protest do so passionately, vigorously, obstruct traffic, close streets, march and so forth – none of that will change.

“But we have got a very deliberate action here to affect those people who take to an extreme (protesting) that has an adverse effect on others in our community, who also have rights that need to be protected.”

Activists and lawyers have opposed the laws, criticising the lack of public consultation, unclear rationale for rushing the changes through and increased penalties.

SA Unions on Wednesday derided the government’s failure to respond to the majority of its concerns as well as those raised by the legal profession.

“Not only did the government rush this bill through the lower house in 22 minutes, they have taken the next available opportunity to force it through the upper house,” SA Unions secretary Dale Beasley said.

“We will not accept that this is how laws are made in South Australia, especially laws that can land workers in jail for standing up for their rights in public places.

“The hasty passage of this bill serves as a reminder that the rights of workers and the community, while hard won, can be easily lost.”

There needed to be careful consideration of the amendments achieved on Wednesday morning, SA Unions said.

The laws were prompted by three days of action by members of Extinction Rebellion earlier this month outside the annual Appea oil and gas conference in Adelaide.

During the conference, the South Australian energy and mining minister, Tom Koutsantonis, told representatives of the Australian oil and gas industry that the state government was “at your disposal” in an extended welcome for the opening of its annual conference.


The South Australian police commissioner said at the time he had been frustrated by the protesters.

“The ropes are fully extended across the street. So we can’t, as much as we might like to, cut the rope and let them drop,” Grant Stevens said.

South Australia rushed through the anti-protest laws less than a day later.



from Hacker News https://ift.tt/dLDTjKS

The Fear of Shipping

Something I've become very aware of lately is how difficult it is for me to ship. I have at least a dozen unfinished projects that I could probably ship, yet I find any excuse to hold them back.

I could say forget it and ship it. Then depending on the amount of feedback I received, decide whether or not it was worth putting more time into. I often fall victim to "just one more feature here" or "oh it would be great if I added this first." When I should have just shipped.

After nearly convincing myself to do this, what is holding me back? The fear of ruining my first impressions.

Up to this point, my programming career, especially in public, has been pretty sparse. I've made a few websites, and I shipped an internal enterprise iOS app, but that doesn't count for much. So as far as most people know, I sit at home on my thumbs 24/7, occasionally tweeting about Objective-C frameworks. I want to be perceived well in the community; I respect a lot of indie iOS and OS X developers, and the last thing I want is attention for an unfinished or unpolished product.

I am starting to realize that this is unrealistic. I would love my first major application to be perfect, but that's just not feasible. I hope to get there in the future, but I am very close to accepting that getting there requires stepping stones. For me, those stepping stones might be some useless OS X utilities that I've made for myself and now want to share with the world. No matter how unpolished.



from Hacker News https://ift.tt/3dHcbsB

Did Arthur Booth turn his life around after Judge Mindy's revelation?

Arthur Booth came into the spotlight in 2015 after being detained in connection with several burglaries. Mindy Glazer, a judge presiding over Booth's case, revealed that Booth was her former classmate. She was disappointed in his actions and hoped he would turn his life around. Many who followed the case have been wondering if Arthur changed his behaviour. So, did Arthur Booth turn his life around after Judge Mindy's revelation?

Arthur Booth before Judge Mindy. Photo: UGC

Arthur Booth is an American man who pleaded guilty to charges related to a series of burglaries in Florida, USA. Following his arrest, Booth was told to make amends to his victims. While incarcerated, Booth got assistance from people and groups who gave him access to addiction treatment facilities.

Who is Arthur Booth?

Arthur Booth was born to Hilda in 1967 in Miami, Florida, the United States. He grew up alongside his two younger siblings. He attended Nautilus Middle School after achieving outstanding grades at William J. Bryant Elementary School. In 1980, he joined Miami Beach High School.

Arthur's dream of becoming a neurosurgeon was cut short when he dropped out of school in 11th grade. At the time, he was already struggling with gambling and drug addictions.

What happened with Arthur Booth?

Arthur's former school. Photo: @Dailymail

Booth fell into drug addiction and gambling at a young age, which led him into a life of petty crime and multiple stays in prison. He was first arrested at the age of 18 for grand theft and was later released.

In 1988, at 22, he was jailed for 20 years and released after ten years. After his release, he unsuccessfully tried to get a job before returning to gambling and drugs. He was again arrested for burglary and sent back to serve the remaining ten years in jail.

He escaped from prison in 1997 while inmates were taken out to help clean trash from the roads around Miami. Arthur Booth hit the headlines in 2015 following a revelation made by Judge Mindy Glazer during his court hearing.

Arthur Booth and Judge Glazer were former classmates at Nautilus Middle School. Judge Mindy recognized him when he appeared in the courtroom for burglary and theft charges. She spoke fondly about him, and Booth couldn't hold back his tears. The famous judge remembered him as a promising young kid who was good at mathematics and science.

Mindy voiced her disapproval of Booth's behaviour. Several others, struck by the judge's sympathy and concern for Booth's well-being, echoed her call for him to change his ways. Booth was sentenced to one year for burglary and fleeing from police. Judge Mindy set his bail at $43,000.

During his court hearing, Booth expressed remorse for his actions and acknowledged the harm he had caused. He apologized to the victims of the burglaries and his family and community. He also wanted to turn his life around and become a better person.

Ten months later, he was released on good behaviour from the Metro-West Detention Center in Miami, where he was being held.

Did Arthur Booth turn his life around after Judge Mindy's revelation?

Arthur and Judge Mindy meet after Arthur's release from prison. Photo: @Dailymail

Yes, Arthur Booth turned his life around for the better after he was released from prison in 2016. While in prison, he spent most of his time reading books about business and was not ready to give up on a great chance to turn his life around.

He received help and treatment for his addictions and has kept off gambling. He confirmed through an interview with CBS News that his encounter with Judge Mindy had inspired him to change his life.

Where is Arthur Booth today?

Arthur is a successful manager of a pharmaceutical company in Florida. Booth's case highlights the importance of addressing the underlying factors that contribute to criminal behaviour and having sympathy for those caught up in crime.

His illegal activity resulted from his underlying issues and the desperation that caused him to break into homes to steal cash and other valuables. Booth turned his life around because of the support he received from the community and Judge Mindy.

Frequently asked questions about Arthur Booth

  1. Who is Arthur Booth? Arthur Booth is a former American inmate who came into the spotlight in 2015 when he appeared in Judge Mindy Glazer's bond court, where it emerged they had met before.
  2. What is Arthur Booth's age? Arthur Booth is 56 years old as of 2023. He was born in 1967 in Miami, Florida, USA.
  3. Where is Arthur Booth today? Arthur Booth is working as a manager of a pharmaceutical company in Florida, United States.
  4. What did Judge Mindy Glazer reveal about Arthur Booth? During Booth's court hearing, Judge Mindy Glazer recognized him as a former middle-school classmate. She added that Arthur was good at mathematics and science.
  5. Is Arthur Booth on Instagram? Unfortunately, Booth is not on Instagram or any social media platform.
  6. Did Arthur Booth turn his life around after Judge Mindy's revelation? Yes, Booth's life improved after leaving prison. He stopped taking drugs and became a manager of a pharmaceutical company.
  7. Did Arthur Booth and Judge Glazer attend the same school? Arthur and Judge Glazer were classmates at Nautilus Middle School. They both had dreams and aspirations as kids in middle school. Arthur wanted to be a neurosurgeon, and Glazer, a veterinarian.

Did Arthur Booth turn his life around? Yes, Booth turned his life around after being inspired by his former classmate Judge Mindy Glazer. His experience is a powerful reminder of the value of empathy, comprehension, and support for people dealing with addiction and other problems.


Source: TUKO.co.ke



from Hacker News https://ift.tt/qotRw3j

The Stanford Pascal Compiler


Last changes: 2023-04-29

New: Short Language Reference of New Stanford Pascal

Breaking News

Update 01.2023:

The Stanford Pascal Compiler release 2023.01 is now available for Windows, OS/2, Linux, etc., and for VM/CMS and MVS (or z/OS) on IBM mainframes or Hercules.

see this Moshix video about the MVS distribution: 50 Year Old Stanford Pascal for MVS Updated to 2023 - M229

New features:

- CONST parameters now work with RECORDs and ARRAYs, too
- Types on parameter lists can be specified more flexibly (complete TYPE syntax, not only identifiers)
- Pointer types on function results can be specified using the arrow symbol
- LEFT and RIGHT string functions now work correctly when the second parameter is zero (null string is returned)
- a strange and difficult error in PASCAL2 (IBM 370 code generator) related to literal handling has been fixed
- PASCAL2 is now compiled with the D+ (debug) option, which involves many run-time checks and makes the PASCAL2 execution much more secure
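To illustrate the first of these features, here is a hypothetical sketch (my own example, not taken from the compiler's distribution) of a CONST parameter with a RECORD type; the program name, identifiers, and exact dialect details are assumptions:

```pascal
program CONSTDEMO ( OUTPUT ) ;

type
  POINT = record
    X , Y : INTEGER
  end ;

var
  P : POINT ;

(* CONST parameter with a RECORD type: the record is   *)
(* passed read-only - the callee may inspect the       *)
(* fields, but assigning to them is a compile error    *)
function DIST2 ( const R : POINT ) : INTEGER ;
begin
  DIST2 := R.X * R.X + R.Y * R.Y
end ;

begin
  P.X := 3 ;
  P.Y := 4 ;
  WRITELN ( DIST2 ( P ) )  (* writes 25 *)
end .
```

Before release 2023.01, CONST parameters of this kind were limited to scalar types; passing RECORDs and ARRAYs this way avoids the copy that a plain value parameter would make.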

Current releases:

The compiler release 2023.01 is available for Windows, OS/2 and Unix-like systems at GitHub - https://github.com/StanfordPascal/Pascal

The compiler release 2023.01 is available for VM/CMS - Download
For more information, see the Resources paragraph below

The compiler release 2023.01 is available for MVS (TK4-) - Download
For more information, see the Resources paragraph below

Note: I removed the file nc.exe from the MVS distribution because some virus scanners complained. The CMD files that use this tool are still there, so a replacement for nc.exe is needed. Alternatively, you can install nc.exe on your machine yourself, at your own risk.


This picture shows an old version of the compiler (2020.11) compiling itself to 370 machine code on VM/CMS:


The Story

I am currently the maintainer of this improved and enhanced version of the Stanford Pascal compiler.

The current New Stanford Pascal compiler supports - for example - static variables, external procedures written in Pascal, Fortran or ASSEMBLER, initializations with definitions, const parameters, new builtin data types like strings of fixed and varying length, many new standard procedures and functions, a new heap management (inspired by IBM's Language Environment) and much more ...

This compiler (and the original Stanford Pascal compiler) is an offspring of the original Pascal P4 compiler, which was written in the 1970s at ETH Zürich by a team around Niklaus Wirth.

cited from english Wikipedia:

To propagate the language rapidly, a compiler "porting kit" was created in Zürich that included a compiler that generated code for a "virtual" stack machine,
i.e., code that lends itself to reasonably efficient interpretation, along with an interpreter for that code - the Pascal-P system.
The P-system compilers were termed Pascal-P1, Pascal-P2, Pascal-P3, and Pascal-P4.
Pascal-P1 was the first version, and Pascal-P4 was the last to come from Zürich.

A compiler based on the Pascal-P4 compiler, which created native binaries, was released for the IBM System/370 mainframe computer
by the Australian Atomic Energy Commission; it was called the "AAEC Pascal Compiler" after the abbreviation of the name of the Commission.

end of citation

As I learned recently from Mark Waterbury, this is not correct. The AAEC compiler is an independent development and is not based on the P4 compiler. Mark gave me the sources, and I compared them to the original Stanford compiler, which is a P4 variant; there are many differences. It is very unlikely, if not impossible, that the AAEC compiler builders had access to the P4 source.

The true story is: starting from the P4 compiler from Zürich, some work was done in Bombay (India), and the Stanford compiler was then based on that work. The P4 compiler produces P-Code, which is independent of the target machine; to be able to run on the IBM mainframe, the P-Code needs to be translated to 370 machine code. I don't know exactly when the P-Code-to-370 translator was written; the first comment is by S. Hazeghi of Stanford. So maybe the customization of the P4 compiler to the IBM mainframe was done at Stanford; this must have been in the second half of the 1970s. The P4 compiler targeted at the IBM mainframe was then called the Stanford Pascal Compiler and was used heavily during the late 1970s and early 1980s.

Mark also told me that the Stanford compiler probably was used to bootstrap the Pascal/VS compiler from IBM, which is a self-compiling compiler (like Stanford). Pascal/VS internally uses some sort of extended P-Code, which is an extension of the P4 P-Code. BTW: the UCSD Pascal system used an intermediate code called U-Code, which is based on P4's P-Code, too.

Some years later (ca. 1982), the Stanford compiler was transferred to McGill University in Montreal, Canada, and used on their MUSIC/SP mainframe operating system. That's where I found it in 2011. The McGill version of the compiler (still called Stanford Pascal) had several significant extensions compared to the original Stanford version of 1979.

This page tells the story of what happened from 2011 to today (not much in the years 2011 to 2016):
what I did in the years since 2016, what I am doing now, and what I will do in the future.

The Stanford compiler was originally targeted at IBM mainframes only.
Today it runs on VM/370 Rel.6 and on MVS 3.8j on the Hercules emulator.
And it runs on today's z/OS, too (tested first by Rene Jansen in 05.2017, thanks!).
It will surely run on modern z/VM, too (at the moment limited to AMODE 24, RMODE 24).
The VM and MVS versions can be downloaded from this site (including source codes), see "Resources" below.

It also runs (tested) on Windows, OS/2, Linux (x86) and Mac OS, because I ported it there at the end of 2016.
On these platforms the P-Code generated by the first compiler pass is interpreted by a P-Code interpreter written in ANSI-C,
so it will probably run on every platform that has a C compiler. Please contact me if you want to test it on other platforms.
The Windows (and OS/2) version is available from this site, too; see "Resources" below.

You can follow the development process more closely on this Facebook page:
Stanford Pascal on Facebook


This picture shows the Pascal compiler compiling itself on Windows
(only pass 1 needed, Pascal to P-Code; the P-Code is interpreted by the P-Code interpreter PCINT).


Evolution steps so far (will be continued ...)

Topics with this icon are not yet completed ... maybe implemented, but documentation is missing.

Starting in 2011 ...

My own Pascal history (please forgive me, this is in German)
Porting the Stanford compiler from MUSIC/SP to VM on Hercules
First extensions to the compiler in 2011
New keywords BREAK, CONTINUE, RETURN - still in 2011

Continuing work in summer 2016 ...

Extensions to the runtime system (PASMONN) in 2016
Making RESET and REWRITE optional and eliminating the implicit RESET on INPUT
Allow shorter string constants on const initializers and assignments
20 significant characters on variable names (not only 12)
SNAPSHOT Routine
Shorter string constants - continued
Dynamic Storage Management
Pointer arithmetic - new functions ADDR, PTRADD, PTRDIFF, SIZEOF, PTR2INT
Pascal library
Storage management - new functions ALLOC, FREE, ALLOCX, FREEX
Static definitions
Extending PASSNAP aka SNAPSHOT
New functions MEMSET and MEMCPY
Direct WRITE of scalar variables or expressions
Maximum length of string constants is now 254 (was 64)
Making CSP numbers consistent between PASCAL1, PASCAL2 and PASMONN
Call CMS commands from PASCAL programs
PASSNAP prints variables from more than one source file
PASSNAP prints heap variables allocated by NEW and ALLOC
Porting Stanford Pascal to Windows, OS/2 and Linux - first steps
Porting Stanford Pascal to Windows, OS/2 and Linux - other issues
Improving PASSNAP for the NODEBUG case
Porting Stanford Pascal to Windows, OS/2 and Linux - portable branch tables
Porting Stanford Pascal to Windows, OS/2 and Linux - success !!

Work of 2017 ...

Bit operations
Support undeclared procedures
Some Pascal/VS features added (DATETIME, HALT, CLOSE, TERMIN/TERMOUT)
Differences on floating point computations and rounding
Differences on floating point output
MVS version
Job control examples for the MVS compiler
Changes for the MVS compiler
Changes to PASSNAP and PASSNAPC - better error handling in PASMONN
New standard type ANYFILE - VOIDPTR renamed to ANYPTR
MVS version available for download
Stanford Pascal works on MacOS and on (modern) z/OS
Terminal flag in the Pascal FCB - effects on I/O functions
Integer and character constants in hex and binary representation
Improvement on Pascal sets - Part one
String constants built from several mixed parts (hex or binary)
SIZEOF usable on string constants
New source program scanner (PASSCAN) - separate module
C++ style comments
Binary integer constants
Write integer with leading zeroes, if desired (controlled by negative width)
Compiler messages shown at terminal (aka SYSPRINT) during compile
Old errors removed from the compiler

Compiler version 2017.12

Shorter strings (variables) can be assigned to longer strings
Compiler error messages with additional information
Verifying new P-Code instructions with the PCINT debugging features
New P-Code instructions to support inline MEMCPY and MEMSET
Calling external procedures and functions written in ASSEMBLER
New installation procedure for MVS (or z/OS)

Compiler version 2018.01

New types: CHAR (n), DECIMAL (n,m) and STRING (n) - aka VARCHAR (n)
Cheating on DECIMALs
Implementing Strings in Pascal (inspired by Pascal/VS)

The whole story on one large page

All on one page

Resources

I will no longer post the sources here explicitly, because they are on GitHub anyway (see link below), and keeping them current here is a lot of work that can be saved.

Furthermore, the versions on GitHub are always the best and newest; the mainframe versions for VM/CMS and MVS can be one or two releases behind. I hope you don't mind.

Since about 2016, when I first ported the compiler to Windows, new versions and enhancements are implemented and tested first on Windows and then (later) on the mainframe.

Pascal Compiler for CMS (version of 2023.01)

Two AWSTape Files to TAPE LOAD the Pascal system to CMS (2023.01 version)

PAS2023B_COMP.AWS - this tape contains the needed files to run the compiler
PAS2023B.AWS - this is a TAPE DUMP of the complete development disk, including test programs etc.

Pascal Compiler for MVS (version of 2023.01)

ZIP File containing the Pascal system for MVS - or z/OS

Usage Notes for the MVS version:

The ZIP file contains an "Installation Guide" PDF, where you can find detailed instructions. You can also find it here.

Pascal Compiler for Windows and OS/2 (version of 2023.01)

The non-Mainframe versions of New Stanford Pascal are based on the P-Code interpreter PCINT.

The most recent version is available at GitHub:

https://github.com/StanfordPascal/Pascal

Installation steps for the Windows and OS/2 version:

a) Download the compiler ZIP file from GitHub and unpack it into a directory of your choice.
b) If you're on Windows, you can use the PCINT.EXE which is in the bin subdirectory. Otherwise (if you are using OS/2 or MacOS or Unix), you will have to compile PCINT using the sources, which are in the bin subdir, too.
c) Choose a directory where the executables of the compiler should be stored; this directory should be part of your PATH (for example: C:\bin ... or another directory of your choice).
d) Copy the file COPYBIN.CMD from the script subdir to the install directory (one level up).
e) Edit the SET command in COPYBIN.CMD (at the beginning), which sets the environment variable PASDIR to the directory for the executables of your choice (see step c)).
f) Start COPYBIN.CMD, so that the needed files are placed into the PASDIR directory.

Now the compiler commands should work from everywhere. Just make sure that the environment variable PASDIR always contains the directory where the compiler is (as chosen in step c)), because it is referenced in the compiler script files.

Use PASCAL <program> or PP <program> to call the Pascal compiler; the program should be stored in a file <program>.PAS
Use PRUN <program> to run the program
set DD_<logical_filename>=<physical_filename_including_path> to assign files

For other (Non-Windows) systems:

PCINT can, IMHO, be compiled with any standard C compiler; I used IBM's C compiler on OS/2 and the Open Watcom C compiler on Windows (and gcc on Linux, BTW).

The CMD files for Windows (and the installation method described above) will work for OS/2, too. If you are on another system, you will have to adjust the script files; maybe you'll find some ideas in the subdirectory script_ix. Otherwise, you can contact me directly.


Documentation

New Stanford Pascal - Special Topics

Short Language Reference of New Stanford Pascal
External Procedures in Stanford Pascal - written in Pascal, FORTRAN or Assembler
New Stanford Pascal Installation Guide for MVS
Reading Textfiles with New Stanford Pascal
Strings in New Stanford Pascal
Troubles with Procedure Parameters

P-Code Documentation

P-Code Description (2019) - work in progress
P-Code Description (2017)
P-Code Description (2016)
P-Code Description (1978)

Old documents on different topics

Pascal:

The Programming Language Pascal (Niklaus Wirth, 1972)
Pascal P-Compiler Implementation Notes (ETH Zürich 3054-01)
On Code Generation in a Pascal Compiler (ETH Zürich 3056-01)
Stanford Pascal/360 Implementation Guide (Stanford 1974)
The Stanford Pascal compiler (Stanford 1979)
A Pascal P-Code Cross Compiler for the LSI 11 (Stanford 1979)
A Pascal P-Code Interpreter for the Stanford Emmy (Stanford 1979)
The Implementation of Case Statements in Pascal (University of Tasmania 1981)
Pascal/VS Language Reference Manual (IBM 1981)
The Pascal ISO 7185 Standard (1991)
Wirth: A Pioneer of Computer Science (Johannes Kepler University - Linz / Austria)

Algol:

Description of Algol 60 (Rutishauser 1967)
Algol W Language Description (1972)
A Politico Social History of Algol (Bemer)
The Algol 68 Story (1978)
The Dijkstra-Zonneveld Algol 60 Compiler for the X1 (2003)
The History of the Algol Effort (2006)
Algol Anniversary (Huub de Beer 2010)
Algol in France (Mounier, Kuhn 2011)
MARST - An Algol-to-C translator (Download)

Dijkstra:

The Advent of the Recursive Procedure (2010)
E. W. Dijkstra: First Years in the Computing Science 1951 - 1968 (Van Den Hove 2009)
The Edsger W. Dijkstra Archive

Other:

The PL360 Programming Language (1968)
Class Notes for a PL/I Course (1975, Argonne National Lab)
LE Stack and Heap Implementation (IBM 2005)
Columbia University Computing History
40 Jahre Informatik in München (2007)
Evaluation of Algol 68, Pascal, ... for a Common High Order Programming Language (1976)
DOD Language Evaluation (1977)
A Methodology for Evaluating Languages and their Compilers for Secure Applications (1978)

Back to main page


from Hacker News https://ift.tt/bx9U4t6

Twitter's Algorithm: Amplifying Anger, Animosity, and Affective Polarization



from Hacker News https://ift.tt/Z2M6N3S

New Taschen book on the history of the computer

Taschen delves into the rich visual history of the computer with this new XL-scale book, edited by graphic designers and historians Jens Müller and Julius Wiedemann. Casting its net back into the pre-digital age, The Computer explores the mechanical precursors to the modern age, from the calculating machines of Babbage and Lovelace to even earlier counting and computing devices. 

Taschen’s The Computer: a monumental new book

The Mechanical Turk, by Hungarian inventor Wolfgang von Kempelen (1734 – 1804), was an early fraudulent AI

(Image credit: Public Domain)

Where this book shines is in its presentation of archive images and vivid modern photography of some of the literal colossi of early computer design, in all their room-filling, reel-to-reel button-festooned glory. These sumptuous spreads are paired with intriguing illustrated sidetracks into the advertising, media coverage, science fiction visions and speculative research of each era.

‘First actual case of bug being found’, the moth found in the relay of Harvard University's Mark II computer, 1947

(Image credit: Courtesy of Naval Surface Warfare Center)

Backing all this up is an ongoing glossary of the language of computing, alongside profiles of the pioneering men and women – and major corporate players – that shaped this multi-billion dollar industry that is increasingly at the beating heart of every aspect of society. 

It’s a titanic piece of work, presented in Taschen’s trademark trilingual style, and the threads of our silicon-driven social anxiety are also fully represented, as newspapers and magazines laid bare concerns about automation, robotisation, and, from a surprisingly long time ago, the impact of artificial intelligence. 

'Visualization of the merged 360-degree camera images of a Waymo vehicle'

(Image credit: Courtesy of Waymo)

This is a book about hardware and software, silicon and society. As circuit boards shrank and the PC age dawned, communications and entertainment brought computers into every home and, eventually, pocket. The retro vibes get stronger and stronger as the book takes us through the ever-evolving form of the personal computer, its ever-improving graphical capabilities and its tentacular creep into every facet of our daily lives.

(Left) A videophone from Hughes Aircraft Company advert, 1956; (right) Brookhaven National Laboratory uses sensors to detect the processes of the active brain, 1970s

(Image credit: Hughes Products / Brookhaven National Laboratory)

The arrival of the internet, Wifi, industrial robotics, gaming consoles, mobile phones, drones, robots, online porn, Apple, Google, the Smart Home, social media, Wikipedia and the inexorable rise and problematic prominence of dotcom billionaires leads to a final chapter. This is what the editors call the ‘All-Digital Age’, where we live in a cloud and computation handles a billion unseen processes to help our lives run the way they do.

Digital Equipment Corporation's PDP-8, 1965, 'the first genuine desktop computer'

(Image credit: Courtesy of Digital Equipment Corporation)

The final entry is on quantum computing, the devices that will liberate computers from their current binary existence and open up new worlds. Based on the 450-plus pages that precede it, our collective minds will inevitably deploy this vastly more powerful technology to conjure up things we can’t currently imagine, for better and for worse.

The Computer: A history from the 17th Century to Today, Taschen

(Image credit: Taschen)

The Computer: A history from the 17th Century to Today, edited by Jens Müller and Julius Wiedemann, £60, Taschen, Taschen.com



from Hacker News https://ift.tt/hS0Hjdb

Monday, May 29, 2023

1999 Game Developers Conference (GDC) Recordings Archive


from Hacker News https://ift.tt/F4KfMue

A peek inside Japan's largest “Dagashi” store

Every morning, 67-year-old Hideyuki Akiyama drives to his store along the rural roads of Okayama in western Japan. Today is no different: a line of excited customers is already queuing up before the shutters open.

“Disneyland isn’t our land of dreams, this is!” says one 20-something woman.

“This” is Nihon-ichi no Dagashi Uriba. This mouthful of a name is Japan’s largest “dagashi” store.

Dagashi

Dagashi (駄菓子) are small, inexpensive Japanese confectionery and snacks you would find in corner stores. 10 yen for an Umaibou, 30 yen for a Cabbage Tarou. It’s what you would buy on your walk back home from school and fight with friends over.

Dagashi, unlike its cousin wagashi, has connotations of cheapness. The da (駄) in dagashi means “poor” or “cheap”. In the 19th century, when refined white sugar was imported to Japan, it was considered a better, richer ingredient. With this shift, sweets made of domestic brown sugar were looked down upon and got the pejorative name of dagashi.

Don't let the name fool you though. Dagashi cover a wide array of delicious sweet and savory items, despite the lower price points.

Dagashi in the modern age

Dagashi are usually manufactured by small companies, at small scale and low prices, but with lots of thought given to what will bring their customers – children – the most joy. A 100-yen prize sticker in one, a free snack in another, collect-and-win in another, and so on. These sweets have defined the early years of many generations of children.

These iconic sweets now face a difficult future in a changing Japan.

In the new Japan, gone are the corner stores that would hand out free candy for stickers and gone are the days of stores dealing in inexpensive items with slim margins. These have since been replaced by national chains of convenience stores and supermarkets – big companies that have neither the patience nor the incentive to sell a 10-yen pack of ramen snacks and help run the manufacturer’s collect-and-win program for children.

Dagashi manufacturers face significant cost pressures, have low margins and logistical challenges, but continue tirelessly because they believe it is important to keep this part of Japan’s culture alive.

Nihon-ichi no Dagashi Uriba

Hideyuki wants to do everything he can to pass the dagashi tradition on to the next generation.

His store, Nihon-ichi no Dagashi Uriba (literally “Japan's biggest dagashi store”), sits on a 25,000 square foot plot in Okayama prefecture, 700 km from Tokyo.

Though only about a quarter the size of your average Costco, it plays an outsized role in Japan’s dagashi industry.

Aerial view of the Nihon-ichi no Dagashi Uriba store Source: https://ift.tt/kcFBGle

Visitors are greeted by row upon row, aisle upon aisle of Japanese sweets and snacks, many available only here. Hideyuki estimates they carry over 50,000 types of dagashi on any given day.

For children visiting – often with grandparents looking to indulge them – this is a... kid in a candy store situation.

Inside the store Source: https://ift.tt/VYAnTRr

In a 60-minute shopping spree one man’s three grandchildren loaded their baskets with over 300 items, racking up a bill of 13,000 yen (~100 dollars). Even in the middle of Japan's meteoric inflation the grandfather doesn’t mind. “The smile on their faces is priceless”, he says with a smile himself.

Every year the store attracts 700,000 visitors. Residents of Osafune, a rural town of 12,000, love that their town now attracts visitors not just from every corner of Japan but also from abroad, coming to try dagashi they couldn’t find anywhere else.

Three decades of ups and downs

Hideyuki has been in the dagashi business for over 3 decades and has seen his share of ups and downs. He says “People now say I have succeeded in this business but they don't realize how many failures it took and how lucky I got. Very few people know that I had to do that thing which no business owner should ever do – lay people off. And I've had to do it twice.”

Hideyuki initially ran a regular confectionery store, but the business failed because it became a price race to the bottom. He had to downsize his company in 2010.

Hideyuki Akiyama, President of Okayama Ohmachi that runs Nihon-ichi no Dagashi Uriba Source: https://ift.tt/8l2YKZg

But now he had inventory that would soon expire. So he organized a “Dagashi Matsuri” (Dagashi Festival) to celebrate dagashi, but also to sell this inventory that would otherwise go to waste. The response to the festival was so good that he held it once more!

The festivals became an event that families in the nearby towns loved attending and this planted the seed in Hideyuki's mind to make them a permanent fixture.

Hideyuki saw that many dagashi manufacturers were dying out because nobody would sell their low-margin, low-price products while at the same time children thoroughly enjoyed visiting his store and even challenged him on having fewer varieties than other stores.

Thus, when he started Nihon-ichi no Dagashi Uriba in 2014 his vision was twofold: to provide one roof under which dagashi manufacturers could thrive and to bring joy to children by keeping the tradition alive.

In keeping with this vision, most items in the store are inexpensive and easily fit the budgets of children who save up to visit. They are also rounded to the nearest 10 yen so doing math is easy for the kids.

Chindonya-san, a fun, clownish character entertaining children. Also, Hideyuki Akiyama, the President of the company. Source: https://ift.tt/dhRF7ON

Every weekend, Hideyuki also dresses up as Chindonya-san, a fun, clownish character who walks around the store playing toy instruments and hands out candy to children. Just to remind you, this is the CEO of the company walking around as a funny character in an elaborate costume, handing out sweets to kids and playing games with them. I like to think stuff like this happens only in Japan :)

In another part of the store, we hear loud cheering, as a group of students open their snacks and discover what they have won. Nihon-ichi no Dagashi Uriba has a whole section allocated to keeping such simple joys alive.

For those who think the store only caters to children, rest assured that adults, young and old alike, throng the aisles, rekindling old memories and creating new ones.

Hideyuki’s goal for the store is really for it to become a theme park for dagashi, where children, parents, and grandparents from all over the world visit and enjoy their day.

The business with no intention of business-ing

Nihon-ichi no Dagashi Uriba is... different. They don't sell the most popular items. They instead stock the not-so-popular, odd, quirky products that others will not.

When asked about the future of the business, Hideyuki says he has no intention of doing business (in pursuit of profits) at all. He believes businesses exist to serve society and contribute back to the people in the community; profit is merely a side effect, and somewhere along the line the side effect became the main goal.

He says, at the end of the day, “We run this store so we can see the joy on children's faces”.


We hope you enjoyed this article! If you haven't already, we would love it if you subscribed to our newsletter. It would encourage us greatly to create more interesting posts like this. Sign up from here.



from Hacker News https://ift.tt/OHk6hmg

Being a Good Unix Neighbour

The UNIX philosophy is a set of design principles that has had a huge impact on the development of software systems. In essence, the UNIX philosophy stresses the importance of keeping things simple and modular. You should think of the shell as a programming language of its own! Take for example this one-liner:

curl -s 'https://www.example.com/query?symbol=GOOG&apikey=API_KEY' \
| jq '.["Global Quote"]["05. price"]' \
| sqlite3 stocks.db "UPDATE portfolio SET price = $(cat), time = CURRENT_TIMESTAMP WHERE symbol = 'GOOG'" \
&& sqlite3 stocks.db "SELECT price FROM portfolio WHERE symbol = 'GOOG' AND price > 9000" \
| xargs -I {} curl -X POST -H "Content-Type: application/json" -d '{"symbol": "GOOG", "price": "'{}'"}' https://example.com/api/sell

In this condensed program, the following happens:

  1. We fetch the current quote for the GOOG stock from a JSON API.
  2. We extract the current price from the JSON response with jq.
  3. We update the price of the GOOG stock in our database.
  4. If the price is above 9000, we send a POST request to our API to sell the stock.

Getting back to the UNIX philosophy, this means writing small programs that do one thing well and can be combined with other programs to achieve more complex functionality. Other key principles of the UNIX philosophy include using plain text as a universal interface, favoring simple implementations over more complex ones, and using pipelines to combine simple programs into powerful workflows.

If you're writing command line tools, it's important to consider how they fit into the UNIX ecosystem. As part of this, it's helpful to ensure your tools can be easily integrated into pipelines with other tools. A key way to achieve this is by allowing your tool to accept input from other tools through standard input and output.

Example

Let's take a look at an example of a fictitious tool called do_x. In this example, we define a click command line interface that allows the user to specify an input file or to use standard input by default. We also provide an option to output the results in JSON format. After processing the input data, we output the results either as a string or as a JSON object, depending on the user's choice.

Starter code

Let's start off with an example of what I might have:

import click


def do_stuff(input_data):
    """Do stuff."""
    return {'result': input_data}


@click.command()
@click.argument('filename')
def main(filename):
    with open(filename, 'r') as f:
        input_data = f.read()

    # Do something with input data and print it
    output_data = do_stuff(input_data)
    # Use `echo` for better compatibility
    click.echo(output_data)


if __name__ == '__main__':
    main()

This code is straightforward: the tool is meant to be called from the command line with a simple filename argument. It reads the file, does something with the data, and prints the result. This is a good start, but it's not very flexible. What if we want to use this tool in a pipeline? What if we want to use it with standard input? What if we want to use it in a script?

Using standard input

The first order of business is to make this tool more flexible. Tools should accept data from stdin and also offer to read files given as an argument; your tool's API should be flexible enough to handle both. In our case, we can implement this by making the filename argument optional and setting its default value to stdin. This will allow us to call the tool with or without a filename argument, and it will read from stdin if no filename is provided.

In general, the UNIX philosophy favors passing data via pipes and standard input, as it allows for a more flexible and composable toolchain. This is because a tool can be designed to read data from standard input, process it, and then output the results to standard output, which can be used as input to another tool in the pipeline.

However, there are situations where it may be more appropriate to pass filepaths as arguments instead of reading data from standard input. For example, if a tool needs to process multiple files, it may be more convenient to pass the filepaths as arguments instead of requiring the user to redirect the contents of each file to standard input.

A good tool can handle both cases, allowing the user to pass either filepaths or data via standard input, depending on their preference or the specific use case. This can be achieved by designing the tool to first check if a filepath argument was passed, and if not, to read from standard input.
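As a rough sketch of that fallback logic in plain Python, without any third-party library (the function name and the upper-casing step are placeholders, not part of the original tool):

```python
import sys


def run(argv, stdin=sys.stdin, stdout=sys.stdout):
    """Read from a filepath argument if one was passed,
    otherwise fall back to standard input."""
    if len(argv) > 1:
        # A filepath argument was passed: read the file
        with open(argv[1]) as f:
            data = f.read()
    else:
        # No argument: read from standard input
        data = stdin.read()
    # Placeholder processing step: upper-case the input
    stdout.write(data.upper())


# Typical entry point: run(sys.argv)
```

With this shape, both `tool input.txt` and `cat input.txt | tool` produce the same output, which is what makes the tool composable in pipelines.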

In our example in particular, the click package handles this for us with the click.File type, which

  1. Defaults to stdin if the input is set to -, and
  2. Passes a subclass of io.TextIOBase (e.g. StringIO) in either case.

This makes it handy as it handles the best practice case by default:

import click


def do_stuff(input_data):
    """Do stuff."""
    return {'result': input_data}


@click.command()
@click.argument(
    'filename',
    type=click.File('r'),
    default=click.get_text_stream('stdin'),
)
def main(filename):
    if filename.name == '<stdin>':
        # Let the user know why we're waiting for input;
        # write to stderr so the notice doesn't pollute piped output
        click.echo('Reading from STDIN', err=True)

    # click.File hands us an open text stream in both cases,
    # so this works fine with both a file and STDIN
    input_data = filename.read()

    # Do something with input data and print it
    output_data = do_stuff(input_data)
    click.echo(output_data)

By allowing the user to specify an input file or to use standard input, our tool can be easily integrated into pipelines with other tools. For example, let's say we have a file called input.txt containing some data that we want to process with our tool called e.g. cli and then pipe the results into another tool called do_y:

$ cat input.txt | cli | do_y

By default, cli will read from standard input, allowing us to pipe the contents of input.txt into it. cli will then output the results to standard output, which can be piped into do_y. This allows us to easily chain together multiple tools to create powerful pipelines.

Adding JSON support

Another important aspect of being a good UNIX neighbour is ensuring that the fields in your output are standard enough that they can be easily translated with other tools. This means using a standard delimiter, such as a tab or a comma, and avoiding using special characters that may cause issues when parsing the output with other tools.
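For instance, a hypothetical helper (the name and fields are made up for illustration) that writes one record per line with tab-separated fields produces output that cut, sort, or join can consume directly:

```python
import sys


def write_records(records, out=sys.stdout):
    """Write one record per line, fields separated by tabs."""
    for record in records:
        # Replace embedded tabs/newlines, which would otherwise break
        # the one-record-per-line, tab-delimited convention
        fields = (str(field).replace('\t', ' ').replace('\n', ' ')
                  for field in record)
        out.write('\t'.join(fields) + '\n')
```

A line such as `GOOG<TAB>123.45` can then be picked apart downstream with, e.g., `cut -f2`.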

Finally, if possible, it is also helpful to provide the option to output the results in JSON format. JSON is a standard data format that can be easily parsed and processed by many programming languages, making it a great option for interoperability between tools. This can be achieved by adding a flag to your tool that allows the user to specify the output format. Depending on the API you strive to provide, you may also want it to become the default.

import json

import click


def do_stuff(input_data):
    """Do stuff."""
    return {'result': input_data}


@click.command()
@click.argument(
    'filename',
    type=click.File('r'),
    default=click.get_text_stream('stdin'),
)
@click.option(
    '--json',
    '-j',
    'as_json',  # name the parameter explicitly to avoid shadowing the json module
    is_flag=True,
    help='Output results in JSON format',
)
def main(filename, as_json):
    if filename.name == '<stdin>':
        # Keep the notice on stderr so it doesn't pollute piped output
        click.echo('Reading from STDIN', err=True)

    # click.File gives us an open text stream for both a file and STDIN
    input_data = filename.read()
    output_data = do_stuff(input_data)

    if as_json:
        # Output results as JSON
        click.echo(json.dumps(output_data))
    else:
        # Output results as a string
        click.echo(output_data)

which makes it possible e.g. to use this tool in a pipeline with jq:

$ cat input.txt | cli --json | jq '.result' | do_y

and not have to resort to more complex contortions of string manipulation like cut -f2, or, God forbid, awk, to extract the result.

Conclusion

In conclusion, by designing our command line tools to be good UNIX neighbours, we can create powerful pipelines that allow us to efficiently process and analyze data. This means allowing our tools to accept input from other tools through standard input and output, using standard delimiters in our output fields, and providing the option to output results in JSON format. By following these principles, we can create tools that work well with others and promote the UNIX philosophy of modularity and simplicity.



from Hacker News https://ift.tt/y7Qspec

The Risk of Fixing Time and Scope in Non-Lean Software Projects


Big software development projects are complex and challenging, especially when the scope is large and the timeline is fixed. In such projects, the trade-off between quality, scope, and time is the most critical. Fixing the time and scope while maintaining a high level of code quality is daunting, even if you have an almost infinite amount of money. In this article, we will discuss how fixing time and scope can impact the completion of a big non-lean software project and its associated risks.

The Triple Constraint in project management

When the Triple Constraint is applied to software projects, "quality" usually means "code quality", and "cost" is simply the available budget. In my experience, huge organizations that don't care much about spending money do care about getting the payoff as early as possible. In those cases, "time" becomes the currency, and it is fixed.

When I talk about "lean software projects", I mean projects that avoid waste by delivering small tasks directly to the customer, deliberately not offering all features in the first delivery, in order to get early feedback and iterate. In such projects, when time and scope are non-negotiable, code quality is also non-negotiable, and money is unlimited, the focus shifts from time/scope/code quality to time/scope/customer satisfaction. User satisfaction with the software becomes the risk: everything else is fixed, so something has to give.

I've seen many executives trying to add impossible "constraints" like those to teams, hoping they would improve but offering absolutely no infrastructure for continuous improvement. Spoiler alert: it doesn't work.

Fixing time and scope in non-lean software projects where code quality is also fixed is risky. The more rigidly we fix time, scope, and code quality, the harder it becomes to deliver a satisfactory product to the users. Product quality issues will inevitably leak into the completed software, and users will notice them at some point; product managers can hardly perceive them before launch. In such cases, the project's failure is almost unrecoverable, and the cost of fixing the issues will be much higher.

Forget about the "pseudo-Agile" ideas of "fail fast, fail often":

It’s not worth waiting to learn from a failure that can’t be recovered from.

Another major risk of fixing time, scope, and code quality is the pressure it puts on the development team. Forced to work within a fixed timeline and a fixed set of requirements, the team may cut corners to meet the deadline and the required scope, compromising not only product quality but other aspects such as security. This also creates technical debt, which requires additional time and resources to pay down later.

A further risk of fixing time, scope, and code quality is the lack of flexibility. In such projects, changes to requirements or timeline midway cannot be easily accommodated, and any change significantly impacts the project's outcome. This rigidity leads to more shortcuts, and to a delivered product that does not meet the users' expectations, resulting in even more dissatisfaction.

In conclusion, fixing time, scope, and code quality in non-lean software projects can be risky. While it may seem straightforward, it can lead to serious unintended compromises on product quality and a lack of flexibility, resulting in unsatisfactory project completion. It is essential to consider the risks associated with fixing time, scope, and code quality and balance those dimensions to deliver a satisfactory product to the users. A Lean approach to projects solves most of these issues. That is: avoid waste by delivering small tasks directly to the customer but not offering all features on the first delivery to get early feedback and iterate.

Ultimately, the success of a software project is determined by the users’ satisfaction, and this should be the development team’s primary focus.

That is, if they really care about that at all…



from Hacker News https://ift.tt/ULkdCn0

In every country people think others are less happy than they themselves say


from Hacker News https://ift.tt/Kklyjsd

Sunday, May 28, 2023

How Scouts Find the Perfect Film Locations


from Hacker News https://ift.tt/0P3y2oI

DESKTOP2 – A Graphical User Interface for DOS


Felix Ritter

Horiz Line

DESKTOP is a graphical user interface for DOS that once was a commercial shell like MS-Windows™ 3.0 or GEOS™. However, due to the dominance of MS-Windows 95, we were forced to stop publishing the program commercially, so it's free now…

Features

DESKTOP sits somewhere between MS-Windows and Norton Commander™, and mainly consists of the following components:

  • Hierarchical program manager
    • lets you group your software and documents together by topics
    • supports multiple layers
    • includes over 100 ready-to-use icons (MS-Windows Icon format)
    • Visual Schnauzer for the icons
  • Sophisticated file manager
    • extensive Drag&Drop functionality
    • all functions of file and directory management
    • virtual directory windows
    • disk operations, such as:
      • Format, Copy, Compare, Full&Quick Erase, Recover, Verify …
      • Search for files on drives
      • Identify, Show FAT
  • Block-oriented editor
    • Cut, Copy, Paste
    • shares the clipboard with the other text-fields in DESKTOP
  • The inevitable CD player
  • Calculator
  • Screen savers
  • Config tool
  • etc.

Screenshots

Everyone likes screenshots:

Program manager

File manager

Requirements

  • IBM PC with 286 processor or better
  • VGA graphics card
  • 640 KByte RAM (2 MB recommended)
  • DOS 3.3 or higher
  • Mouse and running mouse driver
  • Extended Memory is also used if available (requires HIMEM.SYS)

Download & Installation

Just click one of the links in the Ready-to-install section to download a compressed copy of the distribution disk. Before you can start to install DESKTOP, you need to unpack the files (e.g. pkunzip -d dsk2-lang.zip) into a temporary directory, such as C:\TEMP. After that, just change into this directory and type INSTALL. The install program launches and tells you what to do next.

The command to start DESKTOP after the installation is DBD.BAT.

  • Ready-to-install release of DESKTOP
  • Complete source code of DESKTOP (Borland C++ 3.1 & TASM required)
  • User's Guide to DESKTOP

User comments

Instead of preaching about how great and handy this program is, I decided to let the users tell you what they think about DESKTOP2 ;-) :

i love your gui!!! it's great! my only problem is that i can't read german so i am forced to navigate using either trial-and-error or my trusty german-english dictionary (which is not well equipped at translating computer jargon!!) anyway, this is probably a lost cause and i understand you not wanting to go through all the hassel of translating it -- i was just wondering!! but all the same, great job!! it's a wonderful piece of software!!

Michael Moore

I came across your "Desktop" program and thought it was excellent. However, despite my name, I don't speak German! Which is a pity because "desktop" is the best DOS shell and a better file manager than Windows FileManager or Explorer.

K.H. von Kaufmann

I have just downloaded a copy of Desktop v2.61 and found this to be an excellent program. Keep up the good work.

Pete Edwards

Acknowledgements

The English language release (1998) is thanks to the great job of the people listed below:

  • Sarah Bown
  • Catfish (Nathan Stanley)
  • Pop (Paul Stanley)
  • Mark Ray



from Hacker News https://ift.tt/up9YVb8

Discovery of 2000-year-old 'computer' leaves scientists baffled

Scientists have been left baffled by a 2,000-year-old "computer", recovered from a shipwreck, that is amazingly complex.

The Antikythera mechanism – an astronomical calendar – has been dubbed "the first computer" and has puzzled scientists for generations since it was first discovered inside a Greek shipwreck in 1901.

The device is a hand-powered time-keeping instrument that used a wind-up system to track the celestial positions of the sun, moon, and planets. It also worked as a calendar, tracking the phases of the Moon and the timing of eclipses.

Despite sounding relatively simple, the mechanism was far ahead of its time: more technically sophisticated than any other known tool invented over the next 1,000 years.

In its current condition, the mechanism is in 82 separate fragments with only a third of its original structure remaining, including 30 corroded bronze gearwheels.


Research into the device from experts at University College London involved 3D computer modelling and helped them solve the mystery of how the device worked, revealing a “creation of genius”.

Adam Wojcik, a materials scientist at UCL said at the time: “We believe that our reconstruction fits all the evidence that scientists have gleaned from the extant remains to date.”

They theorised that the device tracked the movement of the sun, moon and planets on concentric rings, as the ancient Greeks believed that the sun and planets revolved around Earth, rather than the sun.

The researchers explained in Scientific Reports: “Solving this complex 3D puzzle reveals a creation of genius—combining cycles from Babylonian astronomy, mathematics from Plato’s Academy and ancient Greek astronomical theories.”




from Hacker News https://ift.tt/lsHdbu2

ChatGPT: A Mental Model


Since ChatGPT was launched at the end of 2022, I’ve been struggling to find the right framing for the technology. And, so has the rest of the world with countless articles of Doom and Gloom: fears of paperclip maximizers, fears of jobs lost, fears of economies reshaped, fears of AI hallucination, fears of further-accelerated misinformation, fears of students cheating, etc etc etc.

It’s exhausting.

And, as an engineer, I get asked by non-engineers about my opinion on the matter. So here it is..

The Case for Restraint

I’ve been through a number of technology hype-cycles at this point, and my modus operandi has always been and remains: “Keep Calm and Carry On!”.

To jog your memory, these happened:

  • In the 1990s we finally found the “Ark of the Covenant” and it was called “Java Object-Oriented Programming”. We were going to rewrite everything, even the Operating Systems. And today Linux is… oh wait… it’s still C
  • In the late 1990s and early 2000s we all understood that the World-Wide-Web was so revolutionary that What Does The Company Do? was secondary to Do They Do It Online? And the NASDAQ totally didn’t crash nor did it take 15 years to recover the same price levels..
  • After the great-recession of 2008, Satoshi Nakamoto completely superseded the world financial system built on flaky humans trusting each other. With “trust” no longer required for anything, bitcoin ushered in a new era of computerized money, prosperity, and freedom. Instability in the financial sector was no more. And, black markets completely failed to function in the now digitized world. All rejoiced. Unfortunately many worthless pieces of archaic fiat currency paper still exist and thus, as a service to the world, this author began a charitable collection service (email me)
  • In 2022 after the spot-on 5 year predictions came to fruition, the US Department of Transportation outlawed manual driving of automobiles, stating "It's clear that Level 5 self-driving is far superior to human drivers and Today is a Landmark Day for Public Safety." Argo AI stock tripled on market open. But, for some reason, I can't seem to reach the argo.ai website.. Hmmmm..
  • 2023: ChatGPT makes the world into one big paperclip factory, killing all humans in the process. RIP Humanity.

Honorable mentions for your next game of Buzzword Bingo: Everything is Big Data, Everything is Microservices, Everything is Agile, Everything is a Service-Oriented Architecture, Everything should be Javascript everything, Everything can be done with No Code, Everything should be in the Cloud, Everything should be On-Prem, Everything can be modelled with Machine Learning & Data-science, ...

Glib jokes aside, there is a sense that ChatGPT is a bit different, and honestly I don’t disagree (read on). But, there is an awful tendency of the human brain to grab onto change and either project too much excitement or too much fear. The truth lives somewhere in the middle.

Enter Stage Left: Rodney Brooks

Recently IEEE Spectrum published an interview with Rodney Brooks, the renowned roboticist, entitled Just Calm Down About GPT-4 Already. In it, Brooks poses a framing that I've Felt from the beginning but didn’t have the correct words for:

it doesn’t have any underlying model of the world

And in what is nearly a Zen Koan on Theory of Mind, he says:

What the large language models are good at is saying what an answer should sound 'like', which is different from what an answer should 'be'

And this captured my Feeling precisely.

Let me explain.

Interviewing ChatGPT

When ChatGPT was launched in late 2022, friends immediately raved to me about it. They said, “I want it right next to me, like a pair programmer”. So, naturally, I sought to evaluate such a bold claim.

I asked it the same interview questions I’d ask a candidate. If it’s going to work with me, it ought to pass the interview, right? It didn’t. In fact, it failed miserably. And it failed in all the ways a normal candidate fails (which is remarkable in its own way).

How did it fail? It simply didn’t have an underlying Mental Model of the World. On reflection, this is what my interview questions have always been about. I’m not interested in trivia knowledge. I’m not interested in Tools Used. I’m not interested in a few properly composed buzz-words.

But I am interested in seeing someone reason through a problem based on some underlying model of reality. I like to probe at the edge-cases of that model. I like to throw rare, unexpected “curve-balls”. I like to get people to think about sub-problems they’ve never considered before. It is as if I want to say “let’s go together to the edge of our collective understanding and then try to keep going”. However, to get there, we usually have to first consider and dispense with the “standard” or “average” answers. By contrast, ChatGPT does not show this ability.

Expert Test-Taking and World Model Building

Back in my school days, I would occasionally meet a super good test taker. I mean the kind of person who doesn’t learn the actual material. Instead they think about how the test-maker constructed the test. E.g. how often did they make “(a)” the answer to a multiple-choice question? I’ve met people who never actually learned algebra because they could just “pass the test”. There’s a part of me that’s in Absolute Awe of such a skill. It’s one I don’t have. I have a bad memory. I’m a bad actor. My ability to “read people” is almost certainly below average. I’ve always relied on building and exploring an ever more elaborate model of the world as a crutch to navigate this complicated world.

It’s long been my estimation that everyone else does the same: build a World Model. Is this true? I don’t know. The long line of job candidates particularly skilled at producing “facts on-demand” would seem to argue otherwise. However, it’s clear to me that ChatGPT certainly doesn’t.

But this cuts another way too.

Knowing an Average Amount of … Everything!

My current mental model of ChatGPT is that it’s akin to a “Maximum Likelihood Estimator for the Entirety of Human Knowledge”. There are two very different ways to interpret that: (1) Meh, it’s just a silly stats trick and (2) Holy F***ing Shit!!
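
That “maximum likelihood” framing can be made concrete with a toy sketch: a bigram model that always emits the most frequent continuation seen in its corpus. This is a deliberate caricature of an LLM (real models condition on long contexts and sample from learned distributions), and the corpus here is made up for the example:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; any text would do.
corpus = "the cat sat on the mat and the cat ran".split()

# Count word bigrams: follow_counts[w] tallies what was seen after w.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the maximum-likelihood continuation of `word` in the corpus."""
    return follow_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

The model never “understands” anything; it only knows what typically comes next, which is interpretation (1) and interpretation (2) at the same time once the corpus is all of human text.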

Have you ever met a person who seemed to know a little something about everything? Perhaps that person also had a large and diverse social circle? Perhaps you would seek this person out if you had a question about something and needed someone to point you in the right direction? A person with extreme breadth.

In my experience, that person doesn’t have the most depth of knowledge. Or perhaps they even give you some wrong answers. Maybe those wrong answers were even given quite confidently. And maybe you feel slighted by them leading you on… But, maybe you stick around anyways because you appreciate their breadth. After all, it’s only occasionally that they are disastrously wrong (shrug).

Now imagine that person goes off to GPU training camp for about 1,000 years and comes back as ChatGPT.

How can we not be impressed? To know and have access to the “standard” or “average” answer of … well … Everything. Wow.

But in that millennium of training, the core structure didn’t change. It’s the same old friend you always had that will error in the same ways, occasionally make you feel slighted, and sometimes leave you disappointed at their lack of depth.

So where does that leave us?

ChatGPT is Unreasonably Effective and Valuable

An entrepreneur friend recently told me that they use it constantly, every day. This makes sense: being an entrepreneur means needing to wear a “different hat” constantly. Success favors one who can manage and leverage a large breadth. And quickly!

I myself managed to learn and implement an RSS feed for this website with Zero Background in ~1 hour with ChatGPT. There were a couple mistakes it made, but they were easy to correct. I'm certain it would have taken much longer with Google alone.
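
For a sense of scale, a minimal RSS 2.0 feed really is about an hour of work. Here is a sketch using only Python's standard library; the site name, URLs, and post titles are placeholders, not this site's actual code:

```python
import xml.etree.ElementTree as ET

def build_rss(site_title, site_url, posts):
    """Build a minimal RSS 2.0 document; `posts` is a list of (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # RSS 2.0 requires title, link, and description on the channel.
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = site_title
    for title, link in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("My Blog", "https://example.com",
                 [("Hello, world", "https://example.com/hello")])
```

A real feed would add per-item `pubDate` and `guid` elements, but this skeleton already validates as RSS 2.0, which is exactly the kind of “average answer” ChatGPT excels at producing.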

At this point, Google seems so SEO-gamed that it can be hard to find “maximum likelihood” average information quickly. You have to wade through lots of clickbait and ads and fluff that is more about “brand-building” than “education” to find the real gems. ChatGPT is simply a time win. Google is scared and they should be.

So, what does the future look like?

Will Large-Language Models transform the global economy? Probably. But it will take some time. The internet took some time. So did mobile phones. So did most new technologies.

Will a sufficient number of human jobs cease to exist? Probably not.

Instead what you have is a remarkable tool to take creativity and ingenuity to new heights. I would expect to see people combining disparate knowledge sets (breadth) in novel ways. It’s a massive boon for multi-disciplinary projects.

And if you’re afraid about job losses, consider this: we once had actual people called “Computers” (e.g. see the underrated movie Hidden Figures) who were replaced by machines. Did those jobs go away? No, they got majorly restructured, and then growth absolutely exploded! We now just call them “computer programmers”, and as of 2023 there are >25 million of them (stats).

It’s hard to believe that it’ll be significantly different this time. For some reason, every time humanity invents 1 new innovative tool, we seem to immediately find about 100 new things to do with it that were never practical before. This is a story of the history of humanity. And, there’s surely some Philosophy of the Human Mind buried in there somewhere, but I’m not going to attempt to unpack that today.

Can change be scary? Yes. Absolutely. And I feel sorry for you if you’re unlucky enough to have to restructure your life and career as a result. But this type of change is essential. An average person today has it significantly better than even the richest humans of just a handful of generations ago. And it’s precisely because of this type of change.

It’s a remarkable time to be alive.

Keep Calm and Carry On!



from Hacker News https://ift.tt/Y0jA59G