Sunday, May 31, 2020

Cybersecurity the responsibility of agencies, not us, AGD and ASD say


During a hearing held by the Joint Committee on Public Accounts and Audit last month into the cybersecurity resilience of Commonwealth entities, the federal opposition poked holes in current reporting requirements and highlighted a lack of accountability for when Commonwealth entities come up short.

The Australian National Audit Office (ANAO) faced the firing line, with the committee asking why the Protective Security Policy Framework (PSPF) has not been made mandatory for all Commonwealth entities, and why, given the mitigation strategies are called the Essential Eight, only the Top Four are examined.

"It's not uncommon within the Commonwealth public sector that mandated rules from the centre apply to the non-corporate sector, but not to all of the corporate sector," Auditor-General Grant Hehir said at the time. "You'll find that across a lot of areas like procurement, grants, and in the [PSPF]."

In 2019, ANAO cyber-resilience audits found that 29% of the agencies audited were compliant with the Top Four, whereas 60% of departments assessed themselves as compliant.

Shadow Assistant Minister for Cyber Security Tim Watts called it an inaccurate self-assessment.

"If you look at the evidence from our audits, one conclusion we can draw is that the framework that was in place wasn't driving the behavioural change to ensure that the regulatory stance was robust enough," Hehir said.

"I think they are questions more to the organisations responsible for setting the framework rather than us. But we'd like to see the framework being implemented resulting in cybersecurity, and if it's not then the argument is why not? Some of that has to go to the robustness of the regulatory framework."

The Attorney-General's Department (AGD) and the Department of Home Affairs are the key regulatory entities. The AGD is responsible for setting government protective security policy guidance, including for information security, through the PSPF. 

Providing a submission [PDF] to the committee in the aftermath of the hearing, AGD said cybersecurity is an important priority for the Australian government.

"The PSPF assists Commonwealth entities to protect their people, information, and assets, at home and overseas. The core requirements for information security are set out in policies eight to 11 of the PSPF, covering sensitive and classified information, access to information, safeguarding information from cyber threats and robust ICT systems," it told the committee.

It also said the Australian Cyber Security Centre (ACSC) within the Australian Signals Directorate (ASD) leads the Australian government's operational cybersecurity capability. The ACSC is responsible for producing the Australian government Information Security Manual (ISM), which the PSPF references as the key source of guidance for organisations safeguarding information from cyber threats and developing "robust" IT systems.

"The purpose of the ISM is to outline a cybersecurity framework that organisations can apply, using their risk management framework, to protect their information and systems from cyber threats," AGD wrote.

AGD said the PSPF requires non-corporate Commonwealth entities to implement four of the ACSC's eight essential mitigation strategies and "strongly recommends the adoption of all eight".

"Entities must also consider other strategies included in the ACSC's Strategies to Mitigate Cyber Security Incidents," it added.

It also said any questions about specific entities and their cyber posture should be directed to them.

"As individual Commonwealth entities are responsible for their assessment in light of their risk environment, questions regarding PSPF implementation within an individual entity are best directed to that entity," it wrote.

The Department of Defence, on behalf of the ASD, also provided a submission [PDF] following last month's questioning of ANAO.

Defence pointed to the Report to Parliament on the Commonwealth's Cyber Security Posture in 2019 and said it provided the latest information on the cybersecurity posture of Commonwealth entities.

"The report highlights that the overall cybersecurity of Commonwealth entities continues to improve," it wrote. "It acknowledges that, in the context of a dynamic and evolving threat environment, cybersecurity is an ongoing task."

It said that while ASD regularly engages with Commonwealth entities to provide cybersecurity advice and assistance, individual entities are responsible for the security of their own network and information.

"Cybersecurity maturity is a compliance and risk management issue for each accountable authority to balance in the context of their unique risk environment and the complexities of their operations," the submission continued.

"Questions regarding the cybersecurity posture of individual Commonwealth entities are better directed to the relevant entity."




from Latest Topic for ZDNet in... https://ift.tt/2Mk3EDz

The biggest hacks, data breaches of 2020 (so far)


In March, T-Mobile revealed a security incident in which cyberattackers were able to successfully infiltrate the firm's email services, leading to the compromise of T-Mobile customer and employee information, as well as staff email accounts.

Names, addresses, phone numbers, account numbers, plan information, and billing information may have been stolen.

Via: ZDNet



from Latest Topic for ZDNet in... https://ift.tt/3gF6kJV

Twitter vows to work alongside Australia in thwarting foreign interference


When it comes to thwarting foreign interference through social media platforms, Twitter believes the issue should be approached as a broad geopolitical challenge, not one of content moderation, and says government needs to assume some of the responsibility.

"Removal of content alone will not address this challenge and while it does play an important role in addressing the challenge, governments must address the broader landscape," Twitter said.

"We do not elevate our own values by seeking to silence those who do not share them. In fact, we undermine these principles and erode their global accessibility."

The comments were made in Twitter's submission [PDF] to the Select Committee on Foreign Interference through Social Media, which is currently looking into risks posed to Australia's democracy by foreign interference through social media.

"The purpose of Twitter is to serve the public conversation. We serve our global audience by focusing on the needs of the people who use our service, and we put them first in every step we take," it wrote.

"We work with commitment and passion to do right by the people who use Twitter and the broader public."

Twitter provided the submission before United States President Donald Trump accused the company of "interfering" with the 2020 presidential election, after it attached fact-checking links to his tweets claiming that mail-in voting leads to a "rigged election". In it, Twitter said protecting election integrity does not end with an election period.


"As the challenges evolve, so will our approach," it said. "We will continue to work with peers and partners to tackle issues as they arise, with collaborations across government institutions, civil society experts, political parties, candidates, industry, and media organisations as we move towards our common goal of a healthy and open democratic process."

In its nine-page submission to the committee, the social media company said the foremost challenge on the matter is for governments to communicate to build public trust by "directly engaging with the conflicting narratives propagated on and offline by foreign actors".

"Through clear and concise electoral regulations, companies are able to navigate and address relevant interference concerns. Additionally, public trust can be built through clear communication and strong attribution to address transparency issues related to interference," Twitter said.

It pointed to the Australian Electoral Commission (AEC) taking a leadership role throughout the 2019 federal election in engaging with the public conversation on Twitter to provide "clear, credible information", saying this helped direct voters to reliable resources.

"Through transparent communication and enabling voters to access the information they need, the Australian government can foster a sense of trust and encourage freedom of discussion reflective of the implied freedom of political communication embodied within the Australian Constitution," it said.


Twitter said its work on the issue was not done. It said coordinated inauthentic behaviour would not cease and that the issues predate Twitter's existence.

"They will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge," it said. "Given this, the threat we face requires extensive partnership and collaboration with government entities, civil society experts, and industry peers.

"We each possess information the other does not have, and our combined efforts are more powerful together in combating these threats."

Twitter said foreign interference is an unavoidable part of a globalised communications landscape and said policy makers should seek to build resilience and digital literacy to protect against such activity, while "taking the necessary steps to inform the public of the facts on key public policy issues, defending domestic policy, and advocating against hostile actors where necessary".




from Latest Topic for ZDNet in... https://ift.tt/2XQGee6

Police Are Instigating Violence During the Nationwide Protests


Police in Minneapolis
Photo: Getty Images

The blaze of protests spreading across the country this weekend, in response to the police killing of George Floyd and countless others throughout American history, has been met with policing so brutal even Alanis Morissette could clock the irony.

Videos posted on social media by demonstrators, journalists, and even people sitting on their porches at home show militarized police officers who appear eager to enact violence on the streets rather than help quell it.

In Minneapolis, where the Fed Up-Rising protests started after former police officer Derek Chauvin kneeled on the neck of George Floyd for more than 8 minutes, cops have been firing teargas, rubber bullets, and paint canisters not only at demonstrators but also at civilians observing from home:

Videos posted on social media yesterday captured New York Police Department officers driving into a crowd of protestors, reminiscent of the tragic images from Charlottesville in 2017, where a white supremacist drove a car into a crowd of people demonstrating against racial violence and killed Heather Heyer.

Another video shows an NYPD officer pulling down the face mask of a black protestor, whose hands are raised, and pepper spraying him directly in the face:

Members of the media also continue to be shot at and otherwise targeted by police officers while doing their job reporting on the protests.

CNN reporter Omar Jimenez, who was arrested on-air by Minneapolis officers earlier this weekend, continued covering the dystopian scenes on the streets, where the Minnesota Department of Public Safety reportedly said cops were deployed to “address a sophisticated network of urban warfare.”

Are the police aiming to stop “urban warfare” or are they the ones continuing to wage it against people they are supposed to protect and serve?

Meanwhile, the president of the U.S. continues to signal to police and everyone else across the country his own thirst for violence.

On Saturday he posted on Twitter that protestors at the White House “would have been greeted with the most vicious dogs, and most ominous weapons,” if they had come close to breaching the fence.

“Many Secret Service agents just waiting for action. ‘We put the young ones on the front line, sir, they love it,’” Trump tweeted.

According to reporting from the Independent, the head of the Minneapolis police union, Lieutenant Bob Kroll, spoke at a rally for Trump last year, where he celebrated the permission the president gave cops to be brutal.

From the Independent:

Donning a red “Cops for Trump” shirt as he took the stage, the lieutenant attacked the Obama administration over its alleged “despicable” treatment of police, adding: “The first thing President Trump did when he took office was turn that around … he decided to start to let cops do their job, put the handcuffs on the criminals instead of us.”



from Hacker News https://ift.tt/2yTSCBW

Solid – A declarative JavaScript library for building user interfaces

Solid


Solid is a declarative JavaScript library for creating user interfaces. It does not use a Virtual DOM. Instead, it opts to compile its templates down to real DOM nodes and wrap updates in fine-grained reactions. This way, when your state updates, only the code that depends on it runs.

Key Features

  • Real DOM with fine-grained updates (No Virtual DOM! No Dirty Checking Digest Loop!).
  • Declarative data
    • Simple composable primitives without the hidden rules.
    • Function Components with no need for lifecycle methods or specialized configuration objects.
    • Render once mental model.
  • Fast! Almost indistinguishable performance vs optimized painfully imperative vanilla DOM code. See Solid on JS Framework Benchmark.
  • Small! Completely tree-shakeable: Solid's compiler will only include the parts of the library you use.
  • Supports modern features like JSX, Fragments, Context, Portals, Suspense, SSR, Error Boundaries and Asynchronous Rendering.
  • Built on TypeScript.
  • Webcomponent friendly
    • Context API that spans Custom Elements
    • Implicit event delegation with Shadow DOM Retargeting
    • Shadow DOM Portals
  • Transparent debugging: a <div> is just a div.

The Gist

```jsx
import { render } from "solid-js/dom";

const HelloMessage = props => <div>Hello {props.name}</div>;

render(
  () => <HelloMessage name="Taylor" />,
  document.getElementById("hello-example")
);
```

A simple component is just a function that accepts properties. Solid uses a render function to create the reactive mount point of your application.

The JSX is then compiled down to efficient real DOM expressions:

```jsx
import { render, template, insert, createComponent } from "solid-js/dom";

const _tmpl$ = template(`<div>Hello </div>`);

const HelloMessage = props => {
  const _el$ = _tmpl$.cloneNode(true);
  insert(_el$, () => props.name, null);
  return _el$;
};

render(
  () => createComponent(HelloMessage, { name: "Taylor" }),
  document.getElementById("hello-example")
);
```

That _el$ is a real div element, and props.name (Taylor in this case) is appended to its child nodes. Notice that props.name is wrapped in a function: that is the only part of this component that will ever execute again. Even if the name is updated from the outside, only that one expression will be re-evaluated. The compiler optimizes the initial render and the runtime optimizes updates. It's the best of both worlds.
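The fine-grained model can be sketched in a few lines of plain JavaScript. This is a toy illustration of the signal/effect idea only, not Solid's actual implementation; the names `createSignal` and `createEffect` mirror what Solid exposes, but the bodies are a deliberately minimal approximation:

```javascript
// Toy sketch of fine-grained reactivity (an illustration of the idea,
// not Solid's actual implementation). Reading a signal inside an effect
// registers the effect as a subscriber; writing the signal re-runs only
// the subscribed effects, not a whole component tree.
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // the first run reads its dependencies and subscribes to them
  currentEffect = null;
}

const [name, setName] = createSignal("Taylor");
let runs = 0;
let text = "";
createEffect(() => {
  runs += 1;
  text = `Hello ${name()}`;
});
setName("Jacob"); // re-runs just that one expression
```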

Installation

You can get started with a simple app with the CLI by running:

> npm init solid app my-app

Use app-ts for a TypeScript starter.

npm init solid <project-type> <project-name> is available with npm 6+.

Or you can install the dependencies in your own project. To use Solid with JSX (recommended) run:

> npm install solid-js babel-preset-solid

Solid Rendering

Solid's rendering is done by the DOM Expressions library. This library provides a generic, optimized runtime for fine-grained libraries like Solid, with the opportunity to use a number of different rendering APIs. The best option is JSX pre-compilation with Babel Plugin JSX DOM Expressions, which gives the smallest code size, cleanest syntax, and most performant code. The compiler converts JSX to native DOM element instructions and wraps dynamic expressions in reactive computations.

The easiest way to get set up is to add babel-preset-solid to your .babelrc, or to your Babel config for webpack or rollup:
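For example, a minimal .babelrc (this mirrors babel-preset-solid's documented usage):

```json
{
  "presets": ["solid"]
}
```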

Remember: even though the syntax is almost identical, there are significant differences between how Solid's JSX works and a library like React's. Refer to JSX Rendering for more information.

Alternatively, in non-compiled environments, you can use Tagged Template Literals with Lit DOM Expressions or even HyperScript with Hyper DOM Expressions.

For convenience Solid exports interfaces to runtimes for these as:

```jsx
import h from "solid-js/h";
import html from "solid-js/html";
```

Remember you still need to install the library separately for these to work.

Solid State

Solid's data management is built on a set of flexible reactive primitives. They are similar to React Hooks, except that instead of whitelisting change for an owning component, they are independently and solely responsible for all the updates.

Solid's State primitive is arguably its most powerful and distinctive one. Through the use of proxies and explicit setters it gives the control of an immutable interface and the performance of a mutable one. The setters support a variety of forms, but to get started set and update state with an object.

```jsx
import { createState, onCleanup } from "solid-js";

const CountingComponent = () => {
  const [state, setState] = createState({ counter: 0 });
  const interval = setInterval(
    () => setState({ counter: state.counter + 1 }),
    1000
  );
  onCleanup(() => clearInterval(interval));
  return <div>{state.counter}</div>;
};
```
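The proxy-and-setter combination described above can be illustrated with a toy sketch. This is our own simplification, not Solid's real createState: reads go through a Proxy over a live object, while all writes must flow through an explicit setter that mutates the underlying target in place.

```javascript
// Toy sketch (not Solid's real createState) of the proxy idea: reads are
// plain property lookups through a Proxy, direct writes are forbidden,
// and updates go through an explicit setter that mutates the target.
function createToyState(initial) {
  const target = { ...initial };
  const state = new Proxy(target, {
    set() {
      throw new Error("state is read-only; use the setter");
    },
  });
  const setState = (changes) => Object.assign(target, changes);
  return [state, setState];
}

const [state, setState] = createToyState({ counter: 0 });
setState({ counter: state.counter + 1 });
// state.counter now reads 1, while `state.counter = 5` would throw
```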

Where the magic happens is with computations (effects and memos), which automatically track dependencies.

```jsx
const [state, setState] = createState({
  user: { firstName: "Jake", lastName: "Smith" }
});

createEffect(() =>
  setState({
    displayName: `${state.user.firstName} ${state.user.lastName}`
  })
);

console.log(state.displayName); // Jake Smith
setState("user", { firstName: "Jacob" });
console.log(state.displayName); // Jacob Smith
```

Whenever any dependency changes the State value will update immediately. Each setState statement will notify subscribers synchronously with all changes applied. This means you can depend on the value being set on the next line.

Solid State also exposes a reconcile method, used with setState, that does deep diffing to allow for efficient automatic interop with immutable store technologies like Redux, Apollo (GraphQL), or RxJS.

```jsx
const unsubscribe = store.subscribe(({ todos }) =>
  setState("todos", reconcile(todos))
);
onCleanup(() => unsubscribe());
```

Read these two introductory articles by @aftzl:

Understanding Solid: Reactivity Basics

Understanding Solid: JSX

And check out the Documentation, Examples, and Articles below to get more familiar with Solid.

Documentation

Examples

Related Projects

Latest Articles

Status

Solid is mostly feature complete for its v1.0.0 release. The next releases will be mostly bug fixes and API tweaks on the road to stability.



from Hacker News https://github.com/ryansolid/solid

Rethinking CGI as a self-hosted lambda server

Trusted-CGI


Lightweight self-hosted lambda/applications/cgi/serverless-functions engine.

Download

Idea behind

The idea came from the past: CGI. At the beginning of the Internet, people wrote simple scripts that received incoming bytes over STDIN (standard input) and wrote responses to STDOUT (standard output). The application server (aka CGI server) accepted clients, invoked the scripts, and redirected socket input/output to them. There are a lot of details here, but this is a brief explanation.
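The STDIN-to-STDOUT contract above can be sketched as a tiny pure function. The names here are illustrative only, not part of Trusted-CGI's API:

```javascript
// Minimal illustration of the CGI contract (our own sketch, not
// Trusted-CGI's API): a "script" is just a function from request bytes
// (STDIN) to response bytes (STDOUT), and the server's whole job is to
// pipe the client's socket through it.
function cgiScript(stdin) {
  // A CGI response: header block, blank line, then the body.
  return "Content-Type: text/plain\r\n\r\n" + `You sent: ${stdin}`;
}

function serveRequest(requestBody) {
  // The "application server" side: run the script and relay its output.
  return cgiScript(requestBody);
}

serveRequest("hello"); // "Content-Type: text/plain\r\n\r\nYou sent: hello"
```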

After more than 20 years, the world has spun around and arrived back at the beginning: serverless functions, lambdas, and so on. It is almost CGI, except the scripts became Docker containers, and we need many more servers to do the same things as before.

So let’s cut some corners: we have a trusted developer (ourselves, company workers; in other words, not arbitrary clients), so we don’t need heavy restrictions on the application. Let’s throw away Docker and other heavy stuff.

Docs and features

  • Manifest - main and mandatory entrypoint for the lambda
  • Actions - arbitrary actions that could be invoked by UI or by scheduler
  • Scheduler - cron-like scheduling system to automatically call actions by time
  • Aliases - permanent links/aliases for functions
  • Security - security and restrictions
  • GIT repo - using a GIT repo as a function

High-level components diagram


Why I did it?

Because I want to write small handlers that will, 99% of the time, just do nothing. I am already paying for the cheapest Digital Ocean instance (thanks, guys, for your existence) and do not want to additionally pay Lambda providers like Google/Amazon/Azure.

I also tried self-hosted solutions based on k3s, but it is too heavy for a 1GB server (yep, it is; don’t believe the marketing).

So, ‘cause I am a developer I decided to make my own wheels ;-)

Installation

Actions

If a function contains a Makefile and make is installed, its targets can be invoked over the UI/API (as Actions). Useful for installing dependencies or building.

URL

Each function has at least one URL, <base URL>/a/<UID>, and any number of unique aliases/links, <base URL>/l/<LINK NAME>.

Links are useful for making public links and for dynamically migrating between real implementations (functions). For example: you made a GitHub hook processor in Python, then changed your mind and switched to a PHP function. Instead of updating the link in the GitHub repo (which could be a hassle if you have spread it everywhere), you can just change the alias.

Important! The security settings and restrictions applied will be those of the new function.
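The alias indirection can be sketched as a lookup table mapping a stable link name to whichever function UID currently backs it. The names and routing here are illustrative, not Trusted-CGI's actual code:

```javascript
// Illustrative sketch of the two URL forms (not Trusted-CGI's actual
// routing code): /a/<UID> hits a function directly, /l/<NAME> resolves an
// alias first, so the public link survives swapping the implementation.
const functions = {
  "uid-python-hook": () => "handled by Python",
  "uid-php-hook": () => "handled by PHP",
};
const links = { "github-hook": "uid-python-hook" };

function resolve(path) {
  const [, kind, id] = path.split("/"); // "", "a"|"l", identifier
  const uid = kind === "a" ? id : links[id];
  return functions[uid]();
}

resolve("/l/github-hook");             // "handled by Python"
links["github-hook"] = "uid-php-hook"; // migrate the alias; URL unchanged
resolve("/l/github-hook");             // "handled by PHP"
```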

Templates

Embedded

Python 3

Host requirements:

  • make
  • python3
  • python3-venv

Node

Host requirements:

PHP

Host requirements:

Nim lang

Host requirements:

Development

Embedding UI

```shell
make clean
make embed_ui
```

TODO

  • Upload/download tarball
  • CLI control


from Hacker News https://ift.tt/2XNGM4y

Joomla team discloses data breach

Joomla
Image: Joomla team

The team behind the Joomla open source content management system (CMS) announced a security breach last week.

The incident took place after a member of the Joomla Resources Directory (JRD) team left a full backup of the JRD site (resources.joomla.org) in an Amazon Web Services S3 bucket owned by their own company.

The Joomla team said the backup file was not encrypted and contained details for roughly 2,700 users who registered and created profiles on the JRD website -- a portal where professionals advertise their Joomla site-making skills.

Joomla admins said they are still investigating the incident. It is currently unclear if anyone found and downloaded the data from the third-party company's S3 server.

Data that could have been exposed, had someone found and downloaded the backup, includes details such as:

  • Full name
  • Business address
  • Business email address
  • Business phone number
  • Company URL
  • Nature of business
  • Encrypted password (hashed)
  • IP address
  • Newsletter subscription preferences

The severity of this breach is considered low, as most of this information was already public, as the JRD portal serves as a directory for Joomla professionals. However, hashed passwords and IP addresses were not meant to be public.

The Joomla team is now recommending that all JRD users change their password on the JRD portal, but also on other sites where they reused the password, as accounts on these sites could be under the threat of a credential stuffing attack if attackers manage to crack the users' passwords.

The Joomla team said that once it learned of the accidental leak of the JRD site backup, it carried out a full security audit of the JRD portal.

"The audit also highlighted the presence of Super User accounts owned by individuals outside Open Source Matters," the Joomla team said in a breach disclosure published last Thursday.

Joomla is a content management system (CMS), a web-based application used to build and manage self-hosted websites. It is currently the third-most used CMS on the internet, having been passed for the second spot by Shopify this month.



from Latest Topic for ZDNet in... https://ift.tt/2XOxLbn

America’s Never-Ending Battle Against Flesh-Eating Worms

illustration of an airplane dropping insects
Cornelia Li

The Florida Keys are a place where deer stand next to children at school-bus stops. They lounge on lawns. They eat snacks right out of people’s hands. So when the deer began acting strangely in the summer of 2016, the people of the Keys noticed. Bucks started swinging their heads erratically, as if trying to shake something loose.

Then wounds opened on their heads—big, gaping wounds that exposed white slabs of bone. Something was eating the deer alive.

That something, lab tests would later confirm, was the New World screwworm, a parasite supposedly eradicated from the United States half a century ago. No one in the Keys had ever seen it. If you had asked an old-time Florida rancher, though, he might have told you boyhood stories of similarly disfigured and dying cattle. In those days, screwworms found their way into cattle through any opening in the skin: the belly buttons of newborn calves, scratches from barbed wire, even a tick bite. Then they feasted.

Screwworms once killed millions of dollars’ worth of cattle a year in the southern U.S. Their range extended from Florida to California, and they infected any living, warm-blooded animal: not only cattle but deer, squirrels, pets, and even the occasional human. In fact, the screwworm’s scientific name is C. hominivorax or “man eater”—so named after a horrific outbreak among prisoners on Devil’s Island, an infamous 19th-century French penal colony in South America.

For untold millennia, screwworms were a grisly fact of life in the Americas. In the 1950s, however, U.S. ranchers began to envision a new status quo. They dared to dream of an entire country free of screwworms. At their urging, the United States Department of Agriculture undertook what would ultimately become an immense, multidecade effort to wipe out the screwworms, first in the U.S. and then in Mexico and Central America—all the way down to the narrow strip of land that is the Isthmus of Panama. The eradication was a resounding success. But the story does not end there. Containing a disease is one thing. Keeping it contained is another thing entirely, as the coronavirus pandemic is now so dramatically demonstrating.

To get the screwworms out, the USDA to this day maintains an international screwworm barrier along the Panama-Colombia border. The barrier is an invisible one, and it is kept in place by constant human effort. Every week, planes drop 14.7 million sterilized screwworms over the rainforest that divides the two countries. A screwworm-rearing plant operates 24/7 in Panama. Inspectors cover thousands of square miles by motorcycle, boat, and horseback, searching for stray screwworm infections north of the border. The slightest oversight could undo all the work that came before.

The insect is relentless in its search for hosts. Those who fight it must be relentless too.

This past August, I went to Panama to meet the people who maintain the screwworm barrier. The Keys outbreak was long over by then, quelled within months by the release of sterile screwworms from Panama. As startling as it was to Floridians, it had been just a small, gruesome blip in the history of the screwworm.

A transcontinental screwworm barrier has been in place for 50 years—longer than many of the people who now maintain it have been alive. They work for a joint commission of Panama’s agricultural department and the USDA known as COPEG, or the Comisión Panamá–Estados Unidos para la Erradicación y Prevención del Gusano Barrenador del Ganado. The day before I landed at Tocumen International Airport, two small COPEG planes had taken off and released their precious loads of screwworms over the Panama-Colombia border.

More screwworm flights were scheduled for the next day, a Wednesday, and Thursday and Saturday and Monday and so on and on. “We will be here for a long time,” a COPEG staff member in Panama told me with evident pride. “We should be here for the next 100 years.”

In the early days of the eradication effort, USDA scientists were not so certain of success—or longevity. They had to bootleg money from other programs because they didn’t have enough funding. In press interviews, they worried about what laughingstocks they’d be if their “idiotic insect-sex scheme” failed and, God forbid, became an extremely mockable symbol of government waste.

The man who came up with the scheme, and believed in it most passionately, was Edward F. Knipling, a USDA entomologist who, in the 1930s, spent long hours watching screwworms mate. As a boy, he had waged constant war against insect pests on his family’s Texas farm. “Every plant that we grew,” he later said, “there was some type of insect that was causing damage.” Screwworms infected the farm’s cows and pigs, and Knipling remembered them as one of the worst pests. He had to climb into the hog pens to smear medicine on the wounds of uncooperative sows. “That was a very unpleasant task,” he recalled some eight decades later, in an interview shortly before he died.

Adult screwworms are actually flies, with big red eyes and metallic blue-green bodies. After mating, the females lay their eggs in open wounds, and the resulting larvae eat through a ring of surrounding flesh. Once sufficiently engorged, the larvae drop off the wounds to pupate, emerging as a new generation of flies. As Knipling watched screwworms churn through their life cycle in his government laboratory, he made an observation whose importance he could intuit but not yet put to use: Female screwworms mate only once in their entire life. If a female screwworm mates with a sterile male, she will never have any offspring. So if the environment could somehow be saturated with sterile males, Knipling surmised, screwworms would very quickly mate themselves out of existence.
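The arithmetic behind Knipling's insight can be illustrated with a simple model (our simplification, not his actual figures): assume a wild population that would otherwise hold steady, and note that since each female mates only once, just the fraction wild / (wild + sterile) of matings each generation produces offspring.

```javascript
// Toy model of the sterile-male technique (our simplification, not
// Knipling's actual calculations). The wild population would otherwise
// hold steady; each generation only wild / (wild + sterile) of the
// once-mating females breed with fertile males.
function generations(wild, sterilePerRelease, steps) {
  const history = [Math.round(wild)];
  for (let i = 0; i < steps; i++) {
    wild = wild * (wild / (wild + sterilePerRelease));
    history.push(Math.round(wild));
  }
  return history;
}

// One million wild males with two million sterile males released per
// generation: the population collapses within about five generations.
generations(1_000_000, 2_000_000, 5);
// → [1000000, 333333, 47619, 1107, 1, 0]
```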

But Knipling did not know how to sterilize male screwworms. In any case, the U.S. was entering World War II, and his expertise was needed elsewhere. Knipling was reassigned to a lab in Florida, where his team perfected incredibly effective pesticides such as DDT that protected troops from insect-borne diseases. DDT helped the U.S. win the war, but it would later devastate the environment from overuse. This experience shaped the rest of Knipling’s career. He devoted himself to sterilizing insects, a way to control pests without pesticides.

After the war, Knipling went back to studying screwworms with his fellow entomologist Raymond Bushland. Scientists now knew, from the horrific consequences of atomic bombs dropped on Japan, that high doses of radiation damage human tissue and cells. When a colleague introduced Knipling to research on the sterilization of other flies by radiation, he wondered: Could radiation sterilize screwworms too?

The entomologists managed to get access to an X-ray machine at a nearby military hospital where one of Bushland’s old Army buddies worked. Every Thursday afternoon, Knipling and Bushland put their screwworms through the X-ray machine, experimenting with different developmental stages and radiation levels. They needed screwworms that were damaged enough to be sterile but not so damaged that they could not attract a mate in the wild. The best time to irradiate, the two found, was 5.5 to 5.7 days into the pupal stage, when the adult fly’s ovaries and testes were developing and thus most sensitive to radiation.

The radiation worked. These screwworms indeed turned sterile, but Knipling and Bushland still needed to prove that irradiated males could actually mate with fertile females in the field. In 1951, a USDA team began releasing sterile male screwworms on Sanibel Island off the coast of Florida. The screwworms persisted, so the team in turn persisted, releasing sterile males on the island for two more years. It still didn’t work. The team thought that fertile males were probably flying over from the mainland. So in 1954, the scientists tried again on a more remote island: Curaçao in the Dutch Caribbean. This time, they succeeded. Screwworms disappeared from the island within months.

From here, a quietly audacious project to reengineer the environment for livestock got underway, ultimately changing the lives of cattle, deer, and humans throughout the North American continent.

When the Florida Cattlemen’s Association caught wind of the success on Curaçao, it immediately recognized the potential closer to home. The group lobbied state and federal officials for a bigger undertaking. And in 1957, the USDA began a campaign to wipe out the flesh-eating parasites east of the Mississippi River. When that succeeded two years later, ranchers in Texas, Louisiana, New Mexico, Arizona, and California started clamoring for their own eradication program. Screwworms were so widespread in those states that they had shaped cowboy culture: Long days of “riding the range,” for example, were dedicated to finding signs of screwworm infection.  

For the USDA, though, the West proved a challenge on a different scale. The screwworm-eradication program had to build a new plant in Mission, Texas, to produce as many as 200 million sterile flies each week. Because screwworms prefer to eat living flesh, feeding the plant’s flies was a grisly logistical puzzle. Initially, USDA scientists gave the screwworms a mixture of warm ground beef and blood, but beef was expensive. At various times, they substituted cheaper meats such as horse, whale, pig or cow lung, and even nutria—an invasive rodent that was then taking over Louisiana. By 1962, the flies at the Mission plant were consuming 240,000 pounds of meat and 10,000 gallons of whole blood every week. In the earliest days of sterile-screwworm testing, the flies stank so badly that airlines refused to ship them. Workers learned to spray the boxes with cologne.

The Mission plant closed in 1981, when screwworm rearing moved to Mexico and then later to Panama. Today, the screwworm-production facility in Panama is located on an old sugarcane plantation, about 20 miles east of the country’s capital. The roof is painted a smooth mint green, and the facility is the program’s most modern yet. But screwworms still have primordial urges: They still have to feed on animal remnants. So when I stepped inside the facility last August, the first thing I noticed was the smell—foul, with a metallic tang. My brain involuntarily matched it to a bloody tampon.

Screwworms are no longer given raw meat, but their keepers make sure they get animal protein in other ways. The current diet is a more economical mix of powdered blood, milk, and egg—reconstituted and then thickened with cellulose into a dark-brown sludge. I watched workers pipe it into trays, the diet gushing like sewage.

The rearing facility itself is a windowless maze of concrete. Each room is maintained at a prescribed temperature and humidity, optimized for a particular stage of screwworm development. The larvae, for example, hatch in a room heated to 102 degrees Fahrenheit, which mimics the body temperature of an infected animal. Once hatched, they are wheeled to cooler rooms, where they crawl out of their food and become pupae. My glasses fogged up multiple times as we moved through rooms hot and cold, dry and humid, following the screwworm’s life cycle.

In the 102-degree hatching room, staff pulled out a tray of food to show me a “feeding pocket.” When screwworms eat, they like to pack themselves tightly together like cigarettes in a carton, their mouths shoved down into the food and their white tails wriggling in the air. I had come across pictures of feeding pockets in animals’ open wounds before. Now I forced myself to look at the writhing brown mass, imagining this protein sludge to be flesh yielding to thousands of relentless mouths.

“For me, beautiful,” said Sabina Barrios, the head of screwworm production, gesturing at the feeding pocket. What she saw were healthy screwworms that would grow big and fat. In a few days, they would turn into pupae, also big and fat, that could be sterilized with radioactive cobalt 60. And a couple of days after that, they would emerge as flies, vigorous and ready to mate. A hundred and fifteen people work to keep the plant operating 24 hours a day, every day, to produce 20 million of these flies a week.

COPEG does more than produce sterile flies. It also runs a network of screwworm-inspection posts and offices that reaches into the most remote parts of the country. Early the next morning, I set out for the Darién, Panama’s easternmost and least populated province, with Pamela Phillips and John Welch. Both are scientists who have spent decades working on screwworm eradication; Phillips is now COPEG’s technical director, and Welch is its former technical director and current screwworm-program liaison.

Phillips and Welch work for COPEG through the USDA, and they had to inform the U.S. embassy that they were going to the Darién. We also had to avoid “red zones,” areas in the province that the U.S. government considers too risky for its employees to enter at all. (COPEG’s Panamanian inspectors still work in those parts.) FARC guerrillas used to roam the Darién, and drugs from South America still pass through here on their way north.

Welch is a bearded Texan with no fewer than two screwworm tattoos. He learned to speak Spanish after joining the program in 1984, and despite his very American accent, he launched into easy jokes with everyone we met. He hired Phillips to analyze satellite imagery for screwworm habitats in 1994. After 25 years of working together, their dynamic resembles that of an old married couple. “I’m biologically 67,” Welch told me. “Mentally, still a teenager,” Phillips added. As scientists, they have both found in screwworms a formidable intellectual challenge. As nature lovers, they enjoy the opportunities for travel that come with the job too.

Welch may be an adolescent at heart, but his back doesn’t handle these trips as well as it used to. The road to the Darién is so festooned with potholes that Phillips had to frequently swerve into the opposite lane to avoid them. When we crossed into the province, we stopped at a checkpoint to show our passports to the Senafront, Panama’s border police. COPEG has a checkpoint here too, mere feet away—but for animals. Cattle leaving the Darién are unloaded one by one and inspected for wounds, which are painted with a blue-green anti-parasite medication to kill any screwworms.  

Not too far past this checkpoint, the highway in the Darién simply ends. The only way to travel farther is by river. We, along with several Panamanian staff, put all our things in waterproof trash bags and got into a small COPEG boat. We arrived first in the town of Garachiné, a small collection of mostly cinder-block houses, then climbed into a COPEG pickup to jostle along the uneven dirt road to the smaller town of Sambu. During the hour-long drive, we passed more horses and riders than cars. Butterflies erupted by the road in regular bursts. But the land here is far less wild than it used to be. What was once rainforest and swamp is now pastureland. “People bring cattle here to fatten them up,” Phillips said.

Cattle ranching has been expanding in the Darién, and COPEG inspectors must travel down ever more roads and rivers to visit ever more farms. At a farm between Garachiné and Sambu, a farmer greeted us with his infant boy in tow, having recognized the COPEG logo—a retro-looking atomic fly—on our car. Inspectors stop by regularly to check the animals and chat with the farmers, reminding them to be vigilant of screwworms. A COPEG veterinarian, Manuel Sánchez, showed me the handwritten log recording several years of visits to this farm. Staff members visit the highest-risk farms in this region at least once a month, medium-risk farms every four months, and the rest once a year. The small COPEG outpost in this zone is responsible for a total of 224 farms. There are 12 other COPEG field posts like it in the Darién.

We were late leaving Garachiné, and the high tide that allows boats to move through the area’s shallow rivers and estuaries was on its way out. If the tide got too low, we would have to spend the night in Garachiné, where the COPEG office keeps a stack of old mattresses handy for stranded employees. We decided that the few hundred feet of mud and shallow water that separated us from the boat were traversable. But the mud gave way like quicksand, swallowing our feet whole. I pulled out my left leg, then my right leg, and then my left again. Phillips almost lost her shoe. We eventually made it to the boat, wet and oozing mud.

When the USDA began expanding the screwworm program into Central America, it dispatched scientists to all sorts of remote places. They collected new screwworm strains, studied the habitat in each country, and when screwworms were thought to be eradicated from a region, verified that they were gone. The work took Welch to Mexico, Panama, Costa Rica, Belize, Guatemala, Honduras, Nicaragua, the Dominican Republic, Colombia, Argentina, Uruguay, Jamaica, Aruba, and Cuba’s Guantánamo Bay. He and his colleagues hiked up and down mountains, sometimes in pouring rain, to check traps baited with a chemical attractant called Swormlure. On easier days, they sat for hours next to rotting liver, waiting for its odor to attract live screwworms. Sometimes they slept in cars and on hut floors. Sometimes they camped, falling asleep to the sounds of wind and water.

The U.S. government’s decision to eradicate screwworms in Central America was ultimately about money. Protecting American livestock by dropping sterile flies over the narrow 50-mile Isthmus of Panama is cheaper than maintaining a barrier, even a virtual one, along the 2,000-mile U.S.-Mexico border.

The U.S. had officially declared victory over the screwworm in 1966, but the barrier of sterile flies it established on the U.S.-Mexico border quickly proved ineffective. Ranchers in the U.S. Southwest continued to see outbreaks. With Mexico’s cooperation, the eradication front moved south to the Isthmus of Tehuantepec, where Mexico narrows to 120 miles. The two countries split the bill based on the value of the livestock that would benefit in each country: 80 percent U.S., 20 percent Mexico.

In 1985, USDA scientists proposed moving the barrier south again—to the even narrower Isthmus of Panama, where it would be cheaper to maintain. But it would require convincing the governments of seven more countries to agree to—and help pay for—screwworm eradication.

The late 1980s were turbulent times for Central America. Both state-sponsored and guerrilla attacks on civilians were widespread. Other parts of the U.S. government were intervening in the region as part of Cold War politics. “A lot of people speculated I was with the CIA,” says John Wyss, a retired USDA scientist and administrator who spent long periods in Central America working on the screwworm program’s expansion. Negotiations between USDA officials and their Central American colleagues took place against a backdrop of violence. Another USDA scientist, in his memoir, recalled negotiating with Honduran government officials about screwworms—only to learn two weeks later that the building where they had met had been blown up.

The eradication effort did have enthusiastic local allies, Wyss told me: Livestock owners in every country loved the idea. The negotiations went slowly, but by 1994, all of the countries had signed cooperative agreements with the U.S.

In 2000, when Central American eradication was in its final phases, Wyss wrote a paper touting the program’s political achievements. “There is one tool that has played an important role since the beginning of the screwworm eradication program, one that is frequently overlooked or not even mentioned,” he wrote. “This tool is the cooperative agreement.” These agreements outline, in bureaucratic detail, how salaries will be paid, how property will be managed, and how the agreement itself will be amended. Panama was declared free of screwworm in 2006, and the sterile-insect barrier was erected. COPEG is now jointly headed by two director generals: Vanessa Dellis for the U.S. and Enrique Samudio for Panama.

The screwworm program costs $15 million a year, a small fraction of the estimated $796 million a year that it saves American farmers. (That estimate, from 1996, is equivalent to $1.3 billion in today’s dollars.) Still, the program is constantly looking for ways to cut costs. At the production facility, Phillips showed me prototypes of small, climate-controlled rearing cabinets, which could eliminate the need to heat or cool entire rooms. Biologists are also developing a genetically modified male-only strain of screwworms, which would require fewer flies to be raised and released. A cheaper program is a more sustainable one, and sustainability is essential.
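For readers curious how that conversion works, here is a minimal sketch of the inflation adjustment. The CPI figures below are approximate U.S. annual averages that I have assumed for illustration; they are not from the article or the program.

```python
# Rough check of the inflation adjustment cited above.
# Approximate U.S. CPI-U annual averages (assumed): 1996 ~ 156.9, 2020 ~ 256.4.
CPI_1996 = 156.9
CPI_2020 = 256.4

savings_1996 = 796e6  # estimated annual savings to American farmers, in 1996 dollars
savings_today = savings_1996 * (CPI_2020 / CPI_1996)

print(f"${savings_today / 1e9:.2f} billion")  # prints $1.30 billion
```

The ratio of the two price indices scales the 1996 estimate into present-day dollars, landing on the roughly $1.3 billion the article cites.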

[Illustration of an airplane by Cornelia Li]

Over the years, the success of screwworm eradication has inspired scientists to apply the sterile-insect technique to other agricultural pests such as the Mexican fruit fly, the Mediterranean fruit fly, and the pink bollworm. These programs have had varying degrees of success in the U.S., but none has expanded as widely as the screwworm effort. Screwworm infections, meanwhile, are still endemic in livestock and wildlife south of the sterile-insect barrier. South American countries, such as Uruguay, have at times inquired about eradicating the parasite. But for any country to go it alone makes little sense, given the porousness of national borders. And getting the entire South American continent on board is so colossal a financial and diplomatic undertaking that it hardly seems possible.

I wondered about this when I was in Panama, and I’ve been wondering about it now, while I’m confined to my apartment. Today, the world is struggling to contain a pestilence of a very different kind. Nations are closing their borders to fend for themselves, and the U.S. has retreated from its role in global leadership. Containment is about science, yes, but it’s also about politics. Screwworm eradication worked only because it had support from both.

In late 2016, when screwworms showed up in the Florida Keys, experts immediately suspected that they had come from Cuba. The U.S.-led eradication program had never reached the island—the two countries weren’t exactly on good terms in the ’80s and ’90s—so screwworms continued to prosper 90 miles from the Keys. Cuban screwworm experts still had to deal with the pests the old-fashioned way, by treating infections as they arose. And even they agreed that their island was the likely source of the Keys outbreak. “We asked them what did they think, and they laughed and said, ‘What do you think? We’re right here,’” Welch recalled.

That Cuban and U.S. entomologists could talk to one another at all was new. For decades, USDA employees were prohibited from communicating with their Cuban counterparts, even when they attended the same international screwworm meetings. Then, in late 2014, President Barack Obama relaxed U.S. restrictions on trade with Cuba and relations started to thaw. So even when DNA tests were ultimately inconclusive about the origin of the Keys outbreak, scientists in both countries began to talk about implementing an eradication program in Cuba, and a Cuban scientist traveled to Panama to visit the COPEG production plant.

But in August 2017, strange stories started to circulate about American diplomats in Cuba. They were falling ill, from mysterious and unconfirmed “sonic attacks.” In response, the U.S. withdrew its nonessential staff from the country and expelled two Cuban diplomats. Cuban scientists stopped replying to the Americans’ emails about screwworms.

On my last full day in Panama, I woke up before dawn to head to the airport, where I would board a small turboprop plane loaded with 2.1 million sterilized screwworms. I had wandered the airport’s passenger terminals with my carry-on just a few days earlier, but now I was whisked into a wing closed to commercial passengers. This was COPEG’s dispersal center.

Four days a week, a white van that workers call the “Pupamobile” drops off coolers of sterilized pupae. The center then bursts into activity. A repurposed industrial pill-counting machine spits out 450 milliliters of pupae at a time onto aluminum trays, which are then stacked about 50 deep. Two days later, the adults emerge as flies. They are moved into a cold room, where they enter a dormant state; the sluggish flies are then packed into a washing-machine-size metal box and loaded onto the plane.

I was there on Monday, which meant that two flights were scheduled that morning. The planes are used military aircraft, each customized with a continuously rotating dispersal machine that spits flies out of the plane’s belly at an adjustable rate per nautical mile. (In the old days, a technician threw out small cardboard boxes of flies by hand, but the boxes sometimes didn’t break open on impact and the flies died inside.) The dispersal machine is kept cold, so the flies won’t wake until they hit warm outside air. A few, however, always manage to escape into the cockpit.

One of the pilots came over to introduce himself as Michael Jackson. (Yes, he explained, his father was a big fan.) Jackson used to do medical evacuations, but he had switched to more regularly scheduled screwworm-dispersal flights, and I would be flying with him that morning. I climbed into his plane and noticed, with slight hesitation, that the only open seat was the co-pilot’s in the cockpit. He nodded. I was to sit there. As usual, a technician sat next to the screwworm box in the back, logging temperatures and confirming the dispersal machine’s proper function.

By the time we took off, at 6:50 a.m., the sun had burned off some of the clouds. “You are so lucky. We have good weather today,” Jackson said. Good weather meant good visibility. I watched as the sea of rust-red roofs around Panama City gave way to the emerald lushness of the Darién. I had never before flown so close to the ground: When we passed over towns where field inspectors had recently found screwworm infections, we dipped to just 1,500 feet to make sure the sterile flies would not be blown off course on their journey back to earth. We were low enough to pick out individual cattle, scattered like grains of rice on a green plain.

The wildlife were not as easy to spot from the plane, but their lives would also be altered by our flight. Welch had told me that howler monkeys in Panama sometimes fell from trees after screwworms ate out their eyes. That doesn’t happen anymore. Jaguars, sloths, tapirs, horses, coyotes, buffalo, rabbits, and squirrels up and down the North American continent are now spared from screwworms too. In the U.S., the main ecological consequence of eradication has been a dramatic increase in the wild-deer population, which once fluctuated with screwworm numbers. The parasite used to kill a large proportion of newborn fawns, whose unhealed belly buttons were open wounds. In the Keys, the recent screwworm outbreak became obvious during mating season, when males began fighting one another with their antlers. Their small, usually harmless nicks and cuts turned large and horrific once screwworms invaded them.

When the plane reached the Colombian border, Jackson notified air traffic control and continued on for 20 nautical miles. Every month, the program asks Colombia for permission to release screwworms within its borders, hoping to create a screwworm-free buffer zone that reinforces the continental barrier. We looped over the Darién and this thin slice of Colombia several times in four hours, until the plane had dispersed all 2.1 million flies onboard. The dispersal machine whirred behind us, spinning at its preset speed, pushing out clumps of flies at its preset rate. Screwworms that had spent their entire life crammed in a factory would now wake to find themselves falling thousands of feet through thin air. They would land, and then they would mate. That’s what their evolutionary instincts have primed them to do; that’s what the humans who sterilized them want them to do.

During the final hour of the flight, I looked intently out the window, hoping for a last glimpse of all the places I had been. We flew over the potholed highway into the Darién. We flew over the yellow ribbons of river that led to Sambu and Garachiné. We flew over COPEG’s production plant, with millions of screwworms feeding beneath its unmistakable green roof. The next day, I would fly through the same airspace, this time on a jetliner bound for America—where USDA-graded steaks sit on supermarket shelves, where deer leap across the landscape, and where the efforts of a distant group of people keep us safely ignorant of screwworms.  

This article is part of our Life Up Close project, which is supported by the HHMI Department of Science Education.


from Hacker News https://ift.tt/2zA5a1B

Programming as Theory Building – Peter Naur



Peter Naur's classic 1985 essay "Programming as Theory Building" argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it's lossy, so you can't reconstruct a program from its code.

Introduction

The present discussion is a contribution to the understanding of what programming is. It suggests that programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.

Some of the background of the views presented here is to be found in certain observations of what actually happens to programs and the teams of programmers dealing with them, particularly in situations arising from unexpected and perhaps erroneous program executions or reactions, and on the occasion of modifications of programs. The difficulty of accommodating such observations in a production view of programming suggests that this view is misleading. The theory building view is presented as an alternative.

A more general background of the presentation is a conviction that it is important to have an appropriate understanding of what programming is. If our understanding is inappropriate we will misunderstand the difficulties that arise in the activity and our attempts to overcome them will give rise to conflicts and frustrations.

In the present discussion some of the crucial background experience will first be outlined. This is followed by an explanation of a theory of what programming is, denoted the Theory Building View. The subsequent sections enter into some of the consequences of the Theory Building View.

Programming and the Programmers’ Knowledge

I shall use the word programming to denote the whole activity of design and implementation of programmed solutions. What I am concerned with is the activity of matching some significant part and aspect of an activity in the real world to the formal symbol manipulation that can be done by a program running on a computer. With such a notion it follows directly that the programming activity I am talking about must include the development in time corresponding to the changes taking place in the real world activity being matched by the program execution, in other words program modifications.

One way of stating the main point I want to make is that programming in this sense primarily must be the programmers’ building up knowledge of a certain kind, knowledge taken to be basically the programmers’ immediate possession, any documentation being an auxiliary, secondary product.

As a background of the further elaboration of this view given in the following sections, the remainder of the present section will describe some real experience of dealing with large programs that has seemed to me more and more significant as I have pondered over the problems. In either case the experience is my own or has been communicated to me by persons having first hand contact with the activity in question.

Case 1 concerns a compiler. It has been developed by a group A for a Language L and worked very well on computer X. Now another group B has the task to write a compiler for a language L + M, a modest extension of L, for computer Y. Group B decides that the compiler for L developed by group A will be a good starting point for their design, and get a contract with group A that they will get support in the form of full documentation, including annotated program texts and much additional written design discussion, and also personal advice. The arrangement was effective and group B managed to develop the compiler they wanted. In the present context the significant issue is the importance of the personal advice from group A in the matters that concerned how to implement the extensions M to the language. During the design phase group B made suggestions for the manner in which the extensions should be accommodated and submitted them to group A for review. In several major cases it turned out that the solutions suggested by group B were found by group A to make no use of the facilities that were not only inherent in the structure of the existing compiler but were discussed at length in its documentation, and to be based instead on additions to that structure in the form of patches that effectively destroyed its power and simplicity. The members of group A were able to spot these cases instantly and could propose simple and effective solutions, framed entirely within the existing structure. This is an example of how the full program text and additional documentation is insufficient in conveying to even the highly motivated group B the deeper insight into the design, that theory which is immediately present to the members of group A.

In the years following these events the compiler developed by group B was taken over by other programmers of the same organization, without guidance from group A. Information obtained by a member of group A about the compiler resulting from the further modification of it after about 10 years made it clear that at that later stage the original powerful structure was still visible, but made entirely ineffective by amorphous additions of many different kinds. Thus, again, the program text and its documentation has proved insufficient as a carrier of some of the most important design ideas.

Case 2 concerns the installation and fault diagnosis of a large real–time system for monitoring industrial production activities. The system is marketed by its producer, each delivery of the system being adapted individually to its specific environment of sensors and display devices. The size of the program delivered in each installation is of the order of 200,000 lines. The relevant experience from the way this kind of system is handled concerns the role and manner of work of the group of installation and fault finding programmers. The facts are, first that these programmers have been closely concerned with the system as a full time occupation over a period of several years, from the time the system was under design. Second, when diagnosing a fault these programmers rely almost exclusively on their ready knowledge of the system and the annotated program text, and are unable to conceive of any kind of additional documentation that would be useful to them. Third, other programmers’ groups who are responsible for the operation of particular installations of the system, and thus receive documentation of the system and full guidance on its use from the producer’s staff, regularly encounter difficulties that upon consultation with the producer’s installation and fault finding programmer are traced to inadequate understanding of the existing documentation, but which can be cleared up easily by the installation and fault finding programmers.

The conclusion seems inescapable that at least with certain kinds of large programs, the continued adaption, modification, and correction of errors in them, is essentially dependent on a certain kind of knowledge possessed by a group of programmers who are closely and continuously connected with them.

Ryle’s Notion of Theory

If it is granted that programming must involve, as the essential part, a building up of the programmers’ knowledge, the next issue is to characterize that knowledge more closely. What will be considered here is the suggestion that the programmers’ knowledge properly should be regarded as a theory, in the sense of Ryle [1949]. Very briefly, a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern. It may be noted that Ryle’s notion of theory appears as an example of what K. Popper [Popper, and Eccles, 1977] calls unembodied World 3 objects and thus has a defensible philosophical standing. In the present section we shall describe Ryle’s notion of theory in more detail.

Ryle [1949] develops his notion of theory as part of his analysis of the nature of intellectual activity, particularly the manner in which intellectual activity differs from, and goes beyond, activity that is merely intelligent. In intelligent behaviour the person displays, not any particular knowledge of facts, but the ability to do certain things, such as to make and appreciate jokes, to talk grammatically, or to fish. More particularly, the intelligent performance is characterized in part by the person’s doing them well, according to certain criteria, but further displays the person’s ability to apply the criteria so as to detect and correct lapses, to learn from the examples of others, and so forth. It may be noted that this notion of intelligence does not rely on any notion that the intelligent behaviour depends on the person’s following or adhering to rules, prescriptions, or methods. On the contrary, the very act of adhering to rules can be done more or less intelligently; if the exercise of intelligence depended on following rules there would have to be rules about how to follow rules, and about how to follow the rules about following rules, etc. in an infinite regress, which is absurd.

What characterizes intellectual activity, over and beyond activity that is merely intelligent, is the person’s building and having a theory, where theory is understood as the knowledge a person must have in order not only to do certain things intelligently but also to explain them, to answer queries about them, to argue about them, and so forth. A person who has a theory is prepared to enter into such activities; while building the theory the person is trying to get it.

The notion of theory in the sense used here applies not only to the elaborate constructions of specialized fields of enquiry, but equally to activities that any person who has received education will participate in on certain occasions. Even quite unambitious activities of everyday life may give rise to people’s theorizing, for example in planning how to place furniture or how to get to some place by means of certain means of transportation.

The notion of theory employed here is explicitly not confined to what may be called the most general or abstract part of the insight. For example, to have Newton’s theory of mechanics as understood here it is not enough to understand the central laws, such as that force equals mass times acceleration. In addition, as described in more detail by Kuhn [1970, p. 187ff], the person having the theory must have an understanding of the manner in which the central laws apply to certain aspects of reality, so as to be able to recognize and apply the theory to other similar aspects. A person having Newton’s theory of mechanics must thus understand how it applies to the motions of pendulums and the planets, and must be able to recognize similar phenomena in the world, so as to be able to employ the mathematically expressed rules of the theory properly.

The dependence of a theory on a grasp of certain kinds of similarity between situations and events of the real world gives the reason why the knowledge held by someone who has the theory could not, in principle, be expressed in terms of rules. In fact, the similarities in question are not, and cannot be, expressed in terms of criteria, any more than the similarities of many other kinds of objects, such as human faces, tunes, or tastes of wine, can be thus expressed.

The Theory To Be Built by the Programmer

In terms of Ryle’s notion of theory, what has to be built by the programmer is a theory of how certain affairs of the world will be handled by, or supported by, a computer program. On the Theory Building View of programming the theory built by the programmers has primacy over such other products as program texts, user documentation, and additional documentation such as specifications.

In arguing for the Theory Building View, the basic issue is to show how the knowledge possessed by the programmer by virtue of his or her having the theory necessarily, and in an essential manner, transcends that which is recorded in the documented products. The answer to this issue is that the programmer’s knowledge transcends that given in documentation in at least three essential areas:

  1. The programmer having the theory of the program can explain how the solution relates to the affairs of the world that it helps to handle. Such an explanation will have to be concerned with the manner in which the affairs of the world, both in their overall characteristics and their details, are, in some sense, mapped into the program text and into any additional documentation. Thus the programmer must be able to explain, for each part of the program text and for each of its overall structural characteristics, what aspect or activity of the world is matched by it. Conversely, for any aspect or activity of the world the programmer is able to state its manner of mapping into the program text. By far the largest part of the world aspects and activities will of course lie outside the scope of the program text, being irrelevant in the context. However, the decision that a part of the world is relevant can only be made by someone who understands the whole world. This understanding must be contributed by the programmer.

  2. The programmer having the theory of the program can explain why each part of the program is what it is, in other words is able to support the actual program text with a justification of some sort. The final basis of the justification is and must always remain the programmer’s direct, intuitive knowledge or estimate. This holds even where the justification makes use of reasoning, perhaps with application of design rules, quantitative estimates, comparisons with alternatives, and such like, the point being that the choice of the principles and rules, and the decision that they are relevant to the situation at hand, again must in the final analysis remain a matter of the programmer’s direct knowledge.

  3. The programmer having the theory of the program is able to respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner. Designing how a modification is best incorporated into an established program depends on the perception of the similarity of the new demand with the operational facilities already built into the program. The kind of similarity that has to be perceived is one between aspects of the world. It only makes sense to the agent who has knowledge of the world, that is to the programmer, and cannot be reduced to any limited set of criteria or rules, for reasons similar to the ones given above why the justification of the program cannot be thus reduced.

While the discussion of the present section presents some basic arguments for adopting the Theory Building View of programming, an assessment of the view should take into account to what extent it may contribute to a coherent understanding of programming and its problems. Such matters will be discussed in the following sections.

Problems and Costs of Program Modifications

A prominent reason for proposing the Theory Building View of programming is the desire to establish an insight into programming suitable for supporting a sound understanding of program modifications. This question will therefore be the first one to be taken up for analysis.

One thing seems to be agreed by everyone, that software will be modified. It is invariably the case that a program, once in operation, will be felt to be only part of the answer to the problems at hand. Also the very use of the program itself will inspire ideas for further useful services that the program ought to provide. Hence the need for ways to handle modifications.

The question of program modifications is closely tied to that of programming costs. In the face of a need for a changed manner of operation of the program, one hopes to achieve a saving of costs by making modifications of an existing program text, rather than by writing an entirely new program.

The expectation that program modifications at low cost ought to be possible is one that calls for closer analysis. First it should be noted that such an expectation cannot be supported by analogy with modifications of other complicated man–made constructions. Where modifications are occasionally put into action, for example in the case of buildings, they are well known to be expensive and in fact complete demolition of the existing building followed by new construction is often found to be preferable economically. Second, the expectation of the possibility of low cost program modifications conceivably finds support in the fact that a program is a text held in a medium allowing for easy editing. For this support to be valid it must clearly be assumed that the dominating cost is one of text manipulation. This would agree with a notion of programming as text production. On the Theory Building View this whole argument is false. This view gives no support to an expectation that program modifications at low cost are generally possible.

A further closely related issue is that of program flexibility. In including flexibility in a program we build into the program certain operational facilities that are not immediately demanded, but which are likely to turn out to be useful. Thus a flexible program is able to handle certain classes of changes of external circumstances without being modified.

It is often stated that programs should be designed to include a lot of flexibility, so as to be readily adaptable to changing circumstances. Such advice may be reasonable as far as flexibility that can be easily achieved is concerned. However, flexibility can in general only be achieved at a substantial cost. Each item of it has to be designed, including what circumstances it has to cover and by what kind of parameters it should be controlled. Then it has to be implemented, tested, and described. This cost is incurred in achieving a program feature whose usefulness depends entirely on future events. It must be obvious that built–in program flexibility is no answer to the general demand for adapting programs to the changing circumstances of the world.
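As a minimal sketch of this point (all names here are invented for illustration, not taken from the essay), consider how even one small piece of built-in flexibility must be designed as a parameter, then implemented, tested, and described:

```python
class ReportFormatter:
    """Formats rows of a report.

    Each constructor parameter is a deliberately designed flexibility
    point: someone had to decide which circumstances it covers, by what
    parameter it is controlled, and then implement, test, and describe
    it -- the cost described above.
    """

    def __init__(self, separator=" | ", width=12):
        self.separator = separator
        self.width = width

    def format_row(self, fields):
        # Pad each field to the configured width, join with the separator.
        return self.separator.join(str(f).ljust(self.width) for f in fields)


default_fmt = ReportFormatter()                         # the expected case
csv_like_fmt = ReportFormatter(separator=",", width=0)  # a circumstance foreseen at design time
```

Each such parameter pays off only if the foreseen circumstance actually arises; flexibility that is never demanded is pure cost.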

In a program modification an existing programmed solution has to be changed so as to cater for a change in the real world activity it has to match. What is needed in a modification, first of all, is a confrontation of the existing solution with the demands called for by the desired modification. In this confrontation the degree and kind of similarity between the capabilities of the existing solution and the new demands has to be determined. This need for a determination of similarity brings out the merit of the Theory Building View. Indeed, precisely in a determination of similarity the shortcoming of any view of programming that ignores the central requirement for the direct participation of persons who possess the appropriate insight becomes evident. The point is that the kind of similarity that has to be recognized is accessible to the human beings who possess the theory of the program, although entirely outside the reach of what can be determined by rules, since even the criteria on which to judge it cannot be formulated. From the insight into the similarity between the new requirements and those already satisfied by the program, the programmer is able to design the change of the program text needed to implement the modification.

In a certain sense there can be no question of a theory modification, only of a program modification. Indeed, a person having the theory must already be prepared to respond to the kinds of questions and demands that may give rise to program modifications. This observation leads to the important conclusion that the problems of program modification arise from acting on the assumption that programming consists of program text production, instead of recognizing programming as an activity of theory building.

On the basis of the Theory Building View the decay of a program text as a result of modifications made by programmers without a proper grasp of the underlying theory becomes understandable. As a matter of fact, if viewed merely as a change of the program text and of the external behaviour of the execution, a given desired modification may usually be realized in many different ways, all correct. At the same time, if viewed in relation to the theory of the program these ways may look very different, some of them perhaps conforming to that theory or extending it in a natural way, while others may be wholly inconsistent with that theory, perhaps having the character of unintegrated patches on the main part of the program. This difference of character of various changes is one that can only make sense to the programmer who possesses the theory of the program. At the same time the character of changes made in a program text is vital to the longer term viability of the program. For a program to retain its quality it is mandatory that each modification is firmly grounded in the theory of it. Indeed, the very notion of qualities such as simplicity and good structure can only be understood in terms of the theory of the program, since they characterize the actual program text in relation to such program texts that might have been written to achieve the same execution behaviour, but which exist only as possibilities in the programmer’s understanding.
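A small invented example may make the contrast concrete. Suppose (hypothetically; none of these names come from the essay) a program whose theory is that every pricing rule lives in one rule table. The same externally correct modification can then be realized in a way that conforms to the theory, or as an unintegrated patch:

```python
# The program's "theory": all pricing knowledge lives in this one table,
# and every price is computed by price().
RULES = {"book": 0.9}  # item -> price multiplier

def price(item, base):
    return base * RULES.get(item, 1.0)

# Modification, version A -- conforms to the theory: extend the rule table.
def add_clearance_rule():
    RULES["clearance"] = 0.5

# Modification, version B -- an unintegrated patch: pricing knowledge is
# duplicated at one call site, bypassing the rule table. The external
# behaviour is correct, but the change is inconsistent with the theory.
def checkout_total_patched(items):
    total = 0.0
    for item, base in items:
        if item == "clearance":      # the patch
            total += base * 0.5
        else:
            total += price(item, base)
    return total
```

Both versions pass the same black-box test; only someone holding the theory can see that version B has begun the decay of the program.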

Program Life, Death, and Revival

A main claim of the Theory Building View of programming is that an essential part of any program, the theory of it, is something that could not conceivably be expressed, but is inextricably bound to human beings. It follows that in describing the state of the program it is important to indicate the extent to which programmers having its theory remain in charge of it. As a way in which to emphasize this circumstance one might extend the notion of program building by notions of program life, death, and revival. The building of the program is the same as the building of the theory of it by and in the team of programmers. During the program life a programmer team possessing its theory remains in active control of the program, and in particular retains control over all modifications. The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.

The extended life of a program according to these notions depends on the taking over by new generations of programmers of the theory of the program. For a new programmer to come to possess an existing theory of a program it is insufficient that he or she has the opportunity to become familiar with the program text and other documentation. What is required is that the new programmer has the opportunity to work in close contact with the programmers who already possess the theory, so as to be able to become familiar with the place of the program in the wider context of the relevant real world situations and so as to acquire the knowledge of how the program works and how unusual program reactions and program modifications are handled within the program theory. This problem of education of new programmers in an existing theory of a program is quite similar to the educational problem of other activities where the knowledge of how to do certain things dominates over the knowledge that certain things are the case, such as writing or playing a musical instrument. The most important educational activity is the student’s doing the relevant things under suitable supervision and guidance. In the case of programming the activity should include discussions of the relation between the program and the relevant aspects and activities of the real world, and of the limits set on the real world matters dealt with by the program.

A very important consequence of the Theory Building View is that program revival, that is reestablishing the theory of a program merely from the documentation, is strictly impossible. Lest this consequence may seem unreasonable it may be noted that the need for revival of an entirely dead program probably will rarely arise, since it is hardly conceivable that the revival would be assigned to new programmers without at least some knowledge of the theory had by the original team. Even so the Theory Building View suggests strongly that program revival should only be attempted in exceptional situations and with full awareness that it is at best costly, and may lead to a revived theory that differs from the one originally had by the program authors and so may contain discrepancies with the program text.

In preference to program revival, the Theory Building View suggests, the existing program text should be discarded and the new–formed programmer team should be given the opportunity to solve the given problem afresh. Such a procedure is more likely to produce a viable program than program revival, and at no higher, and possibly lower, cost. The point is that building a theory to fit and support an existing program text is a difficult, frustrating, and time consuming activity. The new programmer is likely to feel torn between loyalty to the existing program text, with whatever obscurities and weaknesses it may contain, and the new theory that he or she has to build up, and which, for better or worse, most likely will differ from the original theory behind the program text.

Similar problems are likely to arise even when a program is kept continuously alive by an evolving team of programmers, as a result of the differences of competence and background experience of the individual programmers, particularly as the team is being kept operational by inevitable replacements of the individual members.

Method and Theory Building

Recent years have seen much interest in programming methods. In the present section some comments will be made on the relation between the Theory Building View and the notions behind programming methods.

To begin with, what is a programming method? This is not always made clear, even by authors who recommend a particular method. Here a programming method will be taken to be a set of work rules for programmers, telling what kind of things the programmers should do, in what order, which notations or languages to use, and what kinds of documents to produce at various stages.

In comparing this notion of method with the Theory Building View of programming, the most important issue is that of actions or operations and their ordering. A method implies a claim that program development can and should proceed as a sequence of actions of certain kinds, each action leading to a particular kind of documented result. In building the theory there can be no particular sequence of actions, for the reason that a theory held by a person has no inherent division into parts and no inherent ordering. Rather, the person possessing a theory will be able to produce presentations of various sorts on the basis of it, in response to questions or demands.

As to the use of particular kinds of notation or formalization, again this can only be a secondary issue since the primary item, the theory, is not, and cannot be, expressed, and so no question of the form of its expression arises.

It follows that on the Theory Building View, for the primary activity of the programming there can be no right method.

This conclusion may seem to conflict with established opinion, in several ways, and might thus be taken to be an argument against the Theory Building View. Two such apparent contradictions shall be taken up here, the first relating to the importance of method in the pursuit of science, the second concerning the success of methods as actually used in software development.

The first argument is that software development should be conducted in a scientific manner, and so should employ procedures similar to scientific methods. The flaw of this argument is the assumption that there is such a thing as scientific method and that it is helpful to scientists. This question has been the subject of much debate in recent years, and the conclusion of such authors as Feyerabend [1978], taking his illustrations from the history of physics, and Medawar [1982], arguing as a biologist, is that the notion of scientific method as a set of guidelines for the practising scientist is mistaken.

This conclusion is not contradicted by such work as that of Polya [1954, 1957] on problem solving. This work takes its illustrations from the field of mathematics and leads to insight which is also highly relevant to programming. However, it cannot be claimed to present a method on which to proceed. Rather, it is a collection of suggestions aiming at stimulating the mental activity of the problem solver, by pointing out different modes of work that may be applied in any sequence.

The second argument that may seem to contradict the dismissal of method of the Theory Building View is that the use of particular methods has been successful, according to published reports. To this argument it may be answered that a methodically satisfactory study of the efficacy of programming methods so far never seems to have been made. Such a study would have to employ the well established technique of controlled experiments (cf. [Brooks, 1980] or [Moher and Schneider, 1982]). The lack of such studies is explainable partly by the high cost that would undoubtedly be incurred in such investigations if the results were to be significant, partly by the problems of establishing in an operational fashion the concepts underlying what is called methods in the field of program development. Most published reports on such methods merely describe and recommend certain techniques and procedures, without establishing their usefulness or efficacy in any systematic way. An elaborate study of five different methods by C. Floyd and several co–workers [Floyd, 1984] concludes that the notion of methods as systems of rules that in an arbitrary context and mechanically will lead to good solutions is an illusion. What remains is the effect of methods in the education of programmers. This conclusion is entirely compatible with the Theory Building View of programming. Indeed, on this view the quality of the theory built by the programmer will depend to a large extent on the programmer’s familiarity with model solutions of typical problems, with techniques of description and verification, and with principles of structuring systems consisting of many parts in complicated interactions. Thus many of the items of concern of methods are relevant to theory building. Where the Theory Building View departs from that of the methodologists is on the question of which techniques to use and in what order. 
On the Theory Building View this must remain entirely a matter for the programmer to decide, taking into account the actual problem to be solved.

Programmers’ Status and the Theory Building View

The areas where the consequences of the Theory Building View contrast most strikingly with those of the more prevalent current views are those of the programmers’ personal contribution to the activity and of the programmers’ proper status.

The contrast between the Theory Building View and the more prevalent view of the programmers’ personal contribution is apparent in much of the common discussion of programming. As just one example, consider the study of modifiability of large software systems by Oskarsson [1982]. This study gives extensive information on a considerable number of modifications in one release of a large commercial system. The description covers the background, substance, and implementation, of each modification, with particular attention to the manner in which the program changes are confined to particular program modules. However, there is no suggestion whatsoever that the implementation of the modifications might depend on the background of the 500 programmers employed on the project, such as the length of time they have been working on it, and there is no indication of the manner in which the design decisions are distributed among the 500 programmers. Even so the significance of an underlying theory is admitted indirectly in statements such as that ‘decisions were implemented in the wrong block’ and in a reference to ‘a philosophy of AXE’. However, by the manner in which the study is conducted these admissions can only remain isolated indications.

More generally, much current discussion of programming seems to assume that programming is similar to industrial production, the programmer being regarded as a component of that production, a component that has to be controlled by rules of procedure and which can be replaced easily. Another related view is that human beings perform best if they act like machines, by following rules, with a consequent stress on formal modes of expression, which make it possible to formulate certain arguments in terms of rules of formal manipulation. Such views agree well with the notion, seemingly common among persons working with computers, that the human mind works like a computer. At the level of industrial management these views support treating programmers as workers of fairly low responsibility, and only brief education.

On the Theory Building View the primary result of the programming activity is the theory held by the programmers. Since this theory by its very nature is part of the mental possession of each programmer, it follows that the notion of the programmer as an easily replaceable component in the program production activity has to be abandoned. Instead the programmer must be regarded as a responsible developer and manager of the activity in which the computer is a part. In order to fill this position he or she must be given a permanent position, of a status similar to that of other professionals, such as engineers and lawyers, whose active contributions as employees of enterprises rest on their intellectual proficiency.

The raising of the status of programmers suggested by the Theory Building View will have to be supported by a corresponding reorientation of the programmer education. While skills such as the mastery of notations, data representations, and data processes, remain important, the primary emphasis would have to turn in the direction of furthering the understanding and talent for theory formation. To what extent this can be taught at all must remain an open question. The most hopeful approach would be to have the student work on concrete problems under guidance, in an active and constructive environment.

Conclusions

Accepting program modifications demanded by changing external circumstances to be an essential part of programming, it is argued that the primary aim of programming is to have the programmers build a theory of the way the matters at hand may be supported by the execution of a program. Such a view leads to a notion of program life that depends on the continued support of the program by programmers having its theory. Further, on this view the notion of a programming method, understood as a set of rules of procedure to be followed by the programmer, is based on invalid assumptions and so has to be rejected. As further consequences of the view, programmers have to be accorded the status of responsible, permanent developers and managers of the activity of which the computer is a part, and their education has to emphasize the exercise of theory building, side by side with the acquisition of knowledge of data processing and notations.

References

Brooks, R. E. Studying programmer behaviour experimentally. Comm. ACM 23(4): 207–213, 1980.

Feyerabend, P. Against Method. London, Verso Editions, 1978; ISBN: 86091–700–2.

Floyd, C. Eine Untersuchung von Software–Entwicklungs–Methoden. Pp. 248–274 in Programmierumgebungen und Compiler, ed H. Morgenbrod and W. Sammer, Tagung I/1984 des German Chapter of the ACM, Stuttgart, Teubner Verlag, 1984; ISBN: 3–519–02437–3.

Kuhn, T.S. The Structure of Scientific Revolutions, Second Edition. Chicago, University of Chicago Press, 1970; ISBN: 0–226–45803–2.

Medawar, P. Pluto’s Republic. Oxford, Oxford University Press, 1982; ISBN: 0–19–217726–5.

Moher, T., and Schneider, G. M. Methodology and experimental research in software engineering. Int. J. Man–Mach. Stud. 16: 65–87, Jan. 1982.

Oskarsson, Ö. Mechanisms of Modifiability in Large Software Systems. Linköping Studies in Science and Technology, Dissertations, no. 77, Linköping, 1982; ISBN: 91–7372–527–7.

Polya, G. How To Solve It. New York, Doubleday Anchor Book, 1957.

Polya, G. Mathematics and Plausible Reasoning. New Jersey, Princeton University Press, 1954.

Popper, K. R., and Eccles, J. C. The Self and Its Brain. London, Routledge and Kegan Paul, 1977.

Ryle, G. The Concept of Mind. Harmondsworth, England, Penguin, 1963, first published 1949.

Applying “Theory Building”

Viewing programming as theory building helps us understand “metaphor building” activity in Extreme Programming (XP), and the respective roles of tacit knowledge and documentation in passing along design knowledge.

The Metaphor as a Theory

Kent Beck suggested that it is useful to a design team to simplify the general design of a program to match a single metaphor. Examples might be, “This program really looks like an assembly line, with things getting added to a chassis along the line,” or “This program really looks like a restaurant, with waiters and menus, cooks and cashiers.”

If the metaphor is good, the many associations the designers create around the metaphor turn out to be appropriate to their programming situation.

That is exactly Naur’s idea of passing along a theory of the design.

If “assembly line” is an appropriate metaphor, then later programmers, considering what they know about assembly lines, will make guesses about the structure of the software at hand and find that their guesses are “close.” That is an extraordinary power for just the two words, “assembly line.”

The value of a good metaphor increases with the number of designers. The closer each person’s guesses are to the other people’s, the greater the resulting consistency in the final system design.

Imagine 10 programmers working as fast as they can, in parallel, each making design decisions and adding classes as she goes. Each will necessarily develop her own theory along the way. As each adds code, the theory that binds their work becomes less and less coherent, more and more complicated. Not only does maintenance get harder; their own work does too. The design easily becomes a “kludge.” If they have a common theory, on the other hand, they add code in ways that fit together.

An appropriate, shared metaphor lets a person guess accurately where someone else on the team just added code, and how to fit her new piece in with it.
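A minimal sketch may show how the “assembly line” metaphor from the text could shape a design. All the class names below (Chassis, Station, AssemblyLine) are invented for illustration; the point is that a teammate who knows the metaphor could guess them before reading the code:

```python
class Chassis:
    """The work item carried down the line; parts get attached to it."""
    def __init__(self):
        self.parts = []

class Station:
    """One stop on the line; attaches a single kind of part."""
    def __init__(self, part):
        self.part = part

    def process(self, chassis):
        chassis.parts.append(self.part)
        return chassis

class AssemblyLine:
    """Runs a chassis through every station, in order."""
    def __init__(self, stations):
        self.stations = stations

    def run(self):
        chassis = Chassis()
        for station in self.stations:
            chassis = station.process(chassis)
        return chassis


line = AssemblyLine([Station("engine"), Station("wheels"), Station("doors")])
finished = line.run()
# finished.parts == ["engine", "wheels", "doors"]
```

A programmer asked to add a new processing step can guess, from the metaphor alone, that the change is “add a Station,” and guess roughly what its interface must be.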

Tacit Knowledge and Documentation

The documentation is almost certainly behind the current state of the program, but people are good at looking around. What should you put into the documentation?

That which helps the next programmer build an adequate theory of the program.

This is enormously important. The purpose of the documentation is to jog memories in the reader, set up relevant pathways of thought about experiences and metaphors.

This sort of documentation is more stable over the life of the program than just naming the pieces of the system currently in place.

The designers are allowed to use whatever forms of expression are necessary to set up those relevant pathways. They can even use multiple metaphors, if they don’t find one that is adequate for the entire program. They might say that one section implements a fractal compression algorithm, a second is like an accounting ledger, the user interface follows the model-observer design pattern, and so on.

Experienced designers often start their documentation with just

  • The metaphors
  • Text describing the purpose of each major component
  • Drawings of the major interactions between the major components

These three items alone take the next team a long way to constructing a useful theory of the design.

The source code itself serves to communicate a theory to the next programmer. Simple, consistent naming conventions help the next person build a coherent theory. When people talk about “clean code,” a large part of what they are referring to is how easily the reader can build a coherent theory of the system.
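As a tiny invented example (neither function comes from any real codebase), both functions below compute the same thing, but only the second’s names let a reader reconstruct the intended theory without extra documentation:

```python
# Opaque version: correct, but the names carry no theory.
def f(d):
    return [x for x in d if x.get("q", 0) > 0]

# Theory-revealing version: same logic, names drawn from one vocabulary.
def valid_orders(orders):
    """Keep only orders with a positive quantity."""
    return [order for order in orders if order.get("quantity", 0) > 0]
```

The second version tells the next programmer that the system’s world contains orders and quantities, and that validity is defined by a positive quantity; the first version forces her to reconstruct all of that from usage.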

Documentation cannot—and so need not—say everything. Its purpose is to help the next programmer build an accurate theory about the system.
