Saturday, November 30, 2019

Inside Macintosh (1985) [pdf]

Comments

from Hacker News https://ift.tt/1NS7WNl

US 50 states redrawn as equal population

Electoral college reform (fifty states with equal population)

Neil Freeman, 2012
map
format and dimensions vary

The electoral college is a time-honored, logical system for picking the chief executive of the United States. However, the American body politic has also grown accustomed to paying close attention to the popular vote. This is only rarely a problem, since the electoral college and the popular vote have only disagreed three times in 200 years. However, it's obvious that reforms are needed.

The fundamental problem of the electoral college is that the states of the United States are too disparate in size and influence. The largest state is 66 times as populous as the smallest and has 18 times as many electoral votes. This increases the chance for Electoral College results that don't match the popular vote. To remedy this issue, the Electoral Reform Map redivides the fifty United States into 50 states of equal population. The 2010 Census records a population of 308,745,538 for the United States, which this map divides into 50 states, each with a population of about 6,175,000.¹

electorally reformed US map

Poster

A poster version of the map is for sale. The poster has much more detail than the map here, including hundreds of smaller cities, an inset for the New York area and better elevation shading. The poster is 22″ x 28″ (56 x 71 cm).

The poster is $35 and ships first class to your door, safely packed in a sturdy tube.

Consult the shop for more about shipping.

Advantages of this proposal

  • Preserves the historic structure and function of the Electoral College.
  • Ends the over-representation of small states and under-representation of large states in presidential voting and in the US Senate by eliminating small and large states.
  • Political boundaries more closely follow economic patterns, since many states are more centered on one or two metro areas.
  • Ends varying representation in the House. Currently, the population of House districts ranges from 528,000 to 924,000. After this reform, every House seat would represent districts of the same size. (Since the current size of the House isn't divisible by 50, the number of seats should be increased to 450 or 500.)
  • States could be redistricted after each census - just like House seats are distributed now.

Disadvantages

  • Some county names are duplicated in new states.
  • Some local governments would experience a shift in state laws and procedures.

Methodology

The map began with an algorithm that grouped counties based on proximity, urban area, and commuting patterns. The algorithm was seeded with the fifty largest cities. After that, manual changes took into account compact shapes, equal populations, metro areas divided by state lines, and drainage basins. In certain areas, divisions are based on census tract lines.
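The algorithm itself isn't published, but the core idea (seed a region on each of the fifty largest cities, then grow the smallest region outward until populations even up) is easy to sketch. Below is a minimal, hypothetical Python version; the data structures, the greedy growth order, and the rule for which neighboring county to absorb are all assumptions for illustration, not Freeman's actual method.

```python
import heapq

def grow_states(counties, adjacency, seeds, target_pop):
    """Greedy region-growing sketch: one region per seed city; the
    least-populous region repeatedly absorbs an adjacent unassigned county.
    counties: {county_id: population}
    adjacency: {county_id: set of neighboring county_ids}
    seeds: 50 county_ids containing the largest cities
    """
    region_of = {seed: i for i, seed in enumerate(seeds)}
    pops = [counties[s] for s in seeds]
    frontier = {i: set(adjacency[s]) for i, s in enumerate(seeds)}
    heap = [(pops[i], i) for i in range(len(seeds))]  # smallest region grows first
    heapq.heapify(heap)
    while heap:
        pop, i = heapq.heappop(heap)
        candidates = [c for c in frontier[i] if c not in region_of]
        if not candidates:
            continue  # region is boxed in; it stops growing
        # A real version would score proximity, urban area and commuting
        # patterns here; picking the most populous neighbor is a stand-in.
        county = max(candidates, key=lambda c: counties[c])
        region_of[county] = i
        frontier[i] |= adjacency[county]
        pops[i] += counties[county]
        if pops[i] < target_pop:
            heapq.heappush(heap, (pops[i], i))
    return region_of
```

The manual cleanup the article describes (compact shapes, metro areas, drainage basins) would then operate on this rough partition.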

The District of Columbia is included in the state of Washington, with the Mall, major monuments and Federal buildings set off as the seat of the federal government.

The capitals of the states are existing state capitals where possible; otherwise, large or central cities have been chosen. The suggested names of the new states are taken mainly from geographical features:

The words used for the names are drawn from many languages, including many American Indian languages. While some etymologies are unclear, the root languages for the state names include Abenaki (Casco), Algonquian (Nodaway, Pocono, Willimantic), Apache (Chinati), Calusa (Tampa), Choctaw (Atchafalaya), English, French (Detroit, Ozark, Rainier), Greek (Philadelphia), Iroquoian (Shenandoah), Lakota (Ogallala), Latin (Columbia), Luiseño (Temecula), Mayaimi (Miami), Mamaceqtaw (Menominee), Miami-Illinois (Chicago), Mohawk (Adirondack), Muscogee (Muskogee), Nahuatl (Tule), Odawa (Maumee), Ojibwe (Mesabi), Potawatomi (Sangamon), Susquehannock (Susquehanna) and Wyandot (Scioto).

Keep in mind that this is an art project, not a serious proposal, so take it easy with the emails about the sacred soil of Texas. That said, if you have any questions, drop me a line.

Earlier versions of this map appeared in postcard form in The Future Dictionary of America and Greetings from the Ocean's Sweaty Face.

1. The average population of the new states is 6,174,911. The smallest new state varies from this average by 2,087 (0.03%), the largest by 4,073 (0.07%). This is far less variance than is currently allowed in Congressional districts. More than half of the new states are within 0.01% of the target.

Data comes from the US Census and Natural Earth.



from Hacker News https://ift.tt/RAsYG0

Why the free upgrade to Windows 10 still works

[English] Some users are aware that the free upgrade to Windows 10 still works for private users coming from Windows 7 SP1 and Windows 8.1 (Home and Professional editions). Now there is an unofficial explanation.

The facts

If you are still running an older Windows 7 SP1 or Windows 8.1, you can download a Windows 10 installation image via the Media Creation Tool and then upgrade the existing Windows 7/8.1 in place. Afterwards you can even reinstall Windows 10 cleanly and activate it using the product key of Windows 7 or Windows 8.1. This is even described in the Microsoft Answers forum.

This tip has been passed around various forums and websites since the end of the free upgrade offer in July 2016 (see my German blog post Windows 10: Gratis-Upgrade läuft am 31.12.2017 aus). Many observers have speculated about why it works and when it would end.

Microsoft had already let the free upgrade for people with disabilities continue (see my German blog post Doch noch gratis Windows 10-Upgrade nach dem 29.7.2016?). In addition, corporate customers with volume license programs are outside this question anyway – their contracts allow them to upgrade and downgrade.

Unofficial explanation from a Microsoft employee

It starts with 'The cat is out of the bag': on Reddit, a user who claims to work at Microsoft describes why the free upgrade still works.

I work at Microsoft and have been since before the Windows 10 launch. That whole “free” upgrade for a year was fully marketing fluff. After the cut off happened, the direction given was that it requires a paid license HOWEVER, this was brought up by the brick and mortar stores that they were doing simple clock changes on customer devices during the upgrade challenge to get around it and then ultimately it was clear two years later that anything Windows 7 and up would go to 10 fully activated and still to this day.

In short: after the official free upgrade offer expired, the official line was that a paid product key is now required. But the whole thing was a marketing move, made under pressure from retailers. They feared for sales of boxed Windows 10 copies and new devices, and urged that the 'free upgrade' expire after one year. Internally, the then head of the Windows and Devices Group (WDG), Terry Myerson, had other ideas. He was particularly concerned about the 1 billion Windows 10 installations he had announced for 2018. The Microsoft employee writes on Reddit:

WDG didn’t care pretty much at all because Terry Meyerson at the time cared more about his upgrade stats than license revenue as Windows isn’t Microsoft’s cash cow anymore. It’s the same stance back in the day where Microsoft would allow Windows Updates on pirated copies of Windows 7 as the bigger picture was to thwart security threats based from those copies.

You still can do this no problem, however careful, do an upgrade keeping everything as if you choose to yeet everything and start fresh, you lose your free upgrade. That old 7 license converts to a 10 digital license and from there you can clean install no problem. As for audits, this mainly is for volume licensing than anything. An SMB with 10-200 Windows 7 machines that were OEM licensed don’t really matter. If you try this with 1,000 computers, iffy. At the end of the day, Microsoft had four years to close that loophole and never did so if worse came to worse, you could technically go through legal avenues as the EULA for 10 literally doesn’t have a clause for this at all. You can’t shit on someone taking advantage of an activation workaround when you as the manufacturer never closed it.

In the last paragraph, he also touches on business customers and auditing – those with OEM licenses. Granted, this is not an official Microsoft position, and Microsoft would rather sow doubt about whether you end up with a valid Windows license this way. But it's the best explanation for why the free upgrade still works after four years: Microsoft did nothing to close this hole. There is a second explanation for why Microsoft ended the official 'free upgrade period': Microsoft cannot permanently give away a paid product, because the lost revenue and profit must be communicated to investors. The operational loss due to the free upgrade between July 2015 and July 2016 was 2 billion US dollars; Microsoft lost 1.4 billion in profit. (via)

Similar articles:
Last Minute Tip: How to save your free Windows 10 upgrade
Windows 10: Free upgrade offer for users with assistive technology ends at 12/31/2017
Windows 7: Free Extended Update Support and usage



from Hacker News https://ift.tt/37T7AVe

1958 Facom 128B Japanese Relay Computer, Still Working [video]

Comments

from Hacker News https://www.youtube.com/watch?v=_j544ELauus

Spam and Phishing in Q3 2019

Quarterly highlights

Amazon Prime

In Q3, we registered numerous scam mailings related to Amazon Prime. Most of the phishing emails with a link to a fake Amazon login page offered new prices or rewards for buying things, or reported problems with membership, etc. Against the backdrop of September’s Prime Day sale, such messages were plausible.

Scammers also used another fraudulent scheme: An email informed victims that their request to cancel Amazon Prime had been accepted, but if they had changed their mind, they should call the number in the message. Fearing their accounts may have been hacked, victims phoned the number — this was either premium-rate and expensive, or, worse, during the call the scammers tricked them into revealing confidential data.

Scammers collect photos of documents and selfies

This quarter we detected a surge in fraud related to stealing photos of documents and selfies with them (often required for registration or identification purposes). In phishing emails seemingly from payment systems and banks, users were asked under various pretexts to confirm their identity by going to a special page and uploading a selfie with an ID document. The fake sites looked quite believable, and provided a list of necessary documents with format requirements, links to privacy policy, user agreement, etc.

Some scammers even managed without a fake website. For instance, in summer Italian users were hit by a spam attack involving emails about a smartphone giveaway. To receive the prize, hopefuls had to send a photograph of an ID document and a selfie to the specified email address. To encourage victims to respond, the scammers stated that the offer would soon expire.

To obtain copies of documents, scammers also sent fake Facebook messages in which recipients were informed that access to their accounts had been restricted due to complaints about the content of some posts. To prevent their account from being deleted, they were instructed to send a photo or scan of a driving license and other ID documents with a selfie, plus medical insurance details.

YouTube and Instagram

Scammers continue to exploit traditional schemes on new platforms, and Q3 was a bumper quarter in this regard. For instance, YouTube ads appeared offering the viewer the chance to earn a lot of quick and easy money. The video explained to users that they had to take a survey and provide personal details, after which they would receive a payout or a gift from a large company, etc. To add credibility, fake reviews from supposedly “satisfied customers” were posted under the video. What’s more, the enthusiastic bot-generated comments did not appear all in one go, but were added gradually to look like a live stream.

All the user had to do was follow the link under the video and then follow the steps in the video instructions. Sure, to receive the handout, a small “commission fee” or payment to “confirm the account” was required.

Similar schemes did the rounds on Instagram. Advertising posts in the name of various celebrities (fake accounts are easily distinguished from real ones by the absence of a blue tick) were often used to lure fans with prize draws or rewards for completing a paid survey. As with the YouTube videos, there were plenty of fake glowing comments under such posts. Given that such giveaways by stars are not uncommon, inattentive users could swallow the bait.

Back to school

In Q3, we registered a series of attacks related in one way or another to education. Phishers harvested usernames and passwords from the personal accounts of students and lecturers using fake pages mimicking university login pages.

The scammers were looking not for financial data, but for university research papers, as well as any personal information that might be kept on the servers. Data of this kind is in high demand on the darknet market. Even data that seems useless at first can be used by cybercriminals to prepare a targeted attack.

One way to create phishing pages is to hack into legitimate resources and post fraudulent content on them. In Q3, phishers hacked school websites and created fake pages on them to mimic login forms for commonly used resources.

Scammers also tried to steal usernames and passwords for the mail servers of educational service providers. To do so, they mailed out phishing messages disguised as support service notifications asking recipients to confirm that the mail account belonged to them.

Apple product launch

In September, Apple unveiled its latest round of products, and as usual the launch was followed by fans and scammers alike — we detected phishing emails in mail traffic aimed at stealing Apple ID authentication data.


Scammers also harvested users’ personal data by sending spam messages offering free testing of new releases.

The number of attempts to open fake websites mentioning the Apple brand rose in the runup to the unveiling of the new product line and peaked on the actual day itself:

Number of attempts to open Apple-related phishing pages, September 2019

Attacks on pay TV users

To watch TV or record live broadcasts in the UK, a license fee is payable. This was exploited by spammers who sent out masses of fake license expiry/renewal messages. What’s more, they often used standard templates saying that the license could not be renewed because the bank had declined the payment.

The recipient was then asked to verify (or update) their personal and/or payment details by clicking on a link pointing to a fake data entry and payment form.

Spam through website feedback forms

The website of any large company generally has one or even several feedback forms. These can be used to ask questions, express wishes, sign up for company events, or subscribe to newsletters. But messages sent via such forms often come not only from clients or interested visitors, but from scammers too.

There is nothing new about this phenomenon per se, but it is interesting to observe how the mechanism for sending spam through forms has evolved. Whereas spammers previously targeted the company mailboxes linked to feedback forms, fraudsters now use the forms to send spam to outside recipients.

This is possible because some companies do not pay due attention to website security, allowing attackers to bypass simple CAPTCHA tests with the aid of scripts and to register users en masse using feedback forms. Another oversight is that the username field, for example, accepts any text or link. As a result, the victim whose mailing address was used receives a legitimate confirmation of registration email, but containing a message from the scammers. The company itself does not receive any message.
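As an illustration (mine, not the report's), the "username accepts any text or link" oversight is cheap to close on the server side. A hedged sketch of the kind of check a registration handler could run:

```python
import re

URL_RE = re.compile(r"(https?://|www\.)", re.IGNORECASE)

def is_abusable_name(name: str, max_len: int = 64) -> bool:
    """Flag 'names' that would let a feedback form relay spam: embedded
    links, line breaks that could inject extra content into the
    confirmation email, or implausible length."""
    return (
        len(name) > max_len
        or URL_RE.search(name) is not None
        or "\n" in name
        or "\r" in name
    )
```

Combined with a CAPTCHA that isn't trivially scriptable and per-address rate limiting, this removes most of the form's value to spammers.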

Such spam started to surge several years ago, and has recently become even more popular — in Q3 services for delivering advertising messages through feedback forms began to be advertised in spam mailings.

Attacks on corporate email

Last quarter, we observed a major spam campaign in which scammers sent emails pretending to be voicemail notifications. To listen to the supposed message, the recipient was invited to click or tap the (phishing) link that pointed to a website mimicking the login page of a popular Microsoft service. It was a page for signing either into Outlook or directly into a Microsoft account.

The attack was aimed specifically at corporate mail users, since various business software products allow the exchange of voice messages and inform users of new ones via email.

It is worth noting that the number of spam attacks aimed specifically at the corporate sector has increased significantly of late. Cybercriminals are after access to employees’ email.

Another common trick is to report that incoming emails are stuck in the delivery queue. To receive these supposedly undeliverable messages, the victim is prompted to follow a link and enter their corporate account credentials on another fake login page, from where they go directly to the cybercriminals. Last quarter, our products blocked many large-scale spam campaigns under the guise of such notifications.

Statistics: spam

Proportion of spam in mail traffic

Share of spam in global mail traffic, Q2 and Q3 2019

In Q3 2019, the largest share of spam was recorded in August (57.78%). The average percentage of spam in global mail traffic was 56.26%, down 1.38 p.p. against the previous reporting period.

Sources of spam by country

Sources of spam by country, Q3 2019

The TOP 5 spam-source countries remain the same as last quarter, only their percentage shares are slightly different. China is in first place (20.43%), followed by the US (13.37%) and Russia (5.60%). Fourth position goes to Brazil (5.14%) and fifth to France (3.35%). Germany took sixth place (2.95%), followed — with a gap of less than 0.5 p.p. — by India (2.65%), Turkey (2.42%), Singapore (2.24%), and Vietnam (2.15%).

Spam email size

Spam email size, Q2 and Q3 2019

In Q3 2019, the share of very small emails (up to 2 KB) in spam decreased by 4.38 p.p. to 82.93%. The proportion of emails sized 5-10 KB grew slightly (by 1.52 p.p.) against the previous quarter to 3.79%.

Meanwhile, the share of 10-20 KB emails climbed by 0.26 p.p. to 2.24%. As for the number of 20-50 KB emails, their share changed more significantly, increasing by 2.64 p.p. (up to 4.74%) compared with the previous reporting period.

Malicious attachments in email

Number of Mail Anti-Virus triggerings, Q2 2019 – Q3 2019

In Q3 2019, our security solutions detected a total of 48,089,352 malicious email attachments, which is almost five million more than in Q2. July was the most active month with 17 million Mail Anti-Virus triggerings, while August was the “calmest” — with two million fewer.

TOP 10 malicious attachments in mail traffic, Q3 2019

In Q3, first place by prevalence in mail traffic went to the Office malware Exploit.MSOffice.CVE-2017-11882.gen (7.13%); in second place was the Worm.Win32.WBVB.vam worm (4.13%), and in third was another malware aimed at Microsoft Office users, Trojan.MSOffice.SAgent.gen (2.24%).

TOP 10 malware families, Q3 2019

As for malware families, the Backdoor.Win32.Androm family (7.49%) claimed first place.

In second place are Microsoft Office exploits from the Exploit.MSOffice.CVE-2017-11882.gen family (7.20%). And in third is Worm.Win32.WBVB.vam (4.60%).

Countries targeted by malicious mailings

Distribution of Mail Anti-Virus triggerings by country, Q3 2019

First place by number of Mail Anti-Virus triggerings in Q3 2019 was retained by Germany. Its score increased by 0.31 p.p. to 10.36%. Vietnam also remained in the TOP 3, rising to second position (5.92%), and Brazil came in third just a tiny fraction behind.

Statistics: phishing

In Q3 2019, the Anti-Phishing system prevented 105,220,094 attempts to direct users to scam websites. The percentage of unique attacked users was 11.28% of the total number of users of Kaspersky products worldwide.

Attack geography

The country with the largest share of users attacked by phishers in Q3 2019 was Venezuela (30.96%), which took second place in the previous quarter and has since added 5.29 p.p.

Geography of phishing attacks, Q3 2019

Having lost 3.53 p.p., Greece ranked second (22.67%). Third place, as in the last quarter, went to Brazil (19.70%).

Country %*
Venezuela 30.96
Greece 22.67
Brazil 19.70
Honduras 17.58
Guatemala 16.80
Panama 16.70
Australia 16.18
Chile 15.98
Ecuador 15.64
Portugal 15.61

* Share of users on whose computers the Anti-Phishing system was triggered out of all Kaspersky users in the country

Organizations under attack

The rating of categories of organizations attacked by phishers is based on triggers of the Anti-Phishing component on user computers. It is activated every time the user attempts to open a phishing page, either by clicking a link in an email or a social media message, or as a result of malware activity. When the component is triggered, a banner is displayed in the browser warning the user about a potential threat.

For the first time this year, the share of attacks on organizations in the Global Internet Portals category (23.81%) exceeded the share of attacks on credit organizations (22.46%). Social networks (20.48%) took third place, adding 11.40 p.p. to its share.

Distribution of organizations subjected to phishing attacks by category, Q3 2019

In addition, the TOP 10 said goodbye to the Government and Taxes category.

Its place was taken by the Financial Services category, which unites companies providing services in the field of finance that are not included in the Banks or Payment Systems categories: providers of insurance, leasing, brokerage, and other services.

Conclusion

The average share of spam in global mail traffic (56.26%) this quarter decreased by 1.38 p.p. against the previous reporting period, while the number of attempted redirects to phishing pages compared to Q2 2019 fell by 25 million to just over 105 million.

Top in this quarter’s list of spam-source countries is China, with a share of 20.43%. Our security solutions blocked 48,089,352 malicious mail attachments, while Backdoor.Win32.Androm became the most common mail-based malware family — its share of mail traffic amounted to 7.49%.



from Hacker News https://ift.tt/37FWfrG

Features of a successful therapeutic fast of 382 days' duration (1973) [pdf]

Comments

from Hacker News https://ift.tt/2sw3iTz

Practical Examples in Data Oriented Design





from Hacker News https://ift.tt/28JVMVt

Miltown: A game-changing drug you've probably never heard of (2017)


One in six Americans takes a psychiatric medication. The number of Canadians who got at least one prescription for psychotropic drugs went up 54 per cent between 1983 and 2007.

But to understand how Prozac and Zoloft became household words, you need to learn about Miltown.

There's a good chance you've never heard of it — but at one time Miltown took the United States by storm, and laid the foundation for the ways we treat anxiety today.

There's a lot of cultural angst about how pharmaceutical companies market drugs to consumers — and a disconcerting sense that they serve a market they created for themselves. That discomfort becomes even more pronounced when we talk about drugs that treat anxiety and depression.

But according to Andrea Tone, author of The Age of Anxiety: A History of America's Turbulent Affair with Tranquilizers, the history of psychiatric medicine doesn't quite support that popular narrative.

An accidental discovery

Miltown — a tranquillizer with the chemical name of meprobamate — wasn't created because a pharma executive thought that treating anxiety could lead to big bucks.

It was discovered by accident by Frank Berger, a brilliant scientist from Pilsen, the capital of West Bohemia, after he fled the Nazis in 1938.


He was working on a method of preserving penicillin  — and stumbled upon the chemical mephenesin, a relaxant that would eventually lead to the drug known as Miltown.

Berger had wanted the drug, which had the unique property of relaxing users while keeping them awake and alert,  to be classified as a sedative. But an associate eventually convinced him that the market was full of sedatives, and that what the world really wanted was tranquility.

Miltown — the first minor tranquilizer — initially took off without a big marketing push. In fact, Carter-Wallace, the healthcare company that first put it on the market, was initially reluctant to do so. Far from medicalizing worry, pharma had to be persuaded there was enough worry to warrant a pill to treat it.

Hollywood triggers Miltown mania

Miltown's introduction into the market was initially underwhelming: it sold just $7,500 worth during the first month after its launch in May 1955. But by the end of that year, sales hit $2 million.

Hollywood  had discovered Miltown — and from then on, the pill became a cultural phenomenon. 

"When Hollywood understands that there's this drug that isn't going to knock you out like barbiturates, but is going to ease your anxieties some, it's embraced in a way that I don't think any other drug had been embraced before, or has been ever since," said Tone.

Soon, it was a bona fide cultural phenomenon, appearing in New Yorker cartoons and on greeting cards. At parties frequented by celebrities, the pills were passed around like peanuts. There were even "miltinis" — cocktails with Cold War-inspired names, that combined alcohol with the pills.

Milton Berle, who was a giant star at the time, joked that he was so enamoured with the stuff, he should change his name to Miltown Berle.

"You don't want a girl, you want a Miltown," he joked to Elvis on his talk show.

It's estimated that by late 1956, one in 20 Americans had tried Miltown.

What happened to Miltown?

If a drug could become so thoroughly engrained in American life — how is it that so few people have heard about it today?

In a sense, the drug changed the marketplace so thoroughly, it set the terms of its own demise. Other pharmaceutical companies watched as the popularity of the drug exploded, and saw limitless commercial potential, triggering a feverish race to discover the next best thing.

Benzodiazepines — which included Librium and Valium — soon came to dominate the tranquilizer market. By the early sixties, the era of Miltown mania was over.

But according to Andrea Tone, none of the present-day market for antidepressants and anti-anxiety medications would have been possible without Frank Berger's accidental discovery of Miltown.

"It is the very first time that millions of Americans — and eventually doctors too — felt that it was okay to take a drug for every day ills, and in that sense it normalized the notion that people who didn't have serious illnesses, who are just riding the roller coaster of the vagaries of life could pop a pill, and there's nothing wrong with that," she said.

"That idea, which was new, and phenomenal, has been enduring."



from Hacker News https://ift.tt/2DxR2nV

EU antitrust regulators say they are investigating Google's data collection


FILE PHOTO: A man passes a Google signage outside their office in Singapore May 24, 2019. REUTERS/Edgar Su/File Photo

BRUSSELS (Reuters) - EU antitrust regulators are investigating Google’s collection of data, the European Commission told Reuters on Saturday, suggesting the world’s most popular internet search engine remains in its sights despite record fines in recent years.

Competition enforcers on both sides of the Atlantic are now looking into how dominant tech companies use and monetise data.

The EU executive said it was seeking information on how and why Alphabet unit Google is collecting data, confirming a Reuters story on Friday.

“The Commission has sent out questionnaires as part of a preliminary investigation into Google’s practices relating to Google’s collection and use of data. The preliminary investigation is ongoing,” the EU regulator told Reuters in an email.

A document seen by Reuters shows the EU’s focus is on data related to local search services, online advertising, online ad targeting services, login services, web browsers and others.

European Competition Commissioner Margrethe Vestager has handed down fines totalling more than 8 billion euros to Google in the last two years and ordered it to change its business practices.

Google has said it uses data to better its services and that users can manage, delete and transfer their data at any time.

(This story has been refiled to fix spelling in first paragraph to sights, not sites.)

Reporting by Foo Yun Chee; Editing by Hugh Lawson



from Hacker News https://ift.tt/2R7S7uw

Common Identification Standard for Federal Employees and Contractors (2004)


Homeland Security Presidential Directive 12: Policy for a Common Identification Standard for Federal Employees and Contractors

There are wide variations in the quality and security of identification used to gain access to secure facilities where there is potential for terrorist attacks. In order to eliminate these variations, U.S. policy is to enhance security, increase Government efficiency, reduce identity fraud, and protect personal privacy by establishing a mandatory, Government-wide standard for secure and reliable forms of identification issued by the Federal Government to its employees and contractors (including contractor employees). This directive mandates a federal standard for secure and reliable forms of identification.

HSPD 12 Full Text

Homeland Security Presidential Directive-12

August 27, 2004

SUBJECT: Policies for a Common Identification Standard for Federal Employees and Contractors

  1. Wide variations in the quality and security of forms of identification used to gain access to secure Federal and other facilities where there is potential for terrorist attacks need to be eliminated. Therefore, it is the policy of the United States to enhance security, increase Government efficiency, reduce identity fraud, and protect personal privacy by establishing a mandatory, Government-wide standard for secure and reliable forms of identification issued by the Federal Government to its employees and contractors (including contractor employees).
  2. To implement the policy set forth in paragraph (1), the Secretary of Commerce shall promulgate in accordance with applicable law a Federal standard for secure and reliable forms of identification (the "Standard") not later than 6 months after the date of this directive in consultation with the Secretary of State, the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, the Director of the Office of Management and Budget (OMB), and the Director of the Office of Science and Technology Policy. The Secretary of Commerce shall periodically review the Standard and update the Standard as appropriate in consultation with the affected agencies.
  3. "Secure and reliable forms of identification" for purposes of this directive means identification that (a) is issued based on sound criteria for verifying an individual employee's identity; (b) is strongly resistant to identity fraud, tampering, counterfeiting, and terrorist exploitation; (c) can be rapidly authenticated electronically; and (d) is issued only by providers whose reliability has been established by an official accreditation process. The Standard will include graduated criteria, from least secure to most secure, to ensure flexibility in selecting the appropriate level of security for each application. The Standard shall not apply to identification associated with national security systems as defined by 44 U.S.C. 3542(b)(2).
  4. Not later than 4 months following promulgation of the Standard, the heads of executive departments and agencies shall have a program in place to ensure that identification issued by their departments and agencies to Federal employees and contractors meets the Standard. As promptly as possible, but in no case later than 8 months after the date of promulgation of the Standard, the heads of executive departments and agencies shall, to the maximum extent practicable, require the use of identification by Federal employees and contractors that meets the Standard in gaining physical access to Federally controlled facilities and logical access to Federally controlled information systems. Departments and agencies shall implement this directive in a manner consistent with ongoing Government-wide activities, policies and guidance issued by OMB, which shall ensure compliance.
  5. Not later than 6 months following promulgation of the Standard, the heads of executive departments and agencies shall identify to the Assistant to the President for Homeland Security and the Director of OMB those Federally controlled facilities, Federally controlled information systems, and other Federal applications that are important for security and for which use of the Standard in circumstances not covered by this directive should be considered. Not later than 7 months following the promulgation of the Standard, the Assistant to the President for Homeland Security and the Director of OMB shall make recommendations to the President concerning possible use of the Standard for such additional Federal applications.
  6. This directive shall be implemented in a manner consistent with the Constitution and applicable laws, including the Privacy Act (5 U.S.C. 552a) and other statutes protecting the rights of Americans.
  7. Nothing in this directive alters, or impedes the ability to carry out, the authorities of the Federal departments and agencies to perform their responsibilities under law and consistent with applicable legal authorities and presidential guidance. This directive is intended only to improve the internal management of the executive branch of the Federal Government, and it is not intended to, and does not, create any right or benefit enforceable at law or in equity by any party against the United States, its departments, agencies, entities, officers, employees or agents, or any other person.
  8. The Assistant to the President for Homeland Security shall report to me not later than 7 months after the promulgation of the Standard on progress made to implement this directive, and shall thereafter report to me on such progress or any recommended changes from time to time as appropriate.

GEORGE W. BUSH

# # #

Last Published Date: August 19, 2015



from Hacker News https://ift.tt/1YScYwC

Linus: People should aim to make “badly written” code “just work”

Re: Linux 2.6.29

[Posted March 31, 2009 by corbet]

From:   Linus Torvalds <torvalds-AT-linux-foundation.org>
To:   Kyle Moffett <kyle-AT-moffetthome.net>
Subject:   Re: Linux 2.6.29
Date:   Wed, 25 Mar 2009 20:40:23 -0700 (PDT)
Message-ID:   <alpine.LFD.2.00.0903252017100.3032@localhost.localdomain>
Cc:   Jeff Garzik <jeff-AT-garzik.org>, Matthew Garrett <mjg59-AT-srcf.ucam.org>, Theodore Tso <tytso-AT-mit.edu>, Christoph Hellwig <hch-AT-infradead.org>, Jan Kara <jack-AT-suse.cz>, Andrew Morton <akpm-AT-linux-foundation.org>, Ingo Molnar <mingo-AT-elte.hu>, Alan Cox <alan-AT-lxorguk.ukuu.org.uk>, Arjan van de Ven <arjan-AT-infradead.org>, Peter Zijlstra <a.p.zijlstra-AT-chello.nl>, Nick Piggin <npiggin-AT-suse.de>, Jens Axboe <jens.axboe-AT-oracle.com>, David Rees <drees76-AT-gmail.com>, Jesper Krogh <jesper-AT-krogh.cc>, Linux Kernel Mailing List <linux-kernel-AT-vger.kernel.org>
Archive-link:   Article
On Wed, 25 Mar 2009, Kyle Moffett wrote:
> Well, I think the goal is not to *replace* the POSIX API or even
> provide "transactional" guarantees. The performance penalty for
> atomic transactions is pretty high, and most programs (like GIT) don't
> really give a damn, as they provide that on a higher level.

Speaking with my 'git' hat on, I can tell that

- git was designed to have almost minimal requirements from the filesystem, and to not do anything even half-way clever.

- despite that, we've hit an absolute metric sh*tload of filesystem bugs and misfeatures. Some very much in Linux. And some I bet git was the first to ever notice, exactly because git tries to be really anal, in ways that I can pretty much guarantee no normal program _ever_ is.

For example, the latest one came from git actually checking the error code from 'close()'. Tell me the last time you saw anybody do that in a real program. Hint: it's just not done. EVER. Git does it (and even then, git does it only for the core git object files that we care about so much), and we found a real data-loss CIFS bug thanks to that. Afaik, the bug has been there for a year and half. Don't tell me nobody uses cifs.

Before that, we had cross-directory rename bugs. Or the inexplicable "pread() doesn't work correctly on HP-UX". Or the "readdir() returns the same entry multiple times" bug.

And all of this without ever doing anything even _remotely_ odd. No file locking, no rewriting of old files, no lseek()ing in directories, no nothing.

Anybody who wants more complex and subtle filesystem interfaces is just crazy. Not only will they never get used, they'll definitely not be stable.

> To be honest I think we could provide much better data consistency
> guarantees and remove a lot of fsync() calls with just a basic
> per-filesystem barrier() call.

The problem is not that we have a lot of fsync() calls. Quite the reverse. fsync() is really really rare. So is being careful in general.

The number of applications that do even the _minimal_ safety-net of "create new file, rename it atomically over an old one" is basically zero. Almost everybody ends up rewriting files with something like

    open(name, O_CREAT | O_TRUNC, 0666)
    write();
    close();

where there isn't an fsync in sight, nor any "create temp file", nor likely even any real error checking on the write(), much less the close().

And if we have a Linux-specific magic system call or sync action, it's going to be even more rarely used than fsync(). Do you think anybody really uses the OS X FSYNC_FULL ioctl? Nope. Outside of a few databases, it is almost certainly not going to be used, and fsync() will not be reliable in general.

So rather than come up with new barriers that nobody will use, filesystem people should aim to make "badly written" code "just work" unless people are really really unlucky. Because like it or not, that's what 99% of all code is.

The undeniable FACT that people don't tend to check errors from close() should, for example, mean that delayed allocation must still track disk full conditions, for example. If your filesystem returns ENOSPC at close() rather than at write(), you just lost error coverage for disk full cases from 90% of all apps. It's that simple. Crying that it's an application bug is like crying over the speed of light: you should deal with *reality*, not what you wish reality was.

Same goes for any complaints that "people should write a temp-file, fsync it, and rename it over the original". You may wish that was what they did, but reality is that "open(filename, O_TRUNC | O_CREAT, 0666)" thing. Harsh, I know.

And in the end, even the _good_ applications will decide that it's not worth the performance penalty of doing an fsync(). In git, for example, where we generally try to be very very very careful, 'fsync()' on the object files is turned off by default. Why? Because turning it on results in unacceptable behavior on ext3. Now, admittedly, the git design means that a lost new DB file isn't deadly, just potentially very very annoying and confusing - you may have to roll back and re-do your operation by hand, and you have to know enough to be able to do it in the first place.

The point here? Sometimes those filesystem people who say "you must use fsync() to get well-defined semantics" are the same people who SCREWED IT UP SO DAMN BADLY THAT FSYNC ISN'T ACTUALLY REALISTICALLY USEABLE!

Theory and practice sometimes clash. And when that happens, theory loses. Every single time.

        Linus
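For contrast with the sloppy open/write/close idiom Linus quotes, here is a minimal sketch of the careful temp-file, fsync, rename sequence the thread is arguing about. It's written in Python (whose os module mirrors the POSIX calls); the helper name is mine, and a production version would also handle partial writes and surface close() errors, the very thing Linus notes almost nobody does.

```python
import os

def durable_replace(path: str, data: bytes) -> None:
    """Atomically replace path with data: write a temp file, fsync it,
    rename it over the original, then fsync the directory so the
    rename itself survives a crash."""
    dirname = os.path.dirname(path) or "."
    tmp = os.path.join(dirname, ".tmp." + os.path.basename(path))
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o666)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file contents to stable storage
    finally:
        os.close(fd)  # close() can report errors too; don't ignore it
    os.rename(tmp, path)  # atomic on POSIX filesystems
    dfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dfd)  # make the directory entry durable
    finally:
        os.close(dfd)
```

Linus's point stands regardless: since almost no real program does this, filesystems have to make the naive pattern safe rather than blame applications.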




from Hacker News https://ift.tt/35Qraji

Obesity, extreme diet: 382 days without eating. It works


We can all get a bit hungry if it has been hours since we last ate. But spare a thought for how hungry Angus Barbieri must have been after he went 382 days without eating.

That’s not a typo. In 1965, 27-year-old Angus really did fast for one year and 17 days. He ate no food at all, and lost 125 kilograms (19.7 stone).

Angus was reportedly sick of being obese, and checked into the University Department of Medicine at the Royal Infirmary of Dundee weighing 207kg (32.5 stone). He told hospital staff he was ready to cut out food altogether, so doctors happily agreed to monitor his progress.

Angus’s doctors didn’t really expect the fast to last long. But they thought a short fast would help him to lose some weight. To compensate for his lack of nutrients, he was prescribed multivitamins to take regularly, including potassium and sodium, as well as yeast.

As days turned to weeks, Angus’s persistence increased. The Scot wanted to reach his reported “ideal weight” of 180 pounds (12.8 stone), so he kept going, much to his doctors’ surprise.

Angus would attend hospital visits frequently and often stay overnight. He received regular blood tests, all of which revealed his body was, remarkably, functioning just fine.

As weeks turned into months, he compensated for his lack of food by drinking more black tea, black coffee and sparkling water, all of which are calorie-free.

His body began to adapt to the lack of food by burning its own fat stores for energy.

For the last eight months, Angus’s blood glucose levels were consistently very low, around 2 mmol/l, but the Scot did not suffer any adverse effects as a result.

In the final few months he began to have a pinch of sugar or milk in his tea and coffee.

For those wondering, he ‘went to the toilet’ every 40-50 days.

Angus eventually called it quits after 382 days, having finally reached his dream weight of 180 pounds.

According to a Chicago Tribune report, he had forgotten the taste of food before his first meal after the fast. He ate a boiled egg with a slice of bread and butter for his first breakfast, telling reporters: “I thoroly [sic] enjoyed my egg and I feel very full.”

Five years later, Angus remained at a comfortable weight, weighing 196 pounds.

Don’t try this at home

This is an incredibly unusual case, and one of the most extreme examples of a starvation diet ever recorded.

Because Angus was extremely overweight, his body was better prepared to fast and burn fat. But once the body has burned through its fat stores, it needs energy from food to function properly.

For people of a normal weight, fasting for long periods can cause health complications, including increased strain on the heart, even with nutritional supplementation.

Therefore, fasts of this length should not be attempted by anybody. They date from a period in the 1960s when long-term fasts were being studied frequently, and other studies from that time report patients who experienced heart failure and in some cases died of starvation.

However, people with and without diabetes can experience benefits from fasting. Intermittent fasting, in particular, has shown to help the body repair damage without entering starvation, enabling an array of benefits, namely weight loss and reduced insulin resistance. Last year, American scientists revealed that short-term fasting also has health benefits for the heart.

Fasting is not something to be done without great consideration though, and you should always consult your doctor before making a major dietary change.



from Hacker News https://ift.tt/2EBLDdx

Pilots Revving Engines Too Hard Led to IndiGo’s Airbus Woes

Comments

from Hacker News https://ift.tt/2qZ5jHo

Study Finds Wind Speeds Are Increasing, Which Could Boost Wind Energy


The world is getting windier, according to a new study in the journal Nature Climate Change. Researchers analyzed decades of weather data and determined global wind speeds have risen dramatically over the past 10 years.

The study says wind farm operators are likely to benefit from the uptick in wind speeds since faster wind means more efficient wind turbines.

Princeton University scholar Timothy Searchinger, one of the study's authors, says researchers expect wind speeds to continue to increase, which has multiple positive effects.

Green energy from wind turbines will see these benefits.

“When you increase the wind speed by a little bit, you still increase the power quite a lot,” he says.

The study also debunked a "clear belief" that global wind speeds from 1980 on were slowing due to human activity. Increased buildings, and in some places vegetation, were a source of concern for the global climate system. "This paper showed that's not the case," he says.

As a result of increasing wind speed, the average wind turbine generated roughly 17% more electricity in 2017 than it did in 2010, the study found.

What’s influencing wind speed is ocean oscillations, he says, “which are different patterns of pressure and temperature and winds in different parts of the ocean that go through these sometimes irregular but recurring patterns.”

To get an accurate measurement, researchers had to perform a wide variety of statistical analyses to account for differing biases, such as measurement height and elevation.

Now, humans can capitalize on this change for at least the next decade, he says.

“When you size wind turbines, you can size them differently to take advantage of that additional power,” he says. “That's really the key point, is that if we can predict these changing patterns 10 years in advance, we can size our turbines so that they take advantage of the maximum amount of wind that is reasonable and economical.”


Chris Bentley produced and edited this interview for broadcast with Todd Mundt. Serena McMahon adapted it for the web.



from Hacker News https://ift.tt/2rFfVv2

What I've Learned over National Blog Posting Month (NaBloPoMo) 2019


Today is the last day of National Blog Posting Month (NaBloPoMo), and I've managed to write 32 posts in 30 days!

I thought I'd reflect on how I've found the month, and whether I've been able to more easily crank out posts.

Breakdown of tags per post over NaBloPoMo 2019 (tag: number of posts)
nablopomo 32
blogumentation 13
http://www.jvt.me 8
command-line 5
ruby 3
indieweb 3
certificates 3
personal 3
minify-json 2
json 2
git 2
hugo 2
thoughts 2
openssl 2
nodejs 1
job-dsl 1
jenkins 1
pretty-print 1
yaml 1
privacy 1
neurodiversity 1
security 1
golang 1
jekyll 1
static-site-generator 1
reader-mail 1
python 1
webmention 1
jsonfeed 1
gitlab 1
workflow 1
music 1
personal-website 1
rfc 1
events 1
meetup.com 1
getting-started 1
emacs 1
vim 1
netlify 1
banking 1
monzo 1
gousto 1
cooking 1
chefspec 1
chef 1
licensing 1
open-source 1
free-software 1
self-care 1
mental-health 1
blogging 1
retrospective 1

As we can see, a large percentage of the posts are blogumentation-related, which is good because it shows that over the month I've learned a lot of things and documented them for posterity! I've also made a fair few changes to this site, which I've blogged about under the tag www.jvt.me.

When sharing my posts Sending Webmentions More Intelligently, Ditching Event Platforms for the IndieWeb and Adulting: The Constant Struggle of Prioritisation on various platforms, I've received quite a bit of engagement which was nice.

I've found that of the 30+ posts I've done this month, only 3-4 have been posts that I've had on my backlog for some time. That's good, because it means I've blogged about new things, but as I have 72 articles in my backlog it would've been nice to get some of them done too, as they'll hopefully be helpful for someone else, which is why I want to write them.

I've also still got, at time of writing, another 16 posts planned for NaBloPoMo which I felt would fit the time and size constraints for the month, but didn't get around to writing.

I find it pretty difficult to force content and be productive when I'm not in the mindset for it (as I spoke about in Revert 'Some knowledge-sharing news', when I cancelled my training courses with Packt). It hasn't been helped by going through a bereavement, but in some ways that has also made it easier, as blogging has given me a nice constant and an outlet to focus on.

In the past I've found I'm able to push out articles when they're either blogumentation or it's something that I need to get published. Although I've written a tonne, it's often ad-hoc, not on a strict schedule. I guess this has shown that I can write on a schedule but I don't really enjoy it. Life is quite busy and I feel pretty exhausted by this last month, aside from everything else life-related going on.

I've generally written posts that take ~2 minutes to read, and have maybe taken ~20-30 minutes to write and review, although a few others have taken a bit longer as I've been thinking about what I want to write.

I've written a couple of posts before the day they were published, but that's more because I knew that day was going to be very busy and I'd struggle to post. On the flip side, it's also meant that I've felt forced to publish only one post a day, which is a pain, because yesterday I had ideas for five posts. I ended up writing a couple late last night, but still, it's a shame that I've felt unable (for some imaginary reasons, yes) to deliver timely content when I wanted to.

I may see if in 2020 I'll start to write at least one post a week, giving myself a soft schedule while still allowing me to write ad-hoc if I need to.

And just a note here that I know I haven't needed to blog all month. I realise that it's purely my own choice, and forcing myself to write lots has been to see if I can and how it feels, but I've known that I don't have to publish anything if I don't feel up to it.

And finally, I've found it quite difficult because I've wanted to spend time with Anna and Morph and play Apex Legends.

I'm looking forward to going back to being able to blog at leisure, and especially looking forward to the Nottingham Tech Community Christmas Party on Monday.



from Hacker News https://ift.tt/2OztTaU

Self Hosted – Zendesk Clone – Multi Domain Customer Support Desk Solution

Zendesk Clone – Multi-Domain – Multi-Tenant Cloud Support Desk System

CREATE YOUR SAAS SUPPORT DESK TICKET SYSTEM LIKE ZENDESK.

Live Demo | Support System | Lifetime Updates | IMAP | Documentation

 

Multi Sub-Domain Support Desk, FAQ, KB creator (zendesk clone)

SCALE YOUR CUSTOMER SERVICE & BUILD YOUR OWN STARTUP OUT OF IT.

Let your users subscribe to a weekly, monthly or yearly package set by you, and pay to use your support desk, ticket system, FAQ system and knowledge base system.

Installation Documentation: How to Install.

 

Features of each client sub-domain CRM:

 

  • Helps your clients create a support desk on a custom sub-domain to consolidate customer data and leverage it to build and nurture long-term relationships with their customers or teams.
  • More than just a help desk: it crosses over into CRM, because it allows your clients to set up a help desk that helps them organize processes, workflows and tracking of customer engagement.
  • Flexible ticket management with automated workflow, using the FAQ & knowledge base, for each of your clients' support desks.
  • Mobile support with a responsive-design support desk for each client sub-domain.
  • Custom branding with logo & text for each client sub-domain.
  • Customer-facing web interface that you or your client can easily brand for their own sub-domain support desk.
  • Knowledge base portal and community forums included in each sub-domain your client creates.
  • Group rules and management.
  • Public and private support desk options for each sub-domain or client.
  • Multiple sub-domain based support desks – a multi-tenant support desk system.
  • Multiple admins, moderators and staff for the main site and for each sub-domain your client creates.
  • User ban system.
  • Multiple visibility levels for queries/tickets, FAQ and knowledge base entries in each sub-domain support desk your client creates: "Admin + self visible", "Admin + moderator + self visible", "Admin + moderator + registered user + self visible" and "All users visible".
  • Multiple category system with icons – "FAQ only", "Support enquiry only", "Knowledge base only" and "All types".
  • Payment options enabled: PayPal + PayStack + Instamojo.
  • More features coming soon.

High Performance & Highly Scalable

(performance screenshots)

Suitable for

  1. Running your own support desk startup like Zendesk.
  2. Creating your own support desk for multiple departments.
  3. Creating your own FAQ system, or a cloud-based software-as-a-service FAQ system, and making money from it.
  4. Creating your own ticket system, or a cloud-based software-as-a-service enquiry system, and making money from it.
  5. Mobile responsive.
  6. Twitter Bootstrap framework.

You don't have to hire a third party for installation help. We are always there. Send us a message at our CodeCanyon profile.

 

Money Making Machine System:

If you have 50 customers paying you $5, $10 or $25 monthly for using your support desk solution, you have

  1. 50 * 5 = 250$ per month revenue
  2. 50 * 10 = 500$ per month revenue
  3. 50 * 25 = 1250$ per month revenue

Remember, Zendesk charges $89 per month; you would be charging a quarter or a tenth of that every month.

Live Created Test Sub-domain

Create your Own Support Desk here –

  1. Register
  2. Click “Get Started” in Homepage
  3. Provide Information
  4. Pay & Start using your own Support-Desk

How can it be useful?

  • On its own the software can turn your huge volumes of support data into a treasure trove of leads, opportunities and market insights
  • Studies have shown that customers are three times more likely to purchase when given support in real time – right when they need assistance.
  • It allows you to build a self-service customer portal using its knowledge base, FAQ system & community features.
  • The software has all the key features you need in a powerful help desk solution.
  • It's got a ticketing system, knowledge base, community forums and more options coming soon (no extra add-ons will be sold separately; upgrades will be added here by the author).
  • You can build an efficient and powerful customer service process around this structure, if not at once, one module at a time – “Enable/Disable” – FAQ / Knowledgebase / Support Ticket.

License Rules

  1. One Purchase is licensed for One Domain only.
  2. You can purchase Regular License to Charge your Client for creating their Support Desk at your domain.
  3. For Developer License, Please purchase Extended version of it.

Money Making Machine Script:

Remember, Zendesk charges $89 per month; you can charge a quarter or a tenth of that every month.

It's fast and highly scalable. Check the screenshots above and verify for yourself.

System Requirements:

  1. PHP 7
  2. PHP Mcrypt
  3. ioncube loader
  4. PHP mbstring
  5. MySQL


from Hacker News https://ift.tt/2NUVl1i

Show HN: MScSim – the real time flight dynamics simulation software

Comments

from Hacker News https://ift.tt/34DoYv5

Show HN: The Jumping Jouster, a Godot game I made in my spare time


Thanks! I was going for a bit of a QWOP vibe. I'm glad you enjoyed it.



from Hacker News https://ift.tt/2qPRafB

Small Projects, Big Companies

Pizza delivery. Event planning. Bill splitting. We see many of these at Pioneer. I call them “incombustible ideas”. Founders relentlessly try to ignite them, but they just don’t light up. I don’t blame the firebrands. Solving anecdotal problems is a good instinct in most situations. But it can also lead founding teams down a stray path, spending years working on a company in a dog market.

There's a bit of a meta-issue at play. People who want to start startups aren’t exposed to enough of the right frustrations. Sure, it’s slightly annoying to split a bill with friends. But the frustration is small, and the market is difficult to monetize. It’s really annoying to use Google Groups in a large organization, and that’s very easy to monetize. I doubt most humans know this “secret”.

Our hope with this post is to enlighten the hacker-class to some of the small-but-big problems that large enterprises face. Any of these projects could turn into a $200M+ startup within one year if executed properly. Pick one, start working on it, and get in touch with us!

1. Better Zendesk

Zendesk is a $9B public company. The product is good, not great. The company has few moats. If you built a Roadster to their Corolla, you might win a few deals quickly. Something that was fast, responsive, undercut on price and gave leadership a sense of control.

Small startups find Zendesk uncomfortably expensive. Larger businesses are looking for something with better reporting. This is an important problem, because it makes executives unhappy. It’s one thing if the customer service rep finds the Zendesk UI slow. They aren’t the buyer of the software, so that doesn’t matter much. If you’re making the executive unhappy (“are we getting better or worse at support?”), you’re at risk of losing customers. Make something managers want. Here are some specific issues you’d want to fix (a rough data-model sketch follows the list):

  • No built-in support for account context (which every company needs).
  • No support for authenticating customers (which every company needs to do).
  • Doesn’t interoperate with G Suite and other communication tools.
  • Janky subject line prefixes like [CASE 34984311111].
  • Doesn’t interoperate properly with the rest of email (CCs etc.).
  • Has been breached twice.
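
For the first two bullets, here is a minimal sketch of what “built-in account context” and authenticated customers could look like in the data model – hypothetical names, in Python for concreteness, not any real Zendesk or competitor API:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Account:
        """The paying company a requester belongs to – first-class, not bolted on."""
        id: str
        name: str
        plan: str          # e.g. "enterprise"
        arr_usd: float     # annual recurring revenue, used for routing

    @dataclass
    class Customer:
        id: str
        email: str
        verified: bool     # proved control of this email / SSO identity?
        account: Optional[Account] = None

    @dataclass
    class Ticket:
        id: str
        subject: str
        requester: Customer
        cc: List[str] = field(default_factory=list)  # keep normal email semantics

        def priority(self) -> str:
            # Account context drives routing: a large unhappy account outranks FIFO.
            acct = self.requester.account
            return "high" if acct and acct.arr_usd > 100_000 else "normal"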

2. Better Google Groups

Every company has internal email lists. Managing them through the Google Admin UI is terrible. Build something better here, one that allows for *-prefix rules, has a nice internal viewer interface, etc.
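
To make “*-prefix rules” concrete, here is one possible reading of the idea, sketched with Python’s standard fnmatch module – the policy table and its field names are hypothetical, not anything the Google Admin console actually offers:

    import fnmatch

    # Hypothetical policy table: pattern -> settings applied to every matching list.
    PREFIX_RULES = {
        "eng-*": {"who_can_post": "members", "archive_visible_to": "org"},
        "team-*": {"who_can_post": "members", "archive_visible_to": "members"},
        "announce-*": {"who_can_post": "owners", "archive_visible_to": "org"},
    }

    def settings_for(list_address: str) -> dict:
        """Merge the settings of every matching rule; later rules win."""
        local_part = list_address.split("@")[0]
        merged: dict = {}
        for pattern, settings in PREFIX_RULES.items():
            if fnmatch.fnmatch(local_part, pattern):
                merged.update(settings)
        return merged

    print(settings_for("eng-infra@example.com"))
    # {'who_can_post': 'members', 'archive_visible_to': 'org'}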

3. Backchannel.app

Every company runs reference checks as part of its hiring process. Some references are provided by the candidate; the best are often unsolicited – a friend, or a friend-of-a-friend. These are often more useful than the interview. Backchannel references are tricky: you don’t want to call people at Foo’s current company, because Foo hasn’t told them they’re leaving. And even once you find people from a past company or position, you’ll need to somehow incentivize them to be honest (non-monetary rewards work best).

Figure this out and you’ll find yourself establishing the next LinkedIn.

4. Better Personality Models

The Big Five Aspects Scale is considered the cutting edge of personality models, yet like much of social science it rests on the backbone of self-report surveys. I doubt a Google Form is the best we can do. We need something better – something that durably captures differences in human personality. Instead of measuring with self-report surveys, look at emissions – the messaging, music, movies and browsing history that people create – and cluster based on that. You might not know the labels of the clusters (“openness”, “conscientiousness”), but you’ll at least know who is similar to whom. Monetizing this should be relatively easy. (We certainly have a few ideas ourselves we’d be happy to share.)
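
As a toy sketch of the clustering idea – assuming scikit-learn is available, with bag-of-words text standing in for the richer “emissions” (music, movies, browsing) you would actually want:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Each string stands in for everything one person "emits": messages, posts, etc.
    emissions = [
        "climbing trip photos new route beta crag",
        "quarterly forecast spreadsheet pivot table variance",
        "bouldering gym session crimps and dynos",
        "budget reconciliation invoice accounts payable",
    ]

    X = TfidfVectorizer().fit_transform(emissions)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # e.g. [0 1 0 1]: no trait names, but we know who resembles whom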

5. Org Chart As a Service

Like a scene from an FBI movie, every good sales team has a cork board of people within the organization they’re selling to. The golden dataset everyone wants is the org chart. Who reports to whom. Who is new. Who has been at the company for a while. Find a way to get this data and sell it to companies. I suspect you’ll have many takers.
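
The dataset itself is simple – acquiring it is the hard part. A sketch of the shape it might take, with hypothetical names:

    # reports_to: employee -> manager; None marks the top of the chart.
    reports_to = {
        "ceo@acme.example": None,
        "vp-sales@acme.example": "ceo@acme.example",
        "ae-1@acme.example": "vp-sales@acme.example",
        "ae-2@acme.example": "vp-sales@acme.example",
    }

    def chain(person: str) -> list:
        """Walk upward to the top – the 'who do I need to convince' query."""
        out = [person]
        while reports_to.get(out[-1]):
            out.append(reports_to[out[-1]])
        return out

    print(chain("ae-1@acme.example"))
    # ['ae-1@acme.example', 'vp-sales@acme.example', 'ceo@acme.example']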

6. Chart Search Engine

Find all the images on the web that look like charts or graphs. Convert those pixels into data. Let users search the entire Internet’s store of graphs, both by name and by shape (“what are things that are trending up?”). Like a Tesla Roadster, this product wouldn’t have billions of users, but you could charge the elite few quite a bit for using it.
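
One naive way to answer a shape query like “trending up”, assuming the pixels have already been converted into numeric series (the genuinely hard computer-vision step, which this sketch skips):

    import numpy as np

    def trending_up(series, min_slope=0.1):
        """Crude shape test: fit a line to the normalized series, check its slope."""
        y = np.asarray(series, dtype=float)
        y = (y - y.min()) / (y.max() - y.min() + 1e-9)  # scale-invariant
        x = np.linspace(0, 1, len(y))
        slope = np.polyfit(x, y, 1)[0]
        return slope > min_slope

    charts = {"saas_revenue": [1, 2, 4, 7, 11], "print_ads": [9, 7, 6, 4, 2]}
    print([name for name, ys in charts.items() if trending_up(ys)])
    # ['saas_revenue']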

7. Automated Legal Documents

Every founder goes through an identical process: downloading Microsoft Word, changing variables in a legal template, opening it in Preview, attaching their signature, and sending it over to the counterparty. Automate this process. Offer templates for common documents (company formation, SAFEs, etc.). In time, you might become the store-of-record for company documents.
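
The document-assembly core is old technology – essentially mail merge. A minimal sketch using Python’s string.Template; the template text and field names are invented for illustration, and a real product would need lawyer-reviewed templates:

    from string import Template

    SAFE_TEMPLATE = Template(
        "THIS SAFE is entered into by $company, a $state corporation, and "
        "$investor, who pays $$${amount} for the right to future equity."
    )

    doc = SAFE_TEMPLATE.substitute(
        company="ExampleCo, Inc.",
        state="Delaware",
        investor="Jane Angel",
        amount="25,000",
    )
    print(doc)  # ... who pays $25,000 for the right to future equity.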

Pioneer Sales Booster

Success selling in the enterprise is equal parts software and sales. Getting 3 landmark deals can catalyze a feedback loop that turns a small project into the next Oracle. In addition to funding, Pioneer will try to help you get these for any of the ideas above. We can’t guarantee anything (you’ll have to make a good product, of course!), but we’ve got customers lined up for the right thing. Mention this post in your application when you apply.



from Hacker News https://ift.tt/2Y4hwqz

Friday, November 29, 2019

Disproved Discoveries That Won Nobel Prizes

Over a million papers are published in scientific journals each year, and as Stanford University professor John Ioannidis wrote in a now-legendary paper published in PLoS Medicine in 2005, most of their findings are false. Whether due to researcher error, insufficient data, poor methods, or the numerous biases present in people and pervasive in the ways research is conducted, a lot of scientific claims end up being incorrect.

So it should come as little surprise that Nobel Prize-winning discoveries are not immune to being wrong. Though marbleized in prestige, a number of them have been either disproved or lionized under mistaken pretenses.

PERHAPS THE MOST clear-cut example hearkens all the way back to 1926, when Johannes Fibiger won the Nobel Prize in Medicine "for his discovery of the Spiroptera carcinoma." In layman's terms, he found a tiny parasitic worm that causes cancer. Subsequent research conducted in the decades following his receipt of the award would show that though the worm definitely existed, its cancer-causing abilities were entirely nonexistent. So where did Fibiger go wrong?

Though widely respected and considered to be a careful and cautious researcher, Fibiger fell victim to improper controls and inadequate technology. To elucidate his hypothesized connection between parasites and gastric cancer in rodents, he fed mice and rats cockroaches infested with parasitic worms and observed what he thought were tumors growing inside the rodents' stomachs. Later studies would show that they were not tumors but lesions likely caused by vitamin A deficiency, which resulted from a poor diet.

It's hard to fault Fibiger or the Nobel Committee too much for this blunder. At the time, cancer was much, much more of a mystery than it is today, and Fibiger worked tirelessly to solve it, exploring all sorts of hypotheses, not just those involving parasites.

Analyzing Fibiger's story in a 1992 issue of the Annals of Internal Medicine, Tamar Lasky and Paul D. Stolley were kind in their remembrance:

"We now know that gastric cancer is not caused by Spiroptera carcinoma, and the purported "discovery" of such a relation hardly seems worth a historical footnote, never mind a Nobel Prize. At the same time, it is quite touching to read the speech given by the Nobel Committee on presenting Fibiger with his award. They considered his work to be a beacon of light in the effort of science to seek the truth. Perhaps his work did serve to inspire other scientists to conduct more research and to persist along the path of human knowledge...

...Fibiger's story is worth recounting not only because it teaches us about pitfalls in scientific research and reasoning, but also because it may provide perverse solace for those of us who will never receive the Nobel Prize (but, of course, deserve it)."

THOUGH MOST STUDENTS of science wouldn't recall Johannes Fibiger, they would be well acquainted with Enrico Fermi. Credited with the creation of the first nuclear reactor in Chicago, Fermi etched his name into the history books of quantum theory, nuclear and particle physics, and statistical mechanics. He also won a Nobel Prize sort of by mistake.

Fermi won the 1938 Nobel Prize in Physics "for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons." The catch, of course, was that he had not demonstrated the existence of new elements. When Fermi bombarded uranium atoms with slow-moving neutrons and observed a process called beta decay, he thought he had, and even named the new elements he supposedly saw ausonium and hesperium. But what he had actually, unknowingly accomplished was nuclear fission: the uranium atoms had split into lighter elements!

Considering that this was a big discovery, one that would eventually earn German scientist Otto Hahn the 1944 Nobel Prize in Chemistry, one can't really begrudge Fermi his Nobel Prize. Moreover, once he realized his mistake, he admitted it. Radioactive elements beyond uranium were actually created in 1940, beginning with element 93, neptunium.

THE FIRST TIME a Nobel Prize was awarded jointly was in 1906. The prize in Physiology or Medicine went to both Camillo Golgi and Santiago Ramón y Cajal "in recognition of their work on the structure of the nervous system". The decision was controversial, as the two men were bitter adversaries, each endorsing a competing view of the structure of the nervous system. Golgi thought that the nervous system was a single continuous network, while Cajal proposed that it was composed of individually acting, linked nerve cells, or neurons as he called them. Though many members of the Nobel Committee considered Cajal's work to be superior to Golgi's, and likely the correct interpretation of how the nervous system functions, they ultimately elected the diplomatic path and awarded the prize to both men.

Golgi, however, was decidedly undiplomatic during his Nobel lecture. In it, he directly attacked Cajal's theories, taking sophisticated swipes at his colleague throughout the presentation. Time would prove Cajal correct, however, and Golgi wrong.

THERE IS NOTHING wrong with being wrong, of course. The history of science is filled with incorrect ideas, far more than correct ones, in fact. The pursuit of knowledge follows a darkened path riddled with dead ends. That's why it's okay that science's top prize has occasionally been awarded for false claims.

(Images: AP, Wikimedia Commons)



from Hacker News https://ift.tt/34wOEtm

Nano-nonsense: 25 years of charlatanry

I used to work next to the center for nanotechnology. The first indication I had that there was something wrong with the discipline of “nanotechnology” was that the people who worked there were the same people who used to do chemistry and materials science. It appeared to be a more fashionable label for these subjects. Really, “materials science” was a sort of fancy label for the chemistry of things we use to build other things. OK, new name for “chemist.” Hopefully it ups the funding. Good for you guys.

Later on, I actually read Drexler’s Ph.D. thesis, which invented the subject. I can sum it up thusly:

  • Behold, the Schroedinger equation!

  • With this mighty equation we may go forth and invent an entirely new form of chemistry, with which we may create new and superior forms of life which are mechanical in their form, rather than squishy inefficient biological looking things. We shall use the mighty powers of the computer to do these things! It shall bring forth many great marvels!

    “And there was heavenly music”

That’s it. That’s what the whole book is. Oh yes, there are a few collections of intimidating tables and graphs purporting to indicate that such a thing might be possible, and Drexler does sketch out some impressive-looking mechanical designs of what he supposes a nanobot might look like, but without more than a passing justification. He seems to lack the imagination, and of course the physics, to figure out what a real nanosized doodad might look like. Much of his thesis seems to be hand-wavy arguments that his “looking rather a lot like a meter-scale object” designs would work on a nano- or small microscale. I know for a fact that they will not. You can wave your hands around all you want; when you stick an atomic force microscope down on nanosized thingees, you know what forces they produce. They don’t act like macro-objects, at all. Drexler would also occasionally notice that his perfect little robots would probably, you know, oxidize, like most reactive things do, and consign them to ultra-high-vacuum chambers in a fit of embarrassment. Then sometimes he would forget about the chemical properties of oxygen and enthusiastically stick them everywhere. None of the chemistry you’d need to figure out to even begin to do this was done in his book. Little real thought was given to thermodynamics, or to where the energy was coming from for all these cool Maxwell’s-Demon-like “perpetual motion” reactions. It was never noticed that computational chemistry (aka figuring out molecular properties from the Schroedinger equation) is basically useless. Experimental results were rarely mentioned, or were explained away with the glorious equation of Schroedinger, with which all things seemed possible. Self-assembly was deemed routine, despite the fact that nobody knows how to engineer such a thing even with macroscopic objects.

There is modern and even ancient nano-sized tech; lithographic electronic chip features are down to this size now, and of course materials like asbestos were always nano-sized. As for nano-objects for manipulating things on nanoscales: such things don’t exist. Imagining self-replicating nanobots or nano-machines is ridiculous. We don’t even have micromachines. Mechanical objects on microscales do not exist. On milliscales, everything that I have seen is lithographically etched, or made on a watchmaker’s lathe. Is it cool? Yep; it’s kind of cool. I have already worked for a “millitech” company which was going to use tiny accelerometers to do sensing stuff in your cell phone. Will it change the universe? Nope. Millitech miniaturization has been available for probably 300 years now (assuming the Greeks didn’t have it); lithography just allows us to mass-produce such things out of different materials.

This is an honest summary of Drexler’s Ph.D. thesis/book, and with that, a modest act of imagination, accompanied by a tremendous act of chutzpah and a considerable talent for self-promotion, he created what must be the most successful example of “vaporware” of the late 20th and early 21st century. The “molecular foundry” or “center for nanotechnology” or whatever nonsense name they’re calling the new chemistry building at LBL is but the tip of the iceberg. There are government organizations designed to keep up America’s leadership in this imaginary field. There are zillionaire worrywarts who are afraid this mighty product of Drexler’s imagination will some day turn us all into grey goo. There are news aggregators for this nonexistent technology. There are even charlatans with foundations promoting, get this, “responsible nanotech.” All this, for a technology which can’t remotely be thought of as existing in even pre-pre-prototype form. It is as if someone read Isaac Asimov’s books on robots of the future (written in the 1950s) and thought to found government labs and foundations and centers to responsibly deal with the implications of artificial intelligence from “positronic brains.”

You’d think such an endeavor would have gone on for, I don’t know, a few years before everyone realized Drexler was a science fiction author who doesn’t do plot or characterization. Nope; this insanity has gone on for 25 years now. Generations of academics have spent their entire careers on this subject, yet not a single goal or fundamental technology which would make this fantasy a remote possibility has been developed. Must we work on it for another 25 years before we realize that we can’t even do the “take the Schroedinger equation, figure out how simple molecules stick together” prerequisites which are a fundamental requirement for so-called molecular engineering? How many more decades or centuries of research before we can even create a macroscopic object which is capable of the feat of “self replication,” let alone a self-replicator which works at length scales of which we have only a rudimentary understanding? How many more cases of nincompoops selling “nanotech sunscreen” or “nanotech water filters” using the “nanotechnology” of activated carbon must I endure? How many more CIA reports on the dangers of imminent nanoterrorism must my tax dollars pay for, when such technologies are, at best, centuries away? How many more vast coffers of government largesse shall we shower on these clowns before we realize they’re selling snake oil?

Drexler’s answer to all this is that, since nobody can disprove the things necessary to develop nanotech, they will be developed. Well, that depends on what you mean by the words “can” and “disprove.” It also depends on what your time scale is. I’m willing to bet that, at some nebulous point in the future, long after Drexler and I are dead, someone may eventually develop a technology sort of vaguely like what he imagines. At least the parts that don’t totally violate the laws of thermodynamics and materials physics (probably, most of the details do). As an argument, “you can’t disprove my crazy idea” doesn’t hold much water with me. Doubtless there are many denizens of the booby hatch who claim to be Jesus, and I can’t really disprove any of them, but I don’t really see why I should be required to.

I have nothing against there being a few people who want to achieve some of the scientific milestones needed to accomplish “nanotech.” I have a great deal against charlatans who claim that we should actually invest significant resources into this crazy idea. If you’re an investor, and somebody’s prospectus talks about “nano” anything, assuming they’re not selling you a semiconductor fab, you can bet that they are selling you snake oil. There is no nanotech. Stop talking about it. Start laughing at it.

As Nobel prize winning chemist Richard Smalley put it to Drexler:
“No, you don’t get it. You are still in a pretend world where atoms go where you want because your computer program directs them to go there.”

Resources:
http://lachlan.bluehaze.com.au/nanoshite/

Edit to add: a definition of vaporware technology: any “technology” which claims miraculous benefits on a timescale longer than it takes to achieve tenure and retire is vaporware, and should not be taken seriously.



from Hacker News https://ift.tt/1RSHunr