Wednesday, August 31, 2022

The Rise of SQL

SQL dominated the jobs ranking in IEEE Spectrum’s interactive rankings of the top programming languages this year. Normally, the top position is occupied by Python or other mainstays, such as C, C++, Java, and JavaScript, but the sheer number of times employers said they wanted developers with SQL skills, albeit in addition to a more general-purpose language, boosted it to No. 1.

So what’s behind SQL’s surge to the top? The ever-increasing use of databases, for one. SQL has become the primary query language for accessing and managing data stored in such databases—specifically relational databases, which represent data in table form with rows and columns. Databases serve as the foundation of many enterprise applications and are increasingly found in other places as well, for example taking the place of traditional file systems in smartphones.

“This ubiquity means that every software developer will have to interact with databases no matter the field, and SQL is the de facto standard for interacting with databases,” says Andy Pavlo, a professor specializing in database management at the Carnegie Mellon University (CMU) School of Computer Science and a member of the CMU database group.


That sentiment is echoed by Torsten Suel, a professor and director of undergraduate programs in computer science and engineering at the NYU Tandon School of Engineering. “A lot of our technological infrastructure uses relational databases to store and query their data, and while not the only way, SQL is still considered the main way—or most powerful way—to interface with relational databases,” he says.

Beyond the utility of databases in themselves, big data and the growth of streaming architecture are contributing to SQL’s rise. “Markets such as retail, e-commerce, and energy are seeing growing interest in applications where data has to be processed and analyzed in real time,” says Manish Devgan, chief product officer at real-time data platform Hazelcast. “The use of SQL within streaming systems opens up a new chapter in the story of SQL within the data domain.”

Even the fields of data science and machine learning are propelling SQL to the top. “We have this huge boom in data science and machine learning, and students focusing on these fields during their studies often also take a database course, which usually involves learning SQL,” says Suel. “So it could be a side effect of the data-science-and-machine-learning boom.”

Consequently, even if you mostly program in, say, Python or C++, it’s increasingly important that your application can talk to an SQL database. “Most of the software we develop depends on relational databases, and we rely on SQL,” says Andrey Maximov, chief technology officer at the Web development agency Five Jars. “The development process often goes through setting requirements and specifications, which very much comply with the ideas of relational databases.”
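
As a minimal illustration, here is a hedged sketch using Python’s built-in sqlite3 module (the table and data are invented for the example): the general-purpose language does the orchestration, and SQL does the querying.

import sqlite3

# Invented example: a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# Python drives the program; SQL fetches the data.
for row in conn.execute("SELECT id, name FROM users"):
    print(row)  # (1, 'Ada')

conn.close()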


This means learning SQL will benefit your career as a programmer—and it’s a fairly intuitive language to pick up. “SQL is a mature technology,” says Maximov, who has been a developer for more than a decade and has extensive experience in SQL programming. “It’s taught in colleges and universities, and it’s really easy to learn.”

SQL has been around since the 1970s, with computer scientists from IBM developing Sequel, the first version of the language. It was standardized more than a decade later, and new versions of the SQL standard continue to be published. “The SQL standards body has done an excellent job adapting to emerging technology trends and expanding the language over the decades,” Pavlo says. “And the existing software and tooling ecosystem that relies on SQL is vast.”

Having been around for nearly 50 years, SQL has seen new technologies arise to challenge its enduring power. “Reports of the impending death of SQL used to be quite a regular occurrence over the years, especially with the rise of the NoSQL movement,” says Devgan. NoSQL refers to a type of database developed in the late 2000s that stores data in a format other than tables, such as documents or graphs with nodes and edges. Even tech giants like Google experimented with NoSQL. The company initially designed its database service, Cloud Spanner, as a NoSQL database, but soon realized it needed a robust and expressive query language, so it turned back to SQL.

“Every decade, another hyped-up database technology comes along that claims SQL is terrible, slow, or impractical,” Pavlo says. “Over time, the conventional wisdom comes back to realizing that [SQL] is a good idea, and everyone returns to it.”




from Hacker News https://ift.tt/WYRh9wF

Falsehoods programmers believe about email

In the spirit of falsehoods programmers believe about names and time, here are some falsehoods about email that are all too common.

  • Everyone has an email address
  • Everyone has exactly one email address
  • An email address never changes
  • Whenever an address does change, it’s under that user’s control
  • Whenever an address does change, it’s because the user specifically requested it to happen
  • Whenever an address does change, the old address will continue to work/exist
  • Any one email address refers to only one single person
  • Unique strings of characters all map to different addresses
  • All email is hosted by a centralized system
  • When email is sent to a user at a domain, it is delivered to a server whose address matches that domain
  • When email is sent by a user at a domain, it is sent by a server whose address matches that domain
  • All email comes from a .com, .net, .edu, or .org address
  • You can filter out email based on the TLD or ccTLD from which it originates
  • Having a particular ccTLD means that you prefer to receive communications in that country’s native language (for example, .fr → French)
  • Email addresses only contain letters
  • Email addresses only contain letters and numbers
  • Email addresses only contain letters, numbers, and a handful of common punctuation marks (e.g. ., _, and -)
  • Email addresses will have at least one letter in them
  • An email address like ^_^@example.com or +&#@example.com is invalid (see the sketch after this list)
  • Email is a reliable transport
  • Email is an instantaneous transport
  • Emails will be sent within a few minutes of their scheduling
  • Emails will be sent within a few hours of their scheduling
  • Emails will be sent within a few days of their scheduling
  • Emails will be received soon after they’re sent
  • When an email is sent it immediately goes to its destination server
  • If an email bounces, the address is invalid
  • If an email doesn’t bounce, the address is valid
  • An address which is valid will always be valid, and an address which is invalid will always be invalid
  • All email is sent via SMTP over TCP/IP port 25
  • All email is sent via SMTP over TCP/IP
  • All email is sent via SMTP over IP
  • All email is sent via SMTP
  • All email servers support the various vendor extensions by the current “everyone uses this vendor” vendor (Microsoft, Google, etc.)
  • An email can only have one From: address
  • The Date: header on a message is legitimate
  • The Received: headers will always be no earlier than the Date: header
  • All email clients support HTML attachments
  • All email clients support HTML message bodies
  • All email clients support MIME encoding
  • Email is secure
  • Encrypted email is secure
  • All email is accessed via webmail
  • All email is accessed via webmail or IMAP
  • All email is accessed via webmail, IMAP, or POP3
  • Nobody uses email anymore
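
To make the address-syntax falsehoods concrete, here is a minimal sketch in Python, using a deliberately naive validation pattern of the kind often found in the wild:

import re

# A common-but-wrong validator that embodies several falsehoods above:
# letters, numbers, and a little punctuation only, plus a fixed TLD whitelist.
NAIVE = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.(com|net|edu|org)$")

for addr in ("^_^@example.com", "o'brien@example.ie", "user+tag@example.co.uk"):
    print(addr, "->", bool(NAIVE.match(addr)))

# All three are legal addresses, and all three print False.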

See also: email is bad

Update: I’m getting some good additions from folks’ responses, and I’ll be adding them as I see them.

From elainemorisi:

  • Anyone with a .edu address is a student
  • Anyone with a .edu address is a student or faculty
  • Students and faculty will use their .edu address to sign up for all of their Internet accounts

Additional suggestions from a reddit thread:

From Jens Alfke:

  • Email addresses are case-sensitive / can be compared by == or strcmp
  • A reply to an email sent to address X will come from X (this is the mistake made by things that say “Reply with REMOVE to unsubscribe”)
  • If you receive email at address X, you are capable of sending email whose From header is X.


from Hacker News https://ift.tt/K24gwth

YKK zippers: Why so many designers use them (2012)

The zipper is one of those inventions—along with the bicycle—that seems as though it should have occurred much earlier in history. How complicated could it be to assemble two wheels, two pedals, and a chain? Or to align two jagged strips of metal teeth and shuffle them together? There is no complicated chemistry here, no harnessing of invisible wavelengths. And yet the modern bicycle didn’t appear until the late 1800s, and the zipper didn’t really become the zipper until 1917 (when it was patented by a Swedish immigrant in Hoboken). The precision necessary to craft a working bicycle chain or a smoothly meshing zipper was simply beyond us for all those prior millennia.

More confounding still: Now that the zipper has been around for nearly a century, you’d think that something so simple might have been perfected—becoming a 100 percent reliable commodity. But that hasn’t happened. There are still tons of faulty zippers out there. Teeth that break. Pulls that pop. Herky-jerky sliding and irreparable lockups.

One zipper gone wrong can render an entire garment unwearable. Thus consistent quality is a must for reputable fashion brands. For decades now, apparel makers who can’t afford to gamble on cut-rate fasteners have overwhelmingly turned to a single manufacturer. YKK, the Japanese zipper behemoth, makes roughly half of all the zippers on earth. More than 7 billion zippers each year. Those three capital letters are ubiquitous—no doubt you’ve seen them while zipping up your windbreaker or unzipping someone else’s jeans. How did YKK come to dominate this quirky corner of industry?

Founded by Tadao Yoshida in Tokyo in 1934, YKK stands for Yoshida Kogyo Kabushikikaisha (which roughly translates as Yoshida Company Limited). The young Yoshida was a tinkerer who designed his own customized zipper machines when he wasn’t satisfied with existing production methods. One by one, Yoshida brought basically every stage of the zipper making process in house: A 1998 Los Angeles Times story reported that YKK “smelts its own brass, concocts its own polyester, spins and twists its own thread, weaves and color-dyes cloth for its zipper tapes, forges and molds its scooped zipper teeth …” and on and on. YKK even makes the boxes it ships its zippers in. And of course it still manufactures its own zipper-manufacturing machines—which it carefully hides from the eyes of competitors. With every tiny detail handled under YKK’s roof, outside variables get eliminated and the company can assure consistent quality and speed of production. (When the Japanese earthquake hit last year many supply chains were shredded, but YKK kept rolling along.)

Yoshida also preached a management principle he termed “The Cycle of Goodness.” It holds that “no one prospers unless he renders benefit to others.” In practice, this boiled down to Yoshida striving to produce ever-higher quality with ever-lower costs. It seems intuitive, but it’s far from easy to do. And in the end, the secret to YKK’s success is equally uncomplicated but equally impressive: YKK makes incredibly dependable zippers, ships them on time without fail, offers a wide range of colors, materials, and styles, and never gets badly undercut on price. The feeling in the apparel industry is that you can’t go wrong with YKK.

“There have been quality problems in the past when we’ve used cheaper zippers,” says Trina Turk, who designs her own line of women’s contemporary sportswear. “Now we just stick with YKK. When the customer is buying $200 pants, they better have a good zipper. Because the customer will blame the maker of the whole garment even if the zipper was the part that failed.”

A typical 14-inch “invisible” YKK nylon zipper (the kind that disappears behind fabric when you zip up the back of a dress) costs about 32 cents. For an apparel maker designing a garment that will cost $40-$65 to manufacture, and will retail for three times that much or more, it’s simply not worth it to skimp. “The last thing we want to do is go with a competitor to save eight or nine cents per zipper and then have those zippers pop,” says Steve Clima, Turk’s senior production manager. “The cost difference just isn’t enough given the overall margins.”

There are hundreds of rival zipper manufacturers in China. They might be a tiny bit cheaper, or might be willing to produce custom novelty orders in a rush. But at least one apparel wholesaler told me that some European companies won’t even accept delivery of garments using Chinese zippers, for fear that the zippers might contain lead (a big no-no). More generally, competitors’ zippers are often just not up to snuff. Multiple apparel designers I talked to recalled incidents in which batches of non-YKK zippers failed to meet their standards.

YKK isn’t the kind of brand that markets to consumers. (Or seeks any kind of publicity: They declined to speak to me for this story.) You don’t buy your jeans and jackets by looking for their letters on that pull. Likewise, you almost certainly wouldn’t nix a garment purchase because the zipper isn’t YKK.

But YKK is still a brand of sorts. It still has an image and a reputation. Its target demographic is trim buyers and production managers in the apparel industry. They’re the folks for whom “YKK” has real meaning.

There used to be a saying among corporate technology workers—or, as you might call them, I.T. guys—which held that “you’ll never get fired for using Microsoft.” Sure, you could take a risk on some upstart competitor and maybe save a little dough, or even get slightly better performance. But if anything goes wrong your boss will wonder why you didn’t opt for old reliable.

YKK, for decades now, has established itself as old reliable. “A zipper will never make a garment,” says Turk. “But it can break a garment.”



from Hacker News https://ift.tt/EnwjFGa

An acoustic study of domestic cat meows in 6 contexts and 4 mental states




from Hacker News https://ift.tt/OEnXfy5

What does Google say about “last day of march 2022”

National days in March, 2022 · Tue Mar 1st, 2022 · Wed Mar 2nd, 2022 · Thu Mar 3rd, 2022 · Fri Mar 4th, 2022 · Sat Mar 5th, 2022 · Sun Mar 6th, 2022 · Mon Mar 7th, ...



from Hacker News https://ift.tt/aRo3NOB

Compressing Images with Stable Diffusion

You get the gist

Images are just too big. A 3 MB bitmap compresses down to a 500 KB JPEG, which, don’t get me wrong, 16% of the original size is great, but why 500 KB? That’s still pretty large.

This is 2022, we shouldn’t have to put up with large images. Our websites might load 60 MB of stuff for a pageview, but that stuff shouldn’t be images, it should be Javascript, as Brendan Eich intended.

We shouldn’t have to put up with fat images, but, until now, we had no choice.

Now we do.

The solution

a computer compressing data, by Caspar David Friedrich, matte painting trending on artstation HQ

A week or so ago, Stable Diffusion was released, and the world went crazy, and for good reason. Stable Diffusion, if you haven’t heard, is a new AI that generates realistic images from a text prompt. You basically give it a description of the image you want, and it generates it.

Now, this alone would be revolutionary, but we got double the revolution this time: This thing can also take an image and tell you the prompt you can use to generate it.

Are you thinking what I’m thinking?

That’s right, why compress an image to 500 KB when you can compress it to 50 bytes, where the bytes are the prompt that can be used to generate the exact same image again?

You wouldn’t, of course not.

Instead, what you would do, is ask the image-describing AI to describe the image, take the resulting (very small) prompt, transmit it over the wire, where the recipient would then use it to generate the image again based on the prompt.

I call this technique STAV, or Stable Transcription and Artistic Validation. Yes, the acronym might not contain any of the words “image”, “compression”, “reconstruction”, or “diffusion”, but Philip Katzip isn’t going to be the only one giving his name to compression techniques.
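
In code, the entire scheme fits in a few lines. The following is only a sketch with stand-in names: img_to_prompt and prompt_to_img are hypothetical wrappers around the img2prompt tool and a Stable Diffusion text-to-image call.

def img_to_prompt(image_bytes: bytes) -> str:
    raise NotImplementedError  # stand-in for the img2prompt captioning model

def prompt_to_img(prompt: str) -> bytes:
    raise NotImplementedError  # stand-in for Stable Diffusion text-to-image

def stav_compress(image_bytes: bytes) -> str:
    # The entire "compressed file" is just the prompt describing the image.
    return img_to_prompt(image_bytes)

def stav_decompress(prompt: str) -> bytes:
    # Regenerate "the exact same image" on the receiving end.
    return prompt_to_img(prompt)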

Expected gains

As is widely known, a picture is worth 1000 words. At an average English word length of 4.7, we can expect each image to take up to 4.7 KB, regardless of its original size. The corollary here is that we can use this method to also upscale images without any loss in quality, which I have accepted as a very fortunate side-effect of my technique.

Sure, this may have some loss of quality, but it would generally depend on the number of iterations you ran when generating the image.

Based on the numbers above, here are some rough estimates on the gains we can expect to see:

Algorithm   Max size   Size compared to STAV
JPEG        ∞          ∞
AVIF        ∞          ∞
STAV        4.5 KB     1

As we can see, due to the fact that STAV has fixed size, it is easily potentially infinitely smaller than both AVIF and JPEG, which is good.

Real-world benchmarks

Of course, no new compression method is complete without real-world benchmark data to back up its claims. This is why I’ve compiled an extensive analysis of sample images from Unsplash, and am presenting them here.

In the images that follow, the leftmost is the uncompressed (raw) image, the middle image is compressed with JPEG, and the rightmost image is compressed with STAV. I haven’t bothered to include the raw and JPEG sizes, as they’re thousands of times larger than the sizes of the STAV images.

For your edification, I have also included the entire STAV-compressed data below each image, in the form of the prompt that was recognized by img2prompt. Let’s analyze them one by one.

Objects in shot

a person sitting in a chair holding a book and a pen, a stock photo by Chinwe Chukwuogo-Roy, trending on unsplash, art & language, stock photo, stockphoto, depth of field

As we can see, the compressor deals with objects in the shot excellently. There is no visible degradation at all, and the final image is sharp and vibrant.

One interesting note here: img2prompt has correctly intuited that the image is from Unsplash, and has mentioned that in its generated prompt. This will doubtless improve compression even further.

People

a woman with tattoos and a hat on, a tattoo by James Baynes, featured on unsplash, neo-romanticism, anamorphic lens flare, tattoo, backlight

Another excellent performance here. The lighting is impeccable, the hairs are sharp and well-defined, and the hat looks great on the lady.

Interior shots

a living room with a green chair next to a window, a stock photo by Aaron Bohrod, trending on unsplash, light and space, studio light, studio lighting, volumetric lighting

Performance here isn’t as stellar as in the other shots, as the colors are imperceptibly more muted than the original, but overall there is almost no difference. The original and the STAV-compressed images are nearly indistinguishable. JPEG is disappointing, as there are visible artifacts.

Food

three bowls of food on a white table, a stock photo by Kelly Sueda, trending on pinterest, mingei, shallow depth of field, pixel perfect, intricate patterns

Somehow, the food in the STAV-compressed image looks even more delicious than the original. Otherwise, there is no perceptible quality difference.

Food and people

a couple of women standing in a kitchen preparing food, a stock photo by Meredith Garniss, pinterest contest winner, private press, stock photo, stockphoto, film grain

This particular image posed a challenge for the compressor, with its sharp detail and subtle blur, but the compressor pulled through. Details are preserved and vibrant, and even the blur is visible. Why we’d want to keep the blur, I don’t know, but a compressor must be faithful above all.

Nature

a pink flower with green leaves on a white background, a macro photograph by Ikuo Hirayama, featured on unsplash, minimalism, shallow depth of field, depth of field, soft light

There isn’t much to say here. STAV blows JPEG out of the water, the flower looks almost alive, even though the original image contains no flower. If anything, this enhancement showcases a strength of this technique.

a person standing on top of a mountain, a tilt shift photo by Paul Bodmer, trending on unsplash, naturalism, sense of awe, shallow depth of field, photo taken with ektachrome

Exterior architecture

an aerial view of a pond in the middle of a forest, a tilt shift photo by Stanley Twardowicz, trending on unsplash, ecological art, high dynamic range, photo taken with nikon d750, isometric

We can see here that the compressor has preserved every tiny detail of the original image, except the house, which was, admittedly, kind of ugly. It’s heartening to see this method go from strength to strength as it even enhances images.

Conclusions

As you can see, there is basically no loss in quality, even though the images’ sizes are around a ten-thousandth of the originals’. This is an absolutely astonishing result, and will definitely herald a new era of compression. There are even some cases where quality is better than the original, and it is astonishing for a compressor to achieve 100%+ quality.

There are some minor kinks that need to be worked out, such as the fact that each image takes around a day to generate on mobile, but this is more than acceptable in certain domains. Website visitors, for example, are well-accustomed to such loading times, and would barely notice any difference.

Epilogue

In conclusion, I really believe that this method can help lower file sizes and make a significant difference in various niches, e.g. the web, or games that come on multiple floppy disks. I urge you to give it a try and see what kind of results you get.

If you have any feedback, please Tweet or toot at me, or email me directly. I would especially like to hear of any pathological edge-cases where the final image is somehow significantly different from the original, so I can investigate.

Thank you!



from Hacker News https://ift.tt/3zd4wZ6

Pricing page templates for SaaS founders


from Hacker News https://ift.tt/bK1Ul8P

FILM: Frame Interpolation for Large Motion

We present a frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion. Recent methods use multiple networks to estimate optical flow or depth and a separate network dedicated to frame synthesis. This is often complex and requires scarce optical flow or depth ground-truth. In this work, we present a single unified network, distinguished by a multi-scale feature extractor that shares weights at all scales, and is trainable from frames alone. To synthesize crisp and pleasing frames, we propose to optimize our network with the Gram matrix loss that measures the correlation difference between feature maps. Our approach outperforms state-of-the-art methods on the Xiph large motion benchmark. We also achieve higher scores on Vimeo-90K, Middlebury and UCF101, when comparing to methods that use perceptual losses. We study the effect of weight sharing and of training with datasets of increasing motion range. Finally, we demonstrate our model's effectiveness in synthesizing high quality and temporally coherent videos on a challenging near-duplicate photos dataset.
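
For the curious, the Gram matrix loss mentioned above can be sketched as follows. This is a hedged PyTorch sketch; the paper’s exact feature extractor and normalization may differ.

import torch

def gram(feats: torch.Tensor) -> torch.Tensor:
    # feats: (batch, channels, height, width) feature maps
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (batch, C, C) feature correlations

def gram_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Match feature correlations rather than raw values; penalizing the
    # correlation difference tends to encourage crisp, pleasing textures.
    return torch.mean((gram(pred) - gram(target)) ** 2)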



from Hacker News https://ift.tt/gKa432J

Company Tells Employees: Work ‘Voluntary’ Overtime or Go to Jail

On the Clock is Motherboard's reporting on the organized labor movement, gig work, automation, and the future of work.

Last week, AlumaSafway, a Canadian scaffolding company, sent workers a memo demanding they accept "voluntary" overtime shifts or face termination, a hiring ban, legal action, and possible fines or jail time.

According to the Alberta Labour Relations Board (ALRB), on August 22 an anonymous letter was shared among scaffolders at a Suncor Inc. site in Alberta, Canada, that asked workers to "collectively refuse to work overtime shifts for the purpose of compelling incentives from the Employer, including improvements in compensation or working conditions.” Suncor is one of Canada's largest fossil fuel companies.

According to the board, this resulted in no workers taking on overtime shifts. Ultimately, it decided that the action was illegal under the province's labour laws, which rule out strikes that occur while a collective bargaining agreement is in force and before a vote has been taken. 

"The Board finds the Employees' concerted refusal to accept overtime shifts for the purpose of compelling the Employer to agree to terms and conditions of employment, which constitutes a refusal to work, to be an illegal strike," it said in its decision. The board noted that it would file its decision with the Alberta courts, which would make it enforceable as a court order, and violating it would "result in civil or criminal penalties including contempt of court."

The union's collective bargaining agreement states that overtime is strictly voluntary, except for when there are not enough volunteers to complete a job. After the letter encouraging employees to not work overtime was circulated, AlumaSafway scaffold workers began refusing to work overtime shifts at the Suncor site. The company filed a complaint with the labour board on August 24. 

The situation went viral on the r/antiwork subreddit; a poster there, who claimed to be a worker's child, posted a letter sent by AlumaSafway to workers after the ALRB's ruling. In that letter, which is signed by two AlumaSafway managers, the company warned that it's "been patient and given your union an opportunity to convince you that this coordinated refusal constitutes an illegal strike, and that you may face consequences as a result. Obviously, this has not worked." Motherboard has not independently verified the letter; however, its contents refer to the ALRB ruling, which is hosted on the ALRB's official website. The two people who signed the letter do indeed work for AlumaSafway, according to their social media profiles and the company's website. 

AlumaSafway went on to warn that violation of this order could include consequences such as "discipline or termination of employment" along with a hiring ban "for those who continue to engage in illegal activity." AlumaSafway also threatened legal action "for all damages caused by the illegal strike" which could make workers "personally liable for added production costs, penalties owing to Owner or, even the loss of the contract with our client." 

In the worst case, contempt of court proceedings could open up "the possibility of fines and even potentially jail," the company's letter stated.

The letter closed with a plea from the company: “We do not want to impose any of the consequences set out above. This memo is intended to cause you to change your behaviour by impressing upon you the seriousness of this matter, including the consequences that you may suffer if the strike continues."

Sadly, there may not be much room for scaffold workers to continue their refusal to accept overtime shifts in the sweltering heat. The collective bargaining agreement that represents AlumaSafway scaffolders, negotiated by the Labourers' International Union of North America, Local 506, requires workers give up the right to strike so long as the agreement is in effect in accordance with provincial law.

The ALRB, AlumaSafway, and Local 506 all did not respond to Motherboard’s request for comment.



from Hacker News https://ift.tt/uMGfTJg

Show HN: Investorsexchange.jl – parse trade-level stock market data in Julia

InvestorsExchange.jl

Downloads tick-by-tick historical trade data from the Investors Exchange (IEX). Specifically, this tool downloads the archived data feed files which IEX uploads daily on a T+1 basis, and supports parsing these files into tabular format.

Inspired by this Python implementation.

Features

General Features

TOPS Feed

  • TradeReportMessage message type only
  • Tested only on version 1.6

DEEP Feed

  • TradeReportMessage message type only
  • Untested

Usage

The package is still at an early stage, but I've included the trade download script that this package was built to power, as a usage example. You can run the script like this:

julia --threads <number of CPU cores on your machine> trade_download_script.jl /path/to/save_dir

If you don't care about taking advantage of multi-threading or specifying a custom save directory (the default is ./trade_data), you can just run julia trade_download_script.jl.

Download script details

In the download script, I avoid downloading more than a handful of raw PCAP data files to disk by running downloads asynchronously with downloaded filenames piped into a limited-sized Channel. As downloads complete, the file paths are consumed by multi-threaded parsing code that reads the TradeReportMessage messages, organizes them into a Julia DataTable, and writes them to disk in parquet format. To take advantage of this parallelization and speed up the parsing of literally every TOPS feed message that IEX has issued since mid 2017, it is recommended you include the --threads flag.
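
The same bounded-buffer pattern, sketched here in Python for illustration (the package itself uses a Julia Channel; fetch and parse_and_write are hypothetical stand-ins):

import queue
import threading

paths = queue.Queue(maxsize=4)  # caps how many raw PCAP files sit on disk
DONE = object()                 # sentinel marking the end of the stream

def fetch(url):
    """Stand-in for the asynchronous file download."""

def parse_and_write(path):
    """Stand-in for parsing trade messages and writing parquet."""

def downloader(urls):
    for url in urls:
        paths.put(fetch(url))   # blocks once 4 files are already waiting
    paths.put(DONE)

def parser():
    while True:
        path = paths.get()
        if path is DONE:
            paths.put(DONE)     # re-post so sibling workers also stop
            break
        parse_and_write(path)

urls = []  # hypothetical list of IEX archive URLs
threading.Thread(target=downloader, args=(urls,)).start()
for _ in range(4):
    threading.Thread(target=parser).start()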

A note on TOPS vs. DEEP

As of v0.1.1, this package only parses the trade report messages in any feed it reads. If you want to read from the DEEP feed or the TOPS v1.5 feed, you'll need to overwrite the default value of the protocol_magic_bytes argument in the read_trade_report_messages function.

Example

import InvestorsExchange as IEX

IEX.read_trade_report_messages("/tmp/20210420_IEXTP1_DEEP1.0.pcap.gz"; protocol_magic_bytes=IEX.DEEP_PROTOCOL_ID_1_0)

However, since TOPS and DEEP both contain the trade report messages, there is little reason to use DEEP (which tends to be bigger) to parse the trade report messages. You should expect faster download and parse speeds with TOPS, and thus it's recommended to stick with TOPS.



from Hacker News https://ift.tt/Ma5OGbX

Dr. Erich Jarvis: The Neuroscience of Speech, Language and Music [video]

Listen: YouTube | Apple Podcasts | Spotify

My guest this episode is Dr. Erich Jarvis, PhD—Professor and the Head of the Laboratory of Neurogenetics of Language at Rockefeller University and Investigator with the Howard Hughes Medical Institute (HHMI). Dr. Jarvis’ research spans the molecular and genetic mechanisms of vocal communication, comparative genomics of speech and language across species and the relationship between speech, language and movement. We discuss the unique ability of humans (and certain animal species) to learn and communicate using complex language, including verbal speech production and the ability to interpret both written and spoken language. We also discuss the connections between language, singing and dance and why song may have evolved before language. Dr. Jarvis also explains some of the underlying biological and genetic components of stutter/speech disorders, non-verbal communication, why it’s easiest to learn a language as a child and how individuals can learn multiple languages at any age. This episode ought to be of interest to everyone interested in the origins of human speech, language, music and culture and how newer technology, such as social media and texting, change our brains. 

Dr. Erich Jarvis

Other Resources

Timestamps

  • 00:00:00 Dr. Erich Jarvis & Vocal Communication
  • 00:03:43 Momentous Supplements
  • 00:04:36 InsideTracker, ROKA, LMNT
  • 00:08:01 Speech vs. Language, Is There a Difference?
  • 00:10:55 Animal Communication, Hand Gestures & Language
  • 00:15:25 Vocalization & Innate Language, Evolution of Modern Language
  • 00:21:10 Humans & Songbirds, Critical Periods, Genetics, Speech Disorders
  • 00:27:11 Innate Predisposition to Learn Language, Cultural Hybridization
  • 00:31:34 Genes for Speech & Language
  • 00:35:49 Learning New or Multiple Languages, Critical Periods, Phonemes
  • 00:41:39 AG1 (Athletic Greens)
  • 00:42:52 Semantic vs. Effective Communication, Emotion, Singing
  • 00:47:32 Singing, Link Between Dancing & Vocal Learning
  • 00:52:55 Motor Theory of Vocal Learning, Dance
  • 00:55:03 Music & Dance, Emotional Bonding, Genetic Predispositions
  • 01:04:11 Facial Expressions & Language, Innate Expressions
  • 01:09:35 Reading & Writing
  • 01:15:13 Writing by Hand vs. Typing, Thoughts & Writing
  • 01:20:58 Stutter, Neurogenetics, Overcome Stutter, Conversations
  • 01:26:58 Modern Language Evolution: Texting, Social Media & the Future
  • 01:36:26 Movement: The Link to Cognitive Growth
  • 01:40:21 Comparative Genomics, Earth Biogenome Project, Genome Ark, Conservation
  • 01:48:24 Evolution of Skin & Fur Color
  • 01:51:22 Dr. Erich Jarvis, Zero-Cost Support, YouTube Feedback, Spotify & Apple Reviews, Momentous Supplements, AG1 (Athletic Greens), Instagram, Twitter, Neural Network Newsletter, Huberman Lab Clips


from Hacker News https://ift.tt/CNym3Mz

Tuesday, August 30, 2022

Where Is the Oldest Pub in Britain?

Those in south-west England suggest that the oldest pub could be the appropriately named Old Inn in St Breward, Cornwall (with the original building dating from the 11th century). The Home Counties proudly offer Ye Olde Fighting Cocks in St Albans, Hertfordshire, as being in business since AD 793. That is still not as old as East Anglia’s proposal: the Old Ferry Boat Inn in St Ives, Cambridgeshire, where drinks may have been sold as early as c560 AD.

Ye Olde Fighting Cocks in St Albans, Hertfordshire. (Image by Alamy)

And still the claims come. In the Cotswolds, there is confidence that the Porch House in Stow-on-the-Wold, Gloucestershire, goes back to AD 947; the Midlands have Ye Olde Trip to Jerusalem in Nottingham, established in 1189; while many in the north of England hold that it’s the Bingley Arms in Bardsey, West Yorkshire (from the 10th century) or the Old Man & Scythe in Bolton, Lancashire (mentioned in a charter from 1251).

In Scotland, the Sheep Heid Inn in Edinburgh has been around since 1360. The Welsh contender is the Skirrid Mountain Inn in Llanvihangel Crucorney, Monmouthshire, which goes back to 1110. And in Ireland, some are adamant that their oldest is Sean’s Bar in Athlone, Westmeath, which is said to have its origins in AD 900.

To declare the winner, it is not simply a matter of looking at the dates (which would put the sixth-century Old Ferry Boat Inn on top of the podium). There are such a bewildering number of claims – stretching from the 6th to 14th centuries – that take in an equally bewildering number of factors: several businesses say they have the earliest licences to serve alcohol, or that they are included in the Domesday Book, or that Guinness World Records has awarded the title to them.

Can we ever reach a conclusion and work out which is the oldest pub in Britain?

When were the first pubs?

Part of the problem is that the defining features of British pubs have evolved gradually. Until the 13th and 14th centuries, alehouses were just that: domestic houses where ale was brewed and the surplus sold for income. A green branch would be mounted on the building when the ale was ready, which is something of a forerunner of the pub sign. Yet, this scenario – often a cottage industry, typically run by women – was more akin to a microbrewery tap, located within a residential house, rather than a purpose-built pub.

Beginning in the 13th century, monasteries realised that they could capitalise on pilgrimage and travel by opening inns offering food, drinks and a place to sleep. This was advanced during the late-14th and 15th centuries when changes in urban life led to an increase in businesses that offered liquid refreshment: wine for elites (sold in taverns) or ale for non-elites (sold in a new variants of the alehouse).

Late-medieval inns, taverns and alehouses were the forerunners of our modern pubs. (Image by Alamy)

These late-medieval inns, taverns and alehouses were the forerunners of our modern pubs. But we are unlikely to be able to identify a standing structure from before the 13th century since pubs, as we understand them, did not exist before then.

Other factors are in play too. One significant point is that non-religious, roofed buildings pre-dating the 11th century do not exist in the British Isles. Another is that claims made by some pubs that they were referred to in the Domesday Book come unstuck as not a single pub is actually mentioned in the 1086 survey.

Contentions of early dates for the issue of pub licences can also be questioned, given that such bureaucratic controls were not imposed in England until 1552. Finally, there are a small number of pubs boasting of their Guinness World Record as the oldest pub (including St Albans’ Ye Olde Fighting Cocks). Unfortunately, Guinness World Records no longer monitors this category.

Monasteries realised that they could capitalise on pilgrimage and travel by opening inns offering food, drinks and a place to sleep. (Image by Getty Images)

So where does this leave our search? Initially, several pubs can be quickly weeded out as any with claims to date before the 11th century will not stand up to scrutiny. That still leaves a significant potential for late-medieval pubs, but there is a need to establish firm archaeological and archival evidence in order to identify the oldest.

Most of the claimants are undoubtedly historic listed buildings, many of which have been investigated by specialists in some detail. That research can often be at odds with the claimed date. For example, the Bingley Arms in West Yorkshire may claim that its history as a pub goes back to AD 953, but it is actually a late-18th or early 19th century building.

Nottingham’s Ye Olde Trip to Jerusalem. (Photo by: Loop Images/Universal Images Group via Getty Images)

Nottingham’s Ye Olde Trip to Jerusalem is a timber-framed building dating to the 17th century, and was not open for business until the late-18th century. That’s a long way from its claim of 1189. Meanwhile, the supposedly eighth-century Ye Olde Fighting Cocks was originally a monastic dovecote of c1400, which was re-sited c1600. It did not open as a pub until the 18th or even 19th century.

The varied historic use of buildings further muddies the water. Historic pubs have gone out of business and new uses found for them, whereas other buildings, no matter how old they are, only became pubs later in their lives.

The New Inn in Oxford was a purpose-built courtyard establishment, constructed around 1386 for Jesus College, but ceased trading in the mid-18th century. Then there’s The Abbey in Darley, Derbyshire, which was originally constructed as part of a monastic complex in the 15th century before being converted to tenements in the post-medieval period, and did not start serving up beers until 1979.

Genuine contenders

Yet while any quest to name the oldest pub in the land is greatly hampered by the variables of definition and land use, it may be possible to use a combination of architectural history and archaeological fieldwork to identify some genuine contenders. They should be structures that are still operating, with an established archival provenance as a pub and demonstrably ancient fabric.

The latter can often be addressed through dendrochronology – a dating technique that relies on the scientific study of tree-rings to analyse the construction date of buildings. The Vernacular Architecture Group maintains a register of buildings dated by dendrochronology in the United Kingdom, which provides an important dataset for our search.

Take the The Bell Inn in Nottingham: when sampled by dendrochronologists, it was found to have a roof structure dating to between 1432 and 1442. The building was in use as an inn by 1638, when the will of Robert Sherwin mentioned it in a legacy. This certainly makes the Bell an older building than its more famous neighbour, Ye Olde Trip to Jerusalem, and it has early documentary reference as a public house.

The interior of the George Inn, Norton St Philip, Somerset. (Image by Alamy)

This raises yet more questions and variables. So although the Royal George at Cottingham, Northamptonshire, has been dated to 1262, it was originally built as a domestic house and was not converted into a pub until the 18th century.

The King’s Head in Abingdon, Oxfordshire, may have a felling date of 1291, but there is no record of the building as a pub until 1734 – and it is currently a coffee shop.

Again, while the earliest fabric at the White Hart in Newark-on-Trent, Nottinghamshire, is dated 1312–13, the building itself was originally a townhouse, which was converted into a pub c1430 and closed around 1870.

Despite all the problems, though, it is possible to find a working pub with demonstrable origins as an inn. The George Inn at Norton St Philip, Somerset, is an intriguing example. The roots of the building date to the second half of the 14th century, when it was constructed as an inn after Hinton Priory transferred their charterhouse fair to the village in 1345. A major programme of remodelling, tree-ring-dated to 1430-32, took place around half a century later, including the timber-framed frontage. Drinkers still walk through the medieval doorway to order their pints to this day.

The New Inn at Gloucester, England. (Image by Alamy)

Almost contemporary with the remodelling of the George Inn is the, erroneously named, New Inn at Gloucester. This incredibly well-preserved, galleried, courtyard inn was purpose-built as a commercial hostelry for John Twyning, a monk of Gloucester Abbey, and elements have been dated to 1432. Following the Dissolution of the Monasteries, the building was retained by the Dean and Chapter of Gloucester Cathedral as a tenanted inn until being sold off in 1858.

So, where is Britain’s oldest pub?

Crucially, both the George Inn and New Inn still function as pubs. With that in mind, they should be considered two of the oldest pubs in the British Isles that can offer solid documentary and archaeological evidence for their origins and usage. Without such firm corroborative evidence, the much older claims made by many pubs fall by the wayside.

But perhaps it is time to move on from redundant arguments as to where the oldest pub in the British Isles. It is surely more important to measure pubs on the quality of service and atmosphere, while continuing to conserve and appreciate such historic buildings as fine community assets.

Dr James Wright is a buildings archaeologist and architectural historian, with over two decades of experience in researching medieval architecture. He would like to thank buildings investigator Linda J Hall for commenting on a draft of this article



from Hacker News https://ift.tt/dnrPf0p

Host your own OpenStreetMap Map Tiles

Host Your Own OpenStreetMap Map Tiles

Download The OSM Dataset

First, download JOSM. Then click the downward green button.

Then click the "bounding box" tab. Write it down (left,bottom,right,up order).

Then click download

Then, file -> save as -> OSM server files (.osm)

Convert It To .mbtiles

Download MapTiler, open that .osm file, and export to MBTiles; take note of the "layers" value (you might have to write it down), then continue and render.

Download Tileserver

Install nodejs, then

npm install -g tileserver-gl-light

Then, clone this github repository, open command line from that directory, execute

tileserver-gl-light kpatas.mbtiles -p 8083 -c a.json

Then open this link : http://localhost:8083/styles/rtnf-blue/#16.07/-6.28953/107.002293/-3.2/42


Bounding Box

We can copy JOSM's bounding box data into a.json's bounding box, but the order should be (left,bottom,right,up). Do not use JOSM's "copy bounds" feature.

Line Width

Adjusting line width is tricky. Open style.json, find "stops".
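
The relevant fragment looks something like this (values taken from the breakdown below):

"line-width": {
  "stops": [[5.8, 0], [6, 1], [20, 1]]
}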

It means :

  • During zoom level 5.8, the line-width will be 0
  • During zoom level 6, the line-width will be 1
  • During zoom level 20, the line-width will be 1

Filter OSM Tag

Open style.json
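
In the standard Mapbox/MapLibre style filter syntax, such a filter looks something like this:

"filter": ["==", "highway", "secondary"]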

This filters for the (highway:secondary) OSM tag.



from Hacker News https://ift.tt/iL9IdJc

SDN in the stratosphere: Loon's aerospace mesh network

ABSTRACT

The Loon project provided 4G LTE connectivity to under-served regions in emergency response and commercial mobile contexts using base stations carried by high-altitude balloons. To backhaul data, Loon orchestrated a moving mesh network of point-to-point radio links that interconnected balloons with each other and to ground infrastructure. This paper presents insights from 3 years of operational experience with Loon's mesh network above 3 continents.

The challenging environment, comparable to many emerging non-terrestrial networks (NTNs), highlighted the design continuum between predictive optimization and reactive recovery. By forecasting the physical environment as a part of network planning, our novel Temporospatial SDN (TS-SDN) successfully moved from reactive to predictive recovery in many cases. We present insights on the following NTN concerns: connecting meshes of moving nodes using long distance, directional point-to-point links; employing a hybrid network control plane to balance performance and reliability; and understanding the behavior of a complex system spanning physical and logical domains in an inaccessible environment. The paper validates TS-SDN as a compelling architecture for orchestrating networks of moving platforms and steerable beams, and provides insights for those building similar networks in the future.



from Hacker News https://ift.tt/Akdxm79

Monday, August 29, 2022

Play MSDOS Oregon Trail 1990 on archive.org


from Hacker News https://ift.tt/o1R9a54

Packed structs in Zig make bit/flag sets trivial

As we’ve been building Mach engine, we’ve been using a neat little pattern that makes flag sets nicer to write in Zig than in other languages.

What is a flag set?

We’ve been rewriting mach/gpu (WebGPU bindings for Zig) from scratch recently, so let’s take a flag set from the WebGPU C API:

typedef uint32_t WGPUFlags;
typedef WGPUFlags WGPUColorWriteMaskFlags;

Effectively, WGPUColorWriteMaskFlags here is a 32-bit unsigned integer where you can set specific bits in it to represent whether or not to write certain colors:

typedef enum WGPUColorWriteMask {
    WGPUColorWriteMask_None = 0x00000000,
    WGPUColorWriteMask_Red = 0x00000001,
    WGPUColorWriteMask_Green = 0x00000002,
    WGPUColorWriteMask_Blue = 0x00000004,
    WGPUColorWriteMask_Alpha = 0x00000008,
    WGPUColorWriteMask_All = 0x0000000F,
    WGPUColorWriteMask_Force32 = 0x7FFFFFFF
} WGPUColorWriteMask;

Then to use it you’d use the various bit operations with those masks, e.g.:

WGPUColorWriteMaskFlags mask = WGPUColorWriteMask_Red | WGPUColorWriteMask_Green;
mask |= WGPUColorWriteMask_Blue; // set blue bit

This all works; people have been doing it for years in C, C++, Java, Rust, and more. In Zig, we can do better.

Zig packed structs

Zig has packed structs: these let us pack memory tightly, where a bool is actually a single bit (in most other languages, this is not true). Zig also has arbitrary bit-width integers, like u28, u1, and so on.

We can write WGPUColorWriteMaskFlags from earlier in Zig using:

pub const ColorWriteMaskFlags = packed struct {
    red: bool = false,
    green: bool = false,
    blue: bool = false,
    alpha: bool = false,

    _padding: u28 = 0,
};

This is still just 32 bits of memory, and so can be passed to the same C APIs that expect a WGPUColorWriteMaskFlags - but interacting with it is much nicer:

var mask = ColorWriteMaskFlags{.red = true, .green = true};
mask.blue = true; // set blue bit

In C you would need to write code like this:

if (mask & WGPUColorWriteMask_Alpha) {
    // alpha is set..
}
if (mask & (WGPUColorWriteMask_Alpha|WGPUColorWriteMask_Blue)) {
    // alpha and blue are set..
}
if ((mask & WGPUColorWriteMask_Green) == 0) {
    // green not set
}

In Zig it’s just:

if (mask.alpha) {
    // alpha is set..
}
if (mask.alpha and mask.blue) {
    // alpha and blue are set..
}
if (!mask.green) {
    // green not set
}

Comptime validation

Making sure that our ColorWriteMaskFlags ends up being the same size could be a bit tricky: what if we count the number of bools wrong? Or what if we accidentally get the padding size wrong? Then it might not be the same size as a u32 anymore.

Luckily, we can verify our expectations at comptime:

const std = @import("std");

pub const ColorWriteMaskFlags = packed struct {
    red: bool = false,
    green: bool = false,
    blue: bool = false,
    alpha: bool = false,

    _padding: u28 = 0,

    comptime {
        std.debug.assert(@sizeOf(@This()) == @sizeOf(u32));
        std.debug.assert(@bitSizeOf(@This()) == @bitSizeOf(u32));
    }
};

The Zig compiler will take care of running the comptime code block here for us when building, and it will verify that the byte size of @This() (the type we’re inside of, the ColorWriteMaskFlags struct in this case) matches the @sizeOf(u32).

Similarly, the second assertion checks the @bitSizeOf of both types.

Note that @sizeOf may include the size of padding for more complex types, while @bitSizeOf returns the number of bits it takes to store T in memory if the type were a field in a packed struct/union. For flag sets like this, it doesn’t matter and either will do. For more complex types, be sure to recall this.

Thanks for reading

Be sure to join the new Mach engine Discord server where we’re building the future of Zig game development.

You can also sponsor my work if you like what I’m doing! :)



from Hacker News https://ift.tt/gMu6Ta1

Giant Keyboard Is Just Our Type

We like big keyboards and we cannot lie, and we’ve seen some pretty big keyboards over the years. But this one — this one is probably the biggest working board that anyone has ever seen. [RKade] and [Kristine] set out to make the world’s largest keyboard by Guinness standards – and at 16 feet long, you would think they would be a shoo-in for the world record. More on that later.

As you might have figured out, what’s happening here is that each giant key actuates what we hope is a Cherry-brand lever switch that is wired to the pads of a normal-sized keyboard PCB. Once they designed the layout, they determined that there were absolutely no existing commercial containers that, when inverted, would fit the desired dimensions, so they figured out that it would take 350 pieces of cardboard to make 70 5-sided keycaps and got to work.

Aside from the general awesomeness of this thing, we really like the custom buttons, which are mostly made of PVC components, 3D printed parts, and a bungee cord for the return spring.

[RKade] encountered a few problems with the frame build — mostly warped boards and shrunken holes where each of the 70 keys mount. After the thing was all wired up (cleverly, we might add, with Ethernet cable pairs), [RKade] rebuilt the entire frame out of three layers of particle board.

By the way, Guinness rejected the application, citing that it must be an exact replica of an existing keyboard, and it must be built to commercial/professional standards. They also contradict themselves, returning no search results for biggest keyboard, but offer upon starting a world record application that there is a record-holding keyboard on file after all, and it is 8 ft (2.4 m) long. It’s not the concrete Russian keyboard, which is non-functional, but we wonder if it might be the Razer from CES 2018 that uses Kailh Big Switches.

Once the keyboard was up and running, [RKade] and [Kristine] duke it out over a game of Typing Attack, where the loser has to type all the lyrics to “Never Gonna Give You Up” on the giant keyboard. Check it out after the break.

Via KBD #92



from Hacker News https://ift.tt/EzDoqHr

New Zealand's plan to prepare for inevitable climate change impacts

By Bruce Glavovic* of The Conversation

The Conversation

Opinion - New Zealand's first climate adaptation plan, launched this week, provides a robust foundation for urgent nation-wide action.

Its goals are utterly compelling: reduce vulnerability, build adaptive capacity and strengthen resilience.

Recent reports by the Intergovernmental Panel on Climate Change (IPCC) have underscored the need for effective and transformative efforts to cut emissions urgently while also adapting and preparing for inevitable impacts of climate change.

But this national adaptation plan is just the beginning. The hard work is yet to come in its implementation. It is regrettable that the proposed new law that would provide the institutional architecture for climate adaptation has been delayed until the end of next year.

Based on my experience as an IPCC author and working with communities around Aotearoa New Zealand and overseas, there are five key areas that need sharper focus as we begin to translate the intentions of the plan into practical reality.

Reducing risk for people on the 'frontline' of impacts

First, climate change will affect every aspect of life. These impacts will often be the result of climate-compounded extreme events that are already becoming more frequent and intense.

The people hardest hit are invariably those who are more vulnerable. We need to pay more focused attention to the root causes and drivers of vulnerability - and actions to reduce vulnerability and, ultimately, climate risk.

This means addressing poverty, marginalisation, inequity and other structural causes of vulnerability. Historically, much risk-based work has centred on calculations based on a formula that considers risk as a product of hazard and vulnerability. This approach is too technical.

We need to focus on reducing social vulnerability to climate change impacts, especially for those on the "frontline" of exposure to climate impacts, such as coastal communities facing rising sea level. Every region and locality needs to be able to identify and prioritise who is most exposed and vulnerable and catalyse proactive actions to reduce this vulnerability.

A climate-resilient future

Second, the plan clearly recognises the vital role of all governance actors in implementing it. However, in practice, local government will carry an especially significant responsibility in translating this plan into action.

There does not appear to be sufficient attention focused on how the adaptive capacity of local government will be built in this first stage of implementation. Local government will be the fulcrum for enabling - or hampering - adaptation at the local level.

Transformational capability building, from the political to operational level of local government, is imperative and needs to happen in partnership with tangata whenua, central government, the private sector (which receives scant attention in this plan) and civil society.

Third, introducing the concept of climate-resilient development is a welcome framing. This is an emerging concept, highlighted in a chapter of the IPCC report on adaptation. Climate-resilient development recognises the inherent intertwining of mitigation and adaptation efforts to advance sustainable development.

The plan limits the concept to climate-resilient "property development". There is work to be done to deepen and extend this framing along the lines of the IPCC work.

Who should pay if people have to move?

Fourth, managed retreat looms large with so many New Zealanders living along rivers and the shoreline. We can only enable proactive retreat from imminent danger if the government determines who should pay.

At present, the trigger for retreat is usually an extreme event, often at huge cost to those impacted. In many cases, those in harm's way cannot afford to retreat without government support. Often they are in localities approved by governing authorities.

Who should contribute to measures that reduce risk and enable retreat from climate-compounded hazards? What proportion of costs should be borne by those exposed or impacted and what proportion should be contributed by local and central government? And who makes the call for managed retreat and whether it should be voluntary or compulsory?

The "who pays" question is a tough call. The plan doesn't provide an answer but we can't avoid it if it is to be implemented.

Fifth, it is inevitable there will be "winners" and "losers" in the ongoing struggle to adapt to a changing climate. Values and interests will collide and contestation will escalate as climate impacts become more intense and frequent.

We'll need to find more constructive ways to resolve climate-compounded conflict. At times, government will be only one of several parties involved and won't be in a position to enable or guide conflict resolution. For this, we'll have to develop institutional processes and capabilities for independent, mediated negotiation of escalating climate conflicts.

* Bruce Glavovic is Professor in Natural Hazards Planning and Resilience at Massey University. He receives funding from MBIE.

This story first appeared in The Conversation.



from Hacker News https://ift.tt/sFvWNjK

Project Highwater

Launch of the first Highwater flight

Project Highwater was an experiment carried out on two test flights of NASA's Saturn I launch vehicle (flown with battleship upper stages), each successfully launched on a sub-orbital trajectory from Cape Canaveral, Florida. The Highwater experiment sought to determine the effect of a large volume of water suddenly released into the ionosphere.[1][2] The project answered questions about how propellants would diffuse in the event that a rocket was destroyed at high altitude.[3]

The first flight, SA-2, took place on April 25, 1962. After the flight test of the rocket was complete and first-stage shutdown had occurred, explosive charges on the dummy upper stages destroyed the rocket and released 23,000 US gallons (87,000 L) of ballast water, weighing 95 short tons (86,000 kg), into the upper atmosphere at an altitude of 65 miles (105 km),[4] eventually reaching an apex of 90 miles (145 km).[3]

The second flight, SA-3, launched on November 16, 1962, and involved the same payload. The ballast water was explosively released at the flight's peak altitude of 104 miles (167 km).[5][6] For both of these experiments, the resulting ice clouds expanded to several miles in diameter and lightning-like radio disturbances were recorded.[3][4]

References

  1. von Ofenheim, Bill (January 20, 2004). "Saturn I SA-2 Launch". NASA Scientific and Technical Information Program. Archived from the original on May 17, 2011. Retrieved July 2, 2009.
  2. Wade, Mark. "Highwater". Astronautix.com. Archived from the original on January 16, 2010. Retrieved December 5, 2009.
  3. Bilstein, Roger E. (1996). Stages to Saturn: A Technological History of the Apollo/Saturn Launch Vehicles. Washington, DC: NASA History Office. ISBN 0-16-048909-1. Archived from the original on October 15, 2004.
  4. "Saturn Aids GSFC Research" (PDF). Goddard News. 2 (10). May 4, 1962. Archived from the original (PDF) on July 21, 2011.
  5. Ryba, Jeanne (July 8, 2009). "History: Saturn Test Flights". NASA.gov. Retrieved December 5, 2009.
  6. Wade, Mark. "Cape Canaveral LC34". Astronautix.com. Archived from the original on January 31, 2010. Retrieved December 5, 2009.

Further reading

  • Woodbridge, David D.; Lasater, James A.; et al. (October 25, 1963). An Analysis of the Second Project High Water Data. NASA. hdl:2060/19790078055. NAS10-841.


from Hacker News https://ift.tt/EGD3RkQ

The Silent Majority in Software

The term “silent majority” was popularised by President Richard Nixon in a 1969 speech during the Vietnam war. He was appealing to the many Americans who were not actively voicing their opinions and who were overshadowed by the vocal few protesting the war.

We're not here to talk politics. But this concept of a majority overshadowed by a vocal few is fascinating, and it holds true in software engineering.

In software development, the silent majority are the engineers who write the code, debug the programs, and solve the complex issues behind the scenes. They do not participate in controversial discussions about Visual Basic or Pascal — they just do their work in those languages without even knowing that there’s so much controversy surrounding their language of choice.

Without this silent majority, many projects would, in fact, grind to a halt. It is often their quiet diligence that keeps a project on track and prevents it from falling apart.

There also seems to be an assumption on HN/Reddit that vocal activity on the internet, in any form — be that videos, blogging, podcasts, etc. — is proportional to actual activity behind the screens. If you're constantly seeing stuff about crypto, you're probably scrolling Twitter; leave that bubble and go outside, and most people don't care.

Silent Engineers

While browsing HackerNews, I sometimes get the feeling that every developer out there works for FAANG, as there are always posts from such people doing some hyped stuff. Or you might think that PHP is never used nowadays, because whenever it's mentioned, everyone in the comments is hating on it.

Dilbert and the silent engineer. Not really relevant, but still funny.

But let’s be honest, that’s like 1% of all the developers out there — the rest of them are just lurking, coding in their language of choice, and being content with it. Be it Fortran, COBOL, Perl, or PHP. I’ve seen so much hate directed at some languages that I’m surprised anyone still writes code in them, but then I remember that everything is subjective, and the articles I read represent only a small subset of developers.

Even HackerNews is not that popular — I know many great engineers who’ve never visited the website. There are so many articles and comments by people whose enthusiasm outstrips their experience. Maybe mine included, but I just like writing, so deal with it.

Usually, the comments on HN/Reddit are dominated by a single group of people who share the same opinion, and then it’s hard to object and present a different perspective, even if you speak with more experience and context than the crowd.

It’s also important to understand that we have a generational divide among software engineers. Thousands of new software developers arrive each year who have been taught differently from the previous generation. This introduces a bias into the particular expertise that gets shared.

Some developers have signed so many NDAs over the years that it almost looks like they’ve done nothing at all.

I really like that some subset of the silent majority still participates on GitHub with bug fixes to their favorite libraries. Sometimes I’ve seen Pull Requests from empty accounts with a brief explanation of what was implemented. They just submit bug fixes, no drama.

Silent Users

I’m sure you’re aware of the importance of customer feedback. After all, it’s essential to know what users think of your product in order to improve it. However, there are users who never give feedback, either because they’re happy with the product as it is or because they can’t be bothered to fill out surveys and submit bug reports — the silent majority of your customers.

Dealing with your silent customers is hard

As a result, companies often have a skewed view of their user base and improve the wrong things, thinking that the only people they should optimize for are the ones who fill out their “what did you like about this service” surveys. I never fill those out, by the way; it’s a waste of time. If I’m using a service, I’m already satisfied with it. Otherwise, I’d jump to another one.

You can't rely on silent customers to give you honest feedback, but you can still learn a lot from them. Observing how they use your product is the first step; setting up proper analytics to gain insight into their needs and expectations is the second. A minimal sketch of what that second step can look like follows.
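As a purely illustrative sketch (nothing from the original post; every name in it is hypothetical), "proper analytics" can start as little more than logging usage events and aggregating them, so that silent users show up in your data even though they never write in:

    # Hypothetical minimal usage analytics: log which features users touch,
    # then aggregate reach per feature. Silent customers appear in this data
    # even though they never fill out surveys or file bug reports.
    from collections import defaultdict
    from dataclasses import dataclass
    from time import time

    @dataclass
    class UsageEvent:
        user_id: str   # anonymised user identifier
        feature: str   # e.g. "export-csv", "dark-mode"
        ts: float      # Unix timestamp

    events: list[UsageEvent] = []

    def track(user_id: str, feature: str) -> None:
        """Record that a user touched a feature."""
        events.append(UsageEvent(user_id, feature, time()))

    def feature_reach() -> dict[str, int]:
        """Distinct users per feature: a rough proxy for what silent users value."""
        users_per_feature: defaultdict[str, set[str]] = defaultdict(set)
        for event in events:
            users_per_feature[event.feature].add(event.user_id)
        return {feature: len(users) for feature, users in users_per_feature.items()}

    # Silent users show up in the numbers without ever writing in.
    track("u1", "export-csv")
    track("u2", "export-csv")
    track("u2", "dark-mode")
    print(feature_reach())  # {'export-csv': 2, 'dark-mode': 1}

A real product would ship these events to an analytics backend instead of an in-memory list, but the principle is the same: let behaviour, not just vocal feedback, tell you what to improve.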

The problem with silent customers is that while they often demand very little, they will also silently switch providers if they’re not happy.

In defense of being vocal

Being vocal is hard. It might seem easy — you just write an article or make a video — but there’s a reason why only a small percentage of people do it. It takes huge amounts of time. Even this small newsletter issue took me a few hours of my weekend to write. Not everyone is willing to do that work just to bring their opinions to the masses.

It also takes confidence — whenever you voice an opinion on the internet, there will always be people who hold the opposite one, so you need to be prepared to read dozens of comments that disagree with you. Reading negative comments can be disheartening, but it’s important to remember that not everyone will agree with you. And that’s fine; we’re all amateurs, and sometimes we’re wrong.

Sometimes people write comments just to argue

My thoughts

And now to my final thoughts. When it comes to the software community, there are two schools of thought. Some people believe that it's important to be vocal and share your opinions, while others believe that it's better to stay quiet and let the quality of your work speak for itself. Personally, I believe that being more vocal can only be a good thing.

First of all, when you're vocal, you're more likely to be heard. If you have something valuable to say, then you owe it to yourself and to the community to speak up. Secondly, being more vocal can help to create a more inclusive community. Too often, online conversations are dominated by a small subset of people. By speaking up, we can help to ensure that everyone's voices are heard.

Of course, you can get downvoted, but who cares?

In many cases, fear is what’s holding us back - fear of criticism, or of saying something stupid. But if we want the software community to thrive, we need to get over that fear and start speaking up. It's time for us to be bolder and more vocal. Only then can we hope to create a truly inclusive community where everyone feels welcome and valued.




from Hacker News https://ift.tt/L7Rmtru

A history of the blurb, every author’s best friend

Not all blurbs are on the back covers of books, as with this up-front endorsement of “Take My Hand” by Dolen Perkins-Valdez. Photo: Associated Press

Blurb is such a wonderful word.  

It conjures up exactly what it is: a belch of praise for a book, generally found on the dust jacket, to lure the reader to purchase it. I must admit to reading blurbs when deciding whether to buy a book, but I am swayed only by plaudits from publications I trust or authors I greatly admire.     

Some blurbs don’t work for me: those from the trade journal Kirkus Reviews that authors pay for, for example, are often just a rehash of the book’s plot. Or one from writer Gary Shteyngart, who was formerly known, as Salon magazine put it, as a “blurb-addict.” In a 2014 open letter published in the New Yorker, he revealed that “the volume of requests has exceeded my abilities, and I will be throwing my ‘blurbing pen’ into the Hudson River.”

According to Merriam-Webster, the term “blurb” was coined in 1907 at an annual dinner of the American Booksellers Association by American humorist Gelett Burgess, one of the honored guests.

“It was a custom at these dinners for the guest authors to present to the assembled company souvenir copies of their latest books. Burgess prepared a mock jacket of his latest book featuring a doctored picture of a woman that he had lifted from a dental advertisement. 

“The woman was dubbed ‘Miss Belinda Blurb,’ and she was shown in the picture as calling out a ‘blurb,’ indicated by the caption ‘Miss Belinda Blurb in the act of blurbing.’ Self-congratulatory text also adorned the jacket.” 

Was essayist Ralph Waldo Emerson the first writer to blurb another? Photo: Otto Herschan Collection / Getty Images

But the practice might be older still. The New York Times reports that on reading the first edition of “Leaves of Grass” in 1855, Ralph Waldo Emerson, already widely esteemed, mailed the relatively unknown Walt Whitman a glowing note. The next year, one line of that letter — “I greet you at the beginning of a great career” — was printed on the spine of the book’s second edition.

Speaking of impressive blurbs, among the blurbers on the back of Samantha Power’s 2019 memoir, “The Education of an Idealist,” are Barack Obama, Doris Kearns Goodwin, Bryan Stevenson and Madeleine Albright. Not too shabby. 

Oddly, some of the best-known reclusive writers, among them Thomas Pynchon and J.M. Coetzee, don’t hesitate to blurb.  Other famous authors often blurb their former students’ work, notably Joyce Carol Oates on Jonathan Safran Foer and Chinua Achebe on Chimamanda Ngozi Adichie. 

Some of the most commonly used blurb phrases:  

Laugh-out-loud funny (Really? In my long reading career, very few books have achieved this.)

Like x crossed with y (Don’t use this if the book being reviewed isn’t as good as either of the books mentioned.)

A page-turner (Literally applies to every book.) 

A literary tour de force (Does using French words make you sound smarter?)

A roller-coaster ride (Meaning nauseating?) 

Author Frank McCourt gave an extravagant blurb to Mitch Albom. Photo: Mary Altaffer / Associated Press

In one of my favorite over-the-top blurbs ever, Frank McCourt once compared Mitch Albom’s “The Five People You Meet in Heaven” to “The Odyssey.” Whoa! 

I shudder to admit I’ve used some of these shopworn expressions in my own book reviews. After all, there are only so many words available. But it’s different when you have the room to expand upon your statements and provide supporting evidence.

One of my favorite book titles, which sounds like a blurb and undoubtedly was meant ironically, is Dave Eggers’ memoir (with fictional elements), “A Heartbreaking Work of Staggering Genius.” 

Are there any good blurbs?  Of course. I loved New York Times critic Dwight Garner’s blurb about Irish author Sally Rooney’s fiction: “In my experience when people who’ve read her meet they tend to peel off into corners to talk.”  And Harper’s blurb gracing Orhan Pamuk’s novel “Snow”:  “From the Golden Horn, with a wicked grin, the political novel makes a triumphant return.”

Here’s the blurb I want for my own novel (which has been sitting on the shelf for years due to my procrastination over the serious edit it needs):  “Undeniably worth the wait. Clearly a mature writer who delayed publication until every word was perfect.”

Editor’s note: This column has been updated since it was originally published to correct a statement about Kirkus Reviews.



from Hacker News https://ift.tt/FB3AsUZ