Tuesday, May 31, 2022

LGTM – A simple pull request approval system

LGTM


LGTM is a simple pull request approval system using GitHub protected branches and maintainers files or maintainers groups. Pull requests are locked and cannot be merged until the minimum number of approvals is received. Project maintainers can indicate their approval by commenting on the pull request and including LGTM (looks good to me) in their approval text.

Install

You can download prebuilt binaries from the GitHub releases or from our download site. Are you a Mac user? Just take a look at our homebrew formula. If you have questions that are not covered by the documentation, you can get in contact with us on our Discord server, Matrix room, or forum. If you find a security issue, please contact security@gitea.io first.

Development

Make sure you have a working Go environment; for further reference or a guide, take a look at the install instructions. As this project relies on vendoring of the dependencies and we are not exporting GO15VENDOREXPERIMENT=1 within our Makefile, you have to use a Go version >= 1.6. It is also possible to simply execute the go get github.com/go-gitea/lgtm command, but we prefer to use our Makefile:

go get -d github.com/go-gitea/lgtm
cd $GOPATH/src/github.com/go-gitea/lgtm
make clean build

bin/lgtm -h

Docker

A Docker image is available for easy deployment. It can be run locally or on a dedicated server as follows:

docker run --name lgtm -v /my/host/path:/var/lib/lgtm:z -e GITHUB_CLIENT= -e GITHUB_SECRET= -p 8000:8000 gitea/lgtm

To fill the environment variables GITHUB_CLIENT and GITHUB_SECRET, create a new OAuth application here.

To build the image yourself, please refer to the Dockerfile and the Drone configuration.

Contributing

Fork -> Patch -> Push -> Pull Request

Authors

License

This project is under the Apache-2.0 License. See the LICENSE file for the full license text.

Copyright

Copyright (c) 2018 The Gitea Authors <https://gitea.io>


from Hacker News https://ift.tt/UZRQiVM

Modern TUI calendar and task manager with customizable interface

Calcure

Modern TUI calendar and task manager with customizable interface. Manages your events and tasks, displays birthdays from your abook, and can import events and tasks from calcurse and taskwarrior. See wiki for more information.

screenshot

Features

  • Vim keys
  • Operation with fewest key presses possible
  • Todo list with subtasks and timers ⌚
  • Birthdays of your abook contacts
  • Import of events and tasks from calcurse and taskwarrior
  • Icons according to the name ✈ ⛷ ⛱
  • Private events and tasks •••••
  • Plain text database in your folder for cloud sync
  • Customizable colors, icons, and other features
  • Resize and mobile friendly
  • Week can start on any day
  • Current weather ⛅
  • Support for Persian calendar

Installation

Linux and Mac OS

pip install calcure

Arch Linux, Manjaro etc

The package calcure is available in AUR.

yay -S calcure

Also, you need to install the holidays and jdatetime libraries:

pip install holidays jdatetime

Windows

  1. Install Windows Terminal app from the app store
  2. Install python 3.x, also from the app store (if you just type python in the Windows Terminal app, it will offer to install it)
  3. Install the program and libraries by typing pip install windows-curses calcure in the Windows Terminal
  4. Now you can finally run it by typing in the Windows Terminal python -m calcure

Dependencies

  • python 3
  • holidays and jdatetime python libraries. Install by pip install holidays jdatetime.

Usage

Run calcure in your terminal. You may need to restart your terminal after install.

User arguments

Calcure can be started in special modes using various user arguments. Please refer to this wiki page for the list of options.

Key bindings

A list of all key bindings can be accessed in the wiki and via the ? key in the program.

Settings

On the first run, the program will create a configuration file at .config/calcure/config.ini. You can edit parameters and colors in the config.ini file. An example of the config.ini file is here. Explanations of all settings are in the wiki.

Troubleshooting

  • If your terminal shows empty squares instead of icons, it probably does not support unicode. In this case, set use_unicode_icons = No in the config.
  • The weather widget slows down the launch of the program and requires internet. If that is a problem, switch the weather off in the config: show_weather = No.
  • If the weather is incorrect, set your city in the config: weather_city = Tokyo. By default, this setting is empty and the program tries to detect your city automatically from your IP.
  • If the program does not run by just typing calcure after install, try restarting your terminal; it may need to recheck the binaries.
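Putting the troubleshooting options above together, the relevant part of config.ini might look like the sketch below. This is only an illustration: the option names come from the notes above, but the section name [Parameters] and the grouping are assumptions — check your own config.ini for the actual layout.

[Parameters]
use_unicode_icons = No
show_weather = No
weather_city = Tokyo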

Contribution, translations, donations

If you wish to contribute to the code base or translations, feel free to open issues or propose PRs. Particularly, you are welcome to contribute on the topics of file encryption and syncing with popular calendar services. For big changes, please open an issue to discuss first.

If you'd like to support the development, consider donations. For more information about contribution, see wiki pages.



from Hacker News https://ift.tt/0Lj3ifG

Documenting Design Decisions Using RFCs and ADRs (2020)

If you work on a software project, no matter your role, you'll make a lot of decisions and witness even more being made, ranging from minor choices between code style A or B to rather impactful ones, like where to store data and why that should be the case. Recalling the reason behind a decision can be painfully hard after months of ongoing engineering, often because you're missing the context that led up to it.

Furthermore, the lack of documentation also makes onboarding and handover processes more difficult as you might find yourself struggling to explain decisions made in the past. Especially when teams are growing and a lot of progress has already been made previously, this will become strikingly obvious.

Luckily, as with most problems, there are plenty of popular solutions to this. In this post, I want to focus on the process of discussing architectural design ideas using RFCs and making sure that decisions and their context stand the test of time using ADRs.

When you stumble over a problem that gets in the way of building a feature, you spend time researching known solutions, or try to arrive at one yourself (or, even better, paired up with a team member). Subjectively, your solution will always be the best it can be, which is completely fine at this stage. If you don't like it yourself, why would you go ahead and propose it to your team?

Sometimes, though, there will be one more piece of information that wasn't present at the time you thought about this issue, or a recent change turned things upside down. There can always be unknowns that will escape you, but that's where your team comes in!

Once you've collected all the information you need to propose a solution to the problem, you can create a document containing the overall topic, the reason why you got there, and your suggested way of resolving it. You don't have to write a novel; focus on the relevant aspects, and if there are potential alternatives you came across while researching, quickly explain why you didn't opt for one of those, as that will make it easier for your team to reason about the area you attempt to cover.

This document, your Request for Comments (RFC), is at best placed close to where it will be used later on, for example in the same repository if it's related to code, or maybe even an existing location where your team stores and manages proposals. After your initial draft is ready to go, share it with your team!

The next step is the most important down the road, the discussion phase. Depending on the engagement or number of questions that come up you're able to guess the impact of your solution, which is another indicator you can use for planning. Most importantly, though, questions and feedback help to extend the RFC in ways you might not have considered before, sometimes even forcing you to step back and take another look at the problem, which can help in finding an even better solution.

To conclude, preparing, writing, and sharing RFCs with your team to review and iterate over the thought process that went into the proposal is an easy way to structure the R&D process.

Now that we've explored the steps from research over discussion up to deciding whether to adopt a proposal or not, we're entering the domain of Architectural Decision Records (ADRs in short).

As the number of significant architecture decisions your team makes grows, documenting those in a decision log gets more important as well. Whether it's for checking why a feature was built a certain way, or because you want to improve on previous decisions, keeping track of them is vital if you want to keep your velocity and scale the team. Especially when handing off tasks, the receiving person should be able to look up any past decision made, getting the necessary context.

As Michael Nygard put it in 2011, decisions that “affect the structure, non-functional characteristics, dependencies, interfaces, or construction techniques” should be stored in the project repository as lightweight Markdown files, numbered sequentially.

Another important property of decision records is that they're completely immutable. If there's ever a decision that obsoletes a previous one, you can go back to the prior decision and mark it as superseded, linking to the newer one. This way you preserve the history that might be useful in the future.

In a popular template for decision records, which he suggested in the same post, each document should contain the following fields:

Title

A concise title for the decision, e.g. Adopt Git repositories for version control. It should contain the most important part of what's decided on.

Status

The decision status, such as proposed, accepted, rejected, deprecated, superseded, etc.

Context

What is the issue that we're seeing that is motivating this decision or change?

The context should give information on the current situation that motivated this proposed decision. If you created an RFC earlier, it can be linked, provided you make sure it won't get deleted at any point; otherwise, it's safer to store that information in the decision record.

It's important that the context contains all relevant information at the time of writing; otherwise, you might miss crucial details later on when reviewing a decision that may have been agreed upon a long time ago.

Decision

What will be part of the change that we are proposing and/or doing?

Everything that changes as part of this decision should go here. Once again, if you've written this down in any other document that will survive time, you can link to it.

Consequences

What becomes easier or more difficult to do because of this change?

This should include all benefits this proposal will yield and also any potential trade-offs you might have found while researching the solution.


This simple but effective structure allows us to document precisely the scope and outcome of each decision made. Of course, you can adapt it to any degree you like, or make customizations along the way, whatever works best for your team.
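To make the structure concrete, here is what a complete decision record could look like as one of those sequentially numbered Markdown files. The filename and all of the content below are hypothetical, invented purely for illustration; only the field layout follows the template above.

0002-adopt-git-repositories.md

Title: Adopt Git repositories for version control
Status: Accepted
Context: Our projects are versioned in a centralized system, which makes branching and offline work cumbersome.
Decision: All new projects will be created as Git repositories; existing projects will be migrated over the next quarter.
Consequences: Branching, code review, and offline work become easier; the team has to learn a new tool and migrate existing history.

Superseding such a record later would only change its status line, e.g. Status: Superseded by a newer, linked record, leaving the rest of the document untouched.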

Fully adopting new processes can take time and convincing. Especially when they include a lot of steps, or take up considerable amounts of time, the chance of success decreases.

And it's no different this time. When it takes too much work to write down suggestions as RFCs or decisions in documents, it's not very likely that everyone will commit to this. As we want every team member on board, we need to ensure that creating said documents involves as little friction as possible.

Following the aforementioned template will present clear guidelines of the way decision records should be written.

Once set up, you're ready to benefit from a versioned collection of decisions, which can be accessed at any time to answer the most important architecture-related questions.

With their full context preserved, previous decisions can be easily reviewed and improved upon once results are in.


I hope you enjoyed this post! If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.



from Hacker News https://ift.tt/wiUblF8

Elon Musk Tesla email: Remote work is no longer acceptable



from Hacker News https://twitter.com/WholeMarsBlog/status/1531807546729799687

Hungary’s Golden Squad

England play Hungary at Wembley Stadium in 1953 © Hulton Getty Images

When Hungary visited Wembley for a friendly in 1953, some of the English players thought they were in for an easy game. That impression lasted all of 45 seconds, when the Hungarians scored their first goal of a famous 6-3 victory. Led by the great Ferenc Puskás, Hungary utterly befuddled and dominated England with their technique, teamwork and innovative tactics. Six months later Hungary beat England 7-1 in Budapest and confirmed that the first drubbing was no fluke.

The Aranycsapat – Golden Squad – won the 1952 Olympics, went four years unbeaten and narrowly lost in the 1954 World Cup final. It was the culmination of the great age of Hungarian football that can be traced back to the interwar period. In his new book, Jonathan Wilson argues that many of the ideas that shape football today – in every major footballing nation – were developed in the 1920s by a generation of brilliant but little-known Hungarian coaches, who travelled widely throughout Europe and the Americas and ‘taught the world to play’.

Wilson excavates their stories, but this is not just a football book: the backdrop of war, the Holocaust and political repression loom large. Many of the era’s best Hungarian coaches and players were Jewish. Budapest’s vibrant coffee-house culture in the early 20th century often facilitated cerebral and revolutionary football theorising, but the Hungarian game was also cultivated on Budapest’s rough grunds, vacant lots left by the city’s rapid expansion on which a generation of Hungarian children honed virtuoso skills and teamwork.

Hungarian football was also fired at this time by a class divide, embodied in the rivalry between working-class Ferencváros and middle-class MTK, where the English coach Jimmy Hogan – blessed with great players – first introduced a radical football culture based on technique, passing and speed of thought.

In precise prose, Wilson deepens and expands upon strands and figures from several of his previous books. He argues that the ideas instilled in the 1930s by Izidor ‘Dori’ Kürschner prompted a revolution that led to Brazil winning the World Cup in 1958. He struggles to cut through the myths and obfuscations to tell the story of Imre Hirschl – ‘one of the great mysteries of world football’ – whose innovations in tactics and training methods helped shape the Argentinian and Uruguayan games. The restless and infuriating Béla Guttmann, who signed for MTK in 1921, went on to become perhaps the most highly regarded and successful Hungarian coach of all time. Football in Austria, Denmark, France, Germany, Holland, Italy, Portugal, Spain, Sweden, the US and the former Yugoslav countries all owes a particular debt to Hungarian coaches. The Hungarian influence on Italian football was especially strong: 60 Hungarians coached in Italy between 1920 and 1945. At Torino Ernő Erbstein forged one of the greatest Italian sides of all time, introducing a proto-version of ‘total football’.

But as war and the horror of genocide unfolded, many Jewish Hungarian coaches and players became victims of the Nazis. Some of the book’s most harrowing passages recount desperate attempts to survive the death camps by playing football on the volatile whims of the guards. Some survived the war only to be subsequently snared in Soviet repression.

While acknowledging his debt to various other writers and researchers, Wilson marshals an astounding level of detail to make his case. There are moments when the narrative flags and becomes arid, but it soon picks up pace again. It is sometimes hard to measure influence, but this is an important book that redresses glaring gaps in football history.

Wilson argues that Hungary has forgotten a lot of its heroes, but the contemporary resonances of these extraordinary coaches lie not only in the debt owed by modern football. Their stories are also an implicit rebuke against the ahistorical, boorish and racist nativism emerging in Hungary and elsewhere, which often uses football as a political auxiliary.

With the brutal Soviet quelling of the 1956 uprising, the golden age of Hungarian football came to an end. But, argues Wilson, the influence lives on:

And in some precise through-ball, well-delivered pass or moment of improvisation, perhaps, when the game is at its most appealing, we hear still some strain of old Budapest, of the game of the coffee houses and the grunds, of that most beautiful and tragic of footballing cultures.

The Names Heard Long Ago: How the Golden Age of Hungarian Football Shaped the Modern Game
Jonathan Wilson
Blink
400pp £18.99

Patrick Keddie is the author of The Passion: Football and the Story of Modern Turkey (I.B. Tauris, 2018).



from Hacker News https://ift.tt/RUM2woN

A Bloop and a Blast: The mysterious, mercurial world of baseball fandom

SEVERAL YEARS AGO, while visiting my parents’ house, I found an artifact of my tortured early years of baseball fandom. It was a journal I was assigned to keep at the beginning of first grade, a stretch of time in the autumn of 1993 that coincided with a thrillingly unexpected Philadelphia Phillies postseason run. “I like the Phillies,” I wrote on October 8—a rather bold statement, given that the Greg Maddux–led Atlanta Braves had clobbered them 14–3 in Game 2 of the National League Championship Series the night before. I added several small crayon illustrations, as if placing my modest offerings at the baseball gods’ altar: a red cap I captioned “Hat,” some sketches of John Kruk and Pete “Inky” Incaviglia that make them look like identical-twin clowns, and, in summation of it all, a red and brown blob that I helpfully labeled “Hotdog.” It worked. The Fightin’ Phils went on to win the NLCS in six games, securing the pennant at home on October 13, and, this time, forcing my mortal—immortal?—enemy Maddux to take the L. Life was good. I was about to turn seven years old, and my team was headed to the World Series. 

I grew up in what a friend once dubbed “the part of Philadelphia that is in New Jersey.” A place where the stuff that comes out of the faucet is wudder and the first word most children learn how to spell is E-A-G-L-E-S (pronounced: “egggles”). Regional sports fandom there is an omnipresence, part of the atmosphere, the kind of stuff a young child breathes in like smog from a nearby factory. But for such a sports-crazed locale, championships remained elusive for Philadelphia teams throughout most of the 1980s and early ’90s. The city’s most recent triumph had come when the Sixers swept the Lakers in the 1983 NBA Finals, an achievement that meant nothing to me because it happened three years before I was born, which might as well have been the Dark Ages. But a decade later, the Phillies, a team that had gone 70–92 and finished dead last in their division the previous year, suddenly caught fire. Led by a murderer’s row of scraggly-haired dirtbags and future Trump supporters like Lenny Dykstra and Curt Schilling, the ’93 Phillies—whose Wikipedia page contains the sentence “The team was often described as ‘shaggy’, ‘unkempt’, and ‘dirty’”—were somehow so dominant that they led their division for all but one day of the season. All spring and summer they were a hoot to watch, especially Kruk, a mulletted first baseman with a Tweedledee body type who hit .316 in his third consecutive All-Star season. The ’93 Phillies once finished a doubleheader at 4:40 am—eons past my bedtime—when pitcher Mitch “Wild Thing” Williams eked out a walk-off RBI single. Because I was a six-year-old who watched a lot of cartoons, their appeal was obvious to me: they liked to pie each other in the face in the clubhouse and call each other “dude,” and during their home opener the Phillie Phanatic jumped out of an airplane. Now these goofball underdogs and their zoologically confusing mascot would somehow be facing the defending World Series champions. 

Flipping ahead to my journal entry from Monday, October 25, lets you know how that went. “The Phillies Lost Big,” I bemoaned in childish handwriting, still emotionally hungover from the past Saturday night’s decisive Game 6, when Wild Thing had lived up to the worst connotation of his nickname and surrendered a three-run bomb to Joe Carter in the bottom of the ninth. I added, “The Blue Jays won Again for the 2st tam!” [sic] But hope springs eternal in the cyclical churn of baseball seasons. As if acknowledging this, I once again concluded my entry with a weakly optimistic “Hot Dog.” Which was to say, at least we’d have a shot at the 1994 World Series. Right?

Sports fandom is mysterious, mercurial, and a lot more like religious belief than a lapsed Catholic such as myself wants to admit. I cannot pinpoint the moment I lost my faith in the Philadelphia Phillies, but, looking back now, I can see the seeds of doubt planted in the wide-ruled lines of that first-grade notebook. I didn’t yet know that there would be no 1994 World Series—an injustice as unfathomable to me as Christmas being canceled—because of an unprecedented players’ strike that would freeze-frame the season on August 12. The Phillies by then had returned to their regularly scheduled program of numbing mediocrity (54–61; fourth place in their division), which meant I didn’t have as much to complain about as, say, fans of the red-hot and doomed Montreal Expos, or especially San Diego Padres outfielder Tony Gwynn, who had a legitimate shot to finish the season with a batting average over .400 for the first time since Ted Williams did it in 1941. When the season ended, an uncomplaining (but probably secretly pretty damn frustrated) Gwynn was paused eternally at the tantalizing what-if of .394.

When you’re a kid, a year is an eternity—especially when it gives you extra time to wallow in your team’s crushing defeat the season prior. Baseball came back in April of 1995, but by then I had started to drift away from the Phillies. I still liked baseball, and I still watched it quite a bit, I just took more of a bird’s-eye view. I wished Cal Ripken Jr. well that September, when he broke Lou Gehrig’s seemingly untouchable record of 2,130 consecutive games played. In summer 1998, I got as caught up in the Mark McGwire/Sammy Sosa home-run race as anybody. But over that strike year my heart had hardened and I wasn’t sure I would ever open it up so completely to a specific team again. The cost was too great. And anyway the Phillies sucked for the rest of the ’90s, and to add insult to injury, the dastardly Atlanta Braves won the NL pennant in 1995—and again in 1996, for the 2st tam!

Another way that sports fandom is like religion is that the people who convert later in life are the most annoying about it. Vocal and overzealous to make up for all that lost time they spent ignorantly sinning. Take it from me. While I did not become a born-again Christian, I did convert to something much worse in the eyes of the people I grew up with. In my early thirties, you see, I fell with my whole stupid heart and soul for the Phillies’ division rivals, the goddamn New York Mets.

Mr. Met at the Oakland Athletics vs. New York Mets game, Citi Field, New York, June 25, 2014. Eric Kilby/Flickr.

“SO MANY BASEBALL FANS I KNOW have heartwarming stories about how they fell for their favorite teams—a family saga, an iconic moment to which they bore accidental witness,” Devin Gordon writes in his highly entertaining 2021 book So Many Ways to Lose: The Amazin’ True Story of the New York Mets—the Best Worst Team in Sports. “The number one overwhelming reason why I’m a Mets fan,” he continues, “is that I was seven years old and the Mets had a player named Strawberry. That’s really all it took.”

As a former six-year-old who just really liked saying “Incaviglia,” I understand. But Gordon’s early fandom experience was the opposite of mine: in 1986, when he was an impressionable ten years old, his team won the World Series. He was mature enough to understand that this meant he must pledge undying allegiance to them forever (those are the rules!), but he was still too young to understand that—for the inherently hard-luck New York Mets—this moment of triumph was an anomaly. 

If the Yankees are New York’s chiseled older brother who was elected president of his fraternity and definitely grew up to be a cop, the Mets are the city’s eccentric and probably under-parented younger stepchild. Their conception itself was an act of spite and municipal vengeance: when the Brooklyn Dodgers and New York Giants both decamped to California in 1958, then-mayor Robert Wagner immediately promised his incensed constituents that he would bring another baseball team to New York. He just didn’t necessarily promise they’d be any good. And so in the Mets’ inaugural season, 1962, they won forty games and lost 120. They were a trusty punchline for the rest of the ’60s, until the 1969 Miracle Mets, led by their beloved ace Tom Seaver, did the unthinkable: they won the World Series.

“In essence, all of Mets history has been a spiritual tug-of-war pitting the ’62 Mets against the ’69 Mets,” Gordon writes, and this is the logic that grounds his argument that, as per his book’s subtitle, the Mets are indeed “the best worst team in sports.” As he clarifies quite convincingly, though, “There is a difference between being bad and being gifted at losing, and this distinction holds the key to understanding the true magic of the New York Mets.” Exhibit A: sixty years later, the ’62 Mets still hold the modern-era record for most losses in a season. It’s honestly impressive to suck that monumentally. Give them an exhibit in the basement of the Baseball Hall of Fame, right next to the freight elevator.

Gordon’s book sprang from a delightful 2018 New York Times Magazine feature about the three beloved broadcasters he dubbed “the three Magi of Mets Nation,” Gary Cohen, ex-pitcher Ron Darling, and Seinfeld nemesis Keith Hernandez. (Their three names are so entwined to Mets fans that Gordon stylized them throughout the book, correctly, without commas, as “Gary Keith and Ron.”) So Many Ways to Lose is a chronological romp through Mets history, but it’s also a somewhat autobiographical study of the average Mets fan’s psyche, which Gordon describes as existing in a perpetual state of cognitive dissonance—as pessimistic as a crotchety old man but somehow as naively trusting as a newborn babe, “simultaneously certain of humiliating defeat and pretty darn sure there’s a miracle brewing.” 

“This is very hard to do,” he adds as a disclaimer. “You probably couldn’t pull it off.” Maybe it’s all the Jersey factory exhaust I inhaled as a child, or maybe my former home team’s underdog mentality primed me for it. Either way, I read that challenge and let out one of those low, throaty sounds of approval that Keith Hernandez always forgets to mute on the broadcast whenever the barrel of someone’s bat connects with a hanging slider: “Mmmmmm.” Sign me up.

I MOVED TO WASHINGTON, DC, IN 2005, the same year the Montreal Expos did and rechristened themselves as the Washington Nationals. Maybe that should have been the clean slate I needed to devote myself to a brand-new team, but another gross thing that I find sports fandom to resemble is a skin graft. Sometimes, the conditions can be right, but for whatever reason, it just doesn’t take. Also, the Nationals were just unwatchably bad for their first few (eight) seasons, and I was in college and thus did not have time to follow baseball. I was very busy reading Derrida and drinking bottles of malt liquor taped to my hands.

The Nationals made the postseason for the first time the very same month I moved to New York: October 2012. But even when I was living in DC, it never quite felt like a place I was trying to put down roots. New York was different. So were the Mets.

I fell for the Mets the way that guy goes bankrupt in The Sun Also Rises: gradually, then suddenly. I was not yet on board when they made it to and then efficiently lost the 2015 World Series, which is probably for the best—my heart is made of blown glass and such sudden excitement may have shattered it beyond repair. But I started going to games with a few lifelong-Mets-fan friends in 2016 and 2017, if only because there is no lovelier place to be on a beautiful late-summer New York evening than Citi Field. My friends did not twist my arm. They were patient, accepting. They introduced me to the scrappy characters on the current roster. That outfielder who sprints for his life every time he draws a walk and effusively thanks his God for reaching first base is Brandon Nimmo. The admittedly streaky hitter with a Ken Griffey Jr. swing and a dreamy smile is Michael Conforto, on whom I developed a somewhat embarrassing crush. The main draw, though, was this lanky, freak-of-nature ace pitcher who seemed to subsist entirely on McDonald’s while maintaining the silhouette of Slender Man, and whose arm was clearly on some Benjamin Button shit because the older he got, the harder he threw. That, of course, was Jacob deGrom, New York folk hero and one of the best pitchers currently walking the earth, whose two consecutive Cy Young seasons in 2018 and 2019 sealed the deal for me. Throw in the dulcet, ASMR tones of the best booth in baseball, Gary, Keith, and Ron; the strange psychosexual tension between Mr. and Mrs. Met; and the way Lou Monte’s novelty song “Lazy Mary” is a bigger deal during the seventh-inning stretch than “Take Me Out to the Ball Game.” Forget it. Baptize me in the waters of the orange and blue.

Gordon’s book ends right where my Mets fandom really took off, during a 2019 season that was a quintessentially Metsy blend of ya-gotta-believe triumph and are-you-freaking-kidding-me disasters: a six-run lead blown in the ninth inning against a division rival, a slugger who was injured in an altercation with a wild boar (Yoenis Céspedes, we hardly knew you!), and a promising young closer who plummeted from the best to worst in baseball as soon as he put on a Mets jersey. But in addition to deGrom, what made that season so infectiously watchable was the charismatic jolt that young Pete “the Polar Bear” Alonso brought to the team, chasing—and eventually breaking!—the rookie home-run record when he hit a staggering fifty-three dingers in a season. Down the stretch, Alonso was simultaneously laser focused on that goal and enough of a class clown to create an endearingly homoerotic new tradition where every time a player won the game with a walk-off, Alonso would summon his ursine strength to rip that player’s jersey right off his body. God bless Pete Alonso, and may he be a New York Met for life.

One of the defining paradoxes of sports fandom is that while we pledge our devotion to individual players, we also must grapple with the constant possibility that they could leave us and go play for another team. Maybe even a rival team. Gordon is especially poignant on this. The lowest four months of his Mets fandom were not necessarily when the team was at its worst, but when Darryl Strawberry and Doc Gooden were together on the Yankees. Even Seaver, a guy so synonymous with the Mets that his nickname was “The Franchise,” was dealt unceremoniously to Cincinnati in one of the most notorious trades in Mets (or even MLB) history. Though he came back for a season in 1983, Seaver spent a decade of his career playing for other teams. When the ’86 Mets beat Boston in the World Series, injured Red Sox pitcher Seaver, “the greatest Met there ever was, watched all of it happen from the Red Sox dugout.”

Players, stadiums, and teams sometimes seem like they exist to remind us of the Buddhist idea of the impermanence of all things and the grace that comes from not getting too attached. Jacob deGrom’s arm will eventually succumb to the laws of physics and temporality, and Michael Conforto will stupidly reject the Mets’ generous qualifying offer, have season-ending shoulder surgery right as he becomes a free agent, and also marry a woman literally named Cabernet. But non-attachment is an incredibly difficult mindset for us feeble humans to achieve, so we keep letting the players, the teams, and the game break our hearts over and over again. At least we’re not alone in that. I’m aware that a significant reason I’m a Mets fan is a social one, the fact that I have a tight-knit group of friends to go to games with and to whom I can text melodramatic things like “SEASON OVER” after a brutal mid-April loss. Metsiness loves company.

Baseball fandom allows me to quiet my rational mind and to accept several opposing truths at once. When I watch the Mets, I am able to be, as Gordon describes, both a wizened old cynic and a naively optimistic kid. I am thirty-five and also six years old—a vehemently pro-labor adult who supports the players union in all its endeavors, and who also would have cried like a first-grader had the owners’ greedy stubbornness forced the players to strike and cancel the 2022 season. I am both a Philadelphian expat and a proud New Yorker. I can boo the Phillies until I run out of breath—Bryce Harper, you suuuuuuck!—while on some subconscious level I tip my cap to them, because I know that they were the ones who first taught me how to root for the underdog, and to love a baseball team unconditionally enough to let it break my heart.

Lindsay Zoladz is a writer living in Brooklyn and a frequent contributor to the New York Times.



from Hacker News https://ift.tt/8dGnkQh

Monday, May 30, 2022

Faif/Python-patterns: A collection of design patterns/idioms in Python

python-patterns

A collection of design patterns and idioms in Python.

Current Patterns

Creational Patterns:

| Pattern | Description |
| ------- | ----------- |
| abstract_factory | use a generic function with specific factories |
| borg | a singleton with shared-state among instances |
| builder | instead of using multiple constructors, builder object receives parameters and returns constructed objects |
| factory | delegate a specialized function/method to create instances |
| lazy_evaluation | lazily-evaluated property pattern in Python |
| pool | preinstantiate and maintain a group of instances of the same type |
| prototype | use a factory and clones of a prototype for new instances (if instantiation is expensive) |
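For instance, the borg entry above can be sketched in a few lines (a minimal illustration of the idea, not the repository's implementation):

```python
class Borg:
    """All instances share one state dict instead of sharing identity."""

    _shared_state = {}

    def __init__(self):
        # Point every instance's attribute dict at the same shared dict.
        self.__dict__ = self._shared_state


a, b = Borg(), Borg()
a.color = "red"   # set an attribute on one instance...
print(b.color)    # ...and it is visible on the other: prints "red"
print(a is b)     # yet they remain distinct objects: prints "False"
```

Unlike a classic singleton, borg shares state rather than identity, which plays more naturally with Python's object model.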

Structural Patterns:

| Pattern | Description |
| ------- | ----------- |
| 3-tier | data<->business logic<->presentation separation (strict relationships) |
| adapter | adapt one interface to another using a white-list |
| bridge | a client-provider middleman to soften interface changes |
| composite | lets clients treat individual objects and compositions uniformly |
| decorator | wrap functionality with other functionality in order to affect outputs |
| facade | use one class as an API to a number of others |
| flyweight | transparently reuse existing instances of objects with similar/identical state |
| front_controller | single handler for requests coming to the application |
| mvc | model<->view<->controller (non-strict relationships) |
| proxy | an object funnels operations to something else |
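The adapter entry can be illustrated with a minimal sketch (class names here are invented for the example; the repository's adapter.py has the full version):

```python
class Cat:
    def meow(self):
        return "meow"


class Adapter:
    """Expose only an explicit white-list of an object's methods,
    each under the name the client expects."""

    def __init__(self, obj, **adapted_methods):
        self.obj = obj
        # Only the methods passed in explicitly become part of the
        # adapted interface.
        self.__dict__.update(adapted_methods)


cat = Cat()
adapted = Adapter(cat, make_noise=cat.meow)
print(adapted.make_noise())  # prints "meow"
```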

Behavioral Patterns:

| Pattern | Description |
| ------- | ----------- |
| chain_of_responsibility | apply a chain of successive handlers to try and process the data |
| catalog | general methods will call different specialized methods based on construction parameter |
| chaining_method | continue callback next object method |
| command | bundle a command and arguments to call later |
| iterator | traverse a container and access the container's elements |
| iterator (alt. impl.) | traverse a container and access the container's elements |
| mediator | an object that knows how to connect other objects and act as a proxy |
| memento | generate an opaque token that can be used to go back to a previous state |
| observer | provide a callback for notification of events/changes to data |
| publish_subscribe | a source syndicates events/data to 0+ registered listeners |
| registry | keep track of all subclasses of a given class |
| specification | business rules can be recombined by chaining the business rules together using boolean logic |
| state | logic is organized into a discrete number of potential states and the next state that can be transitioned to |
| strategy | selectable operations over the same data |
| template | an object imposes a structure but takes pluggable components |
| visitor | invoke a callback for all items of a collection |
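As a taste of the behavioral group, the observer entry can be sketched like this (an illustrative minimum, not the repository's code):

```python
class Subject:
    """Notify registered callbacks whenever the data changes."""

    def __init__(self):
        self._observers = []
        self._data = None

    def attach(self, callback):
        self._observers.append(callback)

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, value):
        # Store the new value, then fire every registered callback.
        self._data = value
        for callback in self._observers:
            callback(value)


log = []
subject = Subject()
subject.attach(log.append)
subject.data = 42   # assignment triggers the notification
print(log)          # prints "[42]"
```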

Design for Testability Patterns:

Fundamental Patterns:

| Pattern | Description |
| ------- | ----------- |
| delegation_pattern | an object handles a request by delegating to a second object (the delegate) |
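A minimal sketch of delegation (names are illustrative; see the repository's delegation_pattern file for the real implementation):

```python
class Printer:
    """The delegate that actually does the work."""

    def write(self, text):
        return f"printing: {text}"


class Delegator:
    """Handle requests by forwarding them to a second object."""

    def __init__(self, delegate):
        self._delegate = delegate

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails;
        # forward the request to the delegate.
        return getattr(self._delegate, name)


d = Delegator(Printer())
print(d.write("hello"))  # prints "printing: hello"
```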

Others:

| Pattern | Description |
| ------- | ----------- |
| blackboard | architectural model: assemble different sub-system knowledge to build a solution, AI approach (non-Gang of Four pattern) |
| graph_search | graphing algorithms (non-Gang of Four pattern) |
| hsm | hierarchical state machine (non-Gang of Four pattern) |

Videos

Design Patterns in Python by Peter Ullrich

Sebastian Buczyński - Why you don't need design patterns in Python?

You Don't Need That!

Pluggable Libs Through Design Patterns

Contributing

When an implementation is added or modified, please review the following guidelines:

Output

All files with example patterns have an ### OUTPUT ### section at the bottom (migration to OUTPUT = """...""" is in progress).

Run append_output.sh (e.g. ./append_output.sh borg.py) to generate/update it.

Docstrings

Add a module-level description in the form of a docstring with links to corresponding references or other useful information.

Add an "Examples in Python ecosystem" section if you know of any. It shows how patterns can be applied to real-world problems.

facade.py has a good example of a detailed description, but sometimes a shorter one, as in template.py, will suffice.

In some cases a class-level docstring with a doctest would also help (see adapter.py), but a readable OUTPUT section is much better.

Python 2 compatibility

To see Python 2 compatible versions of some patterns, check out the legacy tag.

Update README

When everything else is done, update the corresponding part of the README.

Travis CI

Please run tox or tox -e ci37 before submitting a patch to be sure your changes will pass CI.

You can also run flake8 or pytest commands manually. Examples can be found in tox.ini.

Contributing via issue triage

You can triage issues and pull requests which may include reproducing bug reports or asking for vital information, such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to subscribe to python-patterns on CodeTriage.



from Hacker News https://ift.tt/xTAMciw

Enzyme is what makes stevia so sweet


from Hacker News https://ift.tt/5WtDOpm

Two Envelopes Problem

The puzzle concerns two envelopes containing money

The two envelopes problem, also known as the exchange paradox, is a brain teaser, puzzle, or paradox in logic, probability, and recreational mathematics. It is of special interest in decision theory, and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge like the following example:

Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?

It may seem obvious that there is no point in switching envelopes as the situation is symmetric. However, because the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have, a case can be made for switching the envelope.[1]

Introduction

Problem

Basic setup: A person is given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead.[2]

The switching argument: Now suppose the person reasons as follows:

  1. Denote by A the amount in the player's selected envelope.
  2. The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2.
  3. The other envelope may contain either 2A or A/2.
  4. If A is the smaller amount, then the other envelope contains 2A.
  5. If A is the larger amount, then the other envelope contains A/2.
  6. Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2.
  7. So the expected value of the money in the other envelope is:
    $\frac{1}{2}(2A) + \frac{1}{2}\left(\frac{A}{2}\right) = \frac{5}{4}A$
  8. This is greater than A so, on average, the person reasons that they stand to gain by swapping.
  9. After the switch, denote that content by B and reason in exactly the same manner as above.
  10. The person concludes that the most rational thing to do is to swap back again.
  11. The person will thus end up swapping envelopes indefinitely.
  12. As it seems more rational to open just any envelope than to swap indefinitely, the player arrives at a contradiction.

The puzzle: The puzzle is to find the flaw in the very compelling line of reasoning above: determining exactly which step is not correct, and why and under what conditions, in order to be sure not to make this mistake in a more complicated situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. Thus, in particular, the puzzle is not solved by the very simple task of finding another way to calculate the probabilities that does not lead to a contradiction.
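Before dissecting the argument formally, the symmetric setup is easy to check numerically. The sketch below (the amount and trial count are arbitrary choices, not part of the original problem) shows the average gain from always switching hovering around zero:

```python
import random


def simulate(trials=100_000, x=100):
    """Average gain from always switching when the envelopes hold (x, 2x)."""
    gain = 0
    for _ in range(trials):
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        picked, other = envelopes
        gain += other - picked   # what switching wins (or loses) this trial
    return gain / trials


# The average hovers around 0: by symmetry, switching confers no advantage.
print(simulate())
```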

Multiplicity of proposed solutions

There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in a voluminous literature on the subject.[3]

No proposed solution is widely accepted as definitive.[4] Despite this, it is common for authors to claim that the solution to the problem is easy, even elementary.[5] However, these purportedly elementary solutions often differ from one author to the next.

Simple resolution

The total amount in both envelopes is a constant $c = 3x$, with $x$ in one envelope and $2x$ in the other.
If you select the envelope with $x$ first, you gain the amount $x$ by swapping. If you select the envelope with $2x$ first, you lose the amount $x$ by swapping. So on average you gain $G = \frac{1}{2}(x) + \frac{1}{2}(-x) = \frac{1}{2}(x - x) = 0$ by swapping.

Swapping is no better than keeping. The expected value $E = \frac{1}{2}(2x) + \frac{1}{2}(x) = \frac{3}{2}x$ is the same for both envelopes. Thus no contradiction exists.[6]

The famous mystification arises from mixing up two different circumstances and situations, which gives wrong results. The so-called "paradox" presents two already filled and sealed envelopes, where one envelope holds twice the amount of the other. Whereas step 6 boldly claims "Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2", in the given situation that claim can never apply to any A nor to any average A.

This claim is never correct for the situation presented; it applies only to the Nalebuff asymmetric variant (see below). In the situation presented, the other envelope cannot generally contain $2A$; it can contain $2A$ only in the specific instance where envelope A, by chance, actually contains the smaller amount $\frac{\text{Total}}{3}$, but nowhere else. Likewise the other envelope cannot generally contain $A/2$; it can contain $A/2$ only in the specific instance where envelope A, by chance, actually contains $\frac{2\,\text{Total}}{3}$, but nowhere else. The difference between the two already filled and sealed envelopes is always $\frac{\text{Total}}{3}$. No "average amount A" can form an initial basis for any expected value, as this does not get to the heart of the problem.[7]

Other simple resolutions

A common way to resolve the paradox, both in popular literature and in part of the academic literature, especially in philosophy, is to assume that the 'A' in step 7 is intended to be the expected value in envelope A, and that we intended to write down a formula for the expected value in envelope B.

Step 7 states that the expected value in B = 1/2(2A + A/2).

It is pointed out that the 'A' in the first part of the formula is the expected value in A, given that envelope A contains less than envelope B, while the 'A' in the second part is the expected value in A, given that envelope A contains more than envelope B. The flaw in the argument is that the same symbol is used with two different meanings in the two parts of the same calculation, yet is assumed to have the same value in both.

A correct calculation would be:

Expected value in B = 1/2 ( (Expected value in B, given A is larger than B) + (Expected value in B, given A is smaller than B) )[8]

If we then take the sum in one envelope to be x and the sum in the other to be 2x, the expected value calculation becomes:

Expected value in B = 1/2 (x + 2x)

which is equal to the expected sum in A.

In non-technical language, what goes wrong (see Necktie paradox) is that, in the scenario provided, the mathematics use relative values of A and B (that is, it assumes that one would gain more money if A is less than B than one would lose if the opposite were true). However, the two values of money are fixed (one envelope contains, say, $20 and the other $40). If the values of the envelopes are restated as x and 2x, it's much easier to see that, if A were greater, one would lose x by switching and, if B were greater, one would gain x by switching. One does not actually gain a greater amount of money by switching because the total T of A and B (3x) remains the same, and the difference x is fixed to T/3.

Line 7 should have been worked out more carefully as follows:

$$\begin{aligned}\operatorname{E}(B) &= \operatorname{E}(B \mid A<B)\,P(A<B) + \operatorname{E}(B \mid A>B)\,P(A>B) \\ &= \operatorname{E}(2A \mid A<B)\tfrac{1}{2} + \operatorname{E}\left(\tfrac{1}{2}A \mid A>B\right)\tfrac{1}{2} \\ &= \operatorname{E}(A \mid A<B) + \tfrac{1}{4}\operatorname{E}(A \mid A>B)\end{aligned}$$

A is larger, on average, when A is larger than B than when it is smaller than B, so its expected values in those two cases differ. And the average value of A is not the same as A itself, anyway. Two mistakes are being made: the writer forgot he was taking expectation values, and he forgot he was taking expectation values under two different conditions.

It would have been easier to compute E(B) directly. Denoting the lower of the two amounts by x, and taking it to be fixed (even if unknown) we find that

$$\operatorname{E}(B) = \frac{1}{2}\,2x + \frac{1}{2}\,x = \frac{3}{2}x$$

We learn that 1.5x is the expected value of the amount in Envelope B. By the same calculation it is also the expected value of the amount in Envelope A. They are the same hence there is no reason to prefer one envelope to the other. This conclusion was, of course, obvious in advance; the point is that we identified the false step in the argument for switching by explaining exactly where the calculation being made there went off the rails.

We could also continue from the correct but difficult to interpret result of the development in line 7:

$$\operatorname{E}(B) = \operatorname{E}(A \mid A<B) + \frac{1}{4}\operatorname{E}(A \mid A>B) = x + \frac{1}{4}\,2x = \frac{3}{2}x$$

so (of course) different routes to calculate the same thing all give the same answer.

Tsikogiannopoulos presented a different way to do these calculations.[9] It is by definition correct to assign equal probabilities to the events that the other envelope contains double or half the amount in envelope A, so the "switching argument" is correct up to step 6. Given that the player's envelope contains the amount A, he distinguishes two different games: the first game would be played with the amounts (A, 2A) and the second game with the amounts (A/2, A). Only one of them is actually played, but we don't know which one, and the two games need to be treated differently. If the player wants to compute the expected return (profit or loss) in case of exchange, he should weigh the return derived from each game by the average amount in the two envelopes in that particular game. In the first case the profit would be A with an average amount of 3A/2, whereas in the second case the loss would be A/2 with an average amount of 3A/4. So the formula for the expected return in case of exchange, seen as a proportion of the total amount in the two envelopes, is:

$$E = \frac{1}{2}\cdot\frac{+A}{3A/2} + \frac{1}{2}\cdot\frac{-A/2}{3A/4} = 0$$

This result means yet again that the player has to expect neither profit nor loss by exchanging his/her envelope.

We could actually open our envelope before deciding on switching or not and the above formula would still give us the correct expected return. For example, if we opened our envelope and saw that it contained 100 euros then we would set A=100 in the above formula and the expected return in case of switching would be:

$$E = \frac{1}{2}\cdot\frac{+100}{150} + \frac{1}{2}\cdot\frac{-50}{75} = 0$$

Nalebuff asymmetric variant

The mechanism by which the amounts in the two envelopes are determined is crucial for the player's decision to switch.[9][10] Suppose that the amounts in the two envelopes A and B were not determined by first fixing the contents of two envelopes E1 and E2, and then naming them A and B at random (for instance, by the toss of a fair coin[11]). Instead, we start right at the beginning by putting some amount in Envelope A, and then fill B in a way which depends both on chance (the toss of a coin) and on what we put in A. Suppose that first of all the amount a in Envelope A is fixed in some way or other, and then the amount in Envelope B is fixed, dependent on what is already in A, according to the outcome of a fair coin. If the coin fell Heads then 2a is put in Envelope B; if the coin fell Tails then a/2 is put in Envelope B. If the player is aware of this mechanism, knows that they hold Envelope A, but does not know the outcome of the coin toss or the value of a, then the switching argument is correct and they are recommended to switch envelopes. This version of the problem was introduced by Nalebuff (1988) and is often called the Ali Baba problem. Notice that there is no need to look in Envelope A in order to decide whether or not to switch.

Many more variants of the problem have been introduced. Nickerson and Falk systematically survey a total of 8.[11]

Bayesian resolutions

The simple resolution above assumed that the person who invented the argument for switching was trying to calculate the expectation value of the amount in Envelope A, thinking of the two amounts in the envelopes as fixed (x and 2x). The only uncertainty is which envelope has the smaller amount x. However, many mathematicians and statisticians interpret the argument as an attempt to calculate the expected amount in Envelope B, given a real or hypothetical amount "A" in Envelope A. One does not need to look in the envelope to see how much is in there, in order to do the calculation. If the result of the calculation is an advice to switch envelopes, whatever amount might be in there, then it would appear that one should switch anyway, without looking. In this case, at Steps 6, 7 and 8 of the reasoning, "A" is any fixed possible value of the amount of money in the first envelope.

This interpretation of the two envelopes problem appears in the first publications in which the paradox was introduced in its present-day form, Gardner (1989) and Nalebuff (1989). It is common in the more mathematical literature on the problem. It also applies to the modification of the problem (which seems to have started with Nalebuff) in which the owner of Envelope A does actually look in his envelope before deciding whether or not to switch; though Nalebuff does also emphasise that there is no need to have the owner of Envelope A look in his envelope. If he imagines looking in it, and if for any amount which he can imagine being in there, he has an argument to switch, then he will decide to switch anyway. Finally, this interpretation was also the core of earlier versions of the two envelopes problem (Littlewood's, Schrödinger's, and Kraitchik's switching paradoxes); see the concluding section, on history of TEP.

This kind of interpretation is often called "Bayesian" because it assumes the writer is also incorporating a prior probability distribution of possible amounts of money in the two envelopes in the switching argument.

Simple form of Bayesian resolution

The simple resolution depended on a particular interpretation of what the writer of the argument is trying to calculate: namely, it assumed he was after the (unconditional) expectation value of what's in Envelope B. In the mathematical literature on Two Envelopes Problem a different interpretation is more common, involving the conditional expectation value (conditional on what might be in Envelope A). To solve this and related interpretations or versions of the problem, most authors use the Bayesian interpretation of probability, which means that probability reasoning is not only applied to truly random events like the random pick of an envelope, but also to our knowledge (or lack of knowledge) about things which are fixed but unknown, like the two amounts originally placed in the two envelopes, before one is picked at random and called "Envelope A". Moreover, according to a long tradition going back at least to Laplace and his principle of insufficient reason one is supposed to assign equal probabilities when one has no knowledge at all concerning the possible values of some quantity. Thus the fact that we are not told anything about how the envelopes are filled can already be converted into probability statements about these amounts. No information means that probabilities are equal.

In steps 6 and 7 of the switching argument, the writer imagines that Envelope A contains a certain amount a, and then seems to believe that, given that information, the other envelope would be equally likely to contain twice or half that amount. That assumption can only be correct if, prior to knowing what was in Envelope A, the writer would have considered the following two pairs of values for both envelopes equally likely: the amounts a/2 and a, and the amounts a and 2a. (This follows from Bayes' rule in odds form: posterior odds equal prior odds times likelihood ratio.) But now we can apply the same reasoning, imagining not a but a/2 in Envelope A. And similarly for 2a. And similarly, ad infinitum, repeatedly halving or repeatedly doubling as many times as you like.[12]

Suppose for the sake of argument, we start by imagining an amount 32 in Envelope A. In order that the reasoning in steps 6 and 7 is correct whatever amount happened to be in Envelope A, we apparently believe in advance that all the following ten amounts are all equally likely to be the smaller of the two amounts in the two envelopes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (equally likely powers of 2[12]). But going to even larger or even smaller amounts, the "equally likely" assumption starts to appear a bit unreasonable. Suppose we stop, just with these ten equally likely possibilities for the smaller amount in the two envelopes. In that case, the reasoning in steps 6 and 7 was entirely correct if envelope A happened to contain any of the amounts 2, 4, ... 512: switching envelopes would give an expected (average) gain of 25%. If envelope A happened to contain the amount 1, then the expected gain is actually 100%. But if it happened to contain the amount 1024, a massive loss of 50% (of a rather large amount) would have been incurred. That only happens once in twenty times, but it is exactly enough to balance the expected gains in the other 19 out of 20 times.
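The accounting in this ten-value example can be checked exactly by enumerating all equally likely (pair, picked-envelope) outcomes; the sketch below uses exact rational arithmetic:

```python
from collections import defaultdict
from fractions import Fraction

# Ten equally likely pairs {x, 2x} with smaller amounts 1, 2, 4, ..., 512.
pairs = [(2 ** n, 2 ** (n + 1)) for n in range(10)]

# Joint distribution P(A = a, B = b): each pair has probability 1/10,
# and either envelope of a pair is picked as A with probability 1/2.
joint = defaultdict(Fraction)
for x, y in pairs:
    joint[(x, y)] += Fraction(1, 20)
    joint[(y, x)] += Fraction(1, 20)


def expected_gain(a):
    """Conditional expected gain from switching, given envelope A holds a."""
    p_a = sum(p for (a2, _), p in joint.items() if a2 == a)
    return sum(p * (b - a) for (a2, b), p in joint.items() if a2 == a) / p_a


print(expected_gain(1))     # 1: a 100% gain when holding the smallest amount
print(expected_gain(32))    # 8: a 25% gain, as for every amount 2..512
print(expected_gain(1024))  # -512: a 50% loss when holding the largest amount

# Unconditionally, the one big loss exactly balances the many small gains:
total = sum(p * (b - a) for (a, b), p in joint.items())
print(total)                # 0
```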

Alternatively we do go on ad infinitum but now we are working with a quite ludicrous assumption, implying for instance, that it is infinitely more likely for the amount in envelope A to be smaller than 1, and infinitely more likely to be larger than 1024, than between those two values. This is a so-called improper prior distribution: probability calculus breaks down; expectation values are not even defined.[12]

Many authors have also pointed out that if a maximum sum that can be put in the envelope with the smaller amount exists, then it is very easy to see that Step 6 breaks down, since if the player holds more than the maximum sum that can be put into the "smaller" envelope they must hold the envelope containing the larger sum, and are thus certain to lose by switching. This may not occur often, but when it does, the heavy loss the player incurs means that, on average, there is no advantage in switching. Some writers consider that this resolves all practical cases of the problem.[13]

But the problem can also be resolved mathematically without assuming a maximum amount. Nalebuff,[13] Christensen and Utts,[14] Falk and Konold,[12] Blachman, Christensen and Utts,[15] Nickerson and Falk,[11] pointed out that if the amounts of money in the two envelopes have any proper probability distribution representing the player's prior beliefs about the amounts of money in the two envelopes, then it is impossible that whatever the amount A=a in the first envelope might be, it would be equally likely, according to these prior beliefs, that the second contains a/2 or 2a. Thus step 6 of the argument, which leads to always switching, is a non-sequitur, also when there is no maximum to the amounts in the envelopes.

Introduction to further developments in connection with Bayesian probability theory

The first two resolutions discussed above (the "simple resolution" and the "Bayesian resolution") correspond to two possible interpretations of what is going on in step 6 of the argument. They both assume that step 6 is indeed "the bad step". But the description in step 6 is ambiguous. Is the author after the unconditional (overall) expectation value of what is in envelope B (perhaps conditional on the smaller amount x), or is he after the conditional expectation of what is in envelope B, given any possible amount a which might be in envelope A? Thus there are two main interpretations of the intention of the composer of the paradoxical argument for switching, and two main resolutions.

A large literature has developed concerning variants of the problem.[16][17] The standard assumption about the way the envelopes are set up is that a sum of money is in one envelope, and twice that sum is in another envelope. One of the two envelopes is randomly given to the player (envelope A). The originally proposed problem does not make clear exactly how the smaller of the two sums is determined, what values it could possibly take and, in particular, whether there is a minimum or a maximum sum it might contain.[18][19] However, if we are using the Bayesian interpretation of probability, then we start by expressing our prior beliefs as to the smaller amount in the two envelopes through a probability distribution. Lack of knowledge can also be expressed in terms of probability.

A first variant within the Bayesian version is to come up with a proper prior probability distribution of the smaller amount of money in the two envelopes, such that when step 6 is performed properly, the advice is still to prefer Envelope B, whatever might be in Envelope A. So though the specific calculation performed in step 6 was incorrect (there is no proper prior distribution such that, given what is in the first envelope A, the other envelope is always equally likely to be larger or smaller), a correct calculation, depending on the prior we are using, does lead to the result $E(B \mid A=a) > a$ for all possible values of a.[20]

In these cases it can be shown that the expected sum in both envelopes is infinite. There is no gain, on average, in swapping.

Second mathematical variant

Though Bayesian probability theory can resolve the first mathematical interpretation of the paradox above, it turns out that examples can be found of proper probability distributions, such that the expected value of the amount in the second envelope, conditioned on the amount in the first, does exceed the amount in the first, whatever it might be. The first such example was already given by Nalebuff.[13] See also Christensen and Utts (1992).[14][21][22][23]

Denote again the amount of money in the first envelope by A and that in the second by B. We think of these as random. Let X be the smaller of the two amounts and Y=2X be the larger. Notice that once we have fixed a probability distribution for X then the joint probability distribution of A,B is fixed, since A,B = X,Y or Y,X each with probability 1/2, independently of X,Y.

The bad step 6 in the "always switching" argument led us to the finding E(B|A=a)>a for all a, and hence to the recommendation to switch, whether or not we know a. Now, it turns out that one can quite easily invent proper probability distributions for X, the smaller of the two amounts of money, such that this bad conclusion is still true. One example is analysed in more detail, in a moment.

As mentioned before, it cannot be true that whatever a, given A=a, B is equally likely to be a/2 or 2a, but it can be true that whatever a, given A=a, B is larger in expected value than a.

Suppose for example that the envelope with the smaller amount actually contains $2^n$ dollars with probability $\frac{2^n}{3^{n+1}}$, where n = 0, 1, 2, … These probabilities sum to 1, hence the distribution is a proper prior (for subjectivists) and a completely decent probability law also for frequentists.[24]

Imagine what might be in the first envelope. A sensible strategy would certainly be to swap when the first envelope contains 1, as the other must then contain 2. Suppose on the other hand the first envelope contains 2. In that case there are two possibilities: the envelope pair in front of us is either {1, 2} or {2, 4}. All other pairs are impossible. The conditional probability that we are dealing with the {1, 2} pair, given that the first envelope contains 2, is

$$P(\{1,2\} \mid 2) = \frac{P(\{1,2\})/2}{P(\{1,2\})/2 + P(\{2,4\})/2} = \frac{P(\{1,2\})}{P(\{1,2\}) + P(\{2,4\})} = \frac{1/3}{1/3 + 2/9} = 3/5,$$

and consequently the probability it's the {2, 4} pair is 2/5, since these are the only two possibilities. In this derivation, P({1, 2})/2 is the probability that the envelope pair is the pair 1 and 2, and Envelope A happens to contain 2; P({2, 4})/2 is the probability that the envelope pair is the pair 2 and 4, and (again) Envelope A happens to contain 2. Those are the only two ways that Envelope A can end up containing the amount 2.

It turns out that these proportions hold in general unless the first envelope contains 1. Denote by a the amount we imagine finding in Envelope A, if we were to open that envelope, and suppose that a = 2^n for some n ≥ 1. In that case the other envelope contains a/2 with probability 3/5 and 2a with probability 2/5.
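
The 3/5 : 2/5 split can be verified exactly for every n ≥ 1; the sketch below reproduces it with exact rationals (function names are illustrative):

```python
from fractions import Fraction

def p_pair(n: int) -> Fraction:
    # P(the envelope pair is {2**n, 2**(n+1)}) = P(X = 2**n).
    return Fraction(2**n, 3**(n + 1))

def p_holding_larger(n: int) -> Fraction:
    # P(pair is {2**(n-1), 2**n} | Envelope A contains 2**n), for n >= 1.
    # Each candidate pair puts 2**n into Envelope A with probability 1/2,
    # so that factor cancels from numerator and denominator.
    return p_pair(n - 1) / (p_pair(n - 1) + p_pair(n))

for n in range(1, 6):
    print(n, p_holding_larger(n))  # 3/5 for every n >= 1
```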

So either the first envelope contains 1, in which case the conditional expected amount in the other envelope is 2, or the first envelope contains a > 1, and though the second envelope is more likely to be smaller than larger, its conditionally expected amount is larger: the conditionally expected amount in Envelope B is

\frac{3}{5}\cdot\frac{a}{2} + \frac{2}{5}\cdot 2a = \frac{11}{10}\,a

which is more than a. This means that the player who looks in Envelope A would decide to switch whatever he saw there. Hence there is no need to look in Envelope A to make that decision.
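
The factor 11/10 can also be checked by simulation; a rough Monte Carlo sketch (the seed, sample size, and conditioning value are arbitrary choices):

```python
import random

random.seed(0)

def sample_envelopes():
    # Draw the smaller amount X = 2**n with P = (1/3) * (2/3)**n,
    # i.e. n is geometrically distributed with success probability 1/3.
    n = 0
    while random.random() >= 1 / 3:
        n += 1
    x = 2 ** n
    # Envelope A gets the smaller or the larger amount with equal probability.
    return (x, 2 * x) if random.random() < 0.5 else (2 * x, x)

target = 4  # condition on observing a = 4 (any a = 2**n with n >= 1 behaves alike)
total = count = 0
for _ in range(500_000):
    a, b = sample_envelopes()
    if a == target:
        total += b
        count += 1
print(total / count / target)  # close to 11/10
```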

This conclusion is just as clearly wrong as it was in the preceding interpretations of the Two Envelopes Problem. But now the flaws noted above do not apply; the a in the expected value calculation is a constant and the conditional probabilities in the formula are obtained from a specified and proper prior distribution.

Proposed resolutions through mathematical economics

Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics.[25] Suppose E(B | A = a) > a for all a. It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E(X) = ∞, that is, only if the mean of all possible values of money in the envelopes is infinite.

To see why, compare the series described above, in which each value of X is 2/3 as likely as the previous one, with a series in which each value of X is only 1/3 as likely as the previous one. When the probability of each subsequent term is greater than one half the probability of the term before it (and each X is twice the X before it), the mean is infinite; when the probability factor is less than one half, the mean converges. In the cases where the probability factor is less than one half, E(B | A = a) < a for all a other than the first, smallest a, and the total expected value of switching converges to 0.

In addition, if a distribution with a probability factor greater than one half is made finite by, after some number of terms, assigning a final term "all the remaining probability" (1 minus the probability of all previous terms), then the expected value of switching when A equals the last, largest a exactly negates the sum of the positive expected values that came before, and the total expected value of switching again drops to 0. (This is the general case of the equal probability over a finite set of values described above.) Thus, the only distributions that seem to point to a positive expected value for switching are those with E(X) = ∞.

Averaging over a, it follows that E(B) = E(A) = ∞ (because A and B have identical probability distributions by symmetry, and both A and B are greater than or equal to X).
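
The role of the probability factor can be made concrete. Assuming a prior proportional to ratio**n on the amounts 2**n (a generalisation of the prior above, written here for illustration), the truncated means behave as described:

```python
def partial_mean(ratio: float, terms: int) -> float:
    # E(X) truncated to the first `terms` values, with P(X = 2**n)
    # proportional to ratio**n (normalised by p0 = 1 - ratio).
    p0 = 1 - ratio
    return sum((2 ** n) * p0 * ratio ** n for n in range(terms))

# Factor 2/3 (> 1/2): each term is p0 * (4/3)**n, so the sum diverges.
print(partial_mean(2 / 3, 50))
# Factor 1/3 (< 1/2): a geometric series converging to 2.
print(partial_mean(1 / 3, 50))
```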

If we don't look into the first envelope, then clearly there is no reason to switch, since we would be exchanging one unknown amount of money (A), whose expected value is infinite, for another unknown amount of money (B) with the same probability distribution and infinite expected value. However, if we do look into the first envelope, then for all observed values (A = a) we would want to switch, because E(B | A = a) > a for all a. As noted by David Chalmers, this problem can be described as a failure of dominance reasoning.[26]

Under dominance reasoning, the fact that we strictly prefer B to A for every possible observed value a should imply that we strictly prefer B to A without observing a; however, as already shown, that is not true, because E(B) = E(A) = ∞. To salvage dominance reasoning while allowing E(B) = E(A) = ∞, one would have to replace expected value as the decision criterion, thereby employing a more sophisticated argument from mathematical economics.

For example, we could assume the decision maker is an expected utility maximiser with initial wealth W whose utility function, u(w), is chosen to satisfy E(u(W + B) | A = a) < u(W + a) for at least some values of a (that is, holding on to A = a is strictly preferred to switching to B for some a). Although this is not true for all utility functions, it would be true if u(w) had an upper bound β < ∞ as w increased toward infinity (a common assumption in mathematical economics and decision theory).[27] Michael R. Powers provides necessary and sufficient conditions for the utility function to resolve the paradox, and notes that neither u(w) < β nor E(u(W + A)) = E(u(W + B)) < ∞ is required.[28]
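
As an illustrative sketch (the exponential utility and the constants below are assumptions, not from the source), a bounded utility u(w) = 1 − e^(−w/k) combined with the 3/5 : 2/5 conditional probabilities of the prior above prefers switching at small stakes but holding at large ones, where the utility saturates:

```python
import math

K = 10.0   # utility scale (hypothetical choice)
W = 0.0    # initial wealth (hypothetical choice)

def u(w: float) -> float:
    # Bounded utility: increases toward the upper bound beta = 1.
    return 1 - math.exp(-w / K)

def switch_advantage(a: float) -> float:
    # E[u(W + B) | A = a] - u(W + a): positive means switching is preferred.
    # Uses P(B = a/2 | A = a) = 3/5 and P(B = 2a | A = a) = 2/5 (a > 1).
    return 0.6 * u(W + a / 2) + 0.4 * u(W + 2 * a) - u(W + a)

print(switch_advantage(1))    # positive: switching preferred at small stakes
print(switch_advantage(100))  # negative: holding preferred once u saturates
```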

Some writers would prefer to argue that in a real-life situation, u(W + A) and u(W + B) are bounded simply because the amount of money in an envelope is bounded by the total amount of money in the world (M), implying u(W + A) ≤ u(W + M) and u(W + B) ≤ u(W + M). From this perspective, the second paradox is resolved because the postulated probability distribution for X (with E(X) = ∞) cannot arise in a real-life situation. Similar arguments are often used to resolve the St. Petersburg paradox.

Controversy among philosophers

As mentioned above, any distribution producing this variant of the paradox must have an infinite mean. So before the player opens an envelope the expected gain from switching is "∞ − ∞", which is not defined. In the words of David Chalmers, this is "just another example of a familiar phenomenon, the strange behaviour of infinity".[26] Chalmers suggests that decision theory generally breaks down when confronted with games having a diverging expectation, and compares it with the situation generated by the classical St. Petersburg paradox.

However, Clark and Shackel argue that blaming it all on "the strange behaviour of infinity" does not resolve the paradox at all, in either the single case or the averaged case. They provide a simple example of a pair of random variables, both having infinite mean, where it is clearly sensible to prefer one to the other, both conditionally and on average.[29] They argue that decision theory should be extended so as to allow infinite expectation values in some situations.

Smullyan's non-probabilistic variant

The logician Raymond Smullyan questioned whether the paradox has anything to do with probabilities at all.[30] He did this by expressing the problem in a way that does not involve probabilities. The following plainly logical arguments lead to conflicting conclusions:

  1. Let the amount in the envelope chosen by the player be A. By swapping, the player may gain A or lose A/2. So the potential gain is strictly greater than the potential loss.
  2. Let the amounts in the envelopes be X and 2X. Now by swapping, the player may gain X or lose X. So the potential gain is equal to the potential loss.
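
The clash between the two bookkeepings can be seen on one concrete (hypothetical) pair of amounts:

```python
# Hypothetical amounts: the envelopes hold x = 10 and y = 2x = 20.
x, y = 10, 20

# Argument 2 fixes the contents: a swap always moves you between x and y,
# so the potential gain equals the potential loss.
gain_fixed_contents = y - x   # gain when holding the smaller envelope
loss_fixed_contents = y - x   # loss when holding the larger envelope
assert gain_fixed_contents == loss_fixed_contents

# Argument 1 fixes the label A instead: "gain A" evaluates A in the world
# where A is the smaller envelope, while "lose A/2" evaluates A in the world
# where A is the larger one - two different numbers wearing one name.
a_in_gain_case = x            # here "A" = 10, and the gain is A = 10
a_in_loss_case = y            # here "A" = 20, and the loss is A/2 = 10
assert a_in_gain_case != a_in_loss_case
print(a_in_gain_case, a_in_loss_case)
```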

Proposed resolutions

A number of solutions have been put forward. Careful analyses have been made by some logicians. Though solutions differ, they all pinpoint semantic issues concerned with counterfactual reasoning. We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations. Only one of them can factually occur, the other is a counterfactual situation—somehow imaginary. To compare them at all, we must somehow "align" the two situations, providing some definite points in common.

James Chase argues that the second argument is correct because it corresponds to the way of aligning the two situations (one in which we gain, the other in which we lose) that is indicated by the problem description.[31] Bernard Katz and Doris Olin argue for this point of view as well.[32] In the second argument, we consider the amounts of money in the two envelopes as being fixed; what varies is which one is first given to the player. Because that was an arbitrary and physical choice, the counterfactual world in which the player, counterfactually, received the other envelope from the one he was actually (factually) given is a highly meaningful counterfactual world, and hence the comparison between gains and losses in the two worlds is meaningful. This comparison is uniquely indicated by the problem description, in which two amounts of money are put in the two envelopes first, and only after that is one chosen arbitrarily and given to the player. In the first argument, however, we consider the amount of money in the envelope first given to the player as fixed, and consider the situations in which the second envelope contains either half or twice that amount. This would only be a reasonable counterfactual world if in reality the envelopes had been filled as follows: first, some amount of money is placed in the specific envelope that will be given to the player; and secondly, by some arbitrary process, the other envelope is filled either with double or with half of that amount.

Byeong-Uk Yi, on the other hand, argues that comparing the amount you would gain if you would gain by switching with the amount you would lose if you would lose by switching is a meaningless exercise from the outset.[33] According to his analysis, all three implications (switch, indifferent, do not switch) are incorrect. He analyses Smullyan's arguments in detail, showing that intermediate steps are being taken, and pinpointing exactly where an incorrect inference is made according to his formalization of counterfactual inference. An important difference from Chase's analysis is that he does not take account of the part of the story where we are told that the envelope called Envelope A is chosen completely at random. Thus, Chase puts probability back into the problem description in order to conclude that arguments 1 and 3 are incorrect and argument 2 is correct, while Yi keeps the "two envelope problem without probability" completely free of probability and comes to the conclusion that there are no reasons to prefer any action. This corresponds to the view of Albers et al. that, without the probability ingredient, there is no way to argue that one action is better than another.

Bliss argues that the source of the paradox is that when one mistakenly believes in the possibility of a larger payoff that does not, in actuality, exist, one is mistaken by a larger margin than when one believes in the possibility of a smaller payoff that does not actually exist.[34] If, for example, the envelopes contained $5.00 and $10.00 respectively, a player who opened the $10.00 envelope would expect the possibility of a $20.00 payout that simply does not exist. Were that player to open the $5.00 envelope instead, he would believe in the possibility of a $2.50 payout, which constitutes a smaller deviation from the true value; this results in the paradoxical discrepancy.

Albers, Kooi, and Schaafsma consider that without adding probability (or other) ingredients to the problem,[17] Smullyan's arguments do not give any reason to swap or not to swap, in any case. Thus, there is no paradox. This dismissive attitude is common among writers from probability and economics: Smullyan's paradox arises precisely because he takes no account whatever of probability or utility.

Conditional switching

As an extension to the problem, consider the case where the player is allowed to look in Envelope A before deciding whether to switch. In this "conditional switching" problem, it is often possible to generate a gain over the "never switching" strategy, depending on the probability distribution of the envelopes.[35]
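
A minimal simulation sketch of this idea (the distribution and threshold below are hypothetical choices, not from the source): switching only when the observed amount is small beats never switching:

```python
import random

random.seed(1)

AMOUNTS = [1, 2, 4, 8]  # hypothetical values for the smaller amount X

def average_payoff(threshold: float, trials: int = 200_000) -> float:
    # Strategy: look in Envelope A and switch only if a < threshold.
    # threshold = 0 therefore reproduces the "never switch" strategy.
    total = 0
    for _ in range(trials):
        x = random.choice(AMOUNTS)
        a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
        total += b if a < threshold else a
    return total / trials

never = average_payoff(0)
conditional = average_payoff(5)  # switch on 1, 2 and 4; keep 8 and 16
print(never, conditional)
```

Intuitively, small observed amounts are more likely to be the smaller of the pair, so a threshold rule switches mostly when switching gains.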

History of the paradox

The envelope paradox dates back at least to 1953, when the Belgian mathematician Maurice Kraitchik proposed a puzzle in his book Recreational Mathematics concerning two equally rich men who meet and compare their beautiful neckties, presents from their wives, wondering which tie actually cost more money. He also introduces a variant in which the two men compare the contents of their purses. He assumes that each purse is equally likely to contain 1 up to some large number x of pennies, the total number of pennies minted to date. The men do not look in their purses, but each reasons that they should switch. He does not explain what the error in their reasoning is. It is not clear whether the puzzle already appeared in an earlier 1942 edition of his book. It is also mentioned in a 1953 book on elementary mathematics and mathematical puzzles by the mathematician John Edensor Littlewood, who credited it to the physicist Erwin Schrödinger; there it concerns a pack of cards, each card has two numbers written on it, the player gets to see a random side of a random card, and the question is whether one should turn over the card. Littlewood's pack of cards is infinitely large, and his paradox is a paradox of improper prior distributions.

Martin Gardner popularised Kraitchik's puzzle in his 1982 book Aha! Gotcha, in the form of a wallet game:

Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?

Gardner confessed that though, like Kraitchik, he could give a sound analysis leading to the right answer (there is no point in switching), he could not clearly put his finger on what was wrong with the reasoning for switching, and Kraitchik did not give any help in this direction, either.

In 1988 and 1989, Barry Nalebuff presented two different two-envelope problems, each with one envelope containing twice what is in the other, and each with computation of the expectation value 5A/4. The first paper just presents the two problems. The second discusses many solutions to both of them. The second of his two problems is nowadays the more common, and is presented in this article. According to this version, the two envelopes are filled first, then one is chosen at random and called Envelope A. Martin Gardner independently mentioned this same version in his 1989 book Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix. Barry Nalebuff's asymmetric variant, often known as the Ali Baba problem, has one envelope filled first, called Envelope A, and given to Ali. Then a fair coin is tossed to decide whether Envelope B should contain half or twice that amount, and only then given to Baba.

Broome in 1995 called a probability distribution 'paradoxical' if, for any given first-envelope amount x, the expectation of the other envelope conditional on x is greater than x. The literature contains dozens of commentaries on the problem, many of which observe that a distribution of finite values can have an infinite expected value.[36]

Notes and references

  1. ^ See the problem statement for a more precise statement of this argument.
  2. ^ Falk, Ruma (2008). "The Unrelenting Exchange Paradox". Teaching Statistics. 30 (3): 86–88. doi:10.1111/j.1467-9639.2008.00318.x.
  3. ^ A complete list of published and unpublished sources in chronological order can be found in the talk page.
  4. ^ Markosian, Ned (2011). "A Simple Solution to the Two Envelope Problem". Logos & Episteme. II (3): 347–57. doi:10.5840/logos-episteme20112318.
  5. ^ McDonnell, Mark D; Grant, Alex J; Land, Ingmar; Vellambi, Badri N; Abbott, Derek; Lever, Ken (2011). "Gain from the two-envelope problem via information asymmetry: on the suboptimality of randomized switching". Proceedings of the Royal Society A. 467 (2134): 2825–2851. Bibcode:2011RSPSA.467.2825M. doi:10.1098/rspa.2010.0541.
  6. ^ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF), Dialogues, Logics and Other Strange Things, College Publications: 135–140
  7. ^ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF), Dialogues, Logics and Other Strange Things, College Publications: 135–140
  8. ^ Schwitzgebe, Eric; Dever, Josh (2008), "The Two Envelope Paradox and Using Variables Within the Expectation Formula" (PDF), Sorites: 135–140
  9. ^ a b Tsikogiannopoulos, Panagiotis (2012). "Παραλλαγές του προβλήματος της ανταλλαγής φακέλων" [Variations on the Two Envelopes Problem]. Mathematical Reviews (in Greek). arXiv:1411.2823. Bibcode:2014arXiv1411.2823T.
  10. ^ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF), Dialogues, Logics and Other Strange Things, College Publications: 135–140
  11. ^ a b c Nickerson, Raymond S.; Falk, Ruma (2006-05-01). "The exchange paradox: Probabilistic and cognitive analysis of a psychological conundrum". Thinking & Reasoning. 12 (2): 181–213. doi:10.1080/13576500500200049. ISSN 1354-6783. S2CID 143472998.
  12. ^ a b c d Falk, Ruma; Konold, Clifford (1992). "The Psychology of Learning Probability" (PDF). Statistics for the Twenty-first Century – via Mathematical Association of America.
  13. ^ a b c Nalebuff, Barry (1989), "Puzzles: The Other Person's Envelope is Always Greener", Journal of Economic Perspectives, 3 (1): 171–81, doi:10.1257/jep.3.1.171.
  14. ^ a b Christensen, R; Utts, J (1992), "Bayesian Resolution of the "Exchange Paradox"", The American Statistician, 46 (4): 274–76, doi:10.1080/00031305.1992.10475902.
  15. ^ Blachman, NM; Christensen, R; Utts, J (1996). "Letters to the Editor". The American Statistician. 50 (1): 98–99. doi:10.1080/00031305.1996.10473551.
  16. ^ Albers, Casper (March 2003), "2. Trying to resolve the two-envelope problem", Distributional Inference: The Limits of Reason (thesis).
  17. ^ a b Albers, Casper J; Kooi, Barteld P; Schaafsma, Willem (2005), "Trying to resolve the two-envelope problem", Synthese, vol. 145, no. 1, p. 91.
  18. ^ Falk, Ruma; Nickerson, Raymond (2009), "An inside look at the two envelopes paradox", Teaching Statistics, 31 (2): 39–41, doi:10.1111/j.1467-9639.2009.00346.x.
  19. ^ Chen, Jeff, The Puzzle of the Two-Envelope Puzzle—a Logical Approach (online ed.), p. 274.
  20. ^ Broome, John (1995), "The Two-envelope Paradox", Analysis, 55 (1): 6–11, doi:10.1093/analys/55.1.6.
  21. ^ Binder, DA (1993), "Letter to editor and response", The American Statistician, 47 (2): 160, doi:10.1080/00031305.1991.10475791.
  22. ^ Ross (1994), "Letter to editor and response", The American Statistician, 48 (3): 267–269, doi:10.1080/00031305.1994.10476075.
  23. ^ Blachman, NM; Christensen, R; Utts, JM (1996), "Letter with corrections to the original article", The American Statistician, 50 (1): 98–99, doi:10.1080/00031305.1996.10473551.
  24. ^ Broome, John (1995). "The Two-envelope Paradox". Analysis. 55 (1): 6–11. doi:10.1093/analys/55.1.6. A famous example of a proper probability distribution of the amounts of money in the two envelopes, for which E(B | A = a) > a for all a.
  25. ^ Binder, D. A. (1993). "Letters to the Editor". The American Statistician. 47 (2): 157–163. doi:10.1080/00031305.1993.10475966. Comment on Christensen and Utts (1992)
  26. ^ a b Chalmers, David J. (2002). "The St. Petersburg Two-Envelope Paradox". Analysis. 62 (2): 155–157. doi:10.1093/analys/62.2.155.
  27. ^ DeGroot, Morris H. (1970). Optimal Statistical Decisions. McGraw-Hill. p. 109.
  28. ^ Powers, Michael R. (2015). "Paradox-Proof Utility Functions for Heavy-Tailed Payoffs: Two Instructive Two-Envelope Problems" (PDF). Risks. 3 (1): 26–34. doi:10.3390/risks3010026.
  29. ^ Clark, M.; Shackel, N. (2000). "The Two-Envelope Paradox" (PDF). Mind. 109 (435): 415–442. doi:10.1093/mind/109.435.415.
  30. ^ Smullyan, Raymond (1992). Satan, Cantor, and infinity and other mind-boggling puzzles. Alfred A. Knopf. pp. 189–192. ISBN 978-0-679-40688-4.
  31. ^ Chase, James (2002). "The Non-Probabilistic Two Envelope Paradox" (PDF). Analysis. 62 (2): 157–160. doi:10.1093/analys/62.2.157.
  32. ^ Katz, Bernard; Olin, Doris (2007). "A tale of two envelopes". Mind. 116 (464): 903–926. doi:10.1093/mind/fzm903.
  33. ^ Byeong-Uk Yi (2009). "The Two-envelope Paradox With No Probability" (PDF). Archived from the original (PDF) on 2011-09-29.
  34. ^ Bliss (2012). "A Concise Resolution to the Two Envelope Paradox". arXiv:1202.4669. Bibcode:2012arXiv1202.4669B.
  35. ^ McDonnell, M. D.; Abott, D. (2009). "Randomized switching in the two-envelope problem". Proceedings of the Royal Society A. 465 (2111): 3309–3322. Bibcode:2009RSPSA.465.3309M. doi:10.1098/rspa.2009.0312.
  36. ^ Syverson, Paul (1 April 2010). "Opening Two Envelopes". Acta Analytica. 25 (4): 479–498. doi:10.1007/s12136-010-0096-7. S2CID 12344371.
